Survey of Italian pediatricians’ perspectives and knowledge about neonatal screening Background The goal of newborn screening is early identification of babies with a high risk for disorders that may not be clinically evident at birth, but have severe consequences if untreated. New insight into inherited diseases and the ability to test for numerous diseases using new technique such as tandem mass spectrometry have made it practical to greatly expand the number of conditions tested. The expanded neonatal screening is now available and relatively simple, but this represents only a part of the picture. Positive results require follow-up confirmation. Most disorders screened require confirmatory biochemical or genetic tests and specialist visits. An efficient system is needed for managing the care of affected newborns. Expanded newborn screening is not yet available in all Regions of Italy, but discussions aimed at organizing universal access are underway. If these are successful, the role of the pediatrician as the primary contact with the parents is expected to become even more important. Methods We have conducted a survey of Italian pediatricians to assess their familiarity and opinions on newborn screening in general and on expanded newborn screening. All members of the Italian Association of Pediatricians (n = 9000) were invited to compile a 10-item questionnaire online. Results The response rate was 10 %, corresponding to 605 of 6000 active members. Respondents were from all Regions of Italy, with the highest number of responses coming from Lombardy (138, 22.8 %), Campania and Puglia (n = 61; 10.1 %). Interestingly, expanded neonatal screening was not available in any of these Regions at the time of the survey. Regarding their understanding of neonatal screening in general, most respondents (n = 552; 91.1 %) considered that they had at least a sufficient level of knowledge; however, only 59.6 % thought they had sufficient knowledge of expanded newborn screening. Conclusions Successful implementation of a universal expanded NBS program will require efficient procedures for follow-up, diagnosis and treatment to prevent morbidity and mortality of infants and to reduce the period of uncertainty for unaffected families. Pediatricians may need additional training to allow them to fulfill their tasks of coordinating this process while keeping families informed and reassured. Electronic supplementary material The online version of this article (doi:10.1186/s13052-015-0147-1) contains supplementary material, which is available to authorized users. Background The goal of newborn screening (NBS) is early identification of babies with a high risk for disorders that may not be clinically evident at birth, but have severe consequences if undiagnosed and untreated. The prototypical condition is phenylketonuria [1], which is initially asymptomatic, but leads to severe irreversible mental impairment if not treated promptly. Alongside vaccination programs, NBS programs represent one of the great public health success stories, improving outcomes and reducing the burden to patients, families and society. In Italy, neonatal screening for phenylketonuria, congenital hypothyroidism, and cystic fibrosis has been mandatory by law since 1992. New insight into inherited diseases combined with the introduction of tandem mass spectrometry (MS-MS) and the ability to test for numerous diseases using a fast and reliable analysis has led to expansion of the number of conditions tested. 
Many of the diseases amenable to screening are rare inborn errors of metabolism resulting from a genetic defect that affects an enzyme active in one or more metabolic pathways. These defects cause pathological accumulation of substrates or deficiencies of essential products. Many are involved in the metabolism of fatty acids, organic acids or amino acids/urea cycle [2]. The majority of these disorders present clinical symptoms in the neonatal period and the conditions cause progressive irreversible damage. Criteria have been established to select disorders for inclusion (Additional file 1) [3,4]. Factors to be taken into consideration are disease morbidity, mortality, natural history and epidemiology; test reliability, precision, clinical validity, cost, and the existence of confirmatory tests; and treatment availability, effectiveness and the availability of medical expertise. Continuous improvements in treatment, in particular the introduction of enzyme substitution therapies and the prospect of gene therapy, may further increase the number of diseases included in screening panels. Discussion is currently ongoing on whether to include lysosomal storage diseases in routine screening [5,6]. At the international level, there is a high level of heterogeneity regarding the decision to conduct expanded NBS and the composition of screening panels [7][8][9][10], and there are also examples of considerable regional heterogeneity within countries. In Italy, newborn screening programs are managed at the regional level. Italian Law 104/1992, DPCM of July 9, 1999 mandates newborn screening for phenylketonuria, congenital hypothyroidism and cystic fibrosis, but it also allows individual Regions to organize screening programs for additional congenital diseases, and screening programs for endocrinopathies and inborn errors of metabolism are performed in some Italian Regions. Currently, there is substantial heterogeneity among Italian Regions in the implementation of screening programs. Several Regions have expanded screening programs in place that cover 20 to 40 rare diseases (Tuscany, Umbria, Liguria, Sardinia, Emilia Romagna, Sicily, Veneto), while others have experimental programs ongoing (Lazio). In all, about one third of newborns currently undergo expanded screening in Italy. Discussions and planning are underway to introduce it uniformly in all Regions. There is a need for shared criteria for selecting the diseases to screen for, and for an analysis of possible synergies for the most efficient management of this task and of the steps that must follow when a positive screening result is obtained. The screening itself is part of a program in which blood spot analysis represents only the initial part of the picture. To ensure adequate analytical sensitivity, a considerable number of false-positive results must be accepted, and these require follow-up confirmation. Most of the disorders screened require confirmatory biochemical or genetic tests and specialist visits. When affected newborns are identified, there must be an efficient system for managing their condition. With the introduction of expanded NBS, there will be more positive results (both true and false) and the role of the pediatrician as the primary contact with the parents is expected to become even more important. Recommendations on expanded screening from the SISMME and SISN [11] indicate that primary care pediatricians should be familiar with the screening procedure and with the diseases that are included in the panel. 
They should also maintain contacts with the local screening facility and the appropriate specialist care providers, and interact directly with the affected families to keep them informed. Pediatric specialists, on the other hand, should organize the follow-up of patients with positive screening results and share the results with the primary pediatrician. They should also coordinate diagnostic and therapeutic efforts of other specialists, and set up the long-term treatment strategy with the primary pediatrician and the family. It is clear that pediatricians have an important role in the process that will grow if universal coverage is established. We report the results of a survey conducted among pediatricians from all regions of Italy regarding their knowledge and impression of the concept of NBS in pediatrics, focusing on diagnosis, treatment and healthcare resource management. Methods An invitation to participate in the web-based survey was sent to all 9000 members of the Italian Pediatrics Society (SIP). The survey questionnaire was developed by the Authors and consisted of ten questions to gauge familiarity with the principles of NBS and how they apply to the pediatrician's Region (see below). Questions were included to confirm also the understanding of criteria for selecting diseases to include in the extended panel and the principle characteristics of extended screening. Questionnaire on the concept of neonatal screening in pediatrics Newborn screening Most respondents (99 %) indicated correctly that neonatal screening is performed in their Region. In fact, basic NBS is mandatory in Italy. Regarding their understanding of neonatal screening in general, most respondents considered that they had at least a sufficient level of knowledge (n = 552, 91.1 %). Regarding the criteria for including a disease in a mass screening panel, responses were consistent with the criteria introduced by Wilson and Jungner [11] and subsequently refined [12]. The most common responses were "diagnosable in the initial phase" (21.5 %), "treatment must be more efficacious when administered early" (30.9 %), "the cost of treating the disease in adults could be unsustainable for the healthcare system" (21.5 %). Expanded newborn screening Nearly 80 % of respondents thought that expanded NBS should be required by law. This would appear to reflect a strong belief in the benefit of this service. Responses to the question regarding the principle characteristics of expanded neonatal screening were consistent with a clear understanding of the subject. The two correct answers received a total of 88 % of the selections; however, 40.4 % of respondents felt that their knowledge of expanded NBS was insufficient or poor. Interestingly, many of these respondents may live in Regions that do not yet provide expanded screening; nonetheless, this indicates that there is a need for training and education so that these pediatricians will be prepared to inform parents in the event that a harmonized nationwide program is organized to offer screening to all newborns in Italy. Follow-up Finally, the question regarding the presence of reference Centers for metabolic diseases in the respondent's Region revealed that 537 (89.4 %) believed that there was such a centre in their Region. To some extent, this may reflect more on the percentage of respondents from Regions that have such Centers, but it could also reflect confusion over the definition of a specialist referral centre. 
Expanded NBS covers diseases that require highly trained metabolic pediatricians or geneticists. Moreover, the Center should have an intensive care unit and pediatricians/neonatologists should be aware that neonatal onset metabolic diseases like urea cycle disorders or organic acidemias may present clinically before screening results are available. Conclusions Primary care pediatricians in Italy are now familiar with NBS using the traditional panel, but some may not be fully aware of the diseases included in expanded NBS panels. A large majority of respondents believe that expanded NBS should be required by law in Italy. This is an important message from the survey. We need to work toward a uniform screening panel to avoid Regional disparities that might limit the benefit of the screening itself. Clearly, this would entail a substantial increase in the number of callbacks and a corresponding increase in interactions between pediatricians, specialists, laboratories and the families of affected newborns. The information exchanged would be more complex, involving lesser-known diseases. Some of these new diseases do not follow the clear pattern seen in phenylketonuria, where a complete absence of symptoms in the early months is followed by serious neuropsychological damage. Some conditions may even remain asymptomatic "mild cases", complicating the interpretation of results. Two in five survey respondents did not believe that they had sufficient knowledge about expanded NBS. Successful implementation of an expanded NBS program will require resources and efficient procedures for follow-up, diagnosis and treatment to prevent morbidity and mortality of infants and to reduce the period of uncertainty for unaffected families. Pediatricians may need additional training to allow them to fulfill the tasks of coordinating this process while keeping families informed and reassured. The Italian Pediatrics Society can play an important role. Collecting and analyzing results from all Centers will allow benchmarking and future optimization of the screening program in our Country.
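The scale of the callback problem anticipated above can be made concrete with a standard positive-predictive-value calculation. The figures below are purely illustrative assumptions (a birth prevalence of 1 in 20,000 and a false-positive rate of 0.1% for a single screened condition with near-perfect sensitivity); they are not data from the survey or from any Italian program:

\[
\mathrm{PPV} \;=\; \frac{\mathrm{sens}\cdot p}{\mathrm{sens}\cdot p + (1-\mathrm{spec})\,(1-p)} \;=\; \frac{1.00 \times 0.00005}{1.00 \times 0.00005 + 0.001 \times 0.99995} \;\approx\; 0.048
\]

Under these assumptions only about 1 positive result in 20 identifies an affected infant; the remaining families experience a callback for an unaffected child. Multiplied across a panel of 20 to 40 conditions, even small per-test false-positive rates produce the increase in recalls and family contacts that the conclusions above anticipate.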
Pigs receiving daily tailored diets using precision-feeding techniques have different threonine requirements than pigs fed in conventional phase-feeding systems Background There is large variation in amino acids requirements among pigs, hence feeding pigs individually with daily tailored diets or in groups with a single feed may require different levels of nutrients. Thus, the response to different threonine levels (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine protein ratio of 0.65) was studied in growing pigs raised in a conventional group phase-feeding (GPF) system or fed individually using individual precision-feeding (IPF) techniques. In a 21-day trial, 110 barrows (25 ± 0.80 kg body weight) were housed in the same room and fed using electronic feeders. Five pigs per treatment were slaughtered at the end of the trial. Results Threonine intake increased linearly for the IPF and GPF pigs (P < 0.05). Lysine intake was similar across the treatments. Average daily gain, gain:feed ratio, and protein deposition were affected linearly by threonine level (P < 0.05) in both feeding systems. Protein deposition in the GPF pigs was maximized at 150 g/d and a 0.65 threonine:lysine ratio, whereas protein deposition increased linearly in the IPF pigs. Plasma Met and serine levels were 11 and 7% higher, respectively, in the IPF pigs than in the GPF pigs (P < 0.05). Dietary threonine increased (P < 0.05) threonine concentration in the longissimus dorsi in a quadratic manner in the IPF pigs, whereas there was no effect in the GPF pigs. Longissimus dorsi collagen decreased as dietary threonine increased in the IPF and GPF pigs (P < 0.10). Carcass muscle crude protein was 2% higher in the GPF pigs than in the IPF pigs (P < 0.05). Conclusions Individual pigs are able to modulate growth and the composition of growth according to threonine intake. The average amino acid ratio value that is currently used for GPF cannot be used for IPF. Background Pigs are usually fed in groups with the same diet provided during each feeding phase, and the composition of the diet is adjusted to the estimated nutrient requirements of a representative animal in the group. These requirements are often estimated using factorial methods in which the average pig is taken as the reference for the population (e.g., National Research Council, 2012 [1]). However, pigs have different requirements, and these requirements change over time [2]. Optimal responses in conventional group phase-feeding (GPF) systems are, however, obtained with levels of nutrients that satisfy the requirements of the most demanding animals in the group, because for most nutrients, underfed pigs exhibit reduced growth performance, whereas overfed ones exhibit near optimal performance [2,3]. Indeed, most of the pigs receive more nutrients than they need to express their growth potential [2]. Feeding pigs with daily tailored diets using individual precision-feeding techniques (IPF) is proposed to alleviate the limitations of group-feeding systems [4,5]. Individual lysine (Lys) requirements are estimated in IPF systems according to each pig's daily feed intake, body weight (BW), and daily gain patterns [2]. Other amino acid (AA) requirements are established according to a recognized ideal AA profile using Lys as the reference AA. It has been demonstrated that, in relation to conventional GPF systems, precision feeding can reduce Lys intake by 26%, nitrogen excretion by 30%, and feeding costs by 10% [6,7]. 
The ability of the proposed method to estimate individual pig Lys requirement has been validated [8,9], but no validation of the method's estimation of other AA requirements, which today are estimated using a conventional ideal AA profile, has been performed. It has been recently observed, however, that pigs fed daily tailored diets might have higher methionine (Met):Lys ratios than pigs in GPF systems do [10]. Threonine (Thr) is often the second-limiting AA in conventional commercial diets, and feeding pigs AA deficient diets limit protein deposition (PD) and affects tissue protein composition [11,12]. Thus, Thr deficiency might lead to the synthesis of proteins with less Thr and a reduction of the Thr concentration in the overall body muscles [13]. Because IPF significantly reduces Lys intake, we hypothesized that the ideal AA profile may differ between IPF and GPF systems and that using the current AA recommendation may limit PD and change plasma and muscle AA concentrations in precision-fed pigs. The aim of this study was to evaluate metabolic changes due to feeding pigs with increasing levels of dietary Thr (70%, 85%, 100%, 115%, or 130% of the estimated ideal standardized ileal digestible [SID] Thr:Lys ratio of 0.65 [14]) on animal growth performance and on plasma and body protein AA concentrations in IPF and GPF systems. Animals, housing, and management Animals were cared for in accordance with a recommended code of practice [15] and the guidelines of the Canadian Council on Animal Care [16], and the animal trial was approved (Case No. 478) by the Ethical and Animal Welfare Committee of Agriculture and Agri-Food Canada's Sherbrooke Research and Development Centre (Sherbrooke, QC, Canada). A total of 110 healthy barrow pigs of the same highperformance genotype (Fertilis 25 × G-Performer 8.0; Geneticporc Inc., St-Gilbert, QC, Canada) were shipped to the swine complex at the Sherbrooke Research and Development Centre. The pigs were allocated to one of two 76-m 2 pens with concrete slat floors in the same mechanically ventilated room. The pigs each had an electronic chip placed in their ear to give them access to the feeders. Between their arrival and the start of the trial, the pigs were fed commercial growing diets. Water was provided with low-pressure nipple drinkers, and feed was provided individually ad libitum throughout the adaptation period (14 d) and experimental period (21 d) with 10 feeding stations (Automatic and Intelligent Precision Feeder; University of Lleida, Lleida, Spain). The temperature of the room was decreased gradually from 22°C when the piglets arrived to 18°C at the end of the experimental period to ensure thermoneutral conditions. The photoperiod consisted of 12 h of light and 12 h of darkness. The pigs' health status was checked daily. This check included daily observations of DFI records and monitoring for the presence of diarrhea and for other signs of health disorders. Body temperature was measured when distress conditions were observed, and pigs were treated in accordance with veterinarian recommendations when necessary. The pigs (25 ± 0.80 kg BW) were assigned randomly to the treatments in two complete blocks according to a 2 × 5 factorial arrangement, with the main factors being (1) two feeding systems (IPF or GPF), and (2) five Thr levels (70%, 85%, 100%, 115%, or 130% of the estimated ideal Thr:Lys ratio of 0.65 [14]). The experimental unit was the individual pig, and each treatment included 11 replicates. 
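As a quick worked example of how the five dietary treatments relate to the reference ratio (a sketch only; the variable names are ours, while the 0.65 ideal SID Thr:Lys ratio and the five levels come from the design described above):

```python
ideal_thr_lys = 0.65                      # ideal SID Thr:Lys ratio from [14]
levels = [0.70, 0.85, 1.00, 1.15, 1.30]   # treatments as fractions of the ideal ratio

for level in levels:
    target = ideal_thr_lys * level
    print(f"{level:.0%} treatment -> target SID Thr:Lys ratio = {target:.4f}")
# 70% -> 0.4550, 85% -> 0.5525, 100% -> 0.6500, 115% -> 0.7475, 130% -> 0.8450
```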
Each of the two complete blocks included 55 pigs, and the blocks started the experimental period one week apart. Pigs within each block were housed in the same pen. Individual transponder codes allowed the feeders to identify individual pigs, record feed intake data and the feeds to be provided to each pig according to the assigned feeding system and Thr level. In each single-space feeder, precision Archimedes screw conveyors delivered and simultaneously blended volumetric amounts of up to four feeds stored in independent containers located in the top of the feeder [17]. The feeder identified each pig when the feed demand was made, and the feeder read the specific treatment formula for that pig, mixed the feed in accordance with the assigned treatment, and dropped the feeds into the feeder tray. A time lag between services was set in accordance with the pig's BW and feed intake. All the feeders were designed to provide meals to all the animals, regardless of the treatment. Because of this feature, all the animals could be housed in the same pen [6,18] and each animal could be considered an experimental unit. Feeding programs, nutritional requirements, and diets Data from high-performance pigs from previous trials completed at the Sherbrooke Research and Development Centre were used as the reference population for calculating the pigs' Lys requirement to formulate the feeds (named A1, A2, B1, and B2) ( Table 1). The formulation of these feeds was performed using each ingredient's SID AA content obtained by determining the product of its tabulated total AA content [1] and the SID value in the INRA-AFZ tables [19]. The four experimental feeds were formulated to contain similar net energy concentrations and AA profiles for AA other than Thr. The AA were provided 10% above the ideal AA:Lys ratios: 30% for Met [13], 60% for Met + cysteine [13], 65% for Thr [14], 22% for tryptophan [20], 70% for valine (Val) [21], 51% for isoleucine (Iso) [22], 100% for leucine (Leu) and 32% for histidine (His) [22] and 42% for arginine (Arg) [1], whereas Lys was provided 10% under the estimated requirements [2]. Feeds A1 and A2 were formulated to satisfy the requirements for minerals and AAs other than Thr of the most demanding pigs in the reference population, and feeds B1 and B2 were formulated to satisfy the requirements for minerals and AAs other than Thr of the less demanding pigs in the reference population [2,6,7]. However, feeds A1 and B1 were formulated to provide 130% of the optimal Thr:Lys level, and feeds A2 and B2 were formulated to provide 70% of the optimal Thr:Lys level. Dietary phosphorus and calcium requirements were estimated according to the National Research Council [1]. Microbial phytase was not added, but the calcium:digestible phosphorus ratio was kept constant. Dietary treatments for the IPF and GPF pigs were obtained by blending the four experimental feeds in the required proportions. For the IPF pigs, the required daily concentration of SID Lys was estimated with a mathematical model using individual feed intake and weekly BW information [2]. With this historical information, the empirical component of the model estimated, for each pig, the expected BW, DFI, and weight gain for the starting day on which the pig would receive the calculated feed blend. 
Thereafter, the mechanistic component of the model used these three estimated variables to calculate, by means of a factorial method, the optimal concentration of Lys that should be offered that day to each pig in the herd to meet its requirements. This method of estimating nutrient requirements was described previously [2,6] and validated in three earlier studies [7][8][9]. The use of this model allowed each pig in the IPF system to receive, each day, a diet tailored to its Lys requirement. In the GPF system, Lys requirement was estimated by assuming that the population requirements were those of the 80 th -percentile pig in the group at the beginning (average of 3 d) of the phase [10,23] and maintained constant for all pigs through out the feeding phase. However, SID Lys supplies were decreased by 10% to ensure that Lys was the second-limiting AA [24], whereas the other AAs except Thr were provided 10% above the estimated levels. Threonine was provided at the assigned treatment level. The AA ratios were calculated in the same way in both feeding systems and kept constant throughout the experiment. Experimental measurements Performance The pigs were weighed at arrival and three times during the adaptation period to calibrate the model before the experimental protocol was applied. Animal performance was evaluated through average daily feed intake (ADFI) (kg/d), average daily gain (ADG) (kg/d), gain:feed ratio (G:F) (kg/kg), SID Lys intake (g/d), SID Thr intake (g/d), total body PD (g/d), PD in daily gain (%), and total body lipid deposition (LipD) (g/d). Total body fat and lean content were measured by dual X-ray absorptiometry (DXA) on d 1 and 21 of the trial with a densitometer device (GE Lunar Prodigy Advance, Madison, WI, USA). The pigs were scanned in the prone position using the total-body scanning mode of the manufacturer-provided software (Lunar enCORE Software, version 8.10.027). Anesthesia was induced with sevoflurane (7%) and maintained with isoflurane (5%) during the scans. Blood sampling Blood samples were taken on d 21 after 10 h of fasting. Samples from the jugular vein were collected in Vacutainer tubes with EDTA anticoagulant for enzymatic and biochemical analyses or with sodium heparin for the AA analysis. The time between sampling and centrifugation did not exceed 1 h, during which the samples were kept on ice. The blood samples were centrifuged for 15 min at 1000×g at 4°C. For AA analysis, 20 μL of standard enriched AAs was added to the samples within 30 min after centrifugation. All plasma samples were kept at − 20°C during the sampling day and stored at − 80°C at the end of the day. Organ and muscle sampling Five pigs per treatment were randomly chosen and slaughtered in a commercial slaughterhouse between d 22 and 28, and the treatments were maintained during this period. Each pig carcass was scalded and scraped, and the eviscerated carcass was split longitudinally, with the head and feet kept on it. The right side of the carcass was dissected, and the head and feet were discarded. The longissimus muscle was separated from the loin cut. The liver and the small intestine (washed and free of mesentery) were collected. All samples were sealed in separate vacuum plastic bags and stored for a maximum of 2 months at − 20°C until sampling. The liver and small intestinal tissue were ground twice and sampled. The pool of dissected muscles was cut into cubes and mixed for grinding. The longissimus dorsi and a pool of all the other muscles were ground four times and sampled. 
All the samples were freeze-dried and stored at − 80°C until analysis. Chemical and biochemical analyses Two replicates of each sample were analyzed according to Association of Official Analytical Chemists methods [25]. The samples were heated at 90°C for 35 min and transferred to vials for gas chromatography (Agilent 5182-0714 vials; Agilent Technologies, Saint-Laurent, QC, Canada). All AA samples were measured by gas chromatography-mass spectrometry (Agilent Technologies 7890B gas chromatograph system coupled to an Agilent Technologies 5977A mass selective detector). The immunoglobulin G (IgG) content was determined by means of enzyme-linked immunosorbent assay (ELISA) kits (Pig IgG ELISA Quantitation Set, ref. E100-104; Bethyl Laboratories, Inc., Montgomery, TX, USA). The biochemical and enzymatic analyses of plasma were performed with an automatic analyzer (Beckman DxC 600; Beckman Coulter, Mississauga, ON, Canada) by a dedicated external laboratory (Faculté de médecine vétérinaire, Université de Montréal, Saint-Hyacinthe, QC, Canada). Calculations and statistical analysis Total pig weight gain was calculated as the difference between the weight measured at the beginning of the trial and the weight measured at the end of the trial. The SID Lys, SID Thr, and CP intakes were obtained for each pig by tallying the daily amount of nutrients provided by each of the blended feeds that were served. Lysine retention and Thr retention were estimated by assuming that 6.9% of body protein is Lys [27] and 3.7% of body protein is Thr [28]. The availability of these AAs for protein synthesis was estimated by removing from the SID pool the amounts used for maintenance. Lysine and Thr maintenance requirements were estimated by adding together the basal endogenous losses, the losses related to desquamation in the digestive tract, and the losses related to the basal renewal of body proteins [29]. Lysine efficiency of utilization and Thr efficiency of utilization were calculated by dividing the corresponding retained amount by the available AA intake. The DXA body lean and fat masses were converted to their protein and lipid chemical equivalents [30]. Protein deposition in gain (%) was calculated by dividing the PD by the ADG. Nitrogen excretion values were obtained by subtracting the respective nutrient retention values from the intake values. Performance and carcass data were analyzed as a 2 × 5 factorial arrangement using a mixed model in SAS (version 9.4; SAS Institute Inc., Cary, NC, USA). The main effects were the feeding system, the Thr level, and their interaction, and the block was considered a random effect. The assumption of normal distribution of variables was checked using the Cramer-von Mises test within the UNIVARIATE procedure of SAS. The uncertainty in the estimate of the means of the data was expressed as the maximum standard error (MSE), and a P-value less than 0.05 was considered to be statistically significant, whereas a P-value less than 0.10 was considered a tendency. Differences between individual treatments were compared with polynomial contrasts. The optimal Thr:Lys ratio was estimated for each feeding program using the NLIN procedure of SAS. Results All but six of the pigs consumed feed and gained weight in accordance with the expected performance of the genetic line. Three of those six pigs had low feed intake, low ADG, and recurrent fever during the adaptation period. 
Three other pigs were removed from the trial, one because of a severe inflammatory foot problem and two because of respiratory problems unrelated to the trial. All those pigs were treated for their specific problem and isolated, and their data were not considered in the analysis. Thus, the performance data presented in this paper come from 10 pigs for the IPF treatments with 70%, 115%, and 130% of the ideal Thr:Lys ratio (0.65) and the GPF treatment with 85% of that ratio, 8 pigs for the IPF treatment with 85% of that ratio, and 11 pigs for all the other treatments. Growth performance, nutrient intake, and nitrogen balance During the trial, ADFI, SID Lys intake, CP intake, PD in gain, LipD, final BW, and nitrogen excretion were not affected by Thr levels or feeding system ( Table 2). Average daily gain, G:F, SID Thr intake, Lys efficiency of utilization, PD, and nitrogen retention increased linearly (P < 0.05) and Thr efficiency of utilization decreased linearly (P < 0.05) with the level of dietary Thr. However, growth performance, nutrient intake and N balance were not affected by feeding system. No interactions between Thr level and feeding system were observed. Estimation of optimal Thr:Lys ratio Protein deposition, ADG, and G:F were the criterion responses used to estimate the optimal levels of dietary Thr in pigs fed in the IPF and GPF systems (Table 3). These variable responses were preferred because they are directly affected by the AA supply. Increasing the Thr:Lys ratio in the IPF pigs increased the response variables under study, which prevented identification of the optimal ratio. For the pigs raised in the GPF system, however, the breakpoint of the linear-plateau model was observed at Thr:Lys ratios of 60.2%, 64.9%, and 68.6% for PD, ADG, and G:F, respectively, whereas the breakpoint of the quadratic-plateau model was observed at Thr:Lys ratios of 68.2%, 71.1%, and 70.6% (Fig. 1). Thus, in relation to the optimal Thr:Lys ratios obtained with the linear-plateau models for maximum PD, the ideal ratio increased by 8% when ADG was optimized and by 15% when G:F was optimized. These increases on requirements were of 4% when the quadratic-plateau were compared to linear-plateau model in both maximal ADG and G:F. A large variation was found within treatment, and in IPF only 24% (R 2 = 0.24) and in GPF only 20% (R 2 = 0.20) of the variability in the data is explained by the AA ratio. Biochemical and enzymatic responses in plasma Plasma creatinine (μmol/L), IgG (μg/mL), and creatine kinase (CK) (U/L) were not affected by feeding system or Thr level (P > 0.10) ( Table 4). Plasma albumin (g/L) increased (P < 0.05) linearly within IPF and it was not affected in the GPF pigs. Plasma total protein (g/L) increased linearly with the increase in Thr levels (P < 0.05) but were not affected by feeding system. C-reactive protein (CRP) (μg/mL) increased (P < 0.05) in a linear manner in the IPF pigs and in a quadratic manner in the GPF pigs. Alanine aminotransferase (ALT) (U/L) increased (P < 0.05) linearly in the IPF pigs and showed a cubic increase in the GPF pigs. Aspartate aminotransferase (AST) (U/L) tended (P < 0.10) to increase linearly as dietary Thr increased and tended (P < 0.10) to be 8% higher in the IPF pigs than in the GPF pigs. Lactic acid dehydrogenase (LDH) (U/L) tended to be 9% higher in the IPF pigs than in the GPF pigs. Urea (μmol/L) decreased (P < 0.05) in a quadratic manner in both feeding systems. 
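The optimal ratios above were estimated with the NLIN procedure of SAS, which is not reproduced here. The sketch below illustrates, under stated assumptions, what a linear-plateau (broken-line) fit of protein deposition against the Thr:Lys ratio looks like in Python; the data points are invented placeholders chosen only to sit near the roughly 150 g/d plateau reported for the GPF pigs, and the function and variable names are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, breakpoint, plateau, slope):
    # Below the breakpoint the response rises linearly; above it the response is flat.
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

# Placeholder data: SID Thr:Lys ratio vs. protein deposition (g/d); not the trial data.
thr_lys = np.array([0.455, 0.5525, 0.650, 0.7475, 0.845])
pd_g_d  = np.array([128.0, 141.0, 150.0, 151.0, 149.0])

(bp, plateau, slope), _ = curve_fit(linear_plateau, thr_lys, pd_g_d, p0=[0.65, 150.0, 100.0])
print(f"estimated breakpoint Thr:Lys = {bp:.3f}, plateau PD = {plateau:.1f} g/d")
```

A quadratic-plateau variant replaces the linear segment with a parabola that meets the plateau with zero slope, which is why it tends to place the breakpoint at somewhat higher ratios, consistent with the estimates reported above.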
Free AAs in plasma The dietary essential AAs (EAAs) His, Lys, and Thr (Table 5) were affected in a cubic, quadratic, and linear manner, respectively, by dietary Thr level (P < 0.05) but were not affected by feeding system. Methionine was not affected by dietary Thr level but was 11% higher in the IPF pigs than in the GPF pigs (P < 0.05). The other EAAs were not affected by dietary Thr level or feeding system. The dietary non-essential AAs (NEAAs) glutamine (Glu) tended (P < 0.10) to increase in a quadratic manner as a function of dietary Thr level, whereas the NEAAs glycine (Gly), proline (Pro), and homocysteine tended (P < 0.10) to increase linearly with the increase in dietary Thr level. Serine (Ser) increased but tyrosine (Tyr) decreased linearly with the increase in dietary Thr level (P < 0.05). Serine was 7% higher in the IPF pigs than in the GPF pigs (P < 0.05). The NEAAs Glu, glutamate, Gly, homocysteine, Pro, Ser, and Tyr increased in a linear manner as dietary Thr level increased, but only Ser was affected by the feeding system, being 4% lower in the IPF pigs than in the GPF pigs. Table 2 Initial and final animal body composition, growth performance, and nutrient efficiency of growing barrows (25 to 42 kg body weight) fed different levels of threonine (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine ratio of 0.65) in an individual precision-feeding (IPF) system or a group phase-feeding (GPF) system Thr, level of threonine; FS, feeding system; L × Thr, interaction between level of threonine and feeding system; † linear effect for Thr; ‡ tendency for a linear effect for Thr Liver AAs and chemical composition In this growth trial (Table 6), Thr (tendency; P < 0.10) and Ser (P < 0.05) concentrations (g AA/100 g CP) in the liver were 1 and 2% higher, respectively, in the IPF pigs than in the GPF pigs. The other EAAs and NEAAs, DM, CP, fat, and ash were not affected by Thr level or feeding system or their interaction during the growing phase. Intestine AAs and chemical composition Asparagine (Asp) and Ser showed a feeding system × Thr level interaction with no effect on intestine AA composition in the IPF pigs and a cubic effect tendency (P < 0.10) in the GPF pigs (Table 7). Methionine tended (P < 0.10) to be 10% lower in the small intestinal tissue in the IPF pigs in comparison with the GPF pigs. The other EAAs and NEAAs, DM, CP, fat, and ash were not affected by Thr level or feeding system or their interaction during the growing phase. Longissimus dorsi AAs and chemical composition. Histidine decreased linearly in the longissimus dorsi as dietary Thr level increased (P < 0.05), independent of feeding system (Table 8). Isoleucine (tendency; P < 0.10) and Leu decreased P < 0.05 linearly in the IPF pigs and in a quadratic manner in the GPF pigs. Lysine (P < 0.10), glutamate (P < 0.10), Thr (P < 0.05), and alanine (Ala) (P < 0.05) increased in a quadratic manner in the IPF pigs as dietary Thr level increased, but those AA were not affected in the GPF pigs. Cysteine tended to decrease (P < 0.10) linearly in the IPF pigs, whereas it tended to increase linearly in the GPF pigs. Glycine tended to be 1.4% higher (P < 0.10) in the GPF pigs than in the IPF pigs. Collagen in the longissimus dorsi decreased (P < 0.05) with the increase in dietary Thr level, independent of feeding system. The other EAAs and NEAAs, DM, CP, fat, and ash were not affected by Thr level or feeding system or their interaction during the growing phase. 
Table 3 Non-linear model parameters between the independent response variables (protein deposition, average daily gain, and gain:feed ratio) and the threonine:lysine ratio in an individual precision-feeding (IPF) system and a group phase-feeding (GPF) system estimated with a linear-plateau model and a quadratic-plateau model Pool of carcass muscle AAs and chemical composition In the pool from the right half of the carcass, the EAAs Arg, Iso, Leu, phenylalanine, Thr, and Val and the NEAAs Ser and Tyr showed an interaction between dietary Thr level and feeding system (P < 0.05), decreasing in a cubic manner in the IPF pigs and increasing in a cubic manner in the GPF pigs ( Table 9). The EAAs His and Lys and the NEAA Asp also showed an interaction between dietary Thr level and feeding system (P < 0.05), with a cubic decrease in concentration in the IPF pigs and a tendency (P < 0.10) toward a cubic increase in the GPF pigs. The NEAAs Ala and Pro were affected by an interaction between dietary Thr level and feeding system (P < 0.05), with the concentration decreasing in a cubic manner in the IPF pigs and increasing in a quadratic manner in the GPF pigs. Proline (P < 0.05), phenylalanine and Val (P < 0.05) and Leu (P < 0.10), were 5%, 4%, 3%, respectively, higher in the GPF pigs than the IPF pigs. Threonine, Lys, Iso, Ala, Asp, Ser and Tyr were 4% (P < 0.10) higher in the GPF pigs than the IPF pigs. Cysteine (P < 0.05) and Gly (P < 0.10) were 6% and 4% higher, respectively, in the GPF pigs than the IPF pigs, and these AAs were not affected by dietary Thr level. Glutamate, DM, ash, fat, and collagen were not affected by Thr level or feeding system or their interaction during the growing phase. However, CP tended (P < 0.10) to be 1.5% higher in the GPF pigs than in the IPF pigs. Performance is affected by Thr level Threonine levels did not affect ADFI during this growing phase, a result that is consistent with the literature [28,31,32]. The improved G:F ratio is due to the linear increase in ADG without changes in the ADFI. Thr, level of threonine; FS, feeding system; L × Thr, interaction between level of threonine and feeding system; c linear effect for Thr; d quadratic effect for Thr; e cubic effect for Thr Normally, pigs fed in conventional group-feeding systems receive on average during the overall growing and finishing period 26% more Lys than pigs fed daily tailored diets do [7]. However, SID Lys intake was similar in this trial between the GPF and the IPF pigs. This similarity was due to the fact that dietary SID Lys concentration was decreased by 10% in the GPF pigs to ensure that Lys was the second-limiting AA, whereas each day, the IPF pigs received the estimated amount of SID Lys required for maintenance and growth. As well, SID Lys requirement for GPF was precisely adjusted knowing individual requirements, making this concentration (SID Lys 0.88%), similar to the average SID Lys provided to IPF pigs (SID Lys of 0.85%). It was this artefact that allowed us to compare both programs in equal basis avoiding Lys to drive the protein response. Still, SID Thr intake increased linearly, as expected, due to the increase in Thr concentration in the feeds. During this growth trial, the linear increase in dietary Thr concentration allowed PD to increase linearly in both feeding systems, in line with the literature [28]. However, PD was not affected by feeding system, whereas compared with the 100% level of SID Thr intake, 30% Thr restriction resulted in only 12% decrease of PD. 
Previously, Andretta et al. [7] showed that moving from conventional to precision feeding systems does not affect growing pigs' PD or performance. The percentage of protein or lipids in daily gain during the growing phase was not affected by dietary treatments even at the lower levels of PD. Cloutier et al. [7] observed a tendency toward a decrease in the percentage of protein in daily gain but no effect on LipD in pigs receiving a diet 30% deficient in SID Lys. A higher backfat thickness and lower lean percentage resulted from feeding pigs with Lys-deficient diets [33]. It is, however, expected that when dietary energy levels are sufficient to promote maximum PD but an essential AA is limiting, PD would be reduced and the energy that is not used for protein synthesis would be stored in the form of lipids [34]. Table 5 Liver amino acid concentrations of growing barrows (25 to 42 kg body weight) fed different levels of threonine (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine ratio of 0.65) in an individual precision-feeding (IPF) system or a group phase-feeding (GPF) system. Thr, level of threonine; FS, feeding system; Thr × FS, interaction between level of threonine and feeding system. Still, growing pigs have a high PD potential, but there is also great variation between animals. This large variation with respect to the percentage of protein in daily gain may have prevented the increase in LipD that is expected when PD is limited with a similar energy intake. Estimated Thr and Lys efficiencies of utilization increased to nearly 100% at lower AA intake levels, with the most efficient animals in terms of AA utilization generating values over 100% of AA retention. Threonine efficiency values of 91% [35] and 86% [28] and Lys efficiency values of 107% and 101% [36] are found in the literature when pigs are fed AA-deficient diets. Lysine efficiency seems to increase with the level of dietary Lys deficiency, indicating that pigs are more efficient in utilizing Lys when they are fed below requirements [37]. The Lys and Thr efficiency values found in this study are higher than those found in the literature, which are around 72% for Lys and 62% for Thr [29]. The difference between the values observed in this trial and those in the literature may be the result of metabolic or experimental factors [38]. Thus, the increase in Lys and Thr efficiency values when pigs are fed Lys- and Thr-deficient diets may result in part from the difficulties of estimating maintenance requirements [28], which may be different from one animal to another because of each individual animal's metabolism. Furthermore, a constant efficiency value is generally proposed because body protein AA concentration is assumed to be constant and independent of the pig's age, nutrient intake, and lean and fat growth rates [28]. Table 6 Intestine amino acid concentrations of growing barrows (25 to 42 kg body weight) fed different levels of threonine (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine ratio of 0.65) in an individual precision-feeding (IPF) system or a group phase-feeding (GPF) system. Thr, level of threonine; FS, feeding system; Thr × FS, interaction between level of threonine and feeding system; a cubic effect within GPF. Therefore, the high AA efficiency of utilization might result from the fact that these efficiency values were obtained through a back-calculation using the observed PD in the pigs but assuming a constant Lys concentration of 6.9% of body protein. 
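To make this back-calculation concrete, the following sketch applies the assumptions stated in the Methods (retained Lys = PD × 6.9%, efficiency = retained Lys divided by available SID Lys after maintenance); the intake and maintenance figures are illustrative placeholders, not measurements from the trial:

```python
# Illustrative back-calculation of Lys efficiency of utilization.
protein_deposition = 150.0   # g/d observed PD (placeholder value)
lys_fraction       = 0.069   # assumption from [27]: 6.9% of body protein is Lys
sid_lys_intake     = 16.0    # g/d SID Lys intake (placeholder value)
lys_maintenance    = 1.0     # g/d maintenance losses (placeholder value)

lys_retained  = protein_deposition * lys_fraction   # 10.35 g/d
lys_available = sid_lys_intake - lys_maintenance    # 15.0 g/d
efficiency    = lys_retained / lys_available        # about 0.69
print(f"Lys efficiency of utilization = {efficiency:.2f}")
```

Because the 6.9% figure is held constant, any real change in the Lys content of deposited protein shows up instead as an apparently higher or lower efficiency, which is the caveat raised in the discussion above.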
This constant AA concentration in protein seems to be an invalid assumption, given that protein and energy levels [39], age [11], sulfur AA deficiency [12,40], Thr deficiency [13] or excess, and genetics [41] can change body AA composition. The most metabolically efficient pigs may use several mechanisms, such as decreased protein degradation, increased AA absorption in the small intestinal tissue, and increased absorption of AAs from plasma proteins, to cope with lower AA intake, thereby contributing to the higher AA efficiency. Amino acid ratios cannot be used for precision feeding In this study, the estimated ideal Thr:Lys ratio was 65% for the GPF system, but the ideal ratio for pigs fed daily tailored diets was not clear, due the linear response to increasing Thr:Lys. Ratios based on the ideal protein profile have been assumed to be a practical way to formulate diets for non-ruminants, decreasing the use of CP [24,42,43]. There was concern, however, about whether these constant AA ratios could also be applied for IPF. In this feeding system, the required concentration of SID Lys is estimated individually for each pig using individual DFI and BW information. The other EAAs and the pool of NEAAs are supplied in this method using conventional ideal AA ratios. The proportional decrease in Thr as Lys Table 7 Longissimus dorsi amino acid concentrations of growing barrows (25 to 42 kg body weight) fed different levels of threonine (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine ratio of 0.65) in an individual precision-feeding (IPF) system or a group phase-feeding (GPF) system Thr, level of threonine; FS, feeding system; Thr × FS, interaction between level of threonine and feeding system; † linear effect for Thr; a linear effect within IPF; b linear effect within GPF; c quadratic effect within IPF; d quadratic effect within GPF; requirement decreased seemed to limit the performance of the IPF system when a Thr:Lys ratio of 65% was used. Our findings point to the conclusion that for IPF, independent estimates of Thr and possibly other AAs requirements, are required. Establishing recommendations for AA requirements can be hampered by the differences between individuals and the availability of dietary nutrients. More important than determining an acceptable ratio between AAs is understanding the factors that are at the origin of the differences between animals. In this trial, we observed a large variation within treatments in both feeding systems. This within-treatment variation might be associated with between-animal variation, as well as with experimental and metabolic factors. In situations where the AA intake is not sufficient to support maximum growth, the growth rate is reduced and the AA composition of muscles is changed [11]. It is possible in such situations that the AA metabolism is affected and that this effect is modulated by the composition and amount of AAs supplied in the diet. In other words, the animal does not have a requirement but rather a response to AA intake, thereby generating variance. Metabolism is affected by feeding system and Thr levels Normally, AST, ALT, CK, and creatinine are the recommended variables used for identifying liver and kidney damage or failure. 
In this study, these biochemical variables were within the expected ranges for growing pigs [44], and therefore, the plasma enzymatic changes in Table 8 Carcass muscle amino acid concentrations (without longissimus dorsi) of growing barrows (25 to 42 kg body weight) fed different levels of threonine (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine ratio of 0.65) in an individual precisionfeeding (IPF) system or a group phase-feeding (GPF) system Thr, level of threonine; FS, feeding system; Thr × FS, interaction between level of threonine and feeding system; † linear effect for Thr; a cubic effect within IPF; b cubic effect within GPF; c tendency for a cubic effect within GPF; d quadratic effect within GPF AST, ALT, and CK observed in this trial are associated more likely with changes in total muscle tissue mass and metabolism than with liver damage. The AST in plasma was 8% higher in the IPF pigs than in the GPF pigs, pointing to possible muscle breakdown. With the lowest levels of Thr intake in the IPF system (i.e., 30% below the requirement), ALT activity and urea in plasma were increased, suggesting an increase in the deamination of Ala and other AAs and in urea synthesis. Meanwhile, in the GPF system, ALT in plasma increased in a cubic manner and urea decreased in a quadratic manner with the increase in dietary Thr level. Thus, increased ALT with linear plasma urea increase within IPF at lower levels of dietary Thr can indicate that pigs restrictive treatments had lower protein synthesis or higher AA catabolism. C-reactive protein was within normal values for healthy pigs [44]. Nonetheless, Thr in plasma increased with the increase in Thr intake, reflecting a linear increase in CRP in the IPF pigs and a quadratic increase in CRP in the GPF pigs. C-reactive protein is a major acute-phase protein in pigs exposed to health challenges [45]. But more importantly, this protein is composed mainly of Ser (9.62%), Gly (7.48%) and Thr (6.4%) [46]. Because Thr and its products are important components of CRP, it is possible that more CRP was synthesized at higher levels of Thr intake and that, at lower levels of Thr intake, CRP was degraded to provide Thr, serine, and Gly for protein synthesis. It is therefore likely that the increases in plasma Ser, Gly, and Thr favoured the synthesis of CRP. The low levels of albumin in plasma observed in the pigs in the Thr-deprived dietary treatments may point to albumin synthesis reduction. The rate of albumin synthesis is reduced in cases of malnutrition, malabsorption, or maldigestion [47], what could result from Thr deficient diets. Plasma albumin accounts for 0.5% of total body proteins, as it is the major blood protein and an important protein carrier in plasma [48]. The decrease in albumin concentration in plasma could have contributed to the reduction of the supply of AAs for the natural turnover of protein in peripheral tissues [45]. In general, we observed a linear increase in plasma proteins (albumin, total protein, and CRP) with the plasmatic increase of Thr. Albumin prevents irreversible oxidative losses by capturing excess AAs and transporting them to peripheral tissues, in order to sustain local protein synthesis [49]. When the concentration of AAs in tissue cells decreases, plasma proteins are transported into tissue cells to provide AAs and ensure cellular equilibrium [50]. 
Therefore, when Thr-deficient diets are provided to pigs, low plasma protein concentrations may occur because these proteins are used to maintain protein synthesis in peripheral tissues; in addition, Thr deficiency might decrease the rate of plasma protein synthesis. Both mechanisms could be used by the metabolism to increase the efficiency with which it uses the limiting AA, as has been observed in this and another trial [52] where pigs were fed at lower levels of Thr. Higher concentrations of plasma Lys and His were found in the pigs fed at low levels of dietary Thr in both feeding systems. When one AA is limiting in the diet (Thr in our case), some essential AAs such as Lys [13] and His [11] will increase in plasma, probably due to their low utilization for net PD [52]. The linear increase in the plasma concentrations of Gly and Ser in both feeding systems might be due to the linear increase of Thr in plasma. Table 9 Blood plasma biochemical parameters of growing barrows (25 to 42 kg body weight) fed different levels of threonine (70%, 85%, 100%, 115%, and 130% of the ideal threonine:lysine ratio of 0.65) in an individual precision-feeding (IPF) system or a group phase-feeding (GPF) system. L, level of threonine; FS, feeding system; L × FS, interaction between level of threonine and feeding system; † linear effect for L; ‡ quadratic effect for L; a linear effect within IPF; b cubic effect within GPF; c quadratic effect within GPF. Threonine in pigs is oxidized in the liver and pancreas into Gly and Ser [53]. Plasma Met and Ser levels were 11% and 7% higher, respectively, in the IPF pigs than in the GPF pigs. This difference might suggest higher oxidation of Gly to Ser in the IPF system, even though the rate of conversion of Gly to Ser seems limited by intestinal capacity in young pigs [54], or higher oxidation of Glu to Ser. The higher plasma Met is likely due to lower Met retention in the small intestinal tissue of the IPF pigs, which was 10% lower than in the GPF pigs. Splanchnic tissue tends to be preserved during AA restriction Amino acid concentration and protein content in the small intestinal tissue and liver were not affected by dietary Thr levels, with the exception of Ser and a trend for Thr in the liver, which were 2% and 1% higher, respectively, in the IPF pigs than in the GPF pigs. Other studies in which animals were fed in conventional group-feeding systems with diets deficient in either Thr [13] or sulfur AAs [11,12] showed low or no impact on AA concentration in the small intestinal tissue. This lack of effect of dietary AA deficiency on small intestinal tissue AA concentration can be attributed to the fact that most of the AAs retained in the proximal part of the small intestine come from the diet [55] and that absorbed dietary AAs are used first by the splanchnic tissues [12]. We can speculate that splanchnic tissues are protected from AA deficiency because of the dietary AA pathway, which reaches the liver via the portal vein after crossing the intestinal walls. Indeed, the liver and intestine are the main sites for AA metabolism in mammals. The metabolism seems to protect the integrity of these organs before other tissues, because the liver and intestine receive the absorbed AAs before others such as the skeletal tissues, thus resulting in smaller variation in splanchnic tissue AA composition. Hamard et al. [13] found higher Thr retention in the liver and colon of Thr-deficient pigs. 
It is plausible that the IPF pigs that received decreasing concentrations of AAs throughout the growing period developed additional metabolic mechanisms to cope with Thr deficiency, such as higher Thr retention. The lower Thr concentration and the tendency toward lower Ser concentrations found in the pool of skeletal muscles of the IPF pigs may indicate that the organism tried to retain the limiting AA for protein synthesis in the liver in order to optimize protein synthesis at the moment of AA availability. The higher levels of AST in the IPF pigs in this and another study [51] may signal skeletal muscle protein breakdown for resynthesis during AA restriction, supporting the idea that pigs use diverse mechanisms to cope with AA deficiency. Muscle AA composition is affected differently by Thr restriction and feeding systems In the IPF and GPF systems evaluated in this study, muscle AA concentrations were affected by Thr restriction in an opposite cubic manner. Conde-Aguilera et al. [40] found that sulfur AA restriction had little effect on carcass AA concentration when the trial duration was 10 d, but longer periods of restriction affected muscle protein content and AA concentration [11]. In a 14-day experiment, Hamard et al. [13] found no effect on protein content and little effect on AA concentration in carcasses muscles, with the exception of Thr, which decreased in animals with a 30% Thr restriction. The 21-day length of the present trial, which is 7 d longer than previous studies [13,40], can explain the effects of Thr restriction on muscle AA concentration and protein content observed in our study. Protein concentration in the longissimus dorsi increased linearly in the IPF pigs and was not affected in the GPF pigs. In the longissimus dorsi, protein concentration was, on average, equal between the two systems, whereas protein concentration in the pool of carcass muscles tended to be 1.5% higher in the GPF pigs than in the IPF pigs. This lower protein concentration signals that the IPF pigs were more affected by Thr restriction than the GPF pigs were. Nutrient requirements in growing pigs change rapidly over the growing period, and animals fed in conventional GPF systems may have limiting supplies of AAs at the beginning of the phase but not necessarily throughout the entire period [23]. In an in silico study, Hauschild et al. [23] demonstrated that the optimal SID Lys concentration to be served in a 28-day feeding phase underfed part of the population during half of the period but overfed another part of the population. In contrast, the requirements of pigs fed daily tailored diets are adjusted every day, and AA concentration decreases over time [6,56]. Thus, the IPF pigs that were restricted in Thr on the first day of the trial were restricted for the entire experimental period. This might explain the high impact of AA restrictions on protein and AA concentrations in the IPF pigs in comparison with the GPF pigs. The difference in AA concentration among different tissues, mainly among different muscles, can be due to growth hormone action; in other words, a nutritional restriction can downregulate growth hormone mRNA receptors in the liver but also upregulate them in skeletal tissues [57]. More than feed intake and energy balance, other nutrients can regulate growth hormones. In the longissimus dorsi, for example, a Thr deficiency can upregulate growth hormone [58]. 
Growth hormone was not measured in this trial, but it can be speculated that the effect of Thr restriction on the AA and protein concentrations observed here was also mediated by hormonal changes. Collagen has been considered a source of NEAA reserves, and in situations where less Thr is available, proteins that are poorer in this AA, such as collagen, can be synthesized. Threonine restriction did not affect collagen synthesis in the GPF pigs in this trial, a result that agrees with those of previous studies [11,13] in which pigs were fed in conventional group-feeding systems. The results of the present trial seem to indicate, however, that dietary Thr can affect collagen formation in pigs in an IPF system. It is possible that the IPF pigs developed several mechanisms to cope with Thr deficiency, such as collagen synthesis along with increased AA retention in the liver, as well as the use of plasma proteins as sources of AAs for peripheral tissues during AA restriction.

Conclusions

The growth performance of growing pigs in this trial was affected by the Thr supply but not by the feeding systems under study. Dietary Thr deficiency decreased plasma proteins while increasing collagen in the longissimus dorsi. In addition, Thr deficiency impaired empty body composition by changing AA concentration and decreasing carcass protein in the IPF pigs in comparison with the GPF pigs. The level of dietary Thr estimated with non-linear models to optimize PD differed between the feeding systems, with the pigs in the IPF system having Thr:Lys ratio requirements at least 30% higher than those of the pigs in the conventional GPF system. The results of this trial show that AA requirements vary between individual pigs and cannot be accurately estimated from traditional AA:Lys ratio studies alone. Furthermore, the results indicate that pigs have a great capacity to deal with both excess and limited AA resources, by limiting PD and by changing AA composition differently among body tissues. Under limiting AA conditions, pigs modulate to some extent the utilization and retention of the limiting resource in order to maintain their normal physiological functions.
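The conclusions above refer to dietary Thr levels estimated with non-linear models relating Thr supply to PD. As a purely illustrative sketch of that kind of estimation — the response data, the linear-plateau functional form and the starting values below are assumptions made for illustration, not the models or data used in this trial — one could fit a plateau curve and read the requirement off the breakpoint:

```python
# Illustrative only: estimate an "optimal" Thr:Lys ratio as the breakpoint of a
# linear-plateau fit to hypothetical protein-deposition (PD) responses.
import numpy as np
from scipy.optimize import curve_fit

thr_lys = np.array([0.46, 0.55, 0.65, 0.75, 0.85])      # dietary Thr:Lys ratios (assumed)
pd_obs = np.array([118.0, 135.0, 148.0, 152.0, 151.0])  # PD responses, g/d (assumed)

def linear_plateau(x, plateau, slope, breakpoint):
    # Below the breakpoint PD rises linearly; above it PD stays at the plateau.
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

(plateau, slope, brk), _ = curve_fit(linear_plateau, thr_lys, pd_obs,
                                     p0=[150.0, 100.0, 0.7])
print(f"estimated Thr:Lys requirement (breakpoint): {brk:.2f}")
```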
2019-02-22T04:41:57.262Z
2019-02-22T00:00:00.000
{ "year": 2019, "sha1": "2671f97b153522bb544347900b09a2fa8fbe2a32", "oa_license": "CCBY", "oa_url": "https://jasbsci.biomedcentral.com/track/pdf/10.1186/s40104-019-0328-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2671f97b153522bb544347900b09a2fa8fbe2a32", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
56349531
pes2o/s2orc
v3-fos-license
Genetic parameter estimation for milk yield and fat and protein yield deviated from 3% of concentration in milk, in dairy herds of southern Chile

The Chilean milk processing industry is providing clear signals to dairy farmers regarding the significant importance of milk solids within raw milk payment schemes. A litre of milk containing 3% fat and 3% protein is defined as a basic litre, and each additional kilogram of fat and/or protein receives an extra payment. These are new production traits which need to be researched. In this study, 64,029 lactations from 24 herds of the Región de Los Ríos, in southern Chile, were used. Genetic parameters for milk yield and for fat and protein yield above 3% were estimated. A multiple-trait linear model, solved by BLUP methodology, was used. Variance components were estimated using the AIREMLF90 and VCE software. Estimated heritabilities for milk yield and for fat and protein yield above 3% were 0.16±0.004, 0.44±0.007 and 0.42±0.006, respectively. Estimated genetic correlations were –0.285 and –0.331 between milk yield and fat and protein yield above 3%, respectively. It is concluded that genetic variation exists for the two new traits proposed by the Chilean milk processing industry and that genetic selection for these traits should be based on their estimated breeding values. However, these two traits, plus milk yield, should be included in a selection index to account for the negative genetic correlations among them and to minimise selection against milk yield.

INTRODUCTION

Seventy percent of industrialised Chilean milk is produced in the southern area of the country. The most important dairy production regions are Araucanía, Los Lagos and Los Ríos (Lerdón et al 2010). In these regions, milk production is mainly based on grazing pasture, where the goal is to obtain high production of milk and milk solids per unit of land (González-Verdugo et al 2004). There are some similarities between the dairy pastoral systems of New Zealand and southern Chile. As in New Zealand, the economic success of the Chilean dairy farmer is increasingly dependent on milk solids output per unit of land rather than milk yield per cow (Delgadillo et al 2016). The main factors influencing the economic return of dairy farmers are: 1) litres (L) containing 3.0% weight/volume (w/v) of fat and 3.0% w/v of protein, known as a standardised litre of milk (LB); 2) kilograms (kg) of fat above 3.0% w/v in each L of LB; and 3) kg of protein above 3.0% w/v in each L of LB. The payment scheme of the industry for raw milk is highly influenced by these three factors (PROLESUR)¹. Considering the importance of milk solids concentration in the value of raw milk, and as part of its "Strategies for a Competitive Development of the Chilean Dairy Sector 2010-2020" document, the Milk Consortium S.A.² outlined a production blueprint to be followed by the dairy sector. One of the goals indicated by the Milk Consortium was to increase the national average milk fat and protein concentration from 7.1% in 2010 to 7.6% in 2020. Improvement of traits such as fat and protein yield can effectively be reached by genetic improvement programs, where selection

¹ Prolesur. 2014. Resumen de Pauta de Pago de leche para la compra que PROLESUR realiza a los actuales productores de leche de la X Región de Los Lagos, continental. Vigencia: Desde el 1 de septiembre de 2014. Accessed on September 14, 2016; http://www.prolesur.cl/component/docman/doc_download/249-pauta-de-precios-lechecruda-x-continental-sept-14
based on estimated breeding values would be an essential tool to choose the best breeding stock. Unfortunately, the Chilean dairy sector does not have in place a national or regional dairy genetic improvement program in which official estimation of additive genetic values can be done, and Chilean dairy farmers depend on breeding value estimations done abroad when purchasing breeding stock. The farmer is therefore left to choose among several artificial insemination sire catalogs from different populations and estimation methodologies. A second, but not permanent, way to increase milk solids is through improvement in management practices. Genetic improvement has the advantage of being cumulative; however, genetic parameters are needed to estimate reliable additive genetic merit of the population under a genetic improvement program. According to the current Chilean raw milk payment scheme, a kg of milk protein above 3.0% is valued at almost 5 times more than a kg of milk fat above 3.0%; hence, new traits defined as milk fat and protein deviated from 3.0% may be worthwhile to explore as selection criteria. Dairy sire frozen semen catalogs provide genetic information on milk fat and protein yield and percentage, but not on milk fat and protein departing from 3.0% concentration. Therefore, Chilean dairy farmers have no genetic information on traits which are continuously gaining importance in the final price received for the milk they produce. Genetic parameter estimates for milk and milk fat and protein, in different dairy cattle breeds, are abundant in the literature; however, the new traits that arise from the particular Chilean raw milk payment scheme, which blend solids yield and concentration, have not been researched. The final purpose of this work was to estimate (co)variance components for milk yield and fat and protein yield deviated from 3% of concentration in milk.

DATA

The raw data set had information on 64,029 complete lactation records taken on 24 farms of the Región de Los Ríos, southern Chile, from 1994 to 2014. Milk yields came already adjusted by the milk recording agency to 305 days of lactation. Fat and protein deviations were calculated by subtracting 3.0 from the original percentage of each and multiplying this result by the corresponding kilograms of raw milk; hence the new traits blend milk volume and concentration of milk solids. The third trait included in the analysis was milk yield measured in kilograms. As part of data editing, yield observations for any of the three traits below or above three standard deviations from the raw mean were considered outliers and deleted from the data set. Also, to minimise convergence difficulties in variance component estimation, herd, year and season interaction subclasses that had fewer than five observations were deleted from the data set. The final data set had 62,532 lactations on 23,505 cows; through the pedigree file it was possible to include in the analyses a total of 27,244 animals.
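A minimal sketch of the trait construction and outlier rule described above. The column names and the conversion of percentage points to a proportion (division by 100) are assumptions made for illustration; they are not quoted from the paper.

```python
import pandas as pd

def add_deviated_yields(df):
    # (fat% - 3.0) and (protein% - 3.0) are percentage-point deviations; dividing by
    # 100 converts them to proportions before multiplying by the kg of raw milk.
    df["fat_dev3_kg"] = (df["fat_pct"] - 3.0) / 100.0 * df["milk_kg"]
    df["prot_dev3_kg"] = (df["prot_pct"] - 3.0) / 100.0 * df["milk_kg"]
    return df

def drop_outliers(df, cols=("milk_kg", "fat_dev3_kg", "prot_dev3_kg"), k=3.0):
    # Delete records lying more than k standard deviations from the raw mean
    # in any of the three traits, as in the data-editing step described above.
    keep = pd.Series(True, index=df.index)
    for col in cols:
        mu, sd = df[col].mean(), df[col].std()
        keep &= df[col].between(mu - k * sd, mu + k * sd)
    return df[keep]

records = pd.DataFrame({"milk_kg": [7606.0, 5100.0, 12819.0],
                        "fat_pct": [3.7, 2.9, 4.1],
                        "prot_pct": [3.3, 3.0, 3.6]})
print(drop_outliers(add_deviated_yields(records)))
```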
STATISTICAL MODEL

The data were analysed using a multivariate three-trait animal model solved by best linear unbiased prediction (BLUP) (Henderson 1984). The yield variables (milk, fat and protein) were all modeled identically as a function of herd, year of calving, season of calving, parity number, animal and permanent environmental effects. The interaction among herd, year and season of production was treated as a random effect in the model, as were the animal genetic and permanent environmental effects. Parity number was treated as a fixed effect. The statistical model for each trait was:

y_ijkl = μ + L_i + H_j + a_ijk + p_ijk + e_ijkl

Where: y_ijkl is a phenotypic record on one of the three yield traits described above; μ is the population mean; L_i is the fixed effect of the i-th parity number; H_j is the random effect of the j-th herd-year-season interaction, ~N(0, Iσ²_h); a_ijk is the random additive genetic effect of the ijk-th animal for each of the three yield traits, ~N(0, Aσ²_a); p_ijk is the random permanent environmental effect of the ijk-th animal for each of the three yield traits, ~N(0, Iσ²_pe); and e_ijkl is the random residual error of the ijkl-th phenotypic record, ~N(0, Iσ²_e).

In the (co)variance structure of the model: A is the additive genetic relationship matrix among all animals found in the pedigree file; σ²_a,i is the additive genetic variance for the i-th yield trait; σ_a,i.j is the additive genetic covariance between the i-th and j-th yield traits; I_1 is an identity matrix of size equal to the number of animals with records (23,505); I_2 is an identity matrix of size equal to the number of herd-year-season interactions (601); I_3 is an identity matrix of size equal to the number of observations (62,532); σ²_pe,i is the permanent environmental variance for the i-th yield trait; σ_pe,i.j is the permanent environmental covariance between the i-th and j-th yield traits; σ²_h,i is the herd-year-season interaction variance for the i-th yield trait; σ_h,i.j is the herd-year-season interaction covariance between the i-th and j-th yield traits; σ²_e,i is the residual variance for the i-th yield trait; and σ_e,i.j is the residual covariance between the i-th and j-th yield traits.

The final multivariate model had 154,071 equations to be solved. The variance components were estimated by restricted maximum likelihood (Patterson and Thompson 1971) using the VCE³ and AIREMLF90 (Misztal et al 2002) software.

RESULTS

Overall raw means, standard deviations, and minimum and maximum values of the three traits are presented in table 1. Yield deviation from 3% can be negative when a particular cow has a concentration of fat or protein below 3%, which was the case for some of the cows in this data set. After data editing and deleting outliers, the variation of fat yield deviated from 3% of concentration ranged between +217% and −214% of the average (51.09 kg). Protein yield deviated from 3% of concentration in milk ranged from −253% to +252% of the average (22.5 kg). These results (table 1) show the large variation that exists for both fat and protein yield deviated from 3% of concentration. Milk yield averaged 7,606±1,670 kg and ranged from 2,468 to 12,819 kg per lactation, which reflects great phenotypic variability.
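Referring back to the statistical model above, a heavily simplified, single-trait illustration of the BLUP mixed-model equations (Henderson 1984) is sketched below. The pedigree, records and variance components are assumed for illustration only; the actual analysis was multivariate with herd-year-season and permanent environmental effects and 154,071 equations.

```python
import numpy as np

y = np.array([7600.0, 7200.0, 8100.0])      # phenotypes for animals 1-3 (assumed)
X = np.ones((3, 1))                          # fixed effect: overall mean only
Z = np.eye(3)                                # each animal has one record
A = np.array([[1.0, 0.0, 0.5],               # additive relationship matrix:
              [0.0, 1.0, 0.5],               # animals 1 and 2 are unrelated parents
              [0.5, 0.5, 1.0]])              # of animal 3
sigma_a2, sigma_e2 = 450_000.0, 2_300_000.0  # assumed variance components
lam = sigma_e2 / sigma_a2

# Henderson's mixed-model equations: solve jointly for the fixed effect (mean)
# and the breeding values of the three animals.
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + np.linalg.inv(A) * lam]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
print("mean estimate:", sol[0])
print("estimated breeding values:", sol[1:])
```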
The mixed model equations took 245 rounds to reach the convergence criterion, which had been set in the software parameter file at 1.0 × 10⁻⁷. Table 2 shows the genetic, permanent environmental, herd-year-season, residual and phenotypic estimated variances, and the heritability, for the three traits. Phenotypic variance was the sum of the genetic, permanent environmental, herd-year-season and residual estimated variances. Table 3 shows the estimates of genetic and phenotypic correlations for milk yield and fat and protein deviated from 3% yield. At the genetic level, milk yield was negatively correlated with fat and protein yield deviated from 3% of concentration in milk; the correlation coefficients were −0.285 and −0.331 between milk yield and fat and protein, respectively. Phenotypic correlations found in this study were low between milk yield and fat and protein above 3% yield (table 3), and medium between fat and protein yield (0.410).

DISCUSSION

Elzo et al (2004), using records of 56,277 first-lactation cows, found a milk yield mean similar to that found working with 57,018 records of New Zealand Holstein cows milked twice daily, although direct comparison between these results is not straightforward because New Zealand milk production is seasonal, with fewer days in milk, while dairy production in southern Chile relies on medium to high concentrate input. Uribe and Smulders (2004) found an average milk yield of 5,044 kg in a sample of 3,837 lactations of Overo Colorado cattle from 16 dairy herds in southern Chile. The heritability estimate for milk yield was 0.16±0.004, which is in the low range of previous estimates found in the literature for this trait. This heritability estimate is similar to that reported by Lembeye et al (2016a) for low-producing New Zealand cows milked once a day (0.18±0.02). The heritability estimated by Montaldo et al (2015) for the same trait in Chilean cows was 0.19±0.006, which is slightly higher than that found in this research. Uribe and Smulders (2004) found that the heritability of milk yield for Chilean Overo Colorado cattle was 0.25, which is also higher than the estimate reported in this study. However, Sneddon et al (2015), analysing 15,366 test-day records of 4,378 New Zealand cows born in 2009, reported a heritability estimate of 0.19, which is closer to the heritability reported in this study. Higher heritability estimates for milk yield were found by Lembeye et al (2016b): 0.33 and 0.36 for once- and twice-a-day milking cows, respectively. Elzo et al (2004) reported heritability estimates for milk yield ranging from 0.31 to 0.34 among the Chilean breeds included in their data set; in their research, estimates of genetic variance were similar to that found in this research (table 2) but higher than those reported by Lembeye et al (2016b), which were 90,445 and 163,396 for once- and twice-a-day milking cows, respectively. Heritability estimates change among studies for several reasons, the most important being the type of data and the statistical model. Also, in developed countries, where milking cows are kept inside barns and fed using total mixed rations, the environmental variation across herds is lower, yielding higher heritability estimates.
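As stated above, the phenotypic variance is the sum of the genetic, permanent environmental, herd-year-season and residual variances, and heritability is the genetic fraction of that sum. A one-line check with made-up component values (not the estimates of table 2):

```python
# h^2 = additive genetic variance / phenotypic variance; the phenotypic variance is
# the sum of the four estimated components. The numbers below are illustrative only.
sigma_a2, sigma_pe2, sigma_h2, sigma_e2 = 300_000.0, 250_000.0, 400_000.0, 900_000.0
sigma_p2 = sigma_a2 + sigma_pe2 + sigma_h2 + sigma_e2
print("heritability:", round(sigma_a2 / sigma_p2, 3))   # 0.162 with these values
```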
Heritability estimates for milk fat and protein yield deviated from 3% of concentration were 0.44±0.007 and 0.42±0.006, respectively (table 2); these are similar to the estimates of Uribe and Smulders (2004) for fat and protein percentage in Overo Colorado cattle, 0.44 and 0.43, respectively. These results are very close to the average between the estimates of Lembeye et al (2016b) for fat and protein yield (0.25±0.010) and for fat and protein percentage (0.66±0.009). Sneddon et al (2015) found heritabilities for fat and protein percentage of 0.35±0.05 and 0.32±0.05, respectively, which are lower than the estimates presented in this research. The estimates of Elzo et al (2004) for fat and protein yield in Chilean cows ranged from 0.29 to 0.37 and from 0.17 to 0.24, respectively. Higher heritabilities were estimated by Montaldo et al (2015), at 0.55±0.007 for both fat and protein percentage. Heritability estimates for solids yield deviated from a given concentration in milk were not found in the literature. Although the fat and protein traits reviewed here and reported by previous researchers are not exactly the same as the ones used in this work, it seems that fat and protein yield deviated from 3% of concentration in milk behave similarly to fat and protein percentage; this may help to explain the higher heritability of fat and protein yield deviated from 3% of concentration in milk when compared with fat and protein yields. Genetic correlations between milk yield and milk solids yield have usually been found to be positive and high (Elzo et al 2004, Lembeye et al 2016a,b, Sneddon et al 2015); however, genetic correlations between milk yield and milk solids percentage have been found to be negative (Lembeye et al 2016b; Sneddon et al 2015). In this work the correlations between fat and protein yield deviated from 3% of concentration in milk and milk yield were negative (table 3); these findings might indicate that the behaviour of the traits analysed here is closer to that of solids concentration than to that of solids yield. Phenotypic correlations found in this study (table 3) are much different from those of Sneddon et al (2015), who reported phenotypic correlations between milk yield and fat and protein yield of 0.75 and 0.92, respectively. Similar findings, for purebred Holstein cows, were reported by Elzo et al (2004), where the correlations, in the same order, were estimated at 0.82 and 0.88. The estimates of Lembeye et al (2016b) for the same traits were also high and positive, 0.68 and 0.91. Chilean dairy farmers are facing a scenario where the weights in the raw milk payment are shifting from volume to milk solids. The companies that buy raw milk in Chile, three of which account for nearly 70% of the total⁴, pay per kg of fat and protein above 3% of concentration in milk. However, the breeding decision of the farmer, at the time of choosing a particular sire, relies on the information provided by frozen semen sire catalogs elaborated in a different country. Hence, estimated breeding values and selection indexes provided in a particular catalog do not directly apply to their production reality, because the information used to estimate genetic merit comes from a different population and the economic scenario of production is also different. Moreover, the new traits created by the Chilean dairy processing industry (fat and protein yield deviated from 3% of concentration in milk) are not genetically evaluated elsewhere, and hence would not be included in an imported selection index.
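A minimal sketch of how estimated breeding values (EBVs) for the three traits could be blended into a local selection index, as suggested above. The economic weights and EBVs below are purely hypothetical placeholders, not values derived from the Chilean payment scheme.

```python
def selection_index(ebv_milk, ebv_fat_dev3, ebv_prot_dev3,
                    w_milk=0.02, w_fat=1.0, w_prot=5.0):
    # Hypothetical weights echoing the payment scheme described above, in which a kg
    # of protein above 3% is worth roughly five times a kg of fat above 3%; milk
    # volume gets a small weight so selection does not act too strongly against it.
    return w_milk * ebv_milk + w_fat * ebv_fat_dev3 + w_prot * ebv_prot_dev3

# Example: ranking two hypothetical sires on the index.
sires = {"sire_A": selection_index(350.0, 8.0, 4.0),
         "sire_B": selection_index(-50.0, 15.0, 9.0)}
print(sorted(sires.items(), key=lambda kv: kv[1], reverse=True))
```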
There are many reports in the literature concerning variance component estimation for milk traits. The studies discussed in this research were chosen because three of them were done using Chilean data (Uribe and Smulders 2004, Elzo et al 2004, Montaldo et al 2015) and the others were the most recent ones done in New Zealand (Lembeye et al 2016a,b, Sneddon et al 2015), where dairy production relies on seasonal growth of pasture and grazing, as is the case in southern Chile. Kilograms of fat and protein deviated from 3% of concentration in milk are new traits that blend milk yield and milk solids concentration; therefore, direct extrapolation of genetic parameters from milk yield, fat and protein yield, or fat and protein percentage might not be entirely correct. Genetic correlations between milk yield and fat and protein yield deviated from 3% of concentration are both negative (table 3), which may become favourable in an economic scenario where farm-gate milk payment allocates more weight to fat and protein and penalizes milk volume. However, under the current Chilean economic scenario, these negative correlations mean that genetic selection for fat and protein yield above 3% of concentration would improve the price per litre of milk but could decrease total farmer income through a reduction in the total volume of milk. Furthermore, the negative genetic correlations found in this work would indicate that fat and protein yields above 3% of concentration are highly dependent on milk fat and protein percentage, and therefore their genetic behaviour is more similar to those traits than to total milk fat and protein yield. Genetic variation in the new traits exists; the heritability estimates for milk fat and protein yield deviated from 3% of concentration indicate that genetic selection for these traits should be effective in increasing milk fat and protein output, provided that breeding values are estimated using data from the same population. Ideally, estimated breeding values for these traits should be blended into a Chilean selection index, in which the estimated breeding values for the new traits and milk yield are weighted according to their economic importance in the local scenario.

Table 1. Means, standard deviations and minimum and maximum values of milk yield and fat and protein yield deviated from 3% of concentration in milk.

Table 3. Estimates of genetic (above the diagonal) and phenotypic (below the diagonal) correlations between milk yield and fat and protein yield deviated from 3% of concentration in milk.
2018-12-17T18:19:49.655Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "9319d792e764c9a44dbc3aa36c2c5b0c73e9febe", "oa_license": "CCBYNC", "oa_url": "https://scielo.conicyt.cl/pdf/australjvs/v49n2/0719-8132-australjvs-49-02-00071.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "9319d792e764c9a44dbc3aa36c2c5b0c73e9febe", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
119265269
pes2o/s2orc
v3-fos-license
Random simplicial complexes Random shapes arise naturally in many contexts. The topological and geometric structure of such objects is interesting for its own sake, and also for applications. In physics, for example, such objects arise naturally in quantum gravity, in material science, and in other settings. Stochastic topology may also be considered as a null hypothesis for topological data analysis. In this chapter we overview combinatorial aspects of stochastic topology. We focus on the topological and geometric properties of random simplicial complexes. We introduce a few of the fundamental models in Section 23.1. We review high-dimensional expander-like properties of random complexes in Section 23.2. We discuss threshold behavior and phase transitions in Section 23.3, and Betti numbers and persistent homology in Section 23.4. INTRODUCTION Random shapes arise naturally in many contexts. The topological and geometric structure of such objects is interesting for its own sake, and also for applications. In physics, for example, such objects arise naturally in quantum gravity, in material science, and in other settings. Stochastic topology may also be considered as a null hypothesis for topological data analysis. In this chapter we overview combinatorial aspects of stochastic topology. We focus on the topological and geometric properties of random simplicial complexes. We introduce a few of the fundamental models in Section 23.1. We review highdimensional expander-like properties of random complexes in Section 23.2. We discuss threshold behavior and phase transitions in Section 23.3, and Betti numbers and persistent homology in Section 23.4. MODELS We briefly introduce a few of the most commonly studied models. ERDŐS-RÉNYI-INSPIRED MODELS A few of the models that have been studied are high-dimensional analogues of the Erdős-Rényi random graph. The Erdős-Rényi random graph The Erdős-Rényi random graph G(n, p) is the probability distribution on all graphs on vertex set [n] = {1, 2, . . . , n}, where every edge is included with probability p jointly independently. Standard references include [Bol01] and [J LR00]. One often thinks of p as a function of n and studies the asymptotic properties of G(n, p) as n → ∞. We say that an event happens with high probability (w.h.p.) if the probability approaches 1 as n → ∞. Erdős-Rényi showed thatp = log n/n is a sharp threshold for connectivity. In other words,: for every fixed > 0, if p ≥ (1 + )p then w.h.p. G(n, p) is connected, and if p ≤ (1 − )p then w.h.p. it is disconnected. A slightly sharper statement is given in the following section. Several thresholds for topological properties of G(n, p) are summarized in GLOSSARY Threshold function: Let P be a graph property. We say that f is a threshold function for property P in the random graph G = G(n, p) if whenever p = ω(f ), G has property P w.h.p. and whenever p = o(f ), G does not have property P. Sharp threshold: We say that f is a sharp threshold for graph property P if there exists a function g = o(f ) such that for p < f − g, G / ∈ P w.h.p. and if p > f + g, G ∈ P w.h.p. Simplicial complex: A simplicial complex ∆ is a collection of subsets of a set S, such that (1) if U ⊂ V is nonempty and V ∈ ∆ then U ∈ ∆, and (2) {v} ∈ ∆ for every v ∈ S. An element of ∆ is called a face. Such a set system can be naturally associated a topological space by considering every set of size k in ∆ to represent a k − 1-dimensional simplex, homeomorphic to a closed Euclidean ball. 
This topological space is sometimes called the geometric realization of ∆, but we will slightly abuse notation and identify a simplicial complex with its geometric realization. Link: Given a simplicial complex ∆ and a face σ ∈ ∆, the link of σ in ∆ is defined by The link is itself a simplicial complex. Homology: Associated with any simplicial complex X, abelian group G, and integer i ≥ 0, H i (X, G) denotes the ith homology group of X with coefficients in G. If k is a field, then H i (X, k) is a vector space over k. Homology is defined as "cycles modulo boundaries". Homology is invariant under homotopy deformations. Betti numbers: If one considers homology with coefficients in R, then H i (X, R) is a real vector space. The Betti numbers β i are defined by β i = dim H i (X, R) . The 0th Betti number β 0 counts the number of connected components of X, and in general the ith Betti number is said to count the number of i-dimensional holes in X. The random 2-complex Random hypergraphs have been well studied, but if we wish to study such objects topologically then random simplicial complexes is probably a more natural point of view. Linial and Meshulam introduced the topological study of the random 2-complex Y (n, p) in [LM06]. This model of random simplicial complex has n vertices, n 2 edges, and each of the n 3 possible 2-dimensional faces is included independently with probability p. The random 2-complex is perhaps the most natural 2-dimensional analogue of G(n, p). For example, the link of every vertex in Y (n, p) has the same distribution as G(n − 1, p). Several topological thresholds for Y (n, p) discussed in the next section are described in The random d-complex The natural generalization to d-dimensional model was introduced by Meshulam and Wallach in [MW09]. For the random d-complex Y d (n, p), contains the complete (d − 1)-skeleton of a simplex on n vertices, and every d-dimensional face appears independently with probability p. Some of the topological subtlety of the random 2-dimensional model collapses in higher dimensions: for d ≥ 3, the complexes are d − 2-connected, and in particular simply connected. By the Hurewicz theorem, , so these groups have the same vanishing threshold. The random clique complex Another analogue of G(n, p) in higher dimensions was introduced in [Kah09]. The random clique complex X(n, p) is the clique complex of G(n, p). It is the maximal simplicial complex compatible with a given graph. In other words, the faces of the clique complex X(H) correspond to complete subgraphs of the graph H. The random clique complex asymptotically puts a measure over a wide range of topologies. Indeed, every simplicial complex is homeomorphic to a clique complex, e.g. by barycentric subdivision. There are several comparisons of this model to the random d-complex, but some important contrasts as well. One contrast to Y d (n, p) is that for every k ≥ 1, X(n, p) has not one but two phase transitions for kth homology, one where homology appears and one where it vanishes. In particular, higher homology is not monotone with respect to p. However, there are still comparisons to Y d (n, p)the appearance of homology H k (X) is analogous to the birth of top homology H d (Y ). Similarly, the vanishing threshold for H k (X(n, p)) is analogous to vanishing of H d−1 (Y ). RANDOM GEOMETRIC MODELS The random geometric graph G(n, r) is a flexible model, defined as follows. Consider a probability distribution on R d with a bounded, measurable, density function f : R d → R. 
Then one chooses n points independently and identically distributed (i.i.d.) according to this distribution. The n points are the vertices of the graph, and two vertices are adjacent if they are within distance r. Usually r = r(n) and n → ∞. The standard reference for random geometric graphs is Penrose's monograph [Pen03]. There are at least two commonly studied ways to build a simplicial complex on a geometric graph. The first is the Vietoris-Rips complex, which is the same construction as the clique complex above-one fills in all possible faces, i.e. the faces of the Vietoris-Rips complex correspond to the cliques of the graph. The second is theČech complex, where one considers the higher intersections of the balls of radius r/2. This leads to two natural models for random geometric complexes V R(n, r) and C(n, r). HIGH DIMENSIONAL EXPANDERS GLOSSARY Cheeger number: The normalized Cheeger number of a graph h(G) with vertex set V is defined by andĀ is the complement of A in V . Laplacian: For a connected graph H, the normalized graph Laplacian L = L[H] is defined by Here A is the adjacency matrix, and D is the diagonal matrix with vertex degrees along the diagonal. Spectral gap: The eigenvalues of the normalized graph Laplacian of a connected graph satisfy 0 = λ 1 < λ 2 ≤ · · · ≤ λ n ≤ 2. The smallest positive eigenvalue λ 2 [H] is of particular importance, and is sometimes called the spectral gap of H. Expander family: Let {G i } be an infinite sequence of graphs where the number of vertices tends to infinity. We say that {G i } is an expander family if Expander graphs are of fundamental importance for their applications in computer science and mathematics [HLW06]. It is natural to seek their various higherdimensional generalizations. See [Lub14] for a survey of recent progress on higherdimensional expanders, particularly Ramanujan complexes which generalize Ramanujan graphs. Gromov suggested that one property that higher-dimensional expanders should have is geometric or topological overlap. A sequence of d-dimensional simplicial complexes ∆ 1 , ∆ 2 , . . . , is said to have the geometric overlap property if for every geometric map (affine-linear on each face) f : ∆ i → R d , there exists a point p ∈ R d such that f −1 (p) intersects the interior of a constant fraction of the d-dimensional faces. The sequence is said to have the stronger topological overlap property if this holds even for continuous maps. One way to define higher-dimensional expander is via coboundary expansion, which generalizes the Cheeger number of a graph. Following Linial and Meshulam's coisoperimetric ideas, Dotterrer and Kahle [DK12] pointed out that d-dimensional random simplicial complexes are coboundary expanders. By a theorem of Gromov [Gro09,Gro10] (see the note [DKW15] for a self contained proof of Gromov's theorem), this implies that random complexes have the topological overlap property. Lubotzky and Meshulam introduced a new model of random 2-complex, based on random Latin squares, in [LM15]. The main result is the existence of coboundary expanders with bounded edge-degree, answering a question asked implicitly in [Gro10] and explicitly in [DK12]. Another way to define higher-dimensional expanders is via spectral gap of various Laplacian operators. Hoffman, Kahle, and Paquette studied spectral gap of random graphs [HKP12], and applied Garland's method to prove homology-vanishing theorems. Gundert and Wagner extened this to study higher-order spectral gaps of these complexes [GW15]. 
Parzanchevski, Rosenthal, and Tessler showed that this implies the geometric overlap property [PRT15]. PHASE TRANSITIONS There has been a lot of interest in identifying thresholds for various topological properties, such as vanishing of homology. As some parameter varies, the topology passes a phase transition where some property suddenly emerges. In this section we review a few of the most well-studied topological phase transitions. HOMOLOGY-VANISHING THEOREMS The following theorem describing the connectivity threshold for the random graph G(n, p) is the archetypal homology-vanishing theorem. We say that it is a homology-vanishing theorem because path connectivity of a topological space X is equivalent to H 0 (X, G) = 0 with any coefficient group G. It is also a cohomology-vanishing theorem, since H 0 (X, G) = 0 is also equivalent to path-connectivity. In many ways it is better to think of it as a cohomology theorem, since the standard proof, for example in Chapter 10 of [Bol01], is really a cohomological one. This perspective helps when understanding the proof of the Linial-Meshulam theorem. The following cohomological analogue of the Erdős-Rényi theorem was the first nontrivial result for the topology of random simplicial complexes. One of the main tools introduced in [LM06] is a new co-isoperimetric inequality for the simplex, which was discovered independently by Gromov. These coisoperimetric inequalities were combined by Linial and Meshulam with intricate cocycle-counting combinatorics to get a sharp threshold. See [DKW15] for a comparison of various definitions co-isoperimetry, and a clean statement and self-contained proof of Gromov's theorem. Theorem 23.3.2 was generalized further by Meshulam and Wallach. Fix d ≥ 1, and let Y = Y d (n, p). Let G be any finite abelian group. If Theorem 23.3.3 generalizes Theorem 23.3.2 in two ways: by letting the dimension d ≥ 2 be arbitrary, and also by letting coefficients be in an arbitrary finite abelian group G. Spectral gaps and Garland's method There is another approach to homology-vanishing theorems for simplicial complexes, via Garland's method [Gar73]. The following refinement of Garland's theorem is due to Ballman andŚwiatkowski. THEOREM 23.3.4 [Gar73, BŚ97] If ∆ is a finite, pure d-dimensional, simplicial complex, such that This leads to a new proof of Theorem 23.3.3, at least over a field of characteristic zero. THEOREM 23.3.5 [HKP12] Fix d ≥ 1, and let Y = Y d (n, p). If Here ω(1) is any function that tends to infinity as n → ∞. This is slightly weaker than the Meshulam-Wallach theorem topologically speaking, since H i (Y, G) = 0 for any finite group G implies that H i (Y, R) = 0 by the universal coefficient theorem, but generally the converse is false. However, the proof via Garland's method avoids some of the combinatorial complications of cocycle counting. Garland's method also provides proofs of theorems which have so far eluded other methods. For example, we have the following homology-vanishing threshold in the random clique complex model. Note that k = 0 again corresponds to the Erdős-Rényi theorem. THEOREM 23.3.6 [Kah14a] Fix k ≥ 1 and let X = X(n, p). Let ω(1) denote a function that tends to ∞ arbitrarily slowly. If It may be that this theorem holds with R coefficients replaced by a finite group G or even with Z, but for the most part this remains an open problem. The only other case that seems to be known is the case k = 1 and G = Z/2 by DeMarco, Hamm, and Kahn [DHK13], where a similarly sharp threshold is obtained. 
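Since Garland-type arguments reduce homology-vanishing statements to spectral gaps of normalized graph Laplacians, a quick numerical illustration may be useful here. The following sketch is not from the chapter: it samples G(n, p) somewhat above the connectivity threshold and computes the spectral gap λ₂, taking the usual normalization L = I − D^{−1/2} A D^{−1/2}; the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 0.05
A = np.triu(rng.random((n, n)) < p, k=1)   # independent edge indicators above the diagonal
A = (A | A.T).astype(float)                # symmetrize to get the adjacency matrix
deg = A.sum(axis=1)
assert deg.min() > 0, "isolated vertex: normalized Laplacian needs positive degrees"

d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt
eigenvalues = np.sort(np.linalg.eigvalsh(L))
print("smallest eigenvalues:", np.round(eigenvalues[:3], 4))
print("spectral gap lambda_2 ~", round(eigenvalues[1], 4))
```

Spectral gaps close to 1 for random graphs (and for the links of random complexes) are exactly what the Garland-type arguments above exploit.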
The applications of Garland's method depends on new results on the spectral gap of random graphs. THEOREM 23.3.7 [HKP12] Fix k ≥ 0. Let λ 1 , λ 2 , . . . denote the eigenvalues of the normalized graph Laplacian of the random graph G(n, p). If p ≥ (k + 1) log n + ω(1) n then Theorem 23.3.6, combined with some earlier results [Kah09], has the following corollary. THEOREM 23.3.8 [Kah14a] Let k ≥ 3 and > 0 be fixed. If where C 3 = 3 and C k = k/2 + 1 for k > 3, then w.h.p. X is rationally homotopy equivalent to a bouquet of k-dimensional spheres. The main remaining conjecture for the topology of random clique complexes is that these rational homotopy equivalences are actually homotopy equivalences. CONJECTURE 23.3.9 The bouquet-of-spheres conjecture. Let k ≥ 3 and > 0 be fixed. If n n 1/k ≤ p ≤ n − n 1/(k+1) then w.h.p. X is homotopy equivalent to a bouquet of k-spheres. Given earlier results, this is equivalent to showing that H k (X, Z) is torsion free. So far, integer homology for X(n, p) is not very well understood. Some progress has been made for Y d (n, p), described in the following. Integer homology Unfortunately, neither method discussed above (the cocycle-counting methods pioneered by Linial and Meshulam or the spectral methods of Garland), seems to handle integral homology. There is a slight subtlety here-if one knows for some simplicial complex Σ that H i (Σ, G) = 0 for every finite abelian group G then H i (Σ, Z) = 0 by the universal coefficient theorem. See, for example, Chapter 2 of Hatcher [Hat02]. So it might seem that the Theorem 23.3.3 will also handle Z coefficients, but the proof uses cocycle counting methods which require G to be fixed, or at least for the order of the coefficient group |G| to be growing sufficiently slowly. Cocycle counting does not seem to work, for example, when |G| is growing exponentially fast. The following gives an upper bound on the vanishing threshold for integer homology. THEOREM 23.3.10 [HKP12] Fix d ≥ 2, and let Y = Y d (n, p). If p ≥ 80d log n n , The author suspects that the true threshold for homology with Z coefficients is the same as for field coefficients: d log n/n. CONJECTURE 23.3.11 A sharp threshold for Z homology. THE BIRTH OF CYCLES AND COLLAPSIBILITY G(n, p) in the p = 1/n regime There is a remarkable phase transition in structure of the random graph G(n, p) at the threshold p = 1/n. A "giant" component, on a constant fraction of the vertices, suddenly emerges. This is considered an analogue of percolation on an infinite lattice, where an infinite component appears with probability 1. • If c < 1 then w.h.p. all components are of order O(log n). • If c > 1 then w.h.p. there is a unique giant component, of order Ω(n). An overview of this remarkable phase transition can be found in Chapter 11 of Alon and Spencer [AS08]. In random graphs, the appearance of cycles with high probability has the same threshold 1/n. THEOREM 23.3.13 [Pit88] Suppose p = c/n where c > 0 is constant. • If c ≥ 1 then w.h.p. G contains at least one cycle, i.e. The analogy in higher dimensions is only just beginning to be understood. The birth of cycles Kozlov first studied the vanishing threshold for top homology in [Koz10]. THEOREM 23.3.14 [Koz10] Let Y = Y d (n, p), and G be any abelian group. (1) If Part (1) of this theorem cannot be improved. Indeed, let S be the number of subcomplexes isomorphic to the boundary of a (d + 1)-dimensional simplex. If p = c/n for some constant c > 0, then as n → ∞. 
Moreover, S converges in law to a Poisson distribution with this mean in the limit, so In particular, for p = c/n and c > 0, P[H d (Y, G) = 0] is bounded away from zero. On the other hand, part (2) can be improved. Indeed straightforward computation shows that if p ≥ c/n and c > d + 1 then w.h.p. the number of d-dimensional faces is greater than the number of (d−1)-dimensional faces. Simply by dimensional considerations, we conclude that H d (Y, G) = 0. This can improved more though. Aronshtam and Linial found the best possible constant factor c * d , defined for d ≥ 2 as follows. Let x ∈ (0, 1) be the unique root to the equation Linial and Peled also showed the birth of a giant (homological) shadow at the same point. This is introduced and defined in [LP14], and it is discussed there as a higher-dimensional analogue of the birth of the giant component in G(n, p). The threshold for d-collapsibility In a d-dimensional simplicial complex, an elementary collapse is an operation that deletes a pair of faces (σ, τ ) such that • σ is a d − 1-dimensional face contained in τ , and • σ is not contained in any other d-dimensional faces. An elementary collapse results in a homotopy equivalent simplicial complex. If a simplicial complex can be reduced to a d − 1-dimensional complex by a series of elementary collapses, we say that it is d-collapsible. For a graph, 1-collapsible is equivalent to being a forest. In other words, a graph G is 1-collapsible if and only if H 1 (G) = 0. This homological criterion does not hold in higher dimensions. In fact, somewhat surprisingly, d-collapsibility and H d = 0 have distinct thresholds for random complexes. Let d ≥ 2 and set Define c d to be the unique solution x > 0 of g d (x) = d + 1. • if p ≤ c/n where c < c d then Y is d-collapsible with probability bounded away from zero. So again, this is a one-sided sharp threshold. Regarding collapsibility in the random clique complex model, Malen showed in his PhD thesis that if p n −1/(k+1) , then w.h.p. X(n, p) is k-collapsible. Embeddability Every d-dimensional simplicial complex is embeddable in R 2d+1 , but not necessarily in R 2d . Wagner studied the threshold for non-embeddability of random d-complexes in R 2d , and showed the following for Y = Y d (n, p). THEOREM 23.3.18 [Wag11] There exists constants c 1 , c 2 > 0 depending only on the dimension d such that: There is a folklore conjecture that a d-dimensional simplicial complex on n vertices embeddable in R 2d can have at most O(n d ) faces [Dey93,Kal91]. See, for example, the discussion in the expository book chapter [Wag13] or Chapter 24 of this handbook. The d = 1 case is equivalent to showing that a planar graph may only have linearly many edges, which follows immediately from the Euler formula, but the conjecture is open for every d ≥ 2. Theorem 23.3.18 shows that it holds generically. PHASE TRANSITIONS FOR HOMOLOGY IN RANDOM GEOMETRIC COMPLEXES Penrose described sharp thresholds for connectivity of random geometric graphs, analogous to the Erdős-Rényi theorem. In the case of a uniform distribution on the unit cube [0, 1] d or a standard multivariate distribution, these results are tight [Pen03]. Thresholds for homology in random geometric complexes was first studied in [Kah11]. A homology vanishing threshold for random geometric complexes is obtained in [Kah11], which is tight up to a constant factor, but recently a much sharper result was obtained by Bobrowski and Weinberger. THEOREM 23.3.19 [BW15] Fix 1 ≥ k ≥ d − 1. 
If nr d ≥ log n + k log log n + ω(1), then w.h.p. β k = 0, and if nr d ≤ log n + (k − 2) log log n − ω(log log log n), then w.h.p. β k → ∞. RANDOM FUNDAMENTAL GROUPS GLOSSARY Fundamental group: In a path-connected topological space X, choose an arbitrary base point p. Then the homotopy classes of loops in X based at p, i.e. continuous functions f : [0, 1] → X with f (0) = f (1) = p may be endowed with the structure of a group, where the group operation is concatenation of two loops at double speed. This is called the fundamental group π 1 (X), and up to isomorphism it does not depend on the choice of base point p. If π 1 (X) = 0 then X is said to be simply connected. The first homology group H 1 (X, Z) is isomorphic to the abelianization of π 1 (X). A chain of implications: The following implications hold for an arbitrary simplicial complex X. Here q is any prime. This is a standard application of the universal coefficient theorem for homology [Hat02]. A partial converse to one of the implications is the following. If H 1 (X, Z/qZ) = 0 for every prime q, then H 1 (X, Z) = 0. Hyperbolic group: A finitely presented group is said to be word hyperbolic if it can be equipped with a word metric satisfying certain characteristics of hyperbolic geometry [Gro87]. Kazhdan's property (T): A group G is said to have property (T) if the trivial representation is an isolated point in the unitary dual equipped with the Fell topology. Equivalently, if a representation has almost invariant vectors then it has invariant vectors. Group cohomology: Associated with a finitely-presented group G is a contractible CW complex EG on which G acts freely. The quotient BG is the classifying space for principle G bundles. The group cohomology of G is equivalent to the cohomology of BG. Cohomological dimension: The cohomological dimension of a group G, denoted cdim G, is the largest dimension k such that H k (G, R) = 0 for some coefficient ring R. The random fundamental group π 1 (Y (n, p)) may fruitfully be compared to other models of random group studied earlier, such as Gromov's density model [Oll05]. The techniques and flavor of the subject owes as much to geometric group theory as to combinatorics. The vanishing threshold and hyperbolicity Babson, Hoffman, and Kahle showed that the vanishing threshold for simple connectivity is much larger than the homology-vanishing threshold. THEOREM 23.3.20 [BHK11] Let > 0 be fixed and Y = Y (n, p). If Most of the work in proving Theorem 23.3.20 is showing that, on the sparse side of the threshold, π 1 is hyperbolic. This in turn depends on a local-to-global principle for hyperbolicity due to Gromov [Gro87]. Gundert and Wagner showed that it suffices to assume that for some constant C > 0 to show that w.h.p. π 1 (Y ) = 0 [GW16]. Korándi, Peled, and Sudakov showed that it suffices to take C = 1/2 [KPS16]. The author suspects that there is a sharp threshold for simple connectivity at C/ √ n for some C > 0. CONJECTURE 23.3.21 A sharp vanishing threshold for π 1 (Y ). Kazhdan's property (T) One of the most important properties studied in geometric group theory is property (T). Loosely speaking, a group is (T) if it does not have many unitary representations. Property (T) is also closely related to the study of expander graphs. For a comprehensive overview of the subject, see the monograph [BHV08]. Inspired by Garland's method,Żuk gave a spectral condition sufficient to imply (T). 
Hoffman, Kahle, and Paquette appliedŻuk's condition, together with Theorem 23.3.7 to show that the threshold for π 1 (Y ) to be (T) coincides with the Linial-Meshulam homology-vanishing threshold. THEOREM 23.3.22 [HKP12] Let Y = Y (n, p). Cohomological dimension Costa and Farber [CCFK12] studied the cohomological dimension of the random fundamental group in [CF13]. Their main findings are that there are regimes when the cohomological dimension is 1, 2, and ∞, before the collapse of the group at p = 1/ √ n. Here we use f g to mean f = o(g), i.e. lim n→∞ f /g = 0. The fundamental group of the clique complex Babson showed that p = n −1/3 is the vanishing threshold for π 1 (X(n, p)) in [Bab12]. An independent and self-contained proof, including more refined results regarding torsion and cohomological dimension, was given by Costa, Farber, and Horak in [CFH15]. Finite quotients Meshulam studied finite quotients of the random fundamental group, and showed that if they exist then the index must be large-the index must tend to infinity with n. His technique is a version of the cocycle-counting arguments in [LM06] and [MW09], for non-abelian cohomology. Let c > 0 be fixed. If p ≥ (6+7c) log n n then w.h.p. π 1 (Y ) has no finite quotients with index less than n c . Moreover, if H is any fixed finite group and p ≥ (2+c) log n n then w.h.p. there are no nontrivial maps to H. PHASE TRANSITIONS IN THE MULTI-PARAMETER MODEL Applying Garland's method, Fowler described the homology-vanishing phase transition in the multi-parameter model in [Fow15]. THEOREM 23.3.25 Fowler [Fow15] Let X = X(n, p 1 , p 2 , . . . ) with p i = n −αi and α i ≥ 0 for all i. If A more refined estimate may be obtained along the following lines. BETTI NUMBERS AND PERSISTENT HOMOLOGY We have the Euler relation where f i denotes the number of i-dimensional faces. The expected number of idimensional faces is easy to compute-by linearity of expectation we have If we make the simplifying assumption that only one Betti number β i is nonzero, then we have So we obtain a plot of all of the Betti numbers by plotting the single function This seems to work well in practice. See for example Figure 23.4.1. It is interesting that even through all of the theorems we have discussed are asymptotic as n → ∞, the above heuristic gives a reasonable prediction of the shape of the Betti number curves, even for n = 25 and p ≤ 0.6. Homological domination in the multi-parameter model In [CF15b,CF15c], Costa and Farber show that for many choices of parameter (an open, dense subset of the set of allowable vectors of exponents) in the multiparameter model, the homology is dominated in one degree. Random geometric complexes Betti numbers of random geometric complexes were first studied by Robins in [Rob06]. Betti numbers of random geometric complexes are also studied in [Kah11]. Estimates are obtained for the Betti numbers in the subcritical regime nr d → 0. In this regime the Vietoris-Rips complex andČech complex have small connected components (bounded in size), so all the topology is local. For the following theorems, we assume that n points are chosen i.i.d. according to a probability measure on R d with a bounded measurable density function f . So the assumptions on the underlying probability distribution are farily mild. The following describes the expectation of the Betti numbers of the Vietoris-Rips complex in the subcritical regime. 
In this regime the homology of V R(n, r) is dominated by subcomplexes combinatorially isomorphic to the boundary of the cross-polytope.

THEOREM 23.4.1 [Kah11] Fix d ≥ 2 and k ≥ 1. If nr^d → 0 then E[β_k(V R(n, r))] / (n^{2k+2} r^{d(2k+1)}) → C_k as n → ∞, where C_k is a constant which depends on k, d, and the function f.

The analogous story for the Čech complex is the following. Here the homology is dominated by simplex boundaries.

In the thermodynamic limit

The thermodynamic limit, or critical regime, is when nr^d → C for some constant C > 0. In [Kah11], it is shown that for every 1 ≤ k ≤ d − 1, we have β_k = Θ(n). Yogeshwaran, Subag, and Adler obtained the strongest results so far for Betti numbers in the thermodynamic limit, including strong laws of large numbers [YSA15], in particular that β_k/n → C_k.

Limit theorems

Kahle and Meckes computed the variance of the Betti numbers, and proved Poisson and normal limiting distributions for Betti numbers in the subcritical regime r = o(n^{−1/d}) in [KM13].

More general point processes

Yogeshwaran and Adler obtained similar results for Betti numbers in the much more general setting of stationary point processes [YA15].

PERSISTENT HOMOLOGY

Bubenik and Kim studied persistent homology for i.i.d. random points on the circle, in the context of a larger discussion about foundations for topological statistics [BK07]. Bobrowski, Kahle, and Skraba studied maximally persistent cycles in V R(n, r) and C(n, r). They defined the persistence of a cycle σ as p(σ) = d(σ)/b(σ), and found a law of the iterated logarithm for maximal persistence.

OTHER RESOURCES

For an earlier survey of Erdős-Rényi based models with a focus on the cohomology-vanishing phase transition, see also [Kah14b]. For a more comprehensive overview of random geometric complexes, see [BK14]. Several other models of random topological space have been studied. Ollivier's survey [Oll05] provides a comprehensive introduction to random groups, especially to Gromov's density random groups and the triangular model. Dunfield and Thurston introduced a new model of random 3-manifold [DT06] which has been well studied since then.
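Looking back at Section 23.4, the Euler-characteristic heuristic for Betti numbers of the clique complex X(n, p) is easy to compute directly. The sketch below is illustrative only: the expected face count E[f_i] = C(n, i+1) p^{C(i+1, 2)} (an i-face of the clique complex is an (i+1)-clique of G(n, p)) and the choice n = 25 are filled in here from the discussion above, not quoted verbatim from the chapter.

```python
from math import comb

def expected_faces(n, p, i):
    # an i-dimensional face of the clique complex is an (i+1)-clique of G(n, p)
    return comb(n, i + 1) * p ** comb(i + 1, 2)

def betti_heuristic(n, p, max_dim=10):
    # alternating sum of expected face numbers; under the simplifying assumption
    # that a single Betti number dominates, its absolute value approximates it
    chi = sum((-1) ** i * expected_faces(n, p, i) for i in range(max_dim + 1))
    return abs(chi)

n = 25
for p in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6):
    print(f"p = {p:.1f}: heuristic dominant Betti number ~ {betti_heuristic(n, p):.1f}")
```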
2016-07-24T17:15:47.000Z
2016-07-24T00:00:00.000
{ "year": 2016, "sha1": "aa36e0fa8febf87a132db3e9d57b18f34997a0f6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "aa36e0fa8febf87a132db3e9d57b18f34997a0f6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
67780424
pes2o/s2orc
v3-fos-license
Optimal rates of decay for operator semigroups on Hilbert spaces We investigate rates of decay for $C_0$-semigroups on Hilbert spaces under assumptions on the resolvent growth of the semigroup generator. Our main results show that one obtains the best possible estimate on the rate of decay, that is to say an upper bound which is also known to be a lower bound, under a comparatively mild assumption on the growth behaviour. This extends several statements obtained by Batty, Chill and Tomilov (J. Eur. Math. Soc., vol. 18(4), pp. 853-929, 2016). In fact, for a large class of semigroups our condition is not only sufficient but also necessary for this optimal estimate to hold. Even without this assumption we obtain a new quantified asymptotic result which in many cases of interest gives a sharper estimate for the rate of decay than was previously available, and for semigroups of normal operators we are able to describe the asymptotic behaviour exactly. We illustrate the strength of our theoretical results by using them to obtain sharp estimates on the rate of energy decay for a wave equation subject to viscoelastic damping at the boundary. Introduction Motivated by applications to partial differential equations, and in particular to the study of energy decay in damped wave equations, there has been a considerable amount of interest over the last decade in obtaining sharp estimates for the asymptotic behaviour of C 0 -semigroups. Given a complex Banach space X, consider the abstract Cauchy problem (1.1) ż(t) = Az(t), t ≥ 0, where A is a closed and densely defined operator on X and x ∈ X is the initial data. Let us suppose that (1.1) is well-posed in the sense that A is the infinitesimal generator of a C 0 -semigroup (T (t)) t≥0 on X, and let us assume that the semigroup (T (t)) t≥0 is bounded, which is to say that sup t≥0 T (t) < ∞. Then the unique solution z : R + → X of (1.1) in the mild sense is given by z(t) = T (t)x, t ≥ 0, and z solves (1.1) in the classical sense if and only if x lies in the domain of A. In applications the norm of X often has a useful physical interpretation, for instance as an energy. Since (T (t)) t≥0 is assumed to be bounded the spectrum of A necessarily lies in the closed left-half plane, and in many applications it is even contained in the open left-half plane. In this case A is invertible and its domain coincides with the range of A −1 , so in order to obtain (uniform) rates of decay for classical solutions one is led to investigate the quantitative behaviour of the operator norm T (t)A −1 as t → ∞. In recent years, one of the most important activities in the asymptotic theory of C 0 -semigroups has been to obtain good estimates for the rate at which this quantity decays assuming one has knowledge of how the resolvent operator R(is, A) = (isI − A) −1 , s ∈ R, behaves along the imaginary axis. The underlying motivation here is that in typical applications estimates for the norm of the resolvent are more or less readily available whereas information on the semigroup itself is hard to come by. Let M (s) = sup |r|≤s R(is, A) , s ≥ 0, and suppose that M (s) → ∞ as s → ∞. It was shown in [7] that for some constants C, c > 0 and all sufficiently large values of t, where M −1 is any right-inverse of M and M log is a modified version of the function M which grows faster than M itself by a logarithmic correction factor. 
For instance, if M grows like s α as s → ∞ for some α > 0 then (1.2) becomes and the authors of [7] conjectured that in this case "the logarithmic correction may be dropped, or at least replaced by a smaller rectification, in the case of Hilbert space, but cannot be forgotten in general Banach spaces." Both parts of this conjecture were proved to be correct in the highly influential paper [12], whose authors showed that the upper bound in (1.3) cannot be improved if no restrictions are imposed on the Banach space X, whereas if X is assumed to be a Hilbert space then the logarithm in (1.3) may be dropped completely. The latter result has been applied extensively in the recent literature on energy decay for damped wave equations and other concrete partial differential equations; see for instance [1, 2, 4, 8, 13, 14, 17-19, 22-24, 26, 31, 33] and also [6,Section 1]. If M is no longer assumed to grow polynomially then it is not difficult to see that one cannot always expect the lower bound in (1.2) to coincide with the actual rate of decay of T (t)A −1 as t → ∞, even when X is a Hilbert space. It is natural to ask, therefore, for which functions M beyond polynomials is it possible, at least in the Hilbert space setting, to replace M −1 log by M −1 in (1.2). This question was first addressed in [6], where it is shown that for certain so-called regularly varying functions, which in a sense are close to growing polynomially, this is indeed possible. The proof of this result given in [6] relies on delicate results from functional calculus theory, and in fact the authors of [6] do not obtain the improved estimate for all regularly varying functions M but only for a certain subclass. They also show that for normal semigroups one obtains the sharper upper bound if and only if M , in the terminology of this paper, has positive increase, which is a strictly weaker condition than regularly varying growth. The purpose of this paper is to extend the main results of [6] by showing that even for general bounded semigroups one may in fact replace M −1 log by M −1 in (1.2) for all functions M which have positive increase. Since for normal semigroups this condition is not only sufficient but also necessary for the sharper estimate to hold, ours is in a sense the best possible result of this kind. We furthermore investigate rates of decay under milder assumptions on the resolvent growth, and in particular we are able to give an exact description of the rate of decay under arbitrary resolvent growth in the case of normal semigroups. We summarise several of our main results as follows. Theorem 1.1. Let X be a complex Hilbert space and let A be the generator of a bounded C 0 -semigroup (T (t)) t≥0 on X. Suppose that σ(A) ∩ iR = ∅ and let M : R + → (0, ∞) be defined by M (s) = sup |r|≤s R(ir, A) , s ≥ 0. If M has positive increase, then there exist constants C, c > 0 such that for all sufficiently large values of t. Moreover, if (T (t)) t≥0 is a semigroup of normal operators, then the upper bound in (1.4) holds if and only if M has positive increase, and in fact whenever M is unbounded and ε ∈ (0, 1) The general approach we adopt in obtaining these results is inspired by the proof of [6,Theorem 4.7] but is nevertheless different in spirit from the approach taken in [6]. In particular, we do not rely on any intricate results from functional calculus theory. Instead we combine the basic idea found in the proof of [6,Theorem 4.7] with techniques recently developed in [15]. 
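For orientation, it may help to record explicitly what the discussion above amounts to in the polynomially growing case. The display below summarises the cited Borichev–Tomilov result together with the general lower bound; it is not a quotation of (1.3) or (1.4) from the paper, and constants are suppressed.

```latex
% Polynomial special case, assuming M is the resolvent growth function of a bounded
% C_0-semigroup on a Hilbert space with \sigma(A) \cap i\mathbb{R} = \emptyset:
% if M(s) \asymp s^\alpha as s \to \infty for some \alpha > 0, then M^{-1}(t) \asymp t^{1/\alpha} and
\[
  \|T(t)A^{-1}\| \;\asymp\; \frac{1}{M^{-1}(t)} \;\asymp\; t^{-1/\alpha},
  \qquad t \to \infty,
\]
% so the logarithmic correction needed on general Banach spaces can be dropped.
```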
Another important influence on the ideas underlying our approach, although perhaps a less conspicuous one given our focus on the Hilbert space setting, comes from the theory of operator-valued (L p , L q ) Fourier multipliers and its use in the asymptotic theory of C 0 -semigroups, as developed in [27][28][29]. We hope in future work to explore this aspect more fully, also for non-Hilbertian Banach spaces. Our paper is structured as follows. First, in Section 2, we briefly introduce the requisite background material on regularly varying functions and functions having positive increase. Section 3 is the heart of this paper. Here we prove one of our main results, Theorem 3.2, which contains the first part of Theorem 1.1 above, namely that for bounded C 0 -semigroups on Hilbert spaces the rate of decay of T (t)A −1 as t → ∞ can be estimated from above in terms of M −1 whenever M has positive increase. Following [6,15,25,30] we also consider the cases where the resolvent operator is allowed to have a singularity not just at infinity but instead at zero, or indeed both at zero and at infinity. In the latter case our result, Theorem 3.9, is the first in the literature yielding the M −1 -estimate for non-polynomially growing resolvents. In each of the three cases we moreover show, as indicated in Theorem 1.1, that the assumption of positive increase is not only sufficient but also necessary for this sharper estimate to hold, at least in many naturally arising cases and in particular for semigroups of normal operators. In Section 4 we relax the condition of positive increase. First, in Theorem 4.1, we obtain a new quantified asymptotic result for general bounded C 0 -semigroups on Hilbert spaces, which in many cases improves on the known decay estimates. Then, in Theorem 4.4, we prove the last part of Theorem 1.1 above, by determining the precise rate of decay for normal semigroups. Finally, in Section 5 we consider a one-dimensional wave equation with viscoelastic damping at the boundary and, in particular, we provide a simple criterion for determining whether the rate of energy decay can be estimated from above and below by the same function, namely the reciprocal of the so-called acoustic impedance of the system. We also show, by means of an explicit construction, that this model is rich enough to generate many examples which are covered by our results but not by those found in the previous literature. Our notation is standard. In particular, we let ) and g(t) = O(f (t)) as t → ∞. The functions f and g are said to be asymptotically equivalent if f (t)/g(t) → 1 as t → ∞, and in this case we write f (t) ∼ g(t), t → ∞. Given non-negative quantities x and y we occasionally write x y if x ≤ Cy for some constant C > 0. Given a Banach space X we write B(X) for the algebra of bounded linear operators on X. Throughout the remainder of this paper, all Banach spaces are implicitly assumed to be complex. If A is a closed linear operator on X we write σ(A) for the spectrum of A, ρ(A) = C \ σ(A) for its resolvent set, and given z ∈ ρ(A) we let R(z, A) = (zI − A) −1 denote the resolvent operator. We write F for the Fourier transform given, for a vector-valued function h ∈ L 1 (R, X), by and we define the Laplace transform of a function h ∈ L ∞ (R + , X) by Special classes of functions In this section we introduce some useful classes of real-valued functions; further information may be found in [11,Chapters 1 and 2], but see also [6,Section 2]. 
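Before turning to these classes, we record the transform conventions from the notation paragraph above, whose displayed formulas are missing here; they are presumably the standard ones, up to the placement of normalising constants:
\[
(\mathcal{F}h)(s) = \int_{\mathbb{R}} e^{-ist} h(t)\,dt, \quad s \in \mathbb{R}, \qquad h \in L^1(\mathbb{R},X),
\]
and
\[
\widehat{h}(z) = \int_0^\infty e^{-zt} h(t)\,dt, \quad \operatorname{Re} z > 0, \qquad h \in L^\infty(\mathbb{R}_+,X).
\]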
Given a ≥ 0 and α ∈ R, we say that a measurable function It can be shown that the mere existence of the limit in where p : [a, ∞) → R is a measurable function such that s → p(s)/s is locally integrable on [a, ∞) and p(s) → 0 as s → ∞, and q : [a, ∞) → (0, ∞) is a measurable function such that q(s) → q 0 as s → ∞ for some q 0 > 0. Using this representation it can be shown that every regularly varying function of strictly positive (respectively, negative) index is asymptotically equivalent to an eventually increasing (respectively, decreasing) regularly varying function of the same index, and one can even ensure that this function is smooth. Moreover, if one is interested in regularly varying functions only up to asymptotic equivalence then one may always take the function q in the representation (2.2) to be constant. Given a ≥ 0 and a measurable function M : [a, ∞) → (0, ∞) we say that M has positive increase if there exist strictly positive constants α > 0, c ∈ (0, 1] and s 0 ≥ a such that Conversely, if (2.5) holds for some strictly positive c = 1, then M has positive increase and in particular (2.5) holds for all c > 0. Proof. If M has positive increase then there exist strictly positive constants α > 0, c 0 ∈ (0, 1] and s 0 ≥ a such that Now (2.5) follows easily using the fact that M −1 is non-decreasing. Conversely, suppose that (2.5) holds for some strictly positive c = 1. Let us first assume that c > 1. Then there exist λ > 1 and so by Lemma 2.1 the function M has positive increase. A similar argument applies if c ∈ (0, 1), and the final statement follows from the first part. 3. Optimal decay for resolvent growth with positive increase 3.1. Singularity at infinity. The following result is proved in [7]. Then there exists a constant c > 0 such that The spectral assumption is natural here, since by [ The same result implies that if in the setting of Theorem 3.1 we let M (s) = sup |r|≤s R(ir, A) , s ≥ 0, and assume that M (s) → ∞ as s → ∞, then there exist constants C, c > 0 such that for all sufficiently large values of t. In general, this lower bound decays strictly faster than the upper bound in (3.1). In the important special case where M (s) = Cs α , s ≥ 1, for some constants C, α > 0 it was shown in [12] that (3.1) is sharp in general but that one may replace M −1 log by M −1 if X is a Hilbert space; see also [5]. This result was extended in [6,Corollary 5.7] to the case of regularly varying functions M satisfying M (s) = s α /ℓ(s), s ≥ 1, for some α > 0 and some non-decreasing slowly varying function ℓ : [1, ∞) → (0, ∞) having a certain symmetry property. Our first main result extends this conclusion to the class of all functions of positive increase. It is worth noting, however, that for functions M which grow significantly faster than polynomially the asymptotic behaviour of M −1 is the same as that of M −1 log . Thus Theorem 3.1 is already optimal in these cases, and our results improve Theorem 3.1 only if the growth of M is sufficiently close to being polynomial. The proof combines ideas taken from [15] and [6,Theorem 4.7] and is inspired by techniques from operator-valued Fourier multiplier theory; see [27][28][29]. Theorem 3.2. Let X be a Hilbert space and let A be the generator of a bounded C 0 -semigroup (T (t)) t≥0 on X. Suppose that σ(A) ∩ iR = ∅ and that M : R + → (0, ∞) is a continuous non-decreasing function of positive increase such that sup |r|≤s R(ir, A) ≤ M (s), s ≥ 0. Then Note also that R φ R (t) dt = 1 for all R > 0. 
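Several displays in the preceding passage are not legible in this copy; for convenience we record their presumable content. On the usual definition, M has positive increase if
\[
\frac{M(\lambda s)}{M(s)} \;\ge\; c\,\lambda^{\alpha} \qquad \text{for all } \lambda \ge 1 \text{ and } s \ge s_0,
\]
and, consistently with Theorem 1.1 quoted in the introduction, the conclusions of Theorems 3.1 and 3.2 presumably state that
\[
\|T(t)A^{-1}\| = O\Bigl(\frac{1}{M_{\log}^{-1}(ct)}\Bigr)
\qquad\text{and}\qquad
\|T(t)A^{-1}\| = O\Bigl(\frac{1}{M^{-1}(ct)}\Bigr), \qquad t \to \infty,
\]
respectively, for some constant c > 0. The labels and constants here are reconstructions.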
Now temporarily fix t > 0 and, given n ∈ Z + , let g n : R → R be defined by In particular, g 0 = χ [0,t] . Let x ∈ X and n ∈ N be fixed for now. We define the map h n : R → X by h n (s) = g n (s)T (s)A −1 x, s ∈ R, where the semigroup is extended by zero to the whole real line. Then Our strategy is to split this integral by writing where δ denotes the Dirac mass at zero, and to estimate the resulting two integrals separately by making suitable choices of R > 0 and of n ∈ N. We begin by introducing the auxiliary function Φ : so that Φ ′ = φ − δ in the sense of distributions. Using the fact that Φ, being a primitive of a Schwartz function, decays rapidly at infinity and that R φ R (s) ds = 1, a simple calculation using integration by parts yields Now the distributional derivative of h n is given by where the implicit constant is independent of R, n, t and x. We now inductively define functions Φ k : R → R, k ∈ N, by setting Φ 1 = |Φ| and Then, for each k ∈ N, Φ k vanishes rapidly at infinity and we have Hence by a simple inductive argument using integration by parts we see that, for m ∈ Z + and s ≥ 0, Applying this with m = n − 1 and m = n in (3.7) we find after a simple calculation that n + 1 where the implicit constant is still independent of R, n, t and x and where, for m ∈ Z + , We now turn to the remaining term in the splitting. Note first that by Hölder's inequality We now estimate the L 2 -norm of φ R * h n . Given α > 0, define the function h n,α ∈ L 1 (R) by h n,α (s) = e −αs h n (s), s ∈ R. Then h n,α (s) = n!(T * n α * h 0,α )(s), where T α (s) = e −αs T (s), s ∈ R, again after extending the semigroup by zero to the whole real line. Hence and by the dominated convergence theorem, given any Schwartz function Since σ(A) ∩ iR = ∅ the resolvent of A extends holomorphically across the imaginary axis and hence is uniformly bounded in an open neighbourhood of i supp ψ R . It follows from (3.11) and another application of the dominated convergence theorem that where m n (s) = n!R(is, A) n A −1 and h(s) = g 0 (s)T (s)x, s ∈ R. A straightforward estimate using Plancherel's theorem now gives By rescaling M if necessary we may assume that M (s) ≥ 1 for all s ≥ 0, and then where s 0 > 0 is fixed but arbitrary. Now since M is non-decreasing and has positive increase there exist constants α > 0 and c ∈ (0, 1] such that We now make a specific choice of n by setting n = ⌈α −1 ⌉. A simple calculation then gives Combining the above estimates in (3.10) shows that for R ≥ s 0 we have where the implicit constant is independent of R, t and x. Using (3.12) in (3.5) along with our earlier estimate gives for all R ≥ s 0 and t > 0, where the implicit constant is independent of both R and t. In fact, the implicit constant would also be independent of n if it were still free to vary, and this will become important in the proof of Theorem 4.1 below. We now set R = M −1 (ct) for t ≥ c −1 M (s 0 ). Then the first two terms in (3.13) are uniformly bounded because the functions P n , P n−1 defined in (3.9) are non-increasing, and the final term is constant by our choice of R. Hence the result follows from Proposition 2.2. Remark 3.3. The techniques used in the above proof can be adapted and combined with ideas from [15] to give an alternative proof of Theorem 3.1. In this case the number n is allowed to grow arbitrarily large and one needs to control the norms Φ k L 1 , k ∈ N, by appealing to the Denjoy-Carleman theorem [21, Theorem 1.3.8]; see the proof of Theorem 4.1 below. 
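The Hilbert-space ingredient used in the preceding proof, and alluded to in this remark, is in generic form the L²-boundedness of operator-valued Fourier multipliers with bounded symbol, a direct consequence of Plancherel's theorem for X-valued functions:
\[
\bigl\|\mathcal{F}^{-1}\bigl(m\,\mathcal{F}h\bigr)\bigr\|_{L^2(\mathbb{R};X)} \;\le\; \Bigl(\sup_{s\in\mathbb{R}}\|m(s)\|_{B(X)}\Bigr)\,\|h\|_{L^2(\mathbb{R};X)}, \qquad h \in L^2(\mathbb{R};X),
\]
valid for any essentially bounded, measurable symbol m : R → B(X) whenever X is a Hilbert space. This display is offered as background on why the Hilbert-space assumption matters; it is not itself a step of the argument above.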
Note also that in the general Banach space setting Plancherel's theorem has to be replaced by cruder ways of estimating the norms of Fourier transforms. The conclusion of Theorem 3.2 becomes false if we drop the assumption of positive increase. In fact, it is easy to construct examples of bounded normal semigroups (T (t)) t≥0 whose generator A satisfies σ(A) ∩ iR = ∅ and R(is, A) ≤ 1 + log |s|, |s| ≥ 1, but for which (3.14) One crucial feature in this example is that M is unbounded even though dist(is, σ(A)) ≥ 1 for all s ∈ R. As we shall see now, the situation changes if we restrict attention to cases in which the resolvent growth is controlled by the distance to the spectrum. Indeed, the following result is similar to [6, Proposition 5.1] and shows for a large class of semigroups, including in particular all normal semigroups, that the assumption of positive increase is in fact necessary for (3.3) to hold, so Theorem 3.1 is optimal in this sense. Note that the assumptions made in our result appear to be weaker, and are certainly easier to verify, than those of [6, Proposition 5.1]. We shall take advantage of this in Section 5 below. 3.2. Singularity at zero. Let X be a Banach space and let A be the generator of a C 0 -semigroup (T (t)) t≥0 on X. Recall from [9] the definition of the non-analytic growth bound ζ(T ) of (T (t)) t≥0 , namely where H(B(X)) denotes the set of all maps S : (0, ∞) → B(X) which have an exponentially bounded analytic extension to some sector containing (0, ∞). It follows from properties of the Laplace transform of analytic functions that if ζ(T ) < 0, then σ(A) ∩ iR is a compact set and sup |s|≥s 0 R(is, A) < ∞ whenever s 0 ≥ 0 is sufficiently large. For bounded C 0 -semigroups on Hilbert spaces these conditions are even equivalent to having ζ(T ) < 0; see [9] for a proof of this fact using the theory of Fourier multipliers. The following result is proved in [15]. It is shown in [6, Theorem 6.10] that if T (t)AR(1, A) → 0 as t → ∞ then necessarily σ(A) ∩ iR ⊆ {0} and sup |s|≥1 R(is, A) < ∞, so the spectral assumption and the condition on the non-analytic growth bound made in Theorem 3.5 are natural, especially when X is a Hilbert space; see also [30,Section 4.2]. Moreover, by [6, Corollary 6.11] we see that in the setting of Theorem 3.5 for the choice of M : [1, ∞) → (0, ∞) given by M (s) = sup s −1 ≤|r|≤1 R(ir, A) , s ≥ 1, there exist constants C, c > 0 such that for all sufficiently large t, at least provided R(is, A) grows faster than |s| −1 as |s| → 0. It is further shown in [6] that if X is a Hilbert space then one may replace M −1 log by M −1 in (3.18) when M (s) = Cs α , s ≥ 1, for some constants C > 0, α ≥ 1, and also if M is a regularly varying function of positive index satisfying certain supplementary conditions. Our next result is an analogue of Theorem 3.2 and extends these statements considerably. Theorem 3.6. Let X be a Hilbert space and let A be the generator of a bounded C 0 -semigroup (T (t)) t≥0 on X. Suppose that σ(A) ∩ iR = {0}, that sup |s|≥1 R(is, A) < ∞ and that M : [1, ∞) → (0, ∞) is a continuous nondecreasing function of positive increase such that sup s −1 ≤|r|≤1 R(ir, A) ≤ M (s), s ≥ 1. Then Proof. The proof is similar to that of Theorem 3.2. Let ψ : R → C be a Schwartz function such that ψ L ∞ = 1 and ψ(s) = 1 for |s| ≤ 1, and let φ = F −1 ψ. 
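The conclusion of Theorem 3.6 is not legible just before the proof. By analogy with Theorem 3.2, and with the bound (3.18) it is meant to sharpen, it presumably asserts that
\[
\|T(t)AR(1,A)\| \;=\; O\Bigl(\frac{1}{M^{-1}(ct)}\Bigr), \qquad t \to \infty,
\]
for some constant c > 0; this reading is a reconstruction rather than a quotation.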
Temporarily fix x ∈ X, n ∈ N and t > 0, and define the map h n : R → X by h n (s) = g n (s)T (s)AR (1, A)x, s ∈ R, where the semigroup is extended by zero to the whole real line and where g n is as defined in (3.4). Moreover, let H n : R → X be given by In particular H n (s) = 0 for s < 0, and using integration by parts we obtain where K = sup t≥0 T (t) . For r ∈ (0, 1] we let φ r (t) = rφ(rt), t ∈ R, and ψ r = F(φ r ), as in the proof of Theorem 3.2. Integration by parts gives and hence As in the proof of Theorem 3.2 we now introduce functions Φ k : R → R, k ∈ N, defined as in (3.8) but with Φ 1 = |φ ′ |. This leads to the estimate where the implicit constant is independent of r, n, t and x, and where P n is as defined in (3.9). By our assumption that sup |s|≥1 R(is, A) < ∞ and a standard Neumann series argument there exists ε > 0 such that R(z, A) is uniformly bounded over all z ∈ C satisfying dist(z, i supp(1 − ψ r )) < ε. Hence as in the proof of Theorem 3.2 we have where m n (s) = n!AR(is, A) n , s ∈ R \ {0}, and h(s) = g 0 (s)T (s)R(1, A)x, s ∈ R. Using the fact that M (s) ≥ s, s ≥ 1, it is straightforward to show that AR(is, A) n ≤ 2|s|M (|s| −1 ) n , 0 < |s| ≤ 1. By rescaling M if necessary we may assume that R(is, A) ≤ M (1), |s| ≥ 1. Since M is assumed to have positive increase it follows as before that for an appropriate choice of n we have where c > 0 is a constant. We deduce, upon applying Plancherel's theorem and Hölder's inequality, that for sufficiently small values of r we have where the implicit constant is independent of r, t and x. Combining this with (3.20) as in the proof of Theorem 3.2 gives where the implicit constant is independent of both r and t. We now set r = M −1 (ct) −1 for sufficiently large t. Then in particular rt ≥ c −1 , and since P n is non-increasing the result follows from Proposition 2.2. As in Section 3.1 we can show that the condition of positive increase is not only sufficient but even necessary for the conclusion of Theorem 3.6 to hold, at least for a large class of semigroups. We omit the proof, which is similar to that of Theorem 3.4; see also [6, Proposition 6.13]. Theorem 3.7. Let X be a Banach space and let A be the generator of a for some c > 0 then M has positive increase. 3.3. Singularities at zero and infinity. We now consider the remaining case where the resolvent operator has singularities at both zero and infinity. The following result is proved in [25]. Then there exists a constant c > 0 such that The spectral assumption is again natural here, since by [6, Corollary 6.2] we have σ(A) ∩ iR ⊆ {0} whenever T (t)AR(1, A) 2 → 0 as t → ∞. Note also that if X is a Hilbert space and the function M ∞ is bounded then ζ(T ) < 0 and hence the conclusion of Theorem 3.8 follows from Theorem 3.5 in this case. It is shown in [6,Corollary 8.2] that in the setting of Theorem 3.8 for the smallest possible choices of M 0 , and M ∞ , defined as in Sections 3.1 and 3.2, there exist constants C, c > 0 such that for all sufficiently large t, at least provided R(is, A) grows faster than |s| −1 as |s| → 0. It is further shown in [6,Theorem 8.4] that if X is a Hilbert space then one may replace M −1 log by M −1 in (3.21) when M 0 (s) = Cs α and M ∞ (s) = cs β , s ≥ 1, for some constants C, c, β > 0 and α ≥ 1. However, the techniques used in [6] do not allow the authors to obtain similar results for any broader class of functions. Our next result shows that one may replace Proof. 
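As an aside, the estimate invoked in the proof of Theorem 3.6 above, namely that the norm of AR(is,A)^n is at most 2|s| M(|s|^{-1})^n for 0 < |s| ≤ 1, can be checked from the identity AR(z,A) = zR(z,A) - I using only facts stated in the text:
\[
\|AR(is,A)\| \;\le\; |s|\,\|R(is,A)\| + 1 \;\le\; |s|\,M(|s|^{-1}) + 1 \;\le\; 2\,|s|\,M(|s|^{-1}),
\]
since M(u) ≥ u for u ≥ 1 gives |s| M(|s|^{-1}) ≥ 1, and therefore
\[
\|AR(is,A)^n\| \;\le\; \|AR(is,A)\|\,\|R(is,A)\|^{\,n-1} \;\le\; 2\,|s|\,M(|s|^{-1})^{\,n}, \qquad 0 < |s| \le 1.
\]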
The proof follows the same pattern as those of Theorems 3.2 and 3.6, and indeed combines ideas from both proofs. This time the splitting arises from the decomposition where r ∈ (0, 1], R > 0 and the notation is as before, with φ being the same as in the proof of Theorem 3.2 and ϕ being the function arising in the proof of Theorem 3.6. The integrals corresponding to the first two terms of the splitting can now be dealt with as in the proof of Theorem 3.2, the terms arising from the second two as in the proof of Theorem 3.6. Once again it can be shown that the condition of positive increase is necessary for the conclusion of Theorem 3.9 to hold, at least for a large class of semigroups. The proof involves no new ideas, so as in the case of Theorem 3.7 we omit it. for some c > 0 then M has positive increase. Decay for resolvent growth with quasi-positive increase The purpose of this section is to investigate rates of decay in the case of resolvent growth which does not have positive increase. Our main interest is in the case where the resolvent growth is sub-polynomial. Since this situation cannot arise when there is a singularity at zero, it is natural to consider only the case of a singularity at infinity. We begin by extending the terminology introduced in Section 2. Given measurable functions M, N : R + → (0, ∞) with N non-decreasing we say that M has quasi-positive increase (with auxiliary function N ) if there exist constants c ∈ (0, 1] and s 0 > 0 such that In particular, a measurable function M : R + → (0, ∞) has positive increase if and only if it has quasi-positive increase and admits a bounded auxiliary function. Suppose, for instance, that M : R + → (0, ∞) is a slowly varying function which admits a representation as in (2.2) for some a > 0, with p positive, continuous and non-increasing and with q constant. We shall refer to such slowly varying functions as being normalised. It is then straightforward to verify that (4.1) is satisfied for the function N (s) = p(s) −1 , s ≥ s 0 , if we choose c = 1 and any s 0 ≥ a. Furthermore, any non-decreasing function M : R + → (0, ∞) has quasi-positive increase with auxiliary function N (s) = log(2 + s), s ≥ 0. In this case (4.1) holds for c = e −1 and s 0 = 1. Recall that Theorem 3.2 becomes false if we drop the assumption of positive increase. The following result is a generalisation of Theorem 3.2 to the case where the resolvent growth has quasi-positive increase. Here and in the remainder of this section, given two functions M : R + → (0, ∞) and K : [a, ∞) → (0, ∞) for some a ≥ 0 we shall let M K : [a, ∞) → (0, ∞) denote the function defined by M K (s) = M (s)K(s), s ≥ a, even though strictly speaking this is inconsistent with the notation M log used elsewhere in the paper. In particular, given any ε ∈ (0, 1) we have Proof. If N is bounded then M has positive increase and the result follows from Theorem 3.2, so we may assume that N (s) → ∞ as s → ∞. Let us first prove (4.2). Note that by Stirling's formula (4.4) (n + 1)! ≍ n n e n 1 + 3 log n 2n n , n → ∞. We use the same notation as in the proof of Theorem 3.2 and proceed in exactly the same way except that we now allow our choice of n to be depend on R. Indeed, if we choose n = ⌈N (R)⌉ then (3.13) and (4.4) imply that for R sufficiently large and t > 0 we have where the implicit constant is independent of both R and t. We now set R = M −1 K (cet) for t sufficiently large. Thus (4.2) follows provided the first two terms inside the brackets remain uniformly bounded as t → ∞. 
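The asymptotic relation (4.4) quoted above is garbled in this copy. It is presumably a rewriting of Stirling's formula, which in its standard form gives
\[
(n+1)! \;\sim\; \sqrt{2\pi(n+1)}\,\Bigl(\frac{n+1}{e}\Bigr)^{n+1}, \qquad n \to \infty,
\]
and hence in particular (n+1)! ≍ n^{3/2}(n/e)^n as n → ∞; the exact formulation used in (4.4) may differ.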
By the Denjoy-Carleman theorem [21, Theorem 1.3.8] we may assume that the function ψ in addition to the properties already mentioned satisfies Integrating by parts we then find that |φ(s)| C k (1 + |s|) −k for all k ∈ Z + and s ∈ R, and hence Φ k L 1 C k+2 for all k ∈ Z + . Using (3.9) and estimating crudely we thus find, after adjusting the value of the constant C, that for t ≥ 1 we have where the implicit constant is independent of t and hence of R. Since N grows at most logarithmically, we deduce that P n (Rt) is uniformly bounded as t → ∞. Moreover, since N (R) t we see similarly that the second term in (4.5) remains bounded as t grows large. This completes the proof of (4.2). In order to obtain (4.3) it suffices to observe that given any ε ∈ (0, 1) we have M K (s) ≤ (1 − ε) −1 M N (s) for all sufficiently large values of s. In general, one would expect this estimate to be significantly better than (4.3) but perhaps not quite as sharp as (4.2). As we shall see shortly, in some important cases (4.2) and (4.6) lead to the same rate of decay. The assumptions made in Theorem 4.1 are natural. Indeed, since M is assumed to be non-decreasing the growth assumption on N in view of the comments made at the beginning of this section involves no essential loss of generality. Moreover, if N were allowed to grow faster than logarithmically then M N and M K would in general grow faster than the function M log appearing in Theorem 3.1, so (3.1) would give a better estimate than Theorem 4.1. Finally, the assumption that M is unbounded, which in Section 3 was implicit in the assumption of positive increase, is also natural here. Indeed, if (T (t)) t≥0 is a bounded C 0 -semigroup on a Hilbert space whose generator has uniformly bounded resolvent along the imaginary axis then (T (t)) t≥0 is in fact uniformly exponentially stable by the Gearhart-Prüss theorem [3, Theorem 5. .7) it is easy to see that any pointwise minimal auxiliary function must eventually be proportional to the logarithm function, and then straightforward optimisation arguments lead to the choice N (s) = (1 + α) −1 log s for sufficiently large values of s, which satisfies (4.1) for c = e −1 (α −1 + 1) α . Hence given ε ∈ (0, 1) it follows from (4.3) that where c α = α −α (1 + α) 1+α . Using either (4.2) or (4.6) we obtain the significantly sharper estimate Given a Banach space X we say that a C 0 -semigroup (T (t)) t≥0 on X with generator A on is a quasi-multiplication semigroup if for every λ ∈ ρ(A). This terminology is taken from [6], although the definition given there is slightly more restrictive. It follows from the spectral theorem that any C 0 -semigroup of normal operators is a quasi-multiplication semigroup, but the class also contains multiplication semigroups on non-Hilbertian function spaces. Our next result describes the exact rate of decay for quasi-multiplication semigroups with arbitrary resolvent growth. The proof is an extension of the ideas used in Theorem 3.4; see also [ Proof. Since (T (t)) t≥0 is a quasi-multiplication semigroup we have Since M is unbounded we may assume, by choosing t to be sufficiently large, that the supremum is unaffected by restricting consideration to points z ∈ σ(A) satisfying | Im z| ≥ 1. Thus Now let ε ∈ (0, 1) and consider the function K : R + → (0, ∞) defined by Note that, by (4.11), the function K is continuous and strictly increasing. 
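The defining condition for a quasi-multiplication semigroup is not legible above. By analogy with [6], and consistently with the normal-operator case, it is presumably the resolvent-norm identity
\[
\|R(\lambda,A)\| \;=\; \frac{1}{\operatorname{dist}(\lambda,\sigma(A))}, \qquad \lambda \in \rho(A),
\]
which holds for generators of normal semigroups by the spectral theorem.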
Arguing as in the proof of Theorem 3.4 we see that for sufficiently large values of s we may find α+iβ ∈ σ(A) such that −α ≤ M (s) −1 and |α+iβ| ≤ (1 − ε) −1 s. It then follows as before from (3.17) with N −1 replaced by K, and with the choices c = δ = 1 and C = 1 − ε, that there exists a constant s 0 > 0 such that K −1 (λs) ≥ M (s) log λ for all λ ≥ 1 and all s ≥ s 0 . Thus . This completes the proof. If we allow s(A) < 0 in Theorem 4.4 then it is still true that , as can be seen from a straightforward extension of the first part of the proof. However, in this case (4.10) no longer holds in general. For instance, if we let A be the generator of a quasi-multiplication semigroup such that −α ∈ σ(A) ⊆ (−∞, −α] for some α > 0, then T (t)A −1 = α −1 e −αt but M −1 max (t) −1 = e −αt , t ≥ 0, so (4.10) is violated unless α = 1. We leave open whether (4.13) holds for more general bounded C 0 -semigroups (T (t)) t≥0 on a Hilbert space with generator A satisfying σ(A) ∩ iR = ∅. Note that one does not in general have T (t)A −1 ≍ M −1 max (t) −1 , t → ∞, as can be seen by letting A be a 2 × 2 Jordan block. We conclude this section by revisiting the special cases considered in Example 4.3. T , t → ∞. We do not know whether the polynomial factor is really needed or whether perhaps the sharper estimate (4.13) holds in this case. gives the best possible estimate up to the arbitrarily small loss in the constant multiplying t, and one may apply (4.2) or (4.6) to get a sharper estimate. However, for α ∈ [1/2, 1) the function M −1 max (t) grows strictly faster than M −1 N (et) as t → ∞, so (4.3) would not give the best possible rate of decay even if we were allowed to set ε = 0. In this example it is possible to push our approach slightly further by allowing the choice of the auxiliary function N and of the constant c in (4.1) to depend on s, but we do not pursue this idea here. Application to a wave equation with viscoelastic damping In this section we apply the theoretical results of Section 3 to obtain sharp estimates on the rate of energy decay for solutions of a wave equation subject to damping at the boundary. Indeed, let us consider the problem Here ∂ n denotes the outward normal derivative in the space variable at the boundary, the convolution is with respect to the time variable and k : R + → R is a completely monotone integrable function, which is to say that there exists a positive Radon measure ν on R + , satisfying ν({0}) = 0 and R + τ −1 dν(τ ) < ∞, such that We extend k to the whole real line by zero, and we assume throughout that k = 0. This system can be viewed as a model of sound propagation under reflection subject to viscoelastic damping at the boundary, and in this case the boundary condition captures memory effects, u t and −∇u are the pressure and velocity of the fluid and Fk, or alternatively the Laplace transform of k, is the acoustic impedance; for further details see [32], where the same model is considered also for higher-dimensional domains. The results in this section are closely related to those obtained independently in [10], where rates of energy decay are investigated for a very similar model. We begin by recasting the problem in the form of an abstract Cauchy problem, (5.3) ż(t) = Az(t), t ≥ 0, where the initial data vector x is an element of some Hilbert space X and represents not only the pressure and velocity of the fluid at time t = 0 but also the fluid pressure at the boundary for all times t < 0. 
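The representation (5.2) of the completely monotone kernel k is not legible above. By Bernstein's theorem it is presumably the Laplace-transform representation
\[
k(t) \;=\; \int_{(0,\infty)} e^{-t\tau}\, d\nu(\tau), \qquad t > 0,
\]
where the stated conditions ν({0}) = 0 and the finiteness of the integral of τ^{-1} against ν ensure, by Tonelli's theorem, that k is finite for t > 0 and integrable on the half-line.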
It is shown in [16] that for suitable choices of A and of the Hilbert space X this abstract Cauchy problem is well-posed and that the C 0 -semigroup (T (t)) t≥0 generated by A is contractive. Moreover, the square of the norm in the Hilbert space X can be interpreted physically as the energy of the system. The following result is proved in [32]. , s ≥ 0. The assumption on ν ensures that 0 ∈ σ(A). If this condition is not satisfied for any ε > 0 then 0 ∈ σ(A) ⊆ C − ∪ {0} and R(is, A) ≍ |s| −1 as |s| → 0; see [32]. Hence the model can also give rise to resolvents which have singularities at both zero and infinity. For simplicity we focus here only on the case where there is no singularity at zero. In view of Theorem 5.1 it is natural to introduce the function M : R + → (0, ∞) defined by , s ≥ 0. We have Re Fk(s) = R + τ τ 2 + s 2 dν(τ ), s ≥ 0, so the function M is well-defined, continuous, non-decreasing and satisfies M (s) → ∞ as s → ∞. We now turn to the study of energy decay for classical solutions of (5.3). By combining Theorem 5.1 with Theorem 3.1 and the subsequent remarks we obtain the following result. Examples given in [32] show that there exist suitable functions k such that Re Fk(s) ≍ s −α , s → ∞, for any α ∈ (0, 1). In this case M (s) ≍ s α as s → ∞ and Theorem 5.4 certainly applies, but the same optimal rate of decay could already have been obtained using [12,Theorem 2.4]. We conclude this paper with a result showing that the function k in our model can be chosen in such a way that Re Fk has the same asymptotic behaviour as 1/M for any given regularly varying function M : R + → (0, ∞) of index strictly between 0 and 2. Such cases are only very partially covered by the results in [6], but fall squarely into the scope of Theorem 5.4 above. Moreover, let ν be the Radon measure on R + with Lebesgue density g. Then R + τ −1 dν(τ ) < ∞, so the function k defined by (5.2) is integrable. A simple application of Fubini's theorem shows that Re Fk(s) =
Conserved Meiotic Machinery in Glomus spp., a Putatively Ancient Asexual Fungal Lineage Arbuscular mycorrhizal fungi (AMF) represent an ecologically important and evolutionarily intriguing group of symbionts of land plants, currently thought to have propagated clonally for over 500 Myr. AMF produce multinucleate spores and may exchange nuclei through anastomosis, but meiosis has never been observed in this group. A provocative alternative for their successful and long asexual evolutionary history is that these organisms may have cryptic sex, allowing them to recombine alleles and compensate for deleterious mutations. This is partly supported by reports of recombination among some of their natural populations. We explored this hypothesis by searching for some of the primary tools for a sustainable sexual cycle—the genes whose products are required for proper completion of meiotic recombination in yeast—in the genomes of four AMF and compared them with homologs of representative ascomycete, basidiomycete, chytridiomycete, and zygomycete fungi. Our investigation used molecular and bioinformatic tools to identify homologs of 51 meiotic genes, including seven meiosis-specific genes and other “core meiotic genes” conserved in the genomes of the AMF Glomus diaphanum (MUCL 43196), Glomus irregulare (DAOM-197198), Glomus clarum (DAOM 234281), and Glomus cerebriforme (DAOM 227022). Homology of AMF meiosis-specific genes was verified by phylogenetic analyses with representative fungi, animals (Mus, Hydra), and a choanoflagellate (Monosiga). Together, these results indicate that these supposedly ancient asexual fungi may be capable of undergoing a conventional meiosis; a hypothesis that is consistent with previous reports of recombination within and across some of their populations. Introduction Meiosis, a hallmark of eukaryotic cells, is necessary for the production of gametes (e.g., spores). It is a major driver of recombination in all eukaryotes, resulting in the shuffling of genomic material between chromosomes. Although predominant throughout eukaryotes (Malik et al. 2008), the advantages versus costs of sexual reproduction and meiosis are still a matter of debate (Ackerman et al. 2010;Archetti 2010). Indeed, although evolutionary theory predicts a rapid extinction of asexual lineages as a consequence of the accumulation of deleterious mutations (Otto and Lenormand 2002;Otto 2009), a number of eukaryotes commonly referred to as ''ancient asexuals'' (Maynard-Smith 1986) have thrived across diverse ecosystems for millions of years without sex. These ancient asexuals include evolutionarily distant groups such as the bdelloid rotifers, the arbuscular mycorrhizal fungi (AMF) and a number of protist lineages (Maynard-Smith 1986;Haig 1993;Judson and Normark 1996;Gordo and Charlesworth 2000;Normark 2003;Schurko et al. 2009), which are all derived from sexual ancestors (Ramesh et al. 2005). In the last decade, research on ancient asexuals has provided new insights into how these organisms coped with the absence of observable sexual cycles. First, the majority of these ''asexuals'' have been found to exhibit genetic recombination consistent with sexual reproduction (Schurko et al. 2009;Heitman 2010). Second, all former putatively asexual ª The Author(s) 2011. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. 
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/ 3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. taxa whose completely sequenced genomes have been surveyed so far-including a number of medically important pathogens (i.e., Giardia intestinalis, Trichomonas vaginalis, Entamoeba histolytica, several microsporidia, Candida spp., Cryptococcus neoformans, Aspergillus spp., etc.)have ''core meiotic gene'' homologs that encode a set of proteins that only function during meiosis in model animals, fungi, and plants studied, and other general DNA repair proteins that are required for proper completion of meiotic recombination in model organisms (Villeneuve and Hillers 2001;Wong et al. 2003;Ramesh et al. 2005;Lee et al. 2008Lee et al. , 2010Malik et al. 2008), consistent with the hypothesis that the core meiotic gene products would also function in meiotic recombination (or a similar recently derived parasexual process) in these organisms. Orthologs of meiosisspecific genes would be under relaxed functional constraint and accumulate deleterious mutations in asexual organisms and could not be detected by comparative genomic approaches (discussed further by Schurko and Logsdon 2008;Schurko et al. 2009 and references therein), whereas general function DNA repair proteins required for meiotic and mitotic recombination would persist (e.g., Rad51,Mlh1,Pms1,Msh2,Msh6,Rad50,Mre11,etc.). Third, sexual reproduction has recently been identified in other organisms long thought to reproduce only clonally, such as the fungus Aspergillus fumigatus, (Poggeler 2002;Heitman 2006Heitman , 2010O'Gorman et al. 2009), indicating that asexuality should not be assumed based solely on absence of a recognizable sexual stage. AMF represent one of those ancient asexual lineages whose genomes have yet to be explored for the evidence of sex. These coenocytic and multinucleate fungi represent a group of plant symbionts that play a dramatic role in terrestrial ecosystems and that are thought to date back 500 My (Humphreys et al. 2010). Attempts to explain their extreme longevity in the absence of sex have generally focused on their atypical cellular content, which includes hundreds of nuclei per spore that may, or may not, be genetically divergent (Kuhn et al. 2001;Pawlowska and Taylor 2004;Hijri and Sanders 2005;Pawlowska 2005;Stukenbrock and Rosendahl 2005). These nuclei are thought to be exchanged at a certain rate among different AMF through anastomosis (fusion between hyphae) (Croll et al. 2008;Angelard et al. 2010) and to regularly recombine. In this study, we explore the potential for meiotic recombination in AMF by searching for the core meiotic genes across the genomes of four AMF species. This ''core'' of genes, a term first coined by Villeneuve and Hillers (2001) with reference to meiotic recombination machinery of model animal, fungal, and plant systems, and later expanded by phylogenomic analyses to include diverse protists (Ramesh et al. 2005;Malik et al. 2008), encodes 30 proteins that comprise the conserved meiotic recombination machinery of eukaryotes. 
The present comparative genomic survey allows us to form a hypothetical model for meiosis-like recombination in a cryptic sexual (or parasexual) cycle in AMF, which is consistent with previous reports of recombination in these organisms and may be tested further by future functional studies. Materials and Methods Acquisition of a Low Coverage Survey of Glomus diaphanum, Glomus irregulare, Glomus clarum, and Glomus cerebriforme Low coverage genome surveys of G. diaphanum (MUCL 43196), G. irregulare (DAOM-197198), G. cerebriforme (DAOM 227022), and G. clarum (DAOM 234281) were obtained using the 454 pyrosequencing facility at the Gé nome Qué bec Innovation Centre (McGill University, Canada). In all cases, an average of 350 Mb of genome data have been generated using an average read length of 336 bp (median length 368 bp). The assemblies resulted in an average of 46,000 contigs for all species with an average length of 1,010 bp. All AMF sequences identified in this study and their relative accession numbers are shown in table 1 and the supplementary table S1, Supplementary Material online. In Silico Identification of AMF Meiosis Genes A total number of 87 genes known to be required for proper meiotic recombination in Saccharomyces cerevisiae have been searched across the genomes of G. irregulare (DAOM 181602) and G. diaphanum (MUCL 43196) using three independent, yet highly complementary approaches (table 1; supplementary table S1, Supplementary Material online). This list of genes is based on previous genomic surveys of fungi and other eukaryotes (Malik et al. 2008;Burns et al. 2010;Nowrousian et al. 2010), which we expanded here to include representatives from those extant fungal phyla with publicly available complete or near-completely sequenced genomes in gene depositories, that is, the Basidiomycota C. neoformans, Ustilago maydis, and Coprinus cinereus; the Chytridiomycota Batrachochytrium dendrobatidis and Allomyces macrogynus; and the Zygomycota Rhizopus oryzae and Phycomyces blakesleeanus. Putative meiotic genes were initially searched using reciprocal TBlastX and TBlastN (Altschul et al. 1997) searches of publicly available expressed sequence tags from G. irregulare deposited in GenBank (National Center for Biotechnology Information [NCBI]). This preliminary analysis allowed the identification of several AMF transcripts with unambiguous homology to meiosis genes from a number of more distantly related fungal taxa. The remaining genes were from a low coverage genome survey of G. diaphanum, G. clarum, and G. cerebriforme using reciprocal BlastX, TBlastX, and TBlastN procedures and by using a polymerase chain reaction (PCR) approach based on degenerate primers. Overall, the combination of these bioinformatics and molecular approaches allowed the identification of Phylogenetic Verification of Glomus spp. Meiosis-Specific Protein-Coding Genes in Gene Families The identification of meiosis-specific gene homologs in Glomus spp. was subject to further scrutiny by phylogenetic comparison to other representative eukaryotes, to ensure that orthologs (vs. paralogs) of the seven meiosis-specific genes (Rec8, Spo11-1, Dmc1, Hop2, Mnd1, Msh4, and Msh5) were correctly identified. Generally, we compared Glomus spp. sequences with data from representative fungi, animals (Mus musculus, Hydra magnipapillata), and a choanoflagellate (M. brevicollis). Sequences obtained from G. cerebriforme (Msh4, Rad21, Dmc1) and G. 
clarum (Rad51, Dmc1) were added to the phylogenetic analyses in an effort to improve resolution. We assembled Glomus spp. meiosis-specific genes and annotated putative open reading frames by using Geneious Pro 5.3.6 (Biomatters Ltd.) with reference to pairwise comparisons made by BlastX of GenBank and to multiple sequence alignments of homologous proteins made with MUSCLE v. 3.7 (Edgar 2004). Where applicable, vector or PCR primer sequences were excluded from the assemblies. Besides Glomus spp., homologs of meiosis-specific proteins were identified by BlastP searches of the nonredundant NCBI database, JGI, and the Broad Institute (see above). Multiple amino acid sequence alignments (MUSCLE v. 3.7 [Edgar 2004]) were inspected and adjusted manually using MacClade 4.08 (Maddison WP and Maddison DR 1989), and only unambiguously aligned amino acid sites were used for phylogenetic analyses. We used RAxML v. 7.2.8 (Stamatakis 2006) and MrBayes v. 3.12 (Huelsenbeck and Ronquist 2001;Ronquist and Huelsenbeck 2003) for phylogenetic analyses. Amino acid sequence phylogenies were computed using RAxML v. 7.2.8 with the LG model of amino acid substitutions (Le and Gascuel 2008) and 25 c-distributed substitution rate categories (LG þ 25c) for 1,000 bootstrap replicates. Bootstrap support was estimated from 1,000 replicates using PhyML v3.0 (Guindon and Gascuel 2003) with the LG þ I (2), for e values less than or equal to 1 Â 10 À05 (Finn et al. 2008). a Best CS-BLAST hit information (source organism, score, e value, and % identity) of each Glomus meiotic protein against a database containing the Saccharomyces cerevisiae S288c (Sc), Neurospora crassa OR74A (Nc), and Sordaria macrospora (Sm) genomes or against the Sc genome when no ortholog is present in the two other ascomycetes. b Alignments were performed using CS-BLAST with two iterations. Meiosis-specific genes are highlighted in gray cells. Each Glomus spp. protein best hit information corresponds to the expected fungal ortholog, except for Rad54, for which corresponding hits represent the greatest identity percentage and the greatest hit length. þ 8c model. We ran MrBayes for 10 6 generations hosted by the CIPRES Science Gateway Portal v. 3.1 at the San Diego Supercomputer Center (Miller et al. 2011), with four incrementally heated Markov chains, a sampling frequency of 10 3 generations, temperature set at 0.5 and Whelan and Goldman model (WAG) þ I þ 8c (Whelan and Goldman 2001). Only the RAxML topologies are shown. Phylogenetic Analyses of Concatenated DNA Repair Proteins A fungal phylogeny was inferred using 12 orthologous DNA repair proteins among the meiotic proteins retrieved for all surveyed fungal taxa (highlighted as blue cells; supplementary table S1, Supplementary Material online), as well as in the outgroup species M. brevicollis (Choanoflagellata) and the animals M. musculus and Tetraodon nigroviridis. A multiple sequence alignment was produced using MUSCLE for each protein (Edgar 2004), and divergent or ambiguous positions were removed. Evolutionary models for each protein were determined using ProtTest (Abascal et al. 2005). Several phylogeny inference procedures gave similar trees (data not shown). The alignments were concatenated using Concaterpillar (Leigh et al. 2008). The phylogenetic tree was inferred using PhyML v3.0 (Guindon and Gascuel 2003) and WAG þ 4c with 1,000 bootstrap replicates. 
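The reciprocal Blast searches and best-hit comparisons described in this Methods section lend themselves to a simple cross-check for putative orthologs. The sketch below is illustrative only and is not the pipeline used in the study: it assumes two tabular BLAST reports (outfmt 6) produced beforehand, one searching candidate meiotic proteins against an AMF assembly and one searching the recovered hits back against the source proteome; the file names are placeholders, and the e-value cutoff of 1e-05 mirrors the threshold quoted above.

```python
"""Minimal reciprocal-best-hit (RBH) check between two tabular BLAST outputs.

Each input line is assumed to follow BLAST -outfmt 6:
qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore
File names below are placeholders, not the files used in the study.
"""

import csv

E_VALUE_CUTOFF = 1e-5  # mirrors the e-value threshold quoted in the text


def best_hits(blast_tabular_path):
    """Return {query_id: (subject_id, bitscore)} keeping the top-scoring hit per query."""
    best = {}
    with open(blast_tabular_path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if len(row) < 12:
                continue  # skip comment or malformed lines
            qseqid, sseqid = row[0], row[1]
            evalue, bitscore = float(row[10]), float(row[11])
            if evalue > E_VALUE_CUTOFF:
                continue
            if qseqid not in best or bitscore > best[qseqid][1]:
                best[qseqid] = (sseqid, bitscore)
    return best


def reciprocal_best_hits(forward_path, reverse_path):
    """Yield (query, subject) pairs that are each other's best hit in both searches."""
    forward = best_hits(forward_path)
    reverse = best_hits(reverse_path)
    for query, (subject, _score) in forward.items():
        if reverse.get(subject, (None, 0.0))[0] == query:
            yield query, subject


if __name__ == "__main__":
    # Placeholder file names; any pair of outfmt-6 BLAST reports will do.
    for query, subject in reciprocal_best_hits("meiosis_vs_glomus.tsv",
                                               "glomus_vs_meiosis.tsv"):
        print(f"putative ortholog pair: {query}\t{subject}")
```

Pairs reported by such a check would still require the alignment-based and phylogenetic verification described above before being called orthologs.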
A phylogenetic tree was also inferred using MrBayes v3.1.2 (Ronquist and Huelsenbeck 2003), with 10 7 generations with 4 c-distributed substitution rate categories and separate substitution models for each protein. Results and Discussion In the present study, we identified a total of 51 genes in Glomus spp. that are required for the proper completion of meiosis in S. cerevisiae ( fig. 1 and table 1, supplementary table S1, Supplementary Material online). Homologues of S. cerevisiae genes that could not be identified in Glomus spp. were also absent from the representative genomes of other higher fungal groups, including Sordariales among ascomycetes. Overall, among the 87 S. cerevisiae genes we searched, none were missing exclusively from our available AMF sequence data set, suggesting that our in silico and molecular approaches have covered most, if not all, of the available AMF predicted meiotic proteome. Importantly, more than 85% of the core meiotic genes were found to be present in AMF (fig. 2). The only AMF core meiotic genes that could not be detected were homologues of Pch2, Hop1, Mei4, and Mer3; all genes whose loss does not affect the successful completion of meiosis in many fungi (Malik et al. 2008;Kumar et al. 2010). In particular, Pch2, Mei4, Hop1, and Mer3 genes are also absent from the genome of the zygomycete R. oryzae, and Hop1 and Mer3 are absent from known sexual organisms (i.e., N. crassa, (Nowrousian et al. 2010). The presence or absence of these genes have been scored in the genomes of fungal relatives, including representative species belonging to the phylum Ascomycota ( [Nowrousian et al. 2010], purple circle), Basidiomycota ( [Donaldson and Saville 2008;Burns et al. 2010], orange circle), Chytridiomycota (red circle), Zygomycota (dark blue circle), and the AMF Glomeromycota (Green circle), inventoried in detail in supplementary table S1, Supplementary Material online. Meiosis-specific genes are shown in red text. Asterisks represent genes that are sometimes absent in the genome of one or more members of a given phylum. Data included in the purple circle were reported elsewhere (Nowrousian et al. 2010), and we did not repeat the analyses. Our study indicates that AMF genomes contain genes encoding all the tools necessary for meiotic recombination. In particular, they have genes that encode orthologs of seven meiosis-specific proteins involved in sister-chromatid cohesion (Rec8), double-strand DNA breaks (Spo11-1), interhomolog recombination (Mnd1, Hop2, and Dmc1), and class II crossovers (Msh4 and Msh5) (supplementary figs. S1 and S2, Supplementary Material online). Phylogenetic analyses were used to verify the orthology of these seven meiosis-specific gene homologs in Glomus spp. relative to other fungi, with animals and a choanoflagellate as outgroups (supplementary figs. S1 and S2, Supplementary Material online). These proteins were not selected to trace the species genealogy but are sufficient to determine orthology of all Glomus spp. meiosis-specific genes we identified and their vertical descent (as opposed to them being specifically related to another organism by lateral gene transfer or contaminants in our cultures). In particular, Glomus spp. encode a meiosis-specific Rec8 protein that is distinct from the general Rad21 sister-chromatid cohesion and harbor orthologs of the meiosis-specific transesterase Spo11-1. The meiosisspecific RecA homolog, Dmc1, encoded Glomus spp. 
is also of fungal origin, as is Rad51, the general eukaryotic recombinase required for homologous recombination. Glomus spp. encode distinct meiosis-specific Mnd1 and Hop2 orthologs; these function with Dmc1 in interhomolog DNA strand exchange during meiosis in model organisms. Glomus spp. are also equipped for mismatch repair with Msh2 and Msh6 proteins and also for meiosis-specific (class II) crossovers that exhibit interference, with Msh4 and Msh5 proteins. Altogether, the presence of these genes in Glomus spp. is compelling evidence for an active, hitherto undetected, meiosis-like program in the life cycle of AMF. The presence of meiotic recombination proteins in AMF is also supported by other independent signatures of sexuality, namely the presence of many retrotransposons (Ty1-Copia and Ty3-Gypsy; data not shown) (Matic 2001;Wright and Finnegan 2001;Arkhipova 2005 Villeneuve and Hillers 2001;Malik et al. 2008;Joshi et al. 2009 and references therein) and their presence (þ) and absence (À, i.e., not detected) in the fungal genomes surveyed in this study. þ/À denotes the absence of the given genes in some species belonging to that specific phylum. Meiosis-specific proteins are shown in gray columns. A. Ascomycota; B. Basidiomycota; C. Chytridiomycota; Z. Zygomycota; G. Glomeromycota (i.e., AMF). Orthologs of Rad21, Rad51, Pms1, and Mlh and meiosis-specific Spo11-1, Rec8, Hop1, Hop2, Mnd1, and Dmc1 genes of basidiomycetes and B. dendrobatidis were identified with assistance from Arthur Pightling. 2005; Gollotte et al. 2006) and recombination within their populations (Vandenkoornhuyse et al. 2001;Croll and Sanders 2009;den Bakker et al. 2010). These evolutionary features, combined with the presence of an expanded suite of conserved meiotic recombination genes, are compelling indicators of sexual reproduction in many eukaryotes (Malik et al. 2008;Schurko et al. 2009). Here, we propose a model of meiotic recombination in AMF based on the presence of core meiotic genes ( fig. 3). We also identified meiosis-specific gene homologs in B. dendrobatidis, a chytridiomycete that lacks any described sexual cycle. Although sex is now known in the other fungi included in our analyses, B. dendrobatidis, the fungal pathogen of amphibians, appears to primarily reproduce The names of meiosis-specific proteins are highlighted in green. Exact stoichiometry is not implied. In meiosis I, cohesins bind to sister chromatids (A), after which double-strand DNA breaks occur, with Spo11 and accessory recombination initiation proteins if present (B). Double-strand break repair is initiated (C). Interhomolog recombination and strand exchange proteins are attracted to the double-strand break (accessory proteins not shown) (D). The resulting heteroduplex (E) may be resolved by class II crossovers, which utilize meiosis-specific proteins (F, G) or by gene conversion (proteins not shown) or Class I crossovers (via Mus81), which do not. This model is derived from the general model that was based on details from Saccharomyces cerevisiae, Drosophila melanogaster, Caenorhabditis elegans, and Arabidopsis thaliana, and phylogenomic analyses described in references (Malik et al. 2008) and references within. asexually (James et al. 2009). Core meiotic genes identified in figure 2 indicate that B. dendrobatidis is also capable of undergoing meiotic recombination. The acquisition of a large sequence data set allowed us to tackle another interesting aspect of AMF evolution, namely their origin from within the fungal kingdom. 
In particular, we tested the most recent findings suggesting that these ubiquitous organisms may be more closely related to the Zygomycetes than previously thought (Corradi and Sanders 2006;Lee and Young 2009;Liu et al. 2009). We tested this hypothesis by reconstructing a fungal phylogeny of the DNA repair and recombination proteins encoded by all surveyed taxa ( fig. 2), as these are fairly well conserved. The resulting phylogenies were all very similar to those identified recently, showing that AMF cluster with relatively strong support as a sister group of Mucorales (phylum Zygomycota). Obviously, the reduced species sampling in our study does not allow any conclusive evidence about the specific evolutionary origin of AMF within the fungal kingdom. However, this relevant phylogenetic signal, together with a virtually identical set of core meiotic genes between those groups (all genes that are absent in AMF are also absent from R. oryzae), is a highly intriguing relationship that will hopefully bolster future research in this specific area of comparative genomics upon completion of the first AMF genome sequence (Martin et al. 2008). Recent advances in the field of population genetics have allowed the identification of several events of recombination both within and across several AMF populations (Vandenkoornhuyse et al. 2001;Croll and Sanders 2009;den Bakker et al. 2010). However, conclusions about the origin of such events (i.e., meiotic vs. mitotic recombination) have been systematically shadowed by a lack of evidence for meiosis in these putative ancient asexuals. By providing the first evidence for an expanded and conserved catalog of AMF meiosis-specific genes, the present study fills an important gap in our understanding of the genetics of these ubiquitous ecologically important organisms. In particular, these conclusions open up the exciting perspective that AMF may not be the evolutionary aberration that they have been long held to be and that they may be able to undergo a cryptic sexual cycle. Future studies such as colocalization or genetic disruption are required to demonstrate the conditions in which the meiosisspecific gene homologs we identified in this study encode products functioning in meiosis in Glomus spp. or if they function in a putative parasexual process including interhomolog recombination and crossing over, that is recently derived from a typical meiotic recombination process. Supplementary Material Supplementary figures S1 and S2 and tables S1 and S2 are available at Genome Biology and Evolution online (http:// www.gbe.oxfordjournals.org/).
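Relatedly, the concatenated DNA repair protein phylogeny described in the Methods rests on joining per-protein alignments into a single supermatrix. A minimal sketch of that step is given below; the study itself used Concaterpillar for concatenation, and the alignment file names here are hypothetical.

```python
"""Concatenate per-protein FASTA alignments sharing taxon labels into one supermatrix.

Illustrative only: the study used Concaterpillar, and file names are placeholders.
Each input file is assumed to be an aligned FASTA with equal-length sequences.
"""

def read_fasta(path):
    """Return {taxon: aligned_sequence} for a single FASTA alignment."""
    seqs, name = {}, None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name is not None:
                seqs[name].append(line)
    return {taxon: "".join(parts) for taxon, parts in seqs.items()}


def concatenate(alignment_paths):
    """Join alignments taxon-by-taxon, padding missing taxa with gap characters."""
    alignments = [read_fasta(p) for p in alignment_paths]
    taxa = sorted({t for aln in alignments for t in aln})
    supermatrix = {t: [] for t in taxa}
    for aln in alignments:
        length = len(next(iter(aln.values())))  # alignment length for padding
        for t in taxa:
            supermatrix[t].append(aln.get(t, "-" * length))
    return {t: "".join(blocks) for t, blocks in supermatrix.items()}


if __name__ == "__main__":
    files = ["rad51.aln.fasta", "mlh1.aln.fasta", "msh2.aln.fasta"]  # hypothetical names
    matrix = concatenate(files)
    with open("supermatrix.fasta", "w") as out:
        for taxon, seq in matrix.items():
            out.write(f">{taxon}\n{seq}\n")
```

Missing taxa are padded with gap characters so that downstream programs such as PhyML or MrBayes receive equal-length sequences; how missing data are handled in practice should follow the chosen software's recommendations.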
Bronchoalveolar Lavage and Sampling in Pulmonary Sarcoidosis Sarcoidosis is a multisystem disorder of unknown cause. It commonly affects young and middle-aged adults and frequently it presents with bilateral hilar lymphadenopathy, pulmonary infiltration, ocular and skin lesions. Other organs may also be involved. The diagnosis is established when clinicoradiological findings are supported by histological evidence of noncaseating epitheloid cell granulomas. Granulomas of known causes and local sarcoid reactions must be excluded. Frequently observed immunological features are depression of cutaneous delayed-type hypersensitivity and increased CD4/CD8 ratio at the site of involvement. Circulating immune complexes along with signs of B-cell hyperactivity may also be detectable. The course and prognosis may correlate with the mode of the onset and the extent of the disease. An acute onset with erythema nodosum or asymptomatic bilateral hilar lymphadenopathy usually heralds a self-limiting course, whereas an insidious onset, especially with multiple extrapulmonary lesions, may be followed by relentless, progressive fibrosis of the lungs or other organs. Corticosteroids relieve symptoms, suppress inflammation and granuloma formation (Grutters et al., 2009). Sarcoidosis is the most frequently observed interstitial lung disease of unknown origin in Europe (MüllerQuernheim, 1998). In young adults, pulmonary sarcoidosis is the second most common respiratory disease after asthma (Rothkrantz-Kos, 2003). The reported prevalence and presenting symptoms of sarcoidosis vary significantly by sex, racial group, and country. The true prevalence of the disease is difficult to assess because a lot of patients are asymptomatic. Estimation of radiographic population screening programmes indicates a global prevalence of 10-40 per 100 000 and an incidence of 10 per 100 000. The incidence appears to be higher in Northern European countries, Japan and African Americans (Dements, 2001). Sarcoidosis is a granulomatous disorder resulting from an uncontrolled cell-mediated immune reaction. Following recognition of unknown antigens, the accumulation of immunocompetent cells in the lungs, i.e. alveolitis, occurs. Although lung parenchyma normally contains only a few lymphoid elements, lymphocyte populations are strikingly compartmentalized in air spaces and interstitium in sarcoidosis. The infiltration of activated CD4 positive T-cells represents the immunological hallmark of sarcoidosis. 
However, many types of other immune cells, such as macrophages, are involved in the inflammatory 102 response of the disorder.In the lung, this accumulation in both air spaces and interstitium (alveolitis) precedes and accompanies the development of granulomas (Semenzato, 2005).Sarcoid granulomas are immune granulomas resulting from a specific cell-mediated immune response to an antigenic antigen.The granulomas of sarcoidosis are well-formed, compact aggregates.They usually are of varying age, ranging from highly cellular lesions to collections with diminishing cellularity, some fibrosis and progressive hyalinization.Two characteristic zones can be seen in a typical, well-developed sarcoid granuloma: 1) a central zone or follicle, which is tightly packed with cells composed primarily of macrophages, multinucleated giant cells and epitheloid cells; 2) a peripheral zone consisting of a collar of loosely arranged lymphocytes, monocytes and fibroblasts.Although many microscopic features may suggest sarcoidosis, the epitheloid granulomas, especially in their earlier stages, are indistinguishable from those of other idiopathic granulomatous disorders or even granulomatous disorders of known origin, such as berylliosis, tuberculosis or hypersensitivity pneumonitis (Müller-Quernheim, 1998).Sarcoidosis is a worldwide disease with a lifetime incidence rate of 0.85-2.4%.It generally affects 25-40-year-old people.The clinical phenotype of sarcoidosis can be extremely diverse in terms of presentation, involved organs, duration and severity.Lung involvement is present in 86-92 % of cases according to the chest X-ray, alone or in association with extrapulmonary localizations in about 50 % of cases (Nunes, 2005).International pulmonary registries have illustrated differences in the presentation of sarcoidosis in different countries: in Asia the majority of cases presented with a radiological stage I, and a positive tuberculin skin test was found more frequently than in other countries.However, erythema nodosum has not been reported among the Japanese, is rare among African-Americans, it is the presenting symptom in 18% of cases in Finland and occurs in about 30% of British sarcoidosis patients (Dements, 2001).Clinical features of sarcoidosis are varied.It may manifest as an acute form (Löfgren's syndrome), chronic sarcoidosis or asymptomatic disease that may be found accidentally.However, even an acute form (e.g.erythema nodosum and joint pain) of disorder, which is the most typical clinical feature of sarcoidosis, is a nonspecific one (Bourke, 2006).There are five radiologic stages (forms) of intrathoracic changes of sarcoidosis: stage 0, normal chest radiograph; stage 1, only lymphadenopathy; stage 2, lymphadenopathy with parenchyma infiltration; stage 3, only parenchymal disease; stage 4, pulmonary fibrosis (Koyama, 2004).Sarcoidosis may present at any stage.However, a great variation of radiologic appearance in each stage has been noticed.Radiographic features of sarcoidosis may be atypical, especially in older patients (Conant, 1988).Pulmonary sarcoidosis radiologically may be indistinguishable from tuberculosis, lymphangitic carcinomatosis, pulmonary metastases or metastatic lymphadenopathy (Heo, 2005;Kaira, 2007;Thomas 2008).Furthermore, subtle radiologic changes in sarcoidosis (e.g.presence of subpleural micronodules or mild intrathoracic lymphadenopathy) may be similar to those present in healthy adults, especially smokers and/or residents of urban areas (Remy-Jardin, 1990).Histological features 
of the disease are varied (Rosen, 1978).Non-necrotising granuloma, a hallmark of morphologic appearance of the disease, is not unique for sarcoidosis.The granulomas in tuberculosis, extrinsic allergic alveolitis (hypersensitivity pneumonitis) and chronic beryllium disease are often identical to those of sarcoidosis (Williams, 1967;Popper, 1999).Even if the pathologic diagnosis of sarcoidosis is confirmed by biopsy, this may not confirm that all the lesions appear because of sarcoidosis (Kaira, 2007).Usage of needle aspirate, either transbronchial or percutaneous, provides support but never an absolute proof of diagnosis (Baughman, 2000).Sarcoid-like reactions have been reported to be associated with carcinoma and lymphoma (Brincker, 1986;Laurberg, 1975;Tomimaru, 2007).Bronchoalveolar lavage (BAL) is a method of sampling fluid and cells from a large area of the lung tissue by instilling and aspirating saline via a bronchoscope wedged in bronchi.BAL as a method of sampling cells is very useful in the diagnosis and differential diagnosis of sarcoidosis (Drent, 1993;Welker, 2004).High lymphocytosis and CD4/CD8 ratio in bronchoalveolar lavage fluid (BALF) are the main features of sarcoidosis (Poulter, 1992).In patients with a clinical picture typical for sarcoidosis, an elevated CD4/CD8 ratio in BAL fluid may confirm the diagnosis and obviate the need for biopsy (Costabel, 2001;Kvale, 2003).However, CD4/CD8 ratio in BALF is highly variable (Kantrow, 1997).BALF cell patterns, including CD4/CD8 ratio are related to radiographic stage, clinical symptoms of sarcoidosis and previous empiric treatment with corticosteroids.Optimal cutoff point for CD4/CD8 ratio is different in various manifestations of sarcoidosis (Danila et al., 2008(Danila et al., , 2009)).Diagnosis of sarcoidosis requires a compatible clinical and radiologic picture.However, there are no specific diagnostic tests and sarcoidosis is therefore a diagnosis of exclusion (Boer, 2010).The diagnosis of sarcoidosis must always be based on summation of clinical and radiological symptoms, results of BALF examination and other findings, which include data of histological examination of the lung or lymph node biopsy material if necessary.In this chapter the diagnostic role of bronchoalveolar lavage and other sampling methods (including endobronchial biopsy, bronchoscopic lung biopsy, transbronchial lymph node biopsy and mediastinoscopy) in various clinical situations are discussed. History of bronchoalveolar lavage Bronchoalveolar lavage was first used at Yale in 1922 in the management of phosgene poising.This approach has been extended to cystic fibrosis and alveolar proteinosis.In 1961, Myrvik showed how this simple lavage procedure could be used in rabbits to obtain lung macrophages.This seminal observation spawned new discipline, pulmonary cell biology (Gee & Fick, 1980).With the introduction in the mid-1960's of the design of the fiberoptic bronchoscope into clinical medicine by S. 
Ikeda, bronchoalveolar lavage was widely used for clinical investigations and diagnostic purposes (Zizel & Müller-Quernheim, 1998).Bronchoalveolar lavage was adapted to fiberoptic bronchoscopy by Reynolds andNewball in 1974 (Winterbauer et al., 1993).Bronchoscopy and lavage procedure have been a great stimulus for lung research to have access to normal and disease-affected airways and alveolar surfaces for direct samples (Reynolds, 1992).The observation of characteristic changes in the cytology of the BAL fluid in interstitial lung diseases were first reported by Hunninghake andCrystal in 1981 (Müller-Quernheim, 1998).With the widespread use of fibreoptic bronchoscopy for diagnostic evaluation of patients with interstitial lung diseases, bronchoalveolar lavage has also become part of the procedure (Reynolds, 1992). Technique of bronchoalveolar lavage After the fiberoptic bronchoscope has been inserted and the search for abnormalities in the respiratory tract is complete, the tip of the bronchoscope should be advanced into the desired bronchus as far as possible until well wedged.Biopsy and brushing should be avoided before BAL.The right middle lobe or the lingula of the left lung are the preferred sites for BAL (Emad, 1997).From these lobes, almost 20 % of more fluid and cells are recovered than from the lower lobes.However, in cases of predominant infiltrates in other lobes (e.g.upper lobe), bronchaolaveolar lavage should be done in these lobes or multiple lung segments (Cantin et al., 1983;Ziora et al. 2001).The fluid used to perform bronchoalveolar lavage is isotonic 0,9 % NaCl solution suitable for intravenous use.Saline fluid is instilled through the working channel of the fiberoptic bronchoscope as a bolus with syringe with aliquots of 20 ml to 100 ml.The volume infused ranges from 100 ml to 300 ml.Overall, the amount of BAL fluid collected is about 40-60 % of the volume instilled (Klech & Pohl, 1989).The first neutrophil-rich aliquot, which contains the airways sample, is usually excluded from analysis of BALF differential cell count.The information about cell types obtained in volumes of 100-250 ml is comparable, supposedly that cell populations obtained from volumes excess of 120 ml will not add to diagnostic accuracy.In most patients with sarcoidosis lavage at one site gives sufficient clinical information (Klech & Pohl, 1989;Winterbauer et al., 1993).After measuring the recovered volume and performing total cell counts, the normal method of processing the cells from the BALF for differential counting is to prepare cytospins.The differential counts are assessed by viewing with a light microscope and counting at least 300-500 cells (Klech & Pohl, 1989).Lymphocyte subsets (CD4 and CD8) are evaluated usually using flow cytometry. Cellular components of bronchoalveolar lavage fluid in healthy persons The alveolar macrophages constitute the largest cell population in BALF, about 80-95 % of total recovered cells.Lymphocytes are the second major cell population in BALF .Other cells found in lavage fluid include neutrophils, occasional eosinophils, basophils and mast cells.For practical reasons the following percentages can be expected as normal within nonsmokers: lymphocytes < 20 %, neutrophils < 5 %, eosinophils < 0.5 %.T lymphocytes are the main lymphocytes, and the ratio of T-helper to T-suppressors (CD4/CD8) is approximately 1.0-3.5.Smokers usually have a decreased percentage of lymphocytes and decreased CD4/CD8 ratio (Klech & Pohl, 1989;Zizel & Müller-Quernheim, 1998). 
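For readers who want to script the nonsmoker reference limits quoted above, the short Python sketch below simply encodes those limits and flags a differential count that falls outside them. The function name, dictionary layout and example values are hypothetical illustrations, not a validated clinical tool.

```python
# Illustrative sketch (not a clinical tool): encodes the normal BALF limits
# quoted above for nonsmokers and flags values outside those limits.

NORMAL_LIMITS = {
    "macrophages_pct": (80.0, 95.0),   # ~80-95 % of recovered cells
    "lymphocytes_pct": (0.0, 20.0),    # < 20 %
    "neutrophils_pct": (0.0, 5.0),     # < 5 %
    "eosinophils_pct": (0.0, 0.5),     # < 0.5 %
    "cd4_cd8_ratio":   (1.0, 3.5),     # approximately 1.0-3.5
}

def flag_balf_differential(values):
    """Return the items that fall outside the quoted normal ranges."""
    out_of_range = {}
    for key, (low, high) in NORMAL_LIMITS.items():
        if key in values and not (low <= values[key] <= high):
            out_of_range[key] = values[key]
    return out_of_range

# Example: a differential with a reduced macrophage percentage, lymphocytosis
# and an elevated CD4/CD8 ratio, as described for sarcoidosis, is flagged.
print(flag_balf_differential({
    "macrophages_pct": 55.0,
    "lymphocytes_pct": 40.0,
    "neutrophils_pct": 3.0,
    "eosinophils_pct": 0.2,
    "cd4_cd8_ratio": 6.0,
}))
```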
Cellular components of bronchoalveolar lavage fluid in sarcoidosis

Bronchoalveolar lavage is thought to mirror parenchymal inflammation in the interstitial lung diseases. In sarcoidosis BAL recovers activated lymphocytes and alveolar macrophages, which are the precursors of granuloma formation (Hendricks et al., 1999). A distinct compartmentalization of CD4 T cells to the lungs was already identified in the early 1980s. The characteristic finding of lung-accumulated CD4 T cells in sarcoidosis, and the resulting increase in the BAL fluid CD4/CD8 ratio, has become a clinically important marker of the disease and is used for diagnostic purposes (Grunewald & Eklund, 2007). However, cellular components and T lymphocyte profiles are related to clinical presentation, radiological stage, smoking status, and previous treatment with corticosteroids (Danila et al., 2008, 2009). Therefore, the CD4/CD8 ratio in BAL fluid may be highly variable (Kantrow et al., 1997). It should be remembered that advanced sarcoidosis may present with no increase in the number of BAL fluid lymphocytes, and the CD4/CD8 ratio can be normal. Patients with erythema nodosum and/or arthralgia show the most marked characteristics of alveolitis, including increased percentages of T lymphocytes and the highest CD4/CD8 ratios (up to 30) in BALF samples (Ward et al., 1989; Drent et al., 1993). However, asymptomatic sarcoid patients have significantly lower BAL fluid lymphocytosis and CD4/CD8 ratios compared with non-treated patients with sarcoidosis-related symptoms. Moreover, previously corticosteroid-treated symptomatic patients have lower BALF lymphocytosis and CD4/CD8 ratios compared with non-treated symptomatic patients (Danila et al., 2009). An increase in macrophage and neutrophil counts and a decrease in lymphocyte count and CD4/CD8 ratio in BAL fluid with increasing radiographic stage of sarcoidosis has been documented in patients with newly diagnosed disease (Danila et al., 2008). Spontaneous macrophage-lymphocyte rosettes (adherence of lymphocytes to alveolar macrophages) have been found in BALF from patients with active sarcoidosis, probably reflecting active antigen presentation at the focus of inflammation (Reynolds, 1992). Macrophage-lymphocyte rosettes and giant cells (elements of the immune granuloma) are found more often in the BAL fluid of symptomatic patients than of asymptomatic patients (Danila et al., 2008). A case of severe pulmonary sarcoidosis with intact granulomas in BAL fluid has been described in the medical literature (Hendricks et al., 1999). These findings may reflect ongoing inflammation in the lung parenchyma. Acute onset of the disease and a high CD4/CD8 ratio are associated with a good prognosis. On the other hand, increased neutrophil counts are associated with a more advanced, chronic disease course, impaired lung function, poor response to corticosteroid treatment and persisting abnormal chest radiographs. It is supposed that an increased percentage of BAL fluid neutrophils and eosinophils reflects an ongoing inflammatory process, which may result in progressive loss of lung parenchyma (Lin et al., 1985; Drent et al., 1999; Ziegenhagen et al., 2003). However, the BALF lymphocyte count at diagnosis is not a valuable prognostic factor in patients with newly diagnosed sarcoidosis (Greening et al., 1984; Laviolette et al., 1991). Moreover, a high lymphocyte count and a high CD4 lymphocyte count (as a percentage of lymphocytes) reflect an intense alveolitis at the time of the procedure, but
they are not indicators of poor prognosis on which therapeutic decisions can be based (Verstraeten et al., 1990), and may even be a favorable prognostic factor for lung function in pulmonary sarcoidosis (Foley et al., 1989). Sarcoidosis patients may present with extrapulmonary lesions owing to the multisystem character of the disease. In patients presenting with extrapulmonary sarcoid lesions, interstitial pulmonary changes with or without hilar adenopathy may be present. The chest X-ray film may be normal, but conclusions drawn from the roentgenographic examination may underestimate an alveolitis that is already present. Moreover, typical sarcoid changes in BAL fluid samples can be found even without lung field involvement on high-resolution computed tomography, for example in patients with only ocular findings (ocular sarcoidosis) (Hoogsteden et al., 1988; Takahashi et al., 2001). Cigarette smoking modifies the immunologic BAL fluid profile, and alveolitis is found to be less pronounced in smokers. Smoking results in increased total cell counts, increased CD8 lymphocytes, and a smaller increase in the CD4/CD8 ratio in BAL fluid samples from sarcoid patients. CD4/CD8 ratios are lower in smoking than in non-smoking patients (Valeyre et al., 1988; Drent et al., 1993).

Clinical role of bronchoalveolar lavage in pulmonary sarcoidosis

Several groups of investigators have examined the diagnostic value of the CD4/CD8 ratio of BAL lymphocytes for differentiating sarcoidosis from other lung diseases. Costabel et al. reported that a ratio of 3.5 or greater had a sensitivity of 52% and a specificity of 94% in 117 consecutive patients with biopsy-proven sarcoidosis (Costabel et al., 1992). Winterbauer et al. described that a ratio of 4.0 or greater distinguished patients with sarcoidosis from patients with other interstitial lung diseases with a sensitivity of 59% and a specificity of 96% (Winterbauer et al., 1993). Thomeer & Demedts found that a CD4/CD8 ratio of greater than 4.0 had a sensitivity of 55% and a specificity of 94% (Thomeer & Demedts, 1997). Welker et al. found that when the CD4/CD8 ratio is combined with lymphocyte and granulocyte numbers, the probability of sarcoidosis could exceed 85% (Welker et al., 2004). Comparable results were reported by other authors (Fireman et al., 1999; Danila et al., 2009). A CD4/CD8 ratio of less than 1.0 virtually excludes the diagnosis of sarcoidosis (Winterbauer et al., 1993). We have found that the optimal cutoff points for the CD4/CD8 ratio are 3.5 and 4.0 for asymptomatic and symptomatic patients, respectively (Danila et al., 2009). The sensitivity of these optimal cutoff points (3.5 and 4.0) was lower in the asymptomatic patient group than in the symptomatic (non-treated and treated) patients. Sensitivity of the optimal cutoff points decreased with increasing stage of sarcoidosis. The values of sensitivity, specificity and predictive values are presented in Tables 1 and 2. Normal BALF cell counts were found in 7% of 318 consecutive patients with newly diagnosed sarcoidosis. However, a typical sarcoid BAL fluid cellular pattern (lymphocytosis and CD4/CD8 > 3.5) was found in 6.2% of all control subjects. Additionally, in 3.8% of all control subjects BALF CD4/CD8 ratios of more than 3.5 without lymphocytosis were found. The maximum BALF CD4/CD8 ratio among non-sarcoid subjects was 5.6, except for one patient with non-Hodgkin's lymphoma of low-grade malignancy (CD4/CD8 ratio = 8.8). According to the world-leading expert in interstitial lung disorders, Professor U.
Costabel examination of bronchoalveolar lavage fluid may be of diagnostic value in sarcoidosis, obviating need of biopsy in 40-60 % of patients (Costabel, 1997).The author's experience is in agreement with this statement.Having in mind that significant part of sarcoid patients, at least in European countries, manifested with an acute form of the disease (Löfgren's syndrome of fever, erythema nodosum, arthralgias, and bilateral hilar lymphadenopathy), even more of the patients due to very typical clinical-radiological symptoms and signs may obviate need of biopsy.So CD4/CD8 ratio has an important role in personal diagnostic algorithms of many clinicians, although the best use of this test requires considerable experience in its application (Wells, 2010).In summary, an increased lymphocyte count with CD4/CD8 ratio > 3.5 is regarded as typical for pulmonary sarcoidosis, and is considered generally sufficient to secure the diagnosis of sarcoidosis in the appropriate clinical setting (Spagnolo et al., 2009). Side-effects of bronchoalveolar lavage One of the reasons why bronchoalveolar lavage is enjoying its general acceptance among scientists and clinicians is its noninvasiveness.This makes bronchoalveolar lavage possible to perform in virtually all patients with few exceptions.Bronchoalaveolar lavage is a very safe procedure.Serious complications like significant bleeding, pneumothorax and other are extremely rare (Klech et al., 1992).Fever occurred some hours after BAL in about one fifth of all patients that underwent the procedure.Side-effects can be minimized by not exceeding lavage volume of 250 ml (Klech & Pohl, 1989).At the Department headed by the author only one serious complication of bronchoalveolar lavage (performed in a patient with tuberculosis) -pneumothorax, occurred during the last fifteen years.So, the rate of serious complications is extremely small, less than 0.1 %.Usually we do not perform BAL in patients with blood platelet count below 20000 / μl.Through its safety bronchoalveolar lavage does not raise any special ethical considerations (Rennard et al., 1992). Airway involvement in sarcoidosis Bronchoscopic abnormalities have been observed in up to 60 % of patients with sarcoidosis (Shorr et al., 2001).These include "retinalization" of mucosa from increased mucosal vascularity, mucosal coarseness, pallor, flat yellow mucosal plaques, wartlike excrescences, "bleb-like" formations, irregular mucosal thickening, ulceration, and atrophic mucosa.The three common findings were bronchial mucosal hyperemia or edema, distortion of the bronchial anatomy, and bronchial narrowing (due to extrinsic compression of airways by the enlarged lymphnodes, various types of mucosal involvement or airway distortion caused by parenchymal changes).The classic endobronchial sarcoidosis is characterized by mucosal islands of waxy yellow mucosal nodules, 2 to 4 mm in diameter.Bronchoscopy may reveal endobronchial occlusion by sarcoid granulomas in the submucosa or an endobronchial polyp caused by sarcoid granulomas.Lobar, segmental, subsegmental, and more distal bronchi as well as bronchioles are affected more frequently than the trachea and main bronchi (Polychronopoulos & Prakash, 2009).Rarely, sarcoidosis manifested with endoluminal stenosis of proximal bronchi (Chambellan et al., 2005).The presence of endobronchial sarcoid lesions significantly increases the risk for airway obstruction and airway hyperreactivity in patients with sarcoidosis (Lavergne et al., 1999;Shorr et al., 2001). 
Technique of endobronchial biopsy

After satisfactory anesthesia is established and the lesion is visualized, biopsy forceps are passed through the working channel of the fiberoptic bronchoscope until the forceps are just beyond the tip of the bronchoscope. The forceps are opened, advanced into the area to be biopsied, and closed firmly. The forceps should be withdrawn slowly to avoid slipping off the tissue. The forceps may then be withdrawn through the bronchoscope (Cortese & McDougall, 1994). Biopsies are taken from the most prominent lesions. If the bronchial mucosa appears normal, a biopsy is usually taken from the carina of a segmental, subsegmental or subsubsegmental bronchus. Usually 4 to 6 biopsy samples are taken.

Diagnostic yield of endobronchial biopsy in sarcoidosis

Although airway appearance affects the results of endobronchial biopsy (EBB), this biopsy technique may demonstrate non-necrotizing granulomas even if the airways are normal on visual inspection. EBB has yielded diagnostic tissue in 50-70% of cases (Puar et al., 1985; Shorr et al., 2001). The results of EBB correlate with airway appearance: an EBB result is more likely to be positive if the endobronchial mucosa is abnormal. However, a normal-appearing airway mucosa does not exclude the presence of granulomas, and EBB is positive in approximately 35% of subjects with normal airway mucosa. Endobronchial biopsy increases the diagnostic value of fiberoptic bronchoscopy by about 20% (Shorr et al., 2001).

Side-effects of endobronchial biopsy

Endobronchial biopsy is an extremely safe procedure. To the best of the author's knowledge there are no publications specifically addressing the side-effects of endobronchial biopsy in sarcoidosis. The risk of major complications during endobronchial biopsy, such as significant bleeding, is extremely small when the patient's blood platelet count is 50000/μl or more. However, it should be remembered that massive or even fatal bleeding may occur after endobronchial biopsy in the case of an abnormal bronchial artery or Dieulafoy's disease of the bronchus (Sweerts et al., 1995; Werf et al., 1999; Maxeiner, 2001; Stoopen et al., 2001), which may appear as a submucosal, smooth, elevated, non-pulsating lesion. At the Department for which the author works, only one massive bleeding after endobronchial biopsy (presumably due to an abnormally located bronchial artery) has occurred during the last twenty years. No other serious complications associated with endobronchial biopsy occurred during this period. Thus, the rate of serious complications after this procedure is less than 0.05%.

History of bronchoscopic lung biopsy

The ability to obtain lung tissue without subjecting a patient to an open lung biopsy is a major advance in diagnostic bronchoscopy. Bronchoscopic lung biopsy (also known as transbronchial lung biopsy) was first performed by H. Andersen in 1963, using the rigid bronchoscope. In 1974 the first results of BLB via the flexible bronchoscope were published (McDougall & Cortese, 1994). After the introduction of the fiberoptic bronchoscope into clinical practice, bronchoscopic lung biopsy (BLB) during fibrobronchoscopy became a standard procedure. BLB is used to sample alveolar parenchyma beginning at the bronchiolar, noncartilaginous segment of the airway (Leslie et al., 2000).
Technique of bronchoscopic lung biopsy

After inspection of the tracheobronchial tree, the bronchoscope is inserted into a subsegmental or smaller bronchus until it is wedged. Under fluoroscopic control, biopsy forceps (crocodile-type biopsy forceps are usually used) are advanced to a peripheral position. The position of the biopsy forceps is checked by chest fluoroscopy in two planes. The forceps are then withdrawn about 2-3 cm, opened and pushed forward. Usually this maneuver is repeated once or twice, and then the forceps are closed and withdrawn. If the patient indicates ipsilateral chest or shoulder pain when the forceps are closed, they should be opened and withdrawn a few centimeters before being closed again, or introduced into another segment or subsegment of the lung. The bronchoscope should not be removed from the wedge position until there is no evidence of significant bleeding. The BLB is usually performed after the patient inhales (Zavala, 1978; McDougall & Cortese, 1994; Dierkesmann & Dobbertin, 1998). In the Department for which the author works, about 6 biopsies are performed in cases of suspected sarcoidosis. Most of the samples are 1-3 mm in diameter.

Diagnostic yield of bronchoscopic lung biopsy in sarcoidosis

The specimens obtained during bronchoscopic lung biopsy are small, but in most cases permit accurate histological diagnosis. Some authors (Roethe et al., 1980) indicated that 10 biopsies are optimal for obtaining the diagnosis in stage I and 5 biopsies in stages II and III, but most investigators (Gilman & Wang, 1980; Harber, 1981; Cavazza et al., 2009) found that 3-5 biopsies are enough when the biopsy is performed by an experienced bronchoscopist. Bronchoscopic lung biopsy has a diagnostic yield of 50% to 97% (Mitchell et al., 1980; Roethe et al., 1980; Puar et al., 1985; Leonard et al., 1997; Boer et al., 2009). The density of granulomas in the lung is not uniform (Rosen et al., 1977). Rosen et al. found that nongranulomatous, nonspecific interstitial pneumonitis was a predominant or prominent histopathologic finding in 62% of 128 granuloma-containing specimens from open lung biopsies obtained from patients with sarcoidosis (Rosen et al., 1978). Diagnostic accuracy is increased when the biopsy is taken from the lobes with predominant involvement on chest X-ray or computed tomography scanning (Roethe et al., 1980; Boer et al., 2009). Although the rate of positive findings on BLB is high among patients with sarcoidosis who have radiological evidence of pulmonary infiltration, it is also high (about 60%) among patients, with or even without hilar lymphadenopathy, whose chest radiographs show normal lung fields (Mitchell et al., 1980; Ohara et al., 1993). A generous transbronchial biopsy may show numerous compact, coalescent, non-necrotizing granulomas embedded within hyaline collagen, i.e.
features almost diagnostic of sarcoidosis. Frequently, however, not only bronchial but also transbronchial biopsies show just a tiny granuloma, or even a single giant cell or a Schaumann body, which may be enough for the diagnosis but requires more robust clinical support. Sarcoid granulomas, although classically non-necrotizing, may show necrosis. This generally consists of tiny foci of central fibrinoid ("rheumatoid-like") necrosis, but rarely larger areas of fibrinoid, infarct-like, or suppurative ("Wegener-like") necrosis may be seen (Cavazza et al., 2009). Two characteristic zones can be seen in a typical, well-developed sarcoid granuloma: 1) a central zone or follicle, which is tightly packed with cells composed primarily of macrophages, multinucleated giant cells and epithelioid cells; 2) a peripheral zone consisting of a collar of loosely arranged lymphocytes, monocytes and fibroblasts. Taken alone, granulomas do not confirm the diagnosis of sarcoidosis, since they may also occur in tuberculosis, lymphoma or other malignant disease, berylliosis, brucellosis, extrinsic allergic alveolitis, histoplasmosis, collagen disorders, and others (Müller-Quernheim, 1998). The specificity of noncaseating epithelioid cell granuloma in transbronchial biopsy for the distinction between sarcoidosis and other forms of diffuse lung disease may be high, about 90% (Winterbauer et al., 1993). However, the specificity of noncaseating granuloma may be lower in countries with a moderate or high prevalence of pulmonary tuberculosis. Our findings show that the sensitivity of non-necrotizing epithelioid cell granuloma in bronchoscopic biopsy for the diagnosis of sarcoidosis is high (94%), as is the negative predictive value (92%) of this type of epithelioid cell granuloma for the exclusion of sarcoidosis. However, the specificity of epithelioid cell granuloma without necrosis in our investigated group was relatively low, only 60%. We found a significant overlap in the types of granulomatous inflammation between tuberculosis and sarcoidosis. Moreover, non-necrotizing granulomas were found in several cases of adenocarcinoma and of hematological disorders (Danila & Žurauskas, 2008).

Side-effects of bronchoscopic lung biopsy

Bronchoscopic lung biopsy is a relatively safe diagnostic method. The pneumothorax rate after BLB is 1-5% (Zavala, 1978; Cortese & McDougall, 1997; Becker et al., 1998; Ensminger & Prakash, 2006). Bleeding after BLB in carefully selected patients is rare and not intensive. Life-threatening haemoptysis occurred in 2-5% of BLBs (Cortese & McDougall, 1997; Dierkesmann & Dobbertin, 1998). A lethal outcome, mostly due to massive bleeding or pneumothorax, is rare, occurring in 0-0.2% of cases (Schulte & Costabel, 1998). Uremia increases the risk of bleeding. In the author's institution, serious complications occurred in 2.6% of patients undergoing bronchoscopic lung biopsy. Clinically significant pneumothorax requiring chest tube treatment occurred in 1.6% of patients, and non-significant pneumothorax not requiring chest tube treatment occurred in 0.7% of patients. Severe bleeding occurred in 1% of all BLBs. In all cases the bleeding was stopped during the same procedure, after the bronchoscope tip was occluded in the bronchus for several minutes (Danila et al., 2008). There has been no lethal outcome related to BLB performed in more than 500 patients during the last fifteen years.
Standard transbronchial needle aspiration biopsy The history of transbronchial needle aspiration (TBNA) goes back to 1949 when Eduardo Schieppat presented his new technique of endoscopical puncturing mediastinal lymph nodes across the tracheal spur (Leonard et al., 1997).In 1978, Wang with colleagues first described needle aspiration of paratracheal masses.In 1979, Oho and colleagues reported use of the first needle adapted for the flexible bronchoscope (Midthun & Cortese, 1994).To obtain cytology specimens, 20-22-gauge needles are usually used, while 19-gauge needles are needed to obtain a "core" of tissue for histology.TBNA can be performed safely and successfully during routine flexible bronchoscopy under local anaesthesia.Selection of the proper site for needle insertion to increase diagnostic yield may be facilitated by reviewing the CT scan of the chest.The bevelled end of the needle must be secured within the metal hub during its passage through the working channel.The needle is advanced and locked in place only after the metal hub is visible beyond the tip of the working channel.The catheter can then be retracted, keeping the tip of the needle distal to the end of the fibrebronchoscope.The scope is then advanced to the target area and the tip of the needle is anchored in the intercartilaginous space in an attempt to penetrate the airway wall as perpendicularly as possible.With the needle inserted, suction is applied at the proximal port using a syringe.Aspiration of blood indicates inadvertent penetration of a blood vessel.In this case, suction is released, the needle is retracted and a new site is selected for aspiration.When there is no blood in the aspirate, the catheter is moved up and down with continuous suction, in an attempt to shear off cells from the mass or lymph node. The needle is withdrawn from the target site after the suction is released (Herth et al., 2006).Three to five passes in each location are recommended (Tremblay et al., 2009).Whenever possible, sampling of more than one nodal station is advised to increase diagnostic yield (Trisolini et al., 2008). Endobronchial ultrasonography guided transbronchial needle aspiration biopsy The integration of ultrasound technology and flexible fibrebronchoscopy -endobronchial ultrasound (EBUS) enables imaging of lymph nodes, lesions and vessels located beyond the tracheobronchial mucosa.Recently real-time EBUS-TBNA became possible (Herth et al., 2006).EBUS-TBNA is able to sample stations that may be difficult to reach by mediastinoscopy, such as hilar nodes and posterior carinal nodes (Wong et al., 2007).EBUS-TBNA is usually performed under local anaesthesia and conscious sedation using midazolam.TBNA is performed by direct transducer contact with the wall of the trachea or bronchus.When a lesion is outlined, a 22-gauge full-length steel needle is introduced through the biopsy channel of the endoscope.Power Doppler examination may be performed before the biopsy to avoid unintended puncture of vessels.Under real-time ultrasonic guidance, the needle is placed in the lesion.Suction is applied with a syringe, and the needle is moved back and forth inside the lesion (Herth et al., 2006).Three to five passes in each location are recommended (Tremblay et al., 2009). 
Endosonography guided needle aspiration biopsy Initially designed for the staging of gastrointestinal malignancies, transoesophageal ultrasound-guided fine needle aspiration (EUS-FNA) has proven to be an accurate diagnostic method for the diagnosis and staging of lung cancer and the assessment of sarcoidosis.Lymph nodes in the following areas can be detected by EUS: paratracheally to the left (station 4L); the aortopulmonary window (station 5); lateral to the aorta (station 6); in the subcarinal space (station 7); adjacent to the lower oesophagus (station 8); and near the pulmonary ligament (station 9) (Herth et al., 2006).Usually, EUS-FNA is incapable of reaching lymph nodes located in the anterior mediastinum and the rest of the thorax beyond the mediastinum (Wong et al., 2007).EUS-FNA is usually performed under local anaesthesia and conscious sedation using midazolam.The echo-endoscope is initially introduced up to the level of the coeliac axis and gradually withdrawn upwards for a detailed mediastinal imaging.Since the ultrasound are emitted parallel to the long axis of the endoscope, the entire needle can be visualised approaching a target in the sector-shaped sound field.Pulse and color Doppler ultrasonography imaging can be performed in cases of suspected vascular structures.For the aspirations, 22-gauge needles are standard, although smaller (25-gauge) and larger needles (19-gauge) can be used as well (Herth et al., 2006).The diagnostic yield of EUS-FNA is of about 80 % (Annema et al., 2005), sensitivity of 89-100 % and specificity of 94-96 % (Fritscher-Ravens et al., 2000;Wildi et al., 2004).EUS-FNA is a safe procedure with rare complications (Wildi et al., 2004;Annema et al., 2005). It should be noted that the presence of non-necrotizing epitelioid granulomas in the specimens of the lymph nodes is not diagnostic per se for sarcoidosis.Specificity of the nonnecrotizing epitelioid granulomas depends on prevalence of sarcoidosis and other granulomatous disorders (such as tuberculosis) in a specific geographic region. Mediastinoscopy Mediastinoscopy is a common procedure used for the diagnosis of thoracic disease and the staging of lung cancer.Since its introduction by Carlens in 1959, mediastinoscopy has become the standard to which all other methods of evaluating the mediastinum are compared (Hammound et al., 1999).Mediastinoscopy is effective in assessment of the mediastinum.Porte et al. 
have found that the sensitivity of mediastinoscopy was 97% in 400 mediastinoscopies performed in 398 patients with undiagnosed mediastinal lesions (Porte et al., 1998). It is important to remember that non-necrotizing epithelioid cell granulomas may be related to carcinoma of the lung and other malignant diseases. Sarcoid reactions in malignant disease appear in close association with tumors, in regional lymph nodes, or in more distant locations. They have been reported to occur in a variety of malignant diseases, with particularly high incidences in lymphoproliferative disorders (Laurberg, 1975; Brincker, 1986; Segawa et al., 1996; Tomimaru et al., 2007). Mediastinoscopy is a more invasive method for sampling the mediastinal lymph nodes compared with transbronchial or transoesophageal ultrasound-guided fine needle aspiration. Carried out under general anaesthesia, it is costly and requires in-patient care (Hammound et al., 1999). Although mediastinoscopy is a safe procedure (Venissac et al., 2003; Karfis et al., 2008), death related to mediastinoscopy has been described in the medical literature (Lemaire et al., 2006).

Diagnostic approach in suspected sarcoidosis

The presentation of sarcoidosis varies in its clinical and radiological patterns. Moreover, comparative epidemiological studies have demonstrated that geographic, ethnic, and genetic factors are linked to the specific clinical characteristics of sarcoid patients (Baughman et al., 2001; Hosoda et al., 2002; Thomas & Hunninghake, 2003). The specificity of the diagnostic findings depends on the other dominant diseases (e.g. tuberculosis, extrinsic allergic alveolitis, histoplasmosis) in a specific population or geographic region (Greco et al., 2005; Sibille et al., 2011). The availability of specific diagnostic techniques and patients' insurance coverage differ between countries. Thus, the diagnostic pathway that leads to confirmation of sarcoidosis may differ. Pathognomonic criteria or a diagnostic "gold standard" are absent (Muller-Quernheim, 1998; Baughman & Iannuzzi, 2000). Most authorities therefore include several clinical, radiological, immunological and histological features in their diagnostic criteria, since other disease processes can simulate sarcoidosis in many ways (Muller-Quernheim, 1998; Hunninghake et al., 1999). In principle, the diagnosis of sarcoidosis may be based on a typical clinical picture (symptoms of acute sarcoidosis) combined with a typical radiological picture (Costabel, 2001; Iannuzzi et al., 2007). For patients with no symptoms, bilateral hilar lymphadenopathy, and no other worrisome findings, close clinical observation may be sufficient (Reich et al., 1998; Kvale, 2003; Thomas & Hunninghake, 2003; Reich, 2010).
Diagnosis of sarcoidosis may be based on BAL findings (Costabel, 2001; Nunes et al., 2005). In patients with an uncertain diagnosis after clinical assessment and high resolution computed tomography scanning, typical BAL cellular profiles may allow a diagnosis of sarcoidosis to be established with greater confidence (Wells et al., 2008). In the author's institution, fibreoptic bronchoscopy and bronchoalveolar lavage are the first diagnostic procedures following clinical and radiological examination of the patient. In addition to BAL, we perform endobronchial biopsy if the bronchial mucosa appears abnormal. Examination of BAL fluid always includes microscopy and cultures for tuberculosis; routinely, biopsy material is stained for acid-fast bacteria as well. A typical BAL fluid cellular profile, or the finding of non-necrotizing epithelioid granulomas in endobronchial biopsy material, confirms the diagnosis of sarcoidosis in asymptomatic patients and in patients with acute symptoms (Löfgren's syndrome). At least 60% of all sarcoidosis cases are diagnosed this way. If the BAL fluid cellular profile is not typical and non-necrotizing epithelioid granulomas are not found, bronchoscopic forceps lung biopsy is performed; the finding of non-necrotizing granulomas confirms sarcoidosis. In practice, mediastinoscopy is performed only in exceptional cases, when the diagnosis in patients with mediastinal lymphadenopathy has not been confirmed by less invasive methods. Routinely, our patients are followed up every 3-6 months for at least 3 years, or longer if necessary.

Table 1. Diagnostic value of sarcoid patients' bronchoalveolar lavage fluid CD4/CD8 ratio in relation to clinical symptoms. CI - confidence interval. PPV - positive predictive value. NPV - negative predictive value.
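As a worked illustration of how the indices reported in Table 1 are obtained, the short Python sketch below computes sensitivity, specificity, PPV and NPV from a 2×2 table for a dichotomised CD4/CD8 cutoff. The counts used are hypothetical and were chosen only to echo the order of magnitude of the published sensitivity and specificity; they are not the study data.

```python
# Illustrative sketch of how sensitivity, specificity, PPV and NPV are
# derived from a 2x2 table for a dichotomised test (e.g. BALF CD4/CD8
# ratio above vs. below a chosen cutoff). Counts are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # positive test among patients with sarcoidosis
    specificity = tn / (tn + fp)   # negative test among patients without sarcoidosis
    ppv = tp / (tp + fp)           # sarcoidosis among those testing positive
    npv = tn / (tn + fn)           # no sarcoidosis among those testing negative
    return sensitivity, specificity, ppv, npv

# Hypothetical example: 100 sarcoid and 100 non-sarcoid patients, with a
# CD4/CD8 cutoff of 3.5 exceeded by 52 of the former and 6 of the latter
# (roughly echoing the reported ~52 % sensitivity and ~94 % specificity).
sens, spec, ppv, npv = diagnostic_metrics(tp=52, fp=6, fn=48, tn=94)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

Because PPV and NPV depend on the case mix (pre-test probability), the predictive values in such tables transfer only to populations with a similar prevalence of sarcoidosis.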
The Perceptual Cues that Reshape Expert Reasoning

The earliest stages in our perception of the world have a subtle but powerful influence on later thought processes; they provide the contextual cues within which our thoughts are framed and they adapt to many different environments throughout our lives. Understanding the changes in these cues is crucial to understanding how our perceptual ability develops, but these changes are often difficult to quantify in sufficiently complex tasks where objective measures of development are available. Here we simulate perceptual learning using neural networks and demonstrate fundamental changes in these cues as a function of skill. These cues are cognitively grouped together to form perceptual templates that enable rapid 'whole scene' categorisation of complex stimuli. Such categories reduce the computational load on our capacity limited thought processes, they inform our higher cognitive processes and they suggest a framework of perceptual pre-processing that captures the central role of perception in expertise.

When chess grandmasters glance at a game they simply 'get it': not only do they choose better moves than lesser players, but often these moves occur to them within seconds of first looking at a game 1 , long before they have an opportunity for detailed search and analysis. How are they able to do this? Research on expertise highlights several key aspects. In games like chess, a high IQ is not necessary 2 but at least 10,000 hours 3 of training is vital. Over this time 300,000 or more chunks 4 , small frequently occurring patterns, will be learned. This learning process will be non-linear: there will be times when skill plateaus 5 and sharp transitional points need to be negotiated 6 . But learning chunks and coupling them with moves is not enough for good decisions. In the game of Go, for example, the best move predictor uses chunks and matches an expert's choice 34% of the time 7 , insufficiently accurate for expert play by itself. To address the issue, amongst many others, of integrating local knowledge such as chunks into a global relational context, the Template Theory 8 of expertise was developed, which models how chunks can be combined to form larger cognitive representations of the task space. The Template Theory is a direct result of earlier work by Simon and colleagues who had considered the role of perception 9 in problem solving, particularly the first seconds of considering a complex problem 10 .
Template Theory addresses the primacy of perception and pattern recognition in tasks that previously had been thought to be solely the domain of logical reasoning such as search, planning and evaluation, i.e. the domain of conscious thought processes. Such conscious reasoning is characterised as slow, serial and capacity constrained whereas the perceptual processes Simon considered are fast, parallel and unconstrained in capacity 11 . Recent work in this area has shown that unconscious perceptual learning can occur in domains as complex as board games 12 , speech 13 and mathematics 14 . In such cases early perceptual processes can adapt and learn the complex and often noisy relationships between visual elements, effectively acting as a pre-processing step that influences the later stages of cognition. Most recently this has developed into the perceptual learning of human expertise 15 and is characterised by the developmental changes induced in early sensory regions of the brain by extensive experience. Such early stage adaptations change the way in which a perceiver extracts information from the environment and it is often implicit in two distinct ways: perceptual learning is implicit in that it is not a declarative learning process, instead it occurs without the perceiver being aware of what is being learned 16,17 , and perceptual expertise is implemented without awareness in so far as the perceiver is not overtly aware of the influence their acquired knowledge has on the decisions they make 18,19 . Simon summarised his results in the following way 20 : ''The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer […]'' whereby ''[w]e are aware of the fact of recognition, which gives us access to our knowledge […]; we are not aware of the processes that accomplish the recognition.'' [original emphasis]. The goal of this work, then, is to find the perceptual templates that amateurs and professionals have acquired through perceptual learning and that they implement as the basis of their perceptual expertise when playing the oriental game of Go. Such templates are reduced representations of the state of a game, they contain a subset of the total number of pieces on the board but this subset makes up the perceptually learnable relationships in the game. The individual cues that make up a template, i.e. the positions and colours of the game pieces, are processed in parallel during the early stages of sensory perception, much as the global relationships between elements in natural scenes are processed in parallel in early stages of perception 21 . It is at this level of cognition at which templates are employed giving rise to expert intuition 22 . A principal difference between the amateurs and the professionals lies in their perception of the global context of the game in which individual moves are made 23,24 . In discussing the role of these templates we will use recent results on early perceptual learning and the pre-processing of visual stimuli to suggest that experts use the rapid recognition of complex patterns, mediated by perceptual templates, in order to efficiently constrain and guide their search for good moves. Visual illusions highlight the subtle and persistent nature of such perceptual pre-processing. 
The Ponzo illusion shown in figure 1 comes about through very early processing in the visual cortex, area V1, and the subjective impact of the illusion is influenced by the surface area of an individual's V1 region 25 , highlighting the early stages at which such effects occur and how they are influenced by gross neural properties. In figure 1a. the illusion is that the top red bar is longer than the bottom red bar and it is induced by the parallel railway lines that appear to draw closer together in the distance. The key is that the two red bars appear to be placed at different distances from the observer, a perspective strongly informed by the relationship between the converging tracks and the red bars. The converging tracks act as cues that inform the observer of the different scales of objects in different parts of the scene, and so the apparent differences in the size of the bars is coherent with respect to these cues. Figure 1b. shows how a sparse representation retaining only the two contextual cues (the two lines that converge) and the relevant information (the two red bars being compared) is able to maintain the sense of the illusion when no other information is retained. Even when we are consciously aware of being deceived by the illusion, such overt awareness does not easily change the sense of the illusion, demonstrating how the early stages of perception that use such learned cues are not readily switched off. But when the contextual cues are removed (figure 1c.) the illusion vanishes: the context informs us of a particular interpretation of the environment and while this guidance is often a very useful heuristic at times it can lead us astray. Such illusions are most likely at least partly a result of early cognitive processes reducing the vast amount of information we receive from the environment 19 . It has long been recognised that the information capacity of short term memory is tightly constrained 26 and it is now thought to hold only a few elements 27 . This bottleneck, called the Working Memory (WM) 28,29 , sets an upper limit on how much information can be held in active memory at any given time. WM capacity is not thought to be improved by task specific learning such as chess training 30 or by perceptual expertise 31 , although generic (i.e. non-task specific) training techniques have been shown to improve WM capacity 32 . In order to manage these limitations early perceptual processing does not expand the capacity of WM, instead it reduces the amount of information being passed to our WM, capturing only the relevant information necessary for higher order processing. In the Ponzo illusion of figure 1 the environment is reduced to the commonly occurring regularities that would usually inform our scene comprehension and set the context, i.e. the two converging lines, as well as the information relevant to the task, the parallel red bars. In this sense the contextual information can be thought of as a general purpose caricature of the scene that can be applied flexibly in many different circumstances in which the omitted details are not immediately relevant. Such considerations suggest an important difficulty our cognitive processes are able to address: the total information contained in an environment is too vast to process using deliberative reasoning but the information contained in localised chunks of the environment is too focused to be useful by itself. 
[Figure 1. (a) The original scene with all its complex detail; (b) some of the cues and the relevant information: the red bars about which a decision regarding their relative lengths needs to be made; (c) the relevant information without the cues. In 1b the detailed information has been removed but the information that informs our judgement of the length of the red bars is retained. Without the perceptual template the illusion vanishes, as can be seen in 1c.]

What we would like to find are the small number of visual cues that make up the salient aspects of the environment, and to show how these cues change as a function of skill. One approach is to use a neural network that can automatically form ordered, compressed representations of sensory perceptions, such as Self-Organizing Maps (SoMs) 33 . SoMs have been used as a model of neurological organisation 34 as well as a tool for data-mining 35 , and they have the benefit of being unsupervised learners 36 , that is to say they extract structural regularity from data without external guidance. This last point is significant: from a behavioural perspective, human players implicitly learning relationships between game pieces are not aware of what is being learned, they are only picking up on the statistical regularities in the task environment 17 , and this requires an unsupervised process. An SoM can be thought of as a non-linear extension of principal component analysis 37 based upon biological principles 34 . The key idea is that a complex perception of the environment containing many different elements, encoded as a vector x_i = [x_1, x_2, …, x_n], is processed in parallel by a large array of neurons, as previously theorised 38 and recently observed in the laboratory 39 , whereby each idealised neuron represents a learned model of the environment. Algorithmically, an SoM neuron (a model m_j) is a vector of the same dimensions as the vectors that represent a perception of the environment: m_j = [m_1, m_2, …, m_n] for the j ∈ {1, …, N} neurons in the SoM. When an SoM is presented with a training vector x_i of the environment it is compared to all SoM neurons and one, the winning neuron m_c, is that which satisfies

d(x_i, m_c) = min_j d(x_i, m_j),

where d(v_1, v_2) is usually the Euclidean distance between vectors v_1 and v_2. This is the metric used in the SoM MATLAB toolbox developed at the Helsinki University of Technology 40 , and it was their implementation that was used in this work. Initially all of the neurons m_j have randomised elements. After selecting m_c the weights of the neurons m_k, k ∈ {indices of neurons 'local' to and including neuron c}, are updated:

m_k(t+1) = m_k(t) + a(t) k(c, k) [x_i - m_k(t)].

The functions k and a refer to the convolution kernel and the learning rate 33 respectively, and while they have a temporal dependency within the SoM toolbox they were otherwise left fixed throughout this work. This algorithm updates the best matching neuron in the SoM and all neurons locally connected to it; see figure 2 and the Methods section for a complete description of the SoM implementation. Our task environment is the choice of moves made in the oriental game of Go. Go is played on a board made of a 19 × 19 grid, on the vertices of which game pieces, called stones, are placed. Before the game starts, players choose to play with either black or white stones and they take it in turns to place a stone on one of the vertices of the board. The goal is to capture more territory than your opponent by surrounding regions of the board with stones of your own colour that are connected to each other in chains.
A chain is formed by stones that are placed directly next to each other in the north, south, east or west direction on the grid. Stones are only removed when they are completely surrounded by the stones of the other player, such that there are no more free vertices within the surrounded territory for the surrounded player to play on. At the end of the game the player with the most territory surrounded wins.

[Figure 2. Construction of a training vector: from a database of 18,000 amateur or professional game records, move k (the training move) is preselected; the game record is encoded as a sequence of moves; the game is played out on a 'virtual board' until move k is played; and the state of the game when move k occurred is encoded in the -1, 0, 1 elements of a 1×361 vector, which serves as a training vector for the SoM.]

At any given point in the game there is a maximum of 361 possible vertices on which to place a stone, or alternatively there are a maximum of 360 positions that can influence any given move. In order to reduce this number we need to find those stones that frequently co-occur when a given move is made; these stones make up the contextual cues, and a combination of these cues is an instance of a perceptual template. Note that for any move there will be many different board configurations in which it occurs, and as a template is a reduced representation of the state of the board (containing as it does only those stones that have occurred frequently together), any given template can fit multiple game instances. In this sense SoMs 33 generate perceptual templates that categorise the different 'game scenes' in which moves are made. This constancy in the relationship between the multiple cues and the move itself is necessary for an expert's perceptual learning to occur 41 and it is the basis on which our SoMs are able to categorise high dimensional data 42 ; also see the Discussion below. Each SoM neuron is a 1×361 vector representing a 'model' of the Go board when a move was made; each element within a neuron is a learned weight w_i in the interval [-1, 1] representing how 'black' or 'white' each board position is in that neuron's model. Instead of using these continuous values we set a cut-off value using a threshold parameter t: |w_i| > t. This cut-off restricts the elements of each neuron to the discrete values {-1, 0, 1}, so each neuron encodes a model of the game containing only black, white or empty positions based on the threshold. The unique set of these neurons are the perceptual templates 42 : they represent a collection of different contextual models of the game environment and are the reduced representations of the state of the game corresponding to the structured regularities that players are repeatedly exposed to each time they make a move during game-play. Table 1 shows the template statistics: the total count of templates for amateurs and professionals, the percentage increase in the number of templates that a professional has compared to amateurs, the percentage of templates that are shared between amateurs and professionals, and the size (in terms of number of stones) of the largest templates. Most notably, the difference in the number of templates for amateurs and professionals is quite small, ranging from around 12% to 38%. However, the number of templates that are common to both amateurs and professionals is equally small, ranging from around 12% to 19%.
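To make the pipeline concrete, the sketch below re-implements its main steps in Python/NumPy: encoding a board position as a 1×361 vector over {-1, 0, 1}, one Kohonen-style update of a small SoM, and thresholding the trained neurons into discrete templates whose overlap between two sets can be counted (cf. Table 1 and figure 4). The original analysis used the Helsinki SoM MATLAB toolbox; the map size, learning rate, Gaussian kernel width, threshold and function names below are illustrative assumptions, not the authors' settings.

```python
# A minimal NumPy sketch of the pipeline described above; an illustrative
# re-implementation, not the authors' MATLAB-toolbox code.
import numpy as np

BOARD = 19                      # 19 x 19 Go board -> 1 x 361 training vectors
N_SIDE = 10                     # SoM neurons arranged on an N_SIDE x N_SIDE grid
rng = np.random.default_rng(0)

def encode_board(black, white):
    """Encode a board state as a 1x361 vector: +1 black, -1 white, 0 empty.
    `black`/`white` are iterables of (row, col) positions (0-based)."""
    v = np.zeros(BOARD * BOARD)
    for r, c in black:
        v[r * BOARD + c] = 1.0
    for r, c in white:
        v[r * BOARD + c] = -1.0
    return v

# SoM weights: one 361-dimensional model vector per neuron, randomly initialised,
# plus the fixed 2-D map coordinates used by the neighbourhood kernel.
weights = rng.uniform(-1, 1, size=(N_SIDE * N_SIDE, BOARD * BOARD))
coords = np.array([(i, j) for i in range(N_SIDE) for j in range(N_SIDE)], float)

def som_step(x, weights, alpha=0.05, sigma=1.5):
    """One update: find the best-matching neuron and pull it (and its map
    neighbours, weighted by a Gaussian kernel) towards the training vector."""
    c = np.argmin(np.linalg.norm(weights - x, axis=1))        # winning neuron
    dist2 = np.sum((coords - coords[c]) ** 2, axis=1)          # map distances
    kernel = np.exp(-dist2 / (2 * sigma ** 2))                 # fixed neighbourhood
    weights += alpha * kernel[:, None] * (x - weights)         # Kohonen update
    return weights

def extract_templates(weights, t=0.5):
    """Threshold each trained neuron to {-1, 0, 1} and keep the unique set."""
    discrete = np.where(np.abs(weights) > t, np.sign(weights), 0).astype(int)
    return {tuple(row) for row in discrete}

# Toy usage: real training iterates over many board states drawn from game
# records; here a single hypothetical position is repeated for illustration.
for _ in range(200):
    x = encode_board(black=[(3, 3), (3, 4)], white=[(15, 15)])
    weights = som_step(x, weights)

templates = extract_templates(weights, t=0.5)
other_set = extract_templates(rng.uniform(-1, 1, weights.shape), t=0.5)
print(len(templates), len(templates & other_set))  # template count and overlap, cf. Table 1 / figure 4
```

Keeping the learning rate and kernel fixed mirrors the choice described above, although SoM implementations more commonly anneal both over the course of training.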
There is also a persistent difference in the maximum size of amateur and professional templates, professionals being more than 100% larger at some threshold values although there are relatively few of the largest templates for either amateurs or professionals (see figure 3). Results The distribution of the number of stones in each template for four different threshold values is plotted in figure 3. For both amateurs and professionals the mean number of stones was relatively small, between approximately 3 and 6 stones and was more insensitive to changes in the threshold parameter than might be expected. On the other hand the tails for these distributions are quite different for the two classes of players. Table 1 shows the maximum size differs greatly, but this is caused by a very small number of larger templates. A closer look at the templates showed that the professionals used a small number of localised patterns of stones so frequently that they were learned by the SoMs, something that happened considerably less often for the amateurs. However the central portion of the probability distributions in figure 3 remains qualitatively very similar across threshold values and player class. Figure 4 measures the intersection between the amateur and professional templates as a function of the professional template indexing (the indexing is discussed in the Methods section). As the index increases (i.e. the size and complexity of the templates increases), the rate of change in the size of the intersection set decreases until eventually adding a new professional template does not increases the size of the intersecting set. A gradient of 1 in this curve implies that for each professional template added there is a corresponding amateur template, near the origin a sustained gradient of 1 is clear but as the indexing increases the gradient progressively decreases, indicating that the more complex a professional template is the less likely it will also be in the amateur set of templates. Discussion The goal of this work has been to find and compare the structured information, in the form of contextual cues, that is available to experts and non-experts in the game of Go. It is argued that this information is used during implicit learning and subsequent early perceptual processing of information within a given domain of expertise to aid in fast and accurate categorisation and decisionmaking in complex environments. In particular, these processes enable the reduction of the dense information perceived in a complex natural environment using the available structured regularities. Furthermore, the integration of these cues into a cognitive whole leads to the notion of perceptual templates, the aggregate, sparse representations of the salient features of the task environment that enables many of the remarkable feats reported in studies of domainspecific expertise. Figure 5 shows a cognitive model that demonstrates how such perceptual templates might be implemented. The entire scene is initially processed by low level visual systems 43 combined with perceptual templates to produce a perceptual whole in a very short period of time. Estimates of the length of time it takes to categorise the 'gist' of complex scenes range from about 30 milliseconds up to around 150 milliseconds [44][45][46][47] . Note that in the study by Thorpe et al. 
Note that in the study by Thorpe et al. 44, presentation times of the images (20 ms) were too brief to allow eye movements to search the image, effectively requiring the subjects to comprehend a complex image as a perceptual whole. This initial 'feed forward sweep' 45 of perceptual information is too quick for neural feedback pathways to influence the scene perception, suggesting that strictly feedforward 18 processing of complex visual scenes is sufficient for early perceptual categorisations. Recent work on the physiological basis of expertise, both theoretical 48 and empirical 49, provides support for cognitive templates being located in the inferior temporal cortex. It is this region that fMRI 50 and diffusion tensor imaging 51 studies have strongly implicated in the visual perception of Go board patterns in experts but not novices. In this sense our ability to form a categorical impression of a complex scene is almost immediate, and it is this categorical impression that perceptual templates capture. The implementation process is as follows: in figure 5 the combination of the four cues A, B, C and D is compared in parallel to all of the perceptual templates (simplified models of the world) the perceiver has learned; template 3 minimises the difference between the model it encodes and the current visual environment, whereas templates 1 and 2, for example, do not. This results in the activation of a single template (template 3) that acts to contextually activate a later network of processing modules, i.e. it determines which of the modules x, y or z should be activated to provide further analysis. Template 3 activates the initial eye saccade to some region of the scene (e.g. to cue B) for more deliberate processing in a serial fashion. A combination of contextual information and localised analysis, based on higher-level cognitive outputs, may lead to further eye saccades that allow for greater analysis of the environment. Such detailed analysis is usually an evaluative process requiring a small number of alternative strategies to be maintained in working memory at the same time. In this sense there is considerable conceptual similarity between this model and both the CHREST 52 and the guided search 38 models. Note that changes in the perceptual templates will result in changes in the patterns of the eye saccades that are related to the development of expertise 53,54. This early processing of the context persistently influences perception, just as visual illusions do, and provides the necessary categorical information required to constrain later search heuristics and the evaluation of moves in order to keep the computational load of such tasks within the bounds of our limited cognitive capacities. In light of the earlier discussion of visual illusions, it is known that an illusion's effect decreases over the time course of perception, illusions being strongest in the first stages of perception and then modulated by later, higher-order cognitive processes 55. On the basis of this evidence, and that of the role of V1 in an illusion's subjective impact 25 and in contextual relationships in visual scenes 56, it is reasonable to suggest that the categorisation of a scene happens relatively early in visual processing and is modulated by later, top-down processes that enable a more precise comprehension of the local characteristics of the scene.
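To make the matching step of this model concrete, the following MATLAB sketch (illustrative only, not the authors' implementation) selects the stored template that minimises the mismatch with the current scene; `scene` and `templates` are hypothetical variables, and counting mismatches only on the positions a template actually models is an assumption of the sketch.

% Illustrative sketch only: competitive selection of the best-matching template.
% scene is a 1 x 361 vector in {-1, 0, 1}; templates is an N x 361 matrix.
N        = size(templates, 1);
modelled = templates ~= 0;                               % positions each template models
mismatch = sum((templates ~= repmat(scene, N, 1)) & modelled, 2);
[~, winner] = min(mismatch);                             % index of the activated template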
This is very similar to the scene-centred approach to understanding the holistic properties of a scene, called the 'gist', recently put forward by Oliva and colleagues 21,43,57. There is a significant difference, though, in that Oliva et al.'s work is based upon natural scene analysis and not strategic games, but further work is expected to clarify the similarities in these different approaches. The fact that the same mechanisms that are in play in game expertise might also be in play during natural scene comprehension is an exciting possibility that suggests a very general mechanism may mediate an exceptionally broad range of complex task environments. As mentioned above, there is considerable conceptual similarity between our results on perceptual templates and the research of Gobet and others on Template Theory; however, the two are not synonymous and some important distinctions should be made. Template Theory developed out of chunking theory, a theoretical construct introduced by Chase and Simon 53 that was based on the earlier work of de Groot 58 on chess expertise and Miller 26 on capacity limits in our cognitive processes. In the original Template Theory 8, chunks containing several chess pieces are learned by novices, but as their experience grows so too does the size of these chunks, in terms of the number of pieces they contain. Chunks are stored in long-term memory but pointers to these chunks are held in short-term memory, which can only hold around 3 such pointers due to capacity restrictions. As the chunks grow in size the number of pointers does not increase but the size of the chunks they point to does, thereby allowing experts access to greater amounts of information and circumventing the limited capacity of our working memory. Templates are larger and more elaborate structures than chunks: they contain 15 to 20 game pieces 4, but they also have slots into which smaller chunks can be inserted 8. A template, then, is an example of a 'schema' as studied in psychology, where schemas are '… implicitly learned in the process of acquiring substantive knowledge …' 4. Much of the high-level description of Template Theory is similar to the perceptual templates of this study: perceptual templates contain a reduced number of game pieces (the core in Template Theory) that are implicitly learned during the course of acquiring expertise; they are augmented by detailed and localised analysis of the board (similar to the role chunks have with respect to slots); and they are composed of consistently co-occurring game pieces that augment strategising and move selection and circumvent some of our cognitive limitations. The most significant difference between the two lies in the method of extracting the templates. The CHREST cognitive architecture that implements Template Theory uses a 'roving eye' to scan many chess games in order to build chunks first and then more elaborated structures that eventually become templates 59. That is to say that Template Theory builds up from chunks to form more elaborated structures containing a core and slots that can then contain a variety of different chunks. There is considerable empirical support for this model 59. On the other hand, the cognitive implementation of perceptual templates acts much more like 'SoM-filtered' Bayesian inference. It does not start from chunks and build up like Template Theory; instead it takes the whole board as a single perceptual input.
Each training vector x_i is a whole board configuration from which a move was then made: x_i → m_k, where the training vector x_i varies from instance to instance but the move m_k does not (see figure 2). This implicitly conditions the training vector on the move that was then made. Given thousands of training vectors conditioned on a fixed move, a SoM (a single 50 × 50 neural network) learns to categorise the board configurations according to the frequently occurring game pieces, filtering out all the infrequently occurring pieces. This implementation is quite different from Template Theory: it implies that when a certain move is made, we implicitly learn the statistical regularities associated with that move. The resultant templates can then be used to invert this process: when a board configuration is perceived, for example during a game, and early perceptual processes are required to suggest a few possible moves (as well as communicating contextual-categorical information), the templates compete amongst themselves, most likely based on a competitive activation model 60, to communicate to higher cognitive processes all of the moves whose SoMs had learned that particular template. This is because, while the mapping x_i → m_k fixes the move during one SoM's learning, there are other SoMs trained on different moves using different training vectors that may have learned the same perceptual template; i.e. a single template might have been learned by multiple moves, resulting in multiple possible next moves being generated from one template. Given the considerable differences in these two template paradigms it is not clear where the similarities and the differences between the resultant templates lie. There is already some interesting evidence suggesting a difference in the two methods. The largest template found using the highest threshold parameter (t = 0.95) contained 24 stones (see table 1). By comparison, Gobet and Simon reported psychological experiments 4 showing that chess Masters have a maximum of around 15 chess pieces in their chunks/templates. This most likely occurs because chess positions are less stable than those of Go: stones in Go remain in place unless captured, which happens rarely when compared with how often chess pieces move. This means that larger templates can be learned more readily in Go because of the perceptual regularity of the game pieces. A useful exercise for further study will be to implement a SoM-based perceptual template analysis of chess positions. Furthermore, future research into the role of such templates in expert cognition should also be critically informed by psychological experiments. For instance, board configurations that more closely match professional templates should result in more rapid generation of possible next moves, i.e. perceptual templates should increase the fluency of move generation for experts. Similarly, board configurations that do not easily match any perceptual templates should increase the time it takes a player to generate options for the next move. Such experiments will help establish the psychological validity of perceptual templates and further inform their theoretical development.
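As a rough sketch of the inverse lookup described above (illustrative only; templatesByMove is a hypothetical structure, not part of the authors' code), each move's learned templates can be tested against the perceived board and every move with a matching template suggested as a candidate.

% Illustrative sketch only: generate candidate next moves from the templates.
% board is the perceived 1 x 361 board in {-1, 0, 1}; templatesByMove{k}
% holds the templates learned by the SoM trained on move k.
candidateMoves = [];
for k = 1:361
    T = templatesByMove{k};
    for j = 1:size(T, 1)
        modelled = T(j, :) ~= 0;                 % positions this template models
        if all(board(modelled) == T(j, modelled))
            candidateMoves(end + 1) = k; %#ok<AGROW>
            break;                               % one matching template is enough for move k
        end
    end
end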
These results paint an intriguing picture of the perceptual templates available to skilled and unskilled practitioners. While there are many thousands of unique templates ('game scene' categories), this still represents a massive reduction in the total number of possible scenes that would otherwise need to be analysed deliberately at all levels of detail, much as many artificial intelligence systems do. However, despite the striking similarities in several high-level properties, such as total number (table 1) and distributions of sizes (figure 3), the overlap between amateur and professional players is small, and they systematically diverge for the larger and more complex templates (figure 4). From this we see that a professional's perceptual learning in the game of Go is informed by quite different information to that of an amateur and that they share only the most basic information. This is not a sufficient explanation for all of expertise: these templates provide an approximate analysis of the game; they still need to be connected with later cognitive processes and ultimately with a decision regarding where to move. In this light, the current work provides novel evidence of a measurable mechanism for some of the remarkable differences in performance between expert and non-expert decision-making in complex tasks.

Methods

Game data and preprocessing. We used 18,000 games of professional ranked players (rank 5 dan professional and above) and 18,000 games of amateur players (rank 1 kyu, 1 dan or 2 dan amateur). The professional games were part of the GoGod database, available commercially at www.gogod.co.uk, and the amateur games were recorded during online play from the KGS Go server: www.gokgs.com. Each game was converted into a 3 × m matrix, where m denotes the number of moves played during the game and each move m_i = [x, y, ±i], where x, y ∈ {1, …, 19} are board co-ordinates and ±i is the move number (a negative i represents a black move on the i-th turn, a positive i represents a white move). This is sufficient to encode an entire game as a sequence of moves, but it ignores possible captures, where previously played stones are removed from the board, freeing up positions that can be played later in the game. While this does not affect the encoding of the game (some positions might be played more than once during a game, but this is irrelevant for the sequence of moves played), it does have an effect on the learning vectors that are presented to the SoM where stones have been removed after capture. This issue is addressed at the point at which the game is 'played out' during the SoM analysis discussed below.

SoM implementation. Figure 2 provides a diagrammatic representation of the implementation. In order for a SoM to learn which stones are commonly present when a move is made, the state of the game when that move was made needs to be encoded in a 1 × 361 vector (a linearised representation of the 19 × 19 board), where each element of this vector is +1, -1 or 0, representing a white stone, a black stone or an empty position, respectively. Starting with either the professional or the amateur database of games, we first nominate a (linearised) position on the board, position k ∈ {1, …, 361}. A game record is then chosen from the database and the game is recorded, in the sequence in which the moves were played, in a 19 × 19 matrix (representing the current state of the board, initially with all entries set to 0), where either +1 or -1 is recorded depending on whether a move was white or black, respectively.
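For concreteness, a minimal MATLAB sketch of this encoding step is given below (illustrative only, not the authors' code); the capture handling described next is omitted, and the column-major linearisation chosen via sub2ind, together with the example value of k, are assumptions of the sketch.

% Illustrative sketch only: produce the 1 x 361 training vector for a
% preselected linearised position k from one game record.
% moves is the 3 x m game record with columns [x; y; +/-i].
k     = 72;                                 % example preselected position, 1..361
board = zeros(19, 19);                      % 0 = empty, +1 = white, -1 = black
trainingVector = [];                        % stays empty if k is never played
for i = 1:size(moves, 2)
    x = moves(1, i);  y = moves(2, i);
    colour = sign(moves(3, i));             % -1 = black move, +1 = white move
    if sub2ind([19 19], x, y) == k          % move k is about to be played
        if colour == -1
            board = -board;                 % normalise so the move at k is 'white'
        end
        trainingVector = reshape(board, 1, 361);
        break;                              % the state before move k is the training vector
    end
    board(x, y) = colour;                   % otherwise record the stone and continue
end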
Each new move is checked to see if it is a capturing move; if so, all of the corresponding captured stones are removed from the matrix and the game continues. This is repeated until a move is made at position k (but is not yet recorded in the matrix). If the move at position k is a white move, the state of the game is left unchanged; if it is black, the game state is multiplied by -1, essentially making every move at position k a 'white' move. This does not change the strategic relationships between the stones but it does prevent the SoM from learning separate templates for white and black moves. The game is then stopped and the current (linearised) state of the board is the training vector for the SoM. In practice this means the training vectors are 1 × 361 vectors containing ±1 and 0 elements, representing the state of the game when a move was made at position k. Note that each initial game record has a length equal to the number of moves and so changes from one game to the next. However, the training vectors representing a board configuration are all of the same length, 1 × 361, enabling them to be compared with the 'world model' encoded by each SoM neuron. This procedure requires each of the 18,000 amateur and professional games to be played until move k is found, but in some games k is never played, in which case the game record is not used, leading to slightly fewer training vectors. Each training vector is an input into a 50 × 50 neuron SoM that is dedicated to learning the board patterns when move k is made. This procedure is repeated for all k ∈ {1, …, 361}, resulting in an aggregate SoM neural network containing 50 × 50 × 361 = 902,500 neurons, where each neuron is a 1 × 361 vector representing the learned real-valued weights in the interval [-1, 1] for each board position. There are two of these aggregate SoM networks, one for the professionals and one for the amateurs. These networks were too large to analyse directly, so the learned weights were thresholded at a value t (described below); different values of t were used to generate the results, and this also significantly reduced the size of the datasets we had to analyse (see Table 1).

Thresholding and sorting templates. In order to see where the most significant differences in the templates lie, they were sorted and indexed in three steps. A MATLAB script takes a list of templates and first finds the unique templates (i.e. after thresholding of the learned weights, the built-in MATLAB function unique() removes repeated templates and sorts them in ascending order), then sorts them by the frequency of each stone's occurrence in the list, and finally by the number of stones in each template. In this script, list contains 902,500 vectors (trained SoM neurons) of size 361 × 1 with real-valued elements in the interval [-1, 1], and t is the threshold value. The list that is output in the final step is a reduced set of unique, sorted templates with discrete elements containing the values {-1, 0, 1}, representing the position and colour of stones on a linearised board.
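A minimal MATLAB sketch of these three steps is given below; it assumes the trained weights are held as a single 902,500 × 361 matrix and uses illustrative variable names and sort keys, so it should be read as a hedged reconstruction rather than the original script.

% Illustrative sketch only: threshold, deduplicate and index the templates.
t = 0.65;                                        % example threshold value
templates = zeros(size(list));                   % list: 902,500 x 361 learned weights in [-1, 1]
templates(list >  t) =  1;                       % confident 'white' positions
templates(list < -t) = -1;                       % confident 'black' positions
templates = unique(templates, 'rows');           % step 1: unique templates, ascending order
stoneFreq  = sum(templates ~= 0, 1);             % how often each board position is occupied
[~, order] = sort(abs(templates) * stoneFreq');  % step 2: order by frequency of the stones contained
templates  = templates(order, :);
nStones    = sum(templates ~= 0, 2);             % step 3: finally index by template size
[~, order] = sort(nStones);
templates  = templates(order, :);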
Supporting the Development of Pre-Service Primary Teachers' PCK and CK through a STEM Program

STEM (Science-Technology-Engineering-Mathematics) education has received great attention in recent years, not only for promoting interest and learning in these areas but also for encouraging children and young people to pursue careers in them. This research explored the effects of a STEM program on developing primary pre-service teachers' Content Knowledge (CK) and Pedagogical Content Knowledge (PCK) about sound. A qualitative and interpretative study analyzed the impact of a STEM program on the CK and PCK of 18 pre-service primary teachers who were attending a master's degree program in a Portuguese higher education institution. The data were collected from their lesson plans, field notes, a focus group interview, and the participants' assignments throughout the STEM activities carried out. Findings revealed several scientific misconceptions and weaknesses in the participants' PCK. Nevertheless, there was a clear positive impact on the pre-service teachers' CK and PCK, specifically regarding the principles underlying STEM integration that were proposed in the conceptual framework.

Introduction

STEM (Science-Technology-Engineering-Mathematics) Education is related to the need to attract students to STEM areas. The development of STEM literacy has become an educational priority [1] to improve the numbers reported by the Organization for Economic Co-operation and Development [2], according to which, in more than half of the OECD countries, the percentage of students who obtain a degree in STEM areas is lower (24% on average) than in other areas. Even though the advantages of STEM education are well known, there have been numerous obstacles to the dissemination of these practices in the classroom, including a poor understanding of STEM education [3] and of scientific concepts [4,5]. Teachers have also shown difficulties in adopting non-traditional teaching strategies [4,5] and in integrating content from different STEM areas [6,7]. In fact, the way in which STEM disciplines should be integrated is still the subject of debate in the literature [8,9]. The situation is more problematic in primary education, where most teachers have limited knowledge in STEM areas, particularly regarding Inquiry-Based Learning (IBL) [10]. Research has shown that consistent exposure to IBL is fundamental for preparing future generations of primary teachers to teach using IBL, as well as STEM education [10]. In addition to the obstacles to STEM Education already highlighted, there are others specifically pointed out at the first levels of education, such as the problematic integration of engineering practices [11], the lack of attention to science [12] and technology [13], the overemphasis on mathematics content, and the absence of engineering in the curricula of primary education [14]. Therefore, "consistent exposure to the inquiry may be fundamental for preparing future generations of teachers to teach using inquiry as well as future STEM professionals" [10] (p. 159). Teachers play a decisive role in the successful implementation of STEM Education [1], so they must be supported in the development of their Content Knowledge (CK) and Pedagogical Content Knowledge (PCK) [15] in teacher education. Thus, this study aims to examine the effects of a STEM program on pre-service primary teachers' CK and PCK.
Theoretical Framework

The discourse in the field of science education in recent years has consistently referred to the importance "of cross-curricular 21st century skills such as collaboration, critical thinking, problem solving, design and engineering skills, creativity, and ICT literacy" in the early years [16] (p. 89). The combination of an integrated STEM approach with IBL creates an excellent opportunity for the development of these skills, to which communication is added [17]. However, despite the potential of STEM integration for motivating students to pursue scientific careers, studies point to some concerns about how STEM integration is carried out and about the possibility of science content and process learning being lost in hasty STEM teaching [18]. Nonetheless, research demonstrates that the success of STEM Education depends on the teachers' self-confidence, their perceptions, the importance they attribute to it, their attitudes, etc. [19][20][21][22]. Several studies focus on this subject. For example, one study [22] with 25 prospective teachers and 21 science teachers sought to understand the participants' perceptions about the use of a project methodology based on a STEM approach combined with Project-Based Learning (PjBL) for science teaching. The pre-service teachers and teachers attended an eight-hour continuing education workshop on STEM-PjBL, during which they built toys to address physics topics with students: forces, sound, thermodynamics, electricity, etc. The results revealed that the students' involvement in this type of activity promoted their motivation for science classes. However, despite recognizing these benefits, the participants pointed to the lack of time, resources, and training in STEM Education as impediments to its implementation in the classroom. Kim and Bolger [19] described research in which 119 pre-service teachers (PSTs) developed a lesson plan that included an interdisciplinary STEM approach as part of a course they were attending. The results showed significant gains for the PSTs in relation to the perception of their ability to create materials for STEM education, their confidence and commitment to develop such classes in their future practice, and their awareness of the potential of content integration to help students learn in a more interesting and meaningful way. Thibaut et al. [1], based on an extensive literature review on STEM integration (iSTEM) and a social constructivist view of learning, propose a framework containing five key principles (integration of STEM content, problem-centered learning, IBL, design-based learning, and cooperative learning) that describe the practices underlying STEM integration. According to this model, STEM content integration must be explicit to help students develop their knowledge and skills in the different STEM disciplines. In an integrated curriculum, content from more than one discipline is explicitly addressed, and the same emphasis is given to two or more disciplines [6,7]. In this respect, Roehrig et al. [23] propose a distinction between content integration and context integration. The first perspective focuses on merging disciplines into a single activity or unit, while the second focuses on the contents of one discipline and uses the others as a context to make the content more relevant. Regardless of the STEM approach adopted, it is fundamental for its operationalization that teachers have support in the development of their CK and PCK [15] during initial education.
Shulman [15] identified CK (often also called subject matter knowledge, or SMK) as the knowledge about the subject matter to be learned or taught, and PCK as the knowledge about pedagogy that applies to teaching specific content. The first dimension of teachers' professional knowledge (CK) comprises the body of knowledge and processes that are a prerequisite for the development of the latter (PCK) [15]. Despite its undeniable importance, little attention has been paid in research on teacher education to how teachers need to understand the content they teach [24]. Teaching a subject requires more than Content Knowledge alone: it requires a process of transforming that content, and, for that, it is necessary to develop PCK [25]. Shulman defines PCK as a "special amalgam of content and pedagogy that is uniquely the province of teachers, their own special form of professional understanding" [15] (p. 8). Many studies focus predominantly on teachers' PCK, which has resulted in a variety of different models [25]. For instance, for Magnusson et al. [26], PCK is the result of a transformation of other domains of knowledge, which implies that it is more than the sum of its parts, and it is conceptualized as being built through the process of planning, reflecting on, and teaching specific content. For these authors, PCK is determined by the content to be taught, the context in which that content will be taught, and how the teacher uses his/her experience. Although multiple views of PCK coexist in the literature, this study aims to shed light on the development processes that occur when PSTs are engaged in a STEM program. Knowing that Content Knowledge influences teacher confidence and practices [27] and that a lack of CK is particularly common in primary teachers [28], initial teacher education programs for early-level teachers should focus on science topics in which PSTs have difficulties and on IBL, in order to produce considerable progress in their PCK [29]. Indeed, research has shown that primary school teachers are not well prepared to engage their students in inquiry and problem-based learning approaches, which makes it essential to support teachers in developing their PCK [29,30]. Literature on STEM practices and the development of CK and PCK in initial education is scarce, highlighting the lack of investment in this field. This research intends to address this gap by studying the effects of STEM activities, following the IBL methodology, on the CK and PCK of primary PSTs regarding sound concepts. The sound topic was chosen because it is present in the curriculum guidelines from the first levels of schooling [31], it involves concepts that are fundamental for the learning of more complex physics concepts [32], and numerous studies reveal the persistence of alternative conceptions in students [33][34][35][36][37][38] and future teachers [32,39,40].

Research Aim and Questions

In this sense, this study aims to examine the effects of planning and carrying out STEM activities on the theme of sound on the CK and PCK of primary PSTs. To accomplish this, two research questions were formulated: 1. How do PSTs' CK and PCK evolve after participating in a STEM program? 2. What features of the STEM program influenced the development of the CK and PCK?

Methods

A qualitative study with an interpretative orientation [41] was developed to explore the effects of a STEM program on sound on primary PSTs' CK and PCK.
Participants

The participants were 18 pre-service primary teachers enrolled in the last year of the teacher education program in a Portuguese higher education institution. The study sample was composed only of females, and none had previous experience with teaching science before their enrolment in the teacher education program. The sample was composed only of future female teachers simply because there were no male students attending this course when the study was carried out. In all classes, the PSTs worked in self-selected groups of three (G1-Group 1), four (G2-Group 2; G4-Group 4), five (G3-Group 3) and two (G5-Group 5) (Table 1).

STEM Program

In the first part of the program, the PSTs attended a course that included a physics content module of over 10 h, in which they could deepen their knowledge of sound concepts by performing STEM activities. Five activities were developed following the guidelines proposed by Thibaut et al. [1], as described in Table 2.

Table 2. STEM activities.
Production of sound: After reading a small text and watching a video about the importance of acoustics in our lives, the PSTs plan a hands-on experiment to discover how sound is produced. After that, the whole class discusses and shares the results obtained by each group. In the last task, the PSTs explore a tuning fork and reflect on the definition of frequency.
Propagation of sound: A challenge is presented to the PSTs as to whether it would be possible to hear a concert on the moon. Next, they are asked to read a text about measurements of the velocity of sound and solve some exercises using the mathematical equation. The last challenge focuses on the speed of sound in different mediums (solids and air), taking the example of tracking a train approaching from far away.
Attributes of sound: An adaptation of the 5E lesson proposed by Adams et al. [42] is presented to the PSTs, comprising the construction of a musical instrument to explore different attributes of sound (pitch, intensity, and timbre). The PSTs also simulate a wind instrument to study whether the variation in sound pitch is related to the size of the column of air. After that, the PSTs explore the oscilloscope function by installing the app Physics Toolbox Sensor Suite® [43] on their smartphones. This app uses the device sensors to collect and display data and allows the PSTs to study wave shape and plot the signal against the time lapse. Furthermore, they explore the simulation "Longitudinal periodic wave | spring" [44].
Sound waves: Two PhET simulations, "Wave on a String" [45] and "Sound" [46], are explored to study different waves and characterize sound waves. After that, there is a moment to share and discuss the main conclusions of the groups, and the PSTs answer some questions about the attributes of sound. In the end, the PSTs analyze whether the speed of sound depends on wave frequency and amplitude, using the "Simple Wave Simulator Interactive" [47].
Behavior of sound waves: Starting from a text and a problem about sound reflection, the PSTs are challenged to solve it by planning an experiment. After this experiment, the PSTs develop a new investigation to study other behaviors of sound waves (refraction or absorption). This activity also comprises exploring the concepts of reverberation and echolocation through the analysis of practical examples. Finally, after viewing some videos exemplifying the Doppler effect, the PSTs explain the phenomenon, explore a computer simulation [48], and discuss possible scenarios for it to occur.
Afterwards, the PSTs enrolled in a science methods course in which, over a total of 17 h, they were introduced to the topics listed in Table 3.

Data Collection and Analysis

This research study was approved by the Institutional Review Board (ethics board). All participants signed informed consent forms and were given the opportunity to withdraw at any time. Data collection strategies included the PSTs' assignments, lesson plans, observation, and a focus group interview. During the classes in the physics course, we collected the PSTs' worksheets and took observation field notes, including a detailed description of the PSTs' actions and classroom situations. After the observations, the researcher reflected on the lessons based on the field notes taken while in class. At the end of the semester, the PSTs completed an assessment test, the analysis of which contributed to an in-depth understanding of their CK development. The second part of the STEM program focused on the development of the PSTs' PCK, and for that, they collaboratively planned one STEM activity. In addition to the lesson plans, the PSTs were encouraged to write down their reflections during the science methods course. These reflections on the lesson plan the PSTs developed aimed to capture their perspectives about the benefits and challenges of STEM integration. In some cases, the PSTs had the opportunity to carry out the STEM activity in teaching practice, which contributed to a better understanding of the PSTs' views. Lastly, a focus group interview was conducted with nine PSTs who volunteered. The interview targeted the PSTs' perspectives about the STEM program and discussed what they experienced during teaching practice. For data analysis, we resorted to content analysis, using a mixed deductive-inductive method. In the first phase, the analysis relied on the PSTs' answers to different assignments (STEM activity worksheets and assessment tests), which were compared with the explanations/conceptions identified by numerous authors in the literature. During their attendance in the science methods course, the PSTs planned one STEM activity covering basic sound phenomena and concepts. Accordingly, the analysis employed predetermined categories, adapted from Thibaut et al. [1] and complemented by Roehrig et al.'s [23] distinction between content and context integration, as summarized in Table 4. For the analysis of the PSTs' reflections and the transcription of the focus group interview, an inductive coding method was used [50]. The categories that emerged from the data represent the features of the STEM program that supported the development of the CK and PCK. Results generated through the different data collection methods were compared and discussed between the two researchers.

CK of the Primary PSTs

Table 5 synthesizes the main conceptions revealed by the primary PSTs concerning the issues of sound (CK) that were explored in the STEM activities carried out during the physics course.

Table 5. PSTs' answers related to sound subjects.
Activities/Subjects: Explanations (Groups)
Production of sound: vibration of the sound source resulting from an action (G3); sound as an entity (G1-G4)
Propagation of sound: sound propagation requires a medium (G1-G5); sound is faster in solids, as they are denser (G2, G5)
Attributes of sound: in wind instruments, a higher pitch is related to a small column of air (G3); sound intensity and sound pitch are the same (G1, G2, G4)
Sound waves: confuse frequency with amplitude (G2); speed of sound is dependent on frequency and amplitude (G1, G2, G4, G5)
Behavior of sound waves: Doppler effect is directly connected to the distance between source and observer (G3, G5)

In the first STEM activity, the PSTs proposed an explanation for the production of sound based on the vibration of the sound source. In the assignment of one group (G3), it was possible to identify the idea that the vibration of the sound source is the result of an action. Overall, the PSTs' answers point to an interpretation of sound as an entity. Regarding the second STEM activity, the PSTs described sound propagation at a microscopic level, revealing that they were aware of the need for a material medium for sound to propagate. Two groups (G2 and G5) presented a wrong explanation for the fact that sound propagates faster in solids, associating the speed of sound with the density of the material. As for the sound attributes (third STEM activity), some participants (G3) found it difficult to conclude that in wind instruments a higher pitch is related to a small column of air. In the hands-on task of exploring a tuning fork, three groups (G1, G2 and G4) could not distinguish the concepts of pitch and sound intensity. The analysis of the fourth STEM activity worksheets confirmed the persistence in G2 of misunderstandings about two sound wave characteristics, amplitude and frequency. Most groups considered that the speed of sound depends on those parameters and did not realize that the frequency depends on the source, not the medium. However, the analysis of the test answers showed that these scientific inconsistencies were overcome, as all PSTs were able to distinguish between amplitude and frequency, determine the relationship between the intensity and the amplitude of the sound and between the frequency and the pitch of the sound, and characterize these sound attributes in graphical representations. The PSTs' answers were entirely correct about reflection, refraction, absorption, and the other sound phenomena explored in the last STEM activity (behavior of sound waves). Nonetheless, regarding the Doppler effect, two groups (G3 and G5) incorrectly justified that it is directly related to the distance from the source to the observer.

PCK of the Primary PSTs

The results obtained from the analysis of the lesson plans are summarized in Table 6, according to the predetermined categories. The STEM lesson planned by G1 focused on the subject of sound propagation. First, students should touch Chladni plates covered with sand with a violin bow and watch the patterns caused by the metal's vibration. Then, students are challenged to design a cymatics device to create "pictures of sound" (i.e., wave patterns). In the end, they play a team game in which they must recognize different sounds produced by the other groups as well as animal and other familiar sounds played on YouTube®. There are some connections to mathematics content, but these were not mentioned in the lesson plan.
The same happened with the engineering content, although the lesson allowed students to engage in a hands-on design challenge that was not sufficiently highlighted. G2 presented a lesson about sound production and the factors that affect it. Students must pose questions, predict, plan, and investigate. Additionally, students should create a graph based on the data collected throughout the experiment and discuss it. A second activity consists of exploring the most appropriate materials to absorb sound. All the proposed activities involve students in open-ended and real-world problems and in working in small groups. Math learning goals were explicit, although not as prominent as the science learning goals. Students watch a video to become familiar with the theme, so technology was present, but merely as a resource. G3 proposed a simplified version of Carrier, Scott, and Hall's [51] lesson, in which students explore the sound concepts of frequency and amplitude as they learn about various species' calls and their uses for survival. In this activity, students listen to recordings of animal calls and match them to the corresponding visual representations of sound (spectrograms). These can be found in the audio files of the software Raven Lite® [52]. G3 explicitly emphasized mathematics learning goals, content, and practices. However, they did not give equal attention to both disciplines (mathematics and science). This lesson focused on open-ended, real-world, authentic problems that engage students in a hands-on group activity. G4 provided a learning environment that engages students in a hands-on activity to develop new understandings regarding the reflection of sound. Students worked in groups. There was a concern to identify students' misconceptions and confront their previous ideas through discussion and some assessment exercises (assessment sheet) at the end. G4 did not establish mathematics learning goals and contents, although the planned STEM activity included knowledge of angles. Furthermore, this lesson plan integrates technology by including a video to engage the students with the theme and explain some underlying concepts. The problem is relevant (Is the reflected sound heard in the same way as the incident sound?) and open-ended, but the way it is presented does not establish a real-world context. G5 planned a STEM lesson that was adapted from a 5E lesson developed by Merricks and Henderson [53]. This activity explores sound propagation and acoustic communication using an inquiry-based and problem-based approach. The first part of the activity (Engage) starts with a dialogue with students to understand their previous conceptions, and then they must develop a model of waves. In the Explore phase, students investigate how waves travel; for that, they must build "cup phones". Later, students explore the free software Audacity® (Explain) to examine humans' ability to detect sound and to develop knowledge about sound features. The final part of the activity (Elaborate) immerses students in the study of the role of acoustic communication for other living organisms. Through the analysis of oscillograms produced by different species of frogs, students can distinguish the species and locate them in their habitats in a scenario provided by the teacher. In all activities, the students work in groups, are involved in authentic, open-ended, and real-world problems, and take part in design challenges.
Although the planned lesson included content from all the STEM disciplines, G5 did not mention any of them besides science in the learning goals. For instance, the lesson involves graph analysis (mathematics), artefact development (engineering), and the use of computer software (technology). In short, only one group (G5) integrated the areas of mathematics and engineering (albeit in terms of context). G1 and G4 also integrated one of these two dimensions in the same way. Finally, G2 and G3 integrated mathematics content, although not as much attention was given to this area as to science.

Features of the STEM Programme That Influenced the Development of the CK and PCK

From the analysis of the PSTs' reflections and the transcription of the focus group interview, several features of the STEM program that influenced the development of CK and PCK emerged, as synthesized in Table 7. The integration of STEM content was the most prominent feature of the STEM approach. All PSTs seem to value the "curricular flexibility that it allows for the teacher" (E13, reflection), but they also pointed out numerous difficulties in content integration. For instance, they mentioned that articulating different STEM disciplines is a "very demanding challenge for a PST, both in planning and in its implementation in the classroom", and that they "found it difficult to have time to implement the activity during the classroom practice" (E9, reflection). The integration of different STEM disciplines requires from the teacher, according to one of the participants, a "great capacity for reflection, creativity and time" (E15, reflection). During the focus group interview, the PSTs also mentioned these concerns, and they all agreed that the most challenging discipline to integrate into their lesson plans was engineering. When asked about the reasons for this, they explained:

E17: I think it's the most difficult because we don't have that much knowledge and, therefore, we're not so comfortable with this discipline.
E15: I think it also depends a lot on the content you work on. Not all science topics are suitable to articulate with engineering. (Focus group interview)

Despite being such a demanding and time-consuming process, some PSTs demonstrated that this experience helped them to overcome the problems they had encountered when planning and/or implementing STEM lessons:

After carrying out this planning and its subsequent implementation in the classroom, I believe that there was a change in my thinking regarding this type of activity, since I had considered that they were more complex. (E6, reflection)

STEM activities are a type of activity that can be a little complex to plan, as they have a certain number of characteristics that are not always possible to have. However, in my opinion, they are activities that enable students to understand the integration of various content areas. That, in turn, facilitates the teachers. (E17, reflection)

Despite the work that planning a STEM activity requires, the gains in students' learning outweigh the teacher's difficulties. This means that no matter how demanding the teacher's planning work is, namely, interconnecting STEM areas ( . . . ), building a plan that meets the content that is intended to be addressed, predicting possible students' difficulties, adapting the assessment instruments to the various phases of the activity, and managing the groups throughout the activity ( . . . ).
Creativity, meaningful learning, interdisciplinarity, motivation, involvement, and active participation are positive aspects that the STEM activity develops in students. (E15, reflection)

For one PST, "the greatest difficulty for students was the acquisition of scientific terms to designate the events arising from the experiments" (E6, reflection). During the interview, one PST went further and talked about the potential of these activities to integrate content from subjects other than STEM, which is very convenient in primary education.

I think that STEM activities in the classroom, in addition to mathematics, engineering and technology, can integrate other subjects; in other words, the fact that I work on subjects of mathematics does not mean that I cannot work on the subject of language. I think that, starting from a STEM activity, the teacher can proceed to different curriculum disciplines. (E15, focus group interview)

The results indicate that the integration of technology caught the attention of most PSTs, as illustrated by the following statements.

With so many technological resources at our disposal, if we ignored the potential of these resources, we would be contributing to a school out of step with current reality. For children, sooner or later, contact with technologies is inevitable. Teachers must take a responsible and informed attitude, using the technological resources to enhance learning in the most diverse areas. (E11, reflection)

Nowadays, children grow up and are exposed, at an early age, to the digital world. It is known today that technologies increasingly have both positive and negative impacts on society. Both the school and us, as future teachers, must be attentive and organized to obtain pedagogical advantages from this situation, that is, use technologies as a strategy to improve the teaching and learning process in our classes, as it is something very stimulating to arouse children's interest and desire to learn. (E13, reflection)

However, when we compare the PSTs' perspectives about technology integration that arose from the analysis of these documents with their lesson plans, what prevails is the use of technology as a mere resource, as demonstrated by the following extract:

The role that technology played in this activity was solely and exclusively through the interactive manual. We provided several explanatory videos, which we played as we explained; that is, the students did not have the opportunity to manipulate technological resources. (E10, reflection)

This last statement reflects the experience of teaching practice, in which most PSTs were forced to change their initial lesson plans substantially. This need for change is, among other factors, "due to the lack of resources in schools; this task becomes an even greater challenge, as it is not possible to secure sufficient resources" (E7, reflection). The transcription of the focus group interview also revealed that these reasons forced the PSTs to practically suppress the use of technology by students:

E15: It would provide them more time to use ICT. In my case, I used ICT at the end of the activity and then the time was short, and they did everything in a hurry.
E17: Many times, ICT is involved, but we are the ones who use it. We don't have time, so it ends up being us, to be faster. ( . . . ) and schools don't have the resources. (Focus group interview)

Inquiry-Based Learning

In the second category, the PSTs mentioned numerous aspects related to inquiry-based learning, particularly planning and experiments.
One PST highlighted that:

The students were able to achieve these goals and acquire the knowledge because they had experimented, because they were the ones planning what they had to do and, if they found that it didn't work, doing it again until it did. The fact that it is the students who reach the desired result, without being given all the information, allows this increase in learning. (E17, reflection)

Nevertheless, some stages of the scientific inquiry could be problematic for students, according to some experiences in teaching practice. For instance, despite the great enthusiasm in the discovery and hands-on moments, students did not appreciate analyzing and interpreting data and drawing conclusions, as described in the following statement.

When they [students] made the discovery, the degree of excitement was very high, and the level of satisfaction and well-being was also seen, which remained until we asked them to stop and draw conclusions from what had happened. (E10, reflection)

Some PSTs' experiences during teaching practice also revealed problems with another stage of the inquiry, which is predicting results, as the following examples indicate.

Another of the students' difficulties was related to the concern with having all their predictions correct, and the fact that after each experiment they verified a difference between the predictions and the conclusions; they were frustrated because they were wrong. In this way, I intervened to make them understand that predictions are not used as an object of evaluation but as a way of verifying what the students considered before reaching conclusions. (E6, reflection)

For them [students] it did not make much sense to be predicting results without having carried out the experiment, so this was a complex process, for them to understand that in this part of the activity they were not manipulating, but only observing. This difficulty arose since these students are not used to carrying out an investigation and are never asked about their previous ideas; that is, they never confronted the results with their previous ideas. (E1, reflection)

These issues observed during the implementation of STEM activities are, according to several PSTs, related to the students' lack of experience in planning and carrying out investigations, as well as to the small amount of time available to perform them, as shown in the following statements.

I felt that the group had a lot of difficulties in following the steps to reach the result. On the other hand, these difficulties may have occurred due to the lack of experiences that the class had. Another difficulty was the time for the activity, which, in my view, passed very quickly, and I was so involved that I ended up not having time to assess all the children. (E10, reflection)

The main challenges I faced with this activity were quite a few, as it is an activity that, despite being extremely enriching for teachers and students, is also incredibly demanding for both parties. ( . . . ) One of the problems that exists in primary education is the time given to implement hands-on experiments, which is very short, so trying to fit these activities into the schedule is unfortunately not always possible.
(E1, reflection)

Therefore, some PSTs mentioned the need to provide more guidance to students from the beginning until they become more acquainted with all the processes involved in an inquiry activity (posing questions; planning and investigating; collecting, analyzing, and interpreting data/information): "if students are used to carrying out investigations, and became more autonomous, they can develop all the steps with little guidance" (E6, reflection). This discussion also came up during the interview:

It's a little difficult at first, until they get used to it. They were always "how can I do this?" or "How do I do this?". They wanted us to provide all the steps. We noticed that they are not used to it. The first few times, the probability of going wrong is great, but then, it starts to get better. (E10, focus group interview)

The PSTs were introduced to the 5E instructional model (Bybee et al. 2006) and used it to plan their STEM activities, which was very challenging. But despite some difficulties in using this instructional model, as the following extract demonstrates, the PSTs recognize its usefulness for the teacher in the process of planning a STEM activity.

Through the activities developed, it is possible to face challenges and learning. In the first case, it is related to all the stages for the construction of the lesson plan, considering the 5E model, that is, all the five, six phases included in the STEM plan. Building a plan that contains all or part of these phases turns out to be a constraint, as well as the integration of the STEM disciplines, because, although possible, it is complicated to carry out tasks of this magnitude that take a long time and several classes, to work on the same content and still find different strategies for all these phases. However, after developing this STEM plan, I ended up also having a new perspective about the constraints presented above, because the 5E model ended up being a kind of "guide" for the teacher to know what each of the phases should contain and the benefits for the child's development through it. (E13, reflection)

An aspect present in the focus group interview transcription and in the reflections was that "communicating during inquiry or design tasks is very difficult for students" (E15, focus group interview), especially when they describe their results or conclusions to the entire class. Communicating information was also related to other features of STEM activities, such as collaborative learning and design-based learning. Another aspect discussed during the interview was that interpreting the results of the experiment is a hard task for students: "in explaining. They try and see what worked and then have a hard time explaining why it happened in such a way. You may have understood but explaining and arguing I think is where they have the most difficulties". (E18, focus group interview)

Collaborative Learning

This was the third most common feature of the STEM program in the PSTs' reflections. More than half of the participants mentioned it as a strength and/or a challenge. For instance, one PST stated that "It not only benefits students' learning as well as the relationship between teacher-student and student-student, and values such as cooperation, teamwork, and respect for the opinion of others" (E13, reflection). For E11, collaborative learning is as challenging as technology integration, as she pointed out that:

The teacher must be aware that he must teach his students to work as a team.
This is a skill that can be learned like any other and should not be taken for granted. ( . . . ) In my view, the main difficulties of students are related to collaborative work and the ability to work with a computer. I believe that a classroom that is flexible in terms of furniture placement would facilitate working in groups or in pairs. As for computers, the solution will involve investing more in visits to the library to use computers to search for information, carrying out and disseminating group work. (E11, reflection)

The PSTs' reflections about their teaching practice experience indicate that students were not used to collaborating, so the PSTs had to be very active in monitoring the groups and resolving frequent conflicts, as described in the following assertions.

Another of the difficulties felt was group work, but this difficulty occurs due to the specificity of the class, as it is a group of very complicated boys, with very striking and special personal experiences, and sometimes this is reflected in conflicts between elements of the class. Working in groups in this class is always a delicate and time-consuming process, and it takes a while to get into the work rhythm. (E1, reflection)

It makes group management difficult, because most of the time the students are talking to their colleagues, and this made the interventions that I or my colleague had to do difficult. Students could not understand what we were saying as they were talking. I believe this was a learning experience because we had to use a strategy to demonstrate to students the importance of speaking quietly. (E17, reflection)

Meaningful/Motivating/Engaging Context

One of the most emphasized features of STEM activities was the focus on meaningful, real, and open-ended problems. This real-life context allows more meaningful learning for students, makes it easier for teachers to "contextualize the various areas and highlight their practical use in everyday life" (E11, reflection), and will "arouse students' interest and attention" (E4, reflection). About this, E13 gives an interesting example in the following description.

Let's imagine that the teacher is facing a problem within the classroom, which is student noise; through everyday life the teacher can perfectly alert students to the situation and make them think about what kind of sound it is. That is, whether the reflected sound is heard in the same way as the incident sound. (E13, reflection)

During the focus group interview the PSTs reflected on the STEM activities they performed in the first part of the STEM program and came to the consensus that "It could have been easier to perform more structured activities, but we weren't as involved" (E17) and it "would not be that meaningful" (E5). These ideas demonstrate their agreement that this engaging context helped them build their own knowledge (CK).

Student-Centered Pedagogies

There were several references to the use of student-centered pedagogies. According to some PSTs, in a STEM activity, students "take the lead of their own learning" (E11, reflection); the student "is the center of their own learning, active, critical, explorer, and autonomous" (E15, reflection). The next statement exemplifies the non-traditional roles assumed by teachers and students in this type of activity.

These types of approaches require from the teacher a very different attitude from what we are used to, placing the student as the protagonist of their learning.
In this case, the teacher assumes the role of moderator and facilitator, making available to the students the necessary means so that they can build their own learning while respecting the interests, needs and rhythms of each one. (E11, reflection) Design-Based Learning This category refers to the use of engineering design. In their reflections, eight PSTs mentioned this feature as either a strength or a challenge (or both) of a STEM unit. For example, E7 stated that engineering tasks "allows the student to focus on solving the problem, articulating the mathematical and scientific knowledge previously worked on" and that "this was what they liked the most". (E17, reflection) Two G5 students had the opportunity to implement a STEM activity in teaching practice and emphasized, above all, the integration of engineering and the involvement of students in a prototype design task, as demonstrated in the following citations. Children design something and have to change it, improve it, and it is in this analysis of what they have done that they are also able to acquire more knowledge (...) and apply it in other situations. (E17, focus group interview) I emphasize the involvement of students in the entire process, from the construction of materials to verifying and discussing the results; that is, the students' involvement is much greater than in any other type of activity. And this is quite advantageous. (E18, focus group interview) Despite this, during the focus group interview, all respondents expressed that the most difficult area to include in STEM activities is engineering, due either to their limited experience at this level or to the students' lack of familiarity with this type of task. 21st Century Skills Some PSTs emphasized that STEM activities promote the development of 21st-century skills, as the following excerpts exemplify. This is a methodology that aims for the student to become the protagonist of their own knowledge, thus developing in the student extremely important skills for their future life, such as leadership, resilience, conflict resolution, and collaboration; these are fundamental skills in their future life. (E1, reflection) I believe that STEM activities are disrupting traditional forms of teaching that are increasingly out of step with the realities present in schools. The implementation of this type of activity can stimulate the development of knowledge in science, technology, engineering, and mathematics areas, but also other transversal skills (linguistic knowledge, critical thinking, teamwork, decision-making, and creativity) in a more interactive, practical, and autonomous way. (E14, reflection) The focus group interview encouraged the participants' reflection on the positive effect of the STEM program on learning, highlighting the practical nature of STEM activities, the students' active role, and the content integration of different STEM disciplines. The PSTs recognize not only the importance of carrying out STEM activities for learning the different disciplines, but also consider that they have contributed to learning how to plan and carry out investigations. The following statement evidences the importance PSTs attribute to this experience of articulation between STEM areas for student learning. On the one hand, it is beneficial because students learn in action, they learn to handle things, and it allows them to develop skills that even older people do not yet have, for example, programming. I can't, I mean I can but it's not immediate, I have to think.
I think that working on this with children from an early age is important for them to learn, however, I think that schools are not prepared yet. (E7, focus group interview) They also added that students are not used to carrying out practical activities of the investigative type or to working collaboratively. The biggest obstacle to the integration of technology, in their opinion, is the scarcity of resources available in schools. Discussion and Conclusions As in the study developed by Küçüközer [32], it was possible to identify aspects of sound production as a phenomenon based on the action or physical properties of the source, although the PSTs did not mention it in the conclusions. Most groups exhibited the misconception that sound is an entity distinct from the medium through which it propagates, such as a "material or substance" [34] (p. 247) and/or an "entity carried and transmitted by the molecules of the medium" [32] (p. 1891). All groups evidenced the correct idea that sound requires a material medium to propagate, which contrasts with the study carried out by Küçüközer [32], in which most PSTs gave an incorrect explanation. Another wrong idea presented by the PSTs was that the velocity of sound is associated with the density of the material [40]. On the topic of sound attributes, the participants presented difficulties similar to those identified in other studies [33,37,38], namely in distinguishing pitch from sound intensity. In the following STEM activity (sound waves), one group still demonstrated the persistence of misunderstandings about these two sound-wave parameters (amplitude and frequency), which is one of the most common misconceptions in the topic of sound [33]. Another problem was identified in the subject of sound waves, namely that the PSTs considered that the velocity of sound depends on the frequency and the amplitude, a wrong idea that, according to Barniol and Zavala [33], is related to inadequate use of the equation v = fλ. The PSTs did not realize that frequency depends on the source and not on the medium (unlike velocity, which does) because they attributed it to the objects rather than to wave properties [38]. Regarding the behavior of sound waves, the students demonstrated a good understanding of reflection, refraction, and absorption, including correctly identifying the most appropriate materials for acoustic insulation, contrary to the results obtained by Bolat and Sözen [40]. Nonetheless, regarding the Doppler effect, the PSTs' answers revealed incorrect links between some of the concepts, as in the study developed by Mosabala [36]. Results for the participants' PCK (Table 3) revealed that problem/inquiry-based learning and collaborative work are present in all lesson plans, while design-based learning received little attention. Although the engineering design process in integrated lessons is well recognized as key to increasing student motivation and knowledge in science and/or mathematics, only two groups included this STEM subject, probably due to the limited preparation of primary-level teacher candidates [11]. The results also revealed that none of the STEM lessons developed by the PSTs reflected the efforts to integrate technological literacy into the primary curriculum [13], because technology was used merely as a resource. In most of the lesson plans (except G2 and G3), the other STEM disciplines were just used to make the content of sound more relevant or as a resource [23].
For instance, engineering, as in the lesson planned by G1, is a vehicle to provide a real-world context for learning science and mathematics [7,23]. Only two STEM activities (G2 and G3) presented explicit assimilation of concepts from more than one discipline, in line with Satchwell and Loepp's [6] view of integrated curricula. Such integration between science and mathematics is particularly important in primary education [12]. More evident was the concern of the PSTs with establishing connections to real-life situations [6,7]. The attributes of Thibaut et al.'s [1] model common to all STEM lessons were problem-centered learning, inquiry-based learning, and cooperative learning. Other aspects mentioned in Thibaut et al.'s [1] review, such as assessment, were not detailed by most groups. Only two groups (G1 and G3) stand out for proposing a scoring rubric, as advocated by Satchwell and Loepp [6]. The results point to the overcoming of scientific inconsistencies about sound, including the confusion between the concepts of amplitude and frequency, which supports the idea that an integrative approach to STEM, based on active methodologies, contributes positively to the development of the CK of PSTs at the first levels of schooling [39]. Another conclusion of this study is that, due to the inaccurate scientific mental models that were identified, it is crucial to continue addressing the topic of sound in pre- and in-service continuing education initiatives. The STEM integration program also positively affected the development of the PCK of the primary PSTs, as demonstrated by the lesson plans developed and the perspectives highlighted by the participants interviewed. Yet, the aspects less well achieved in the lesson plans that the PSTs developed reinforce the need to continue to implement and improve STEM activities, with emphasis on the integration of the different STEM disciplines [6,7], particularly to overcome deficits in primary technology [13] and engineering practices [11]. Furthermore, according to Gresnigt et al. [12], the best way to address the lack of attention to science and technology in primary education is to integrate those disciplines with mathematics. It is as important to make connections between disciplines to motivate students to learn as it is to connect learning activities to real-life situations [6,7]. With these results, the authors conclude that teacher education plays a critical role in the appropriation of the main attributes of STEM education. Indeed, it seems evident that engaging PSTs in planning and carrying out STEM lessons favors an effective development of PSTs' knowledge (CK and PCK). Hence, it is recommended that teacher education, in addition to focusing on STEM integration frameworks in methods courses, include opportunities for PSTs to strengthen their knowledge about complex physics topics. This investment in teacher training is even more pressing in primary education, given teachers' low level of confidence in developing and implementing STEM lessons [20]. Additionally, future work should focus on PSTs' PCK and CK enactment in classroom practice. The findings of the current research are valid for the studied group. Studies focusing on PSTs' professional knowledge development through a STEM program using a larger group are to be pursued in future investigations. Author Contributions: All authors contributed equally to this project. All authors have read and agreed to the published version of the manuscript.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Due to ethical issues, the data collected and analyzed in this study are not available to outside researchers. Conflicts of Interest: The authors declare no conflict of interest.
Additional treatment with mistletoe extracts for patients with breast cancer compared to conventional cancer therapy alone – efficacy and safety, costs and cost-effectiveness, patients and social aspects, and ethical assessment Background: Chemotherapy is often used in the treatment of breast cancer in women. Side effects such as diarrhea, fatigue, hair loss, fever or disturbances in blood formation impair the women's quality of life. An essential treatment goal of the accompanying mistletoe therapy (MT) used in complementary medicine is to improve the health-related quality of life during cancer therapy. Aim and methods: The HTA report on which this article is based examines the medical efficacy and safety, costs and cost-effectiveness, patient and social aspects, and ethical aspects of MT in women with breast cancer. Systematic reviews were conducted for this purpose. The literature search covered the period from 2004 to October 2020. Results: A total of 2 evidence-based medical guidelines, 3 randomized trials assessing efficacy and 1 additional non-randomized intervention trial, as well as 3 observational studies assessing safety, a cost analysis, 12 cross-sectional studies on patient aspects and 17 articles on ethical evaluation were included. Improvements in health-related quality of life compared to the control group were small to moderate. Due to the high risk of bias in the studies, it is possible that the difference is not caused by MT. One study with a small sample size showed no effect on progression-free survival after 5 years. Studies on the effect of MT on overall survival are lacking. In seven studies, local skin reactions of low and moderate severity were reported in a median of 25% (range 5 to 94%) of patients, and mild to moderate systemic reactions in a median of 2% (range 0 to 8%) of patients. A comparative cost analysis from Germany reported significantly lower medical costs within 5 years after surgery for patients with MT than for patients without MT, but the underlying observational study did not control for systematic bias. With regard to patient aspects, the frequency of use and the reasons for use from the patient's or practitioner's point of view were mainly investigated. A median of 25% (range 7 to 46%) of patients with breast cancer and 29% (range 29 to 79%) of treatment providers use MT. The main motivations of patients for use were to reduce side effects, strengthen the immune system and take an active role in the treatment process. Patients felt insufficiently advised. Studies on other aspects are lacking. The ethical evaluation was able to identify 6 overarching themes; the central challenge is the insufficient evidence on efficacy and safety.

The age-standardized incidence of breast cancer in 2016 was 112.2 women per 100,000, and the age-standardized incidence of death was 23.4 women with breast cancer per 100,000 [2]. The mean (median) age of onset is 64 years, and the relative survival rate after ten years is 82%, indicating that the chances of cure and survival are relatively good [2]. In addition to surgery and depending on the tumor status, chemotherapy can be used to treat breast cancer. Chemotherapy is usually associated with side effects such as diarrhea, fatigue, hair loss, fever, and blood formation disorders. This significantly impairs the patients' quality of life during treatment [3], [4].
Mistletoe preparations are used for the concomitant treatment of cancer in Germany. A major goal of treatment is to improve the health-related quality of life during therapy, which is limited due to the toxicity of chemotherapy [3]. Mistletoe therapy is classified as complementary or integrative medicine [5]. The biochemical mechanisms of action of mistletoe preparations are attributed to mistletoe lectins and viscotoxins, which have a non-specific immunostimulant and a cytostatic effect [6], [7]. Anthroposophical mistletoe extracts without standardized lectin content and mistletoe preparations registered as herbal remedies with standardized lectin content are sold on the German market [8]. All mistletoe preparations are available without prescription. The costs of mistletoe therapy as part of palliative treatment for patients with metastatic breast cancer are covered by the statutory health insurance. Mistletoe therapy as a part of adjuvant treatment for non-metastatic breast cancer is not covered due to the uncertain evidence currently available. The HTA report by Lange-Lindberg et al. [9] investigated mistletoe therapy as an adjuvant treatment to reduce toxicity of chemotherapy of malignant diseases. It concluded that only in the case of breast cancer were there indications that an adjuvant mistletoe therapy could improve the patient's quality of life. The current German medical guideline on early detection, diagnosis, and therapy of breast cancer states that mistletoe therapy does not improve the survival of patients with breast cancer, and an improvement in quality of life is questionable due to insufficient data [4]. This raises the question of whether there is new evidence on medical efficacy and safety that can clarify these questions. In addition, the present article aims to systematically examine costs and cost-effectiveness, patients' aspects and social implications, and ethical questions associated with concomitant mistletoe therapy. Research questions 1. The following research questions were investigated to evaluate medical efficacy and safety: Does the administration of mistletoe preparations reduce patient-relevant side effects of conventional chemotherapy and improve health-related quality of life in patients with breast cancer compared to chemotherapy without concomitant therapy with mistletoe prepara-tions? Does the administration of mistletoe preparations in addition to conventional chemotherapy influence progression-free survival or overall survival in patients with breast cancer? 2. To evaluate economic aspects, the following research questions were investigated: What are the differences in the costs of concomitant mistletoe therapy and what is the cost-effectiveness of concomitant mistletoe therapy compared to treatment without mistletoe preparations? 3. The evaluation of patients and social aspects examined the following research questions: What are the attitudes, experiences, perceptions, and knowledge of patients and professionals regarding concomitant mistletoe therapy? Are there barriers to accessing mistletoe therapy for interested patients? What does communication about concomitant mistletoe therapy look like, and are there particular points that should be communicated to ensure adequate uptake? 4. 
The ethical evaluation examined the following research questions: Which ethical aspects on an individual, social, and professional level are relevant for mistletoe therapy as a concomitant therapy in patients with non-metastatic or metastatic breast cancer compared to conventional therapy alone? What is the outcome of an evaluation and weighing of the identified ethical aspects and challenges regarding the use of mistletoe therapy as concomitant therapy in patients with breast cancer, i.e., under what conditions is the use of concomitant mistletoe therapy in adjuvant and palliative treatment of breast cancer ethically justifiable? Evaluation of efficacy and safety Methods Main inclusion and exclusion criteria for primary studies, systematic reviews, HTA reports, and evidence-based guidelines Published randomized controlled trials (RCT) on the efficacy and safety of adjuvant or palliative mistletoe therapy in patients with breast cancer were included that examined outcomes on at least one of the following outcome measures: adverse effects of standard therapy, health-related quality of life, overall survival, progressionfree survival, adverse effects of mistletoe therapy. For the assessment of adverse effects of mistletoe therapy, observational studies were also included. In addition, systematic reviews and HTA reports with literature searches from 2004 onwards that met the above inclusion criteria as well as evidence-based clinical practice guidelines for the treatment of breast cancer with statements on complementary medicine from 2008 onwards were included. Literature search, selection, assessment of study quality, data extraction, and evidence synthesis A systematic literature search was conducted in the following databases: MEDLINE, EMBASE, Cochrane Library, and HTA Database of the Centre of Reviews and Dissemination. The search period ranged from January 1 st , 2004 to March/April 2017. Between April 2017 and October 2020, further publications were identified through a continuous update function in PubMed. Search terms for the disease (breast cancer) and for the therapy (mistletoe preparations) were combined in the form of free text and database-specific thesaurus terms. An Internet search was conducted using the Grey Matters checklist of the Canadian Agency for Drugs and Technologies in Health [10]. Guidelines were also searched for in three guideline databases. All references were selected independently by two authors using the predefined inclusion and exclusion criteria. Differences were resolved through discussion. In addition, a hand search was conducted in the bibliographies of the identified included systematic reviews, HTA reports, primary studies, and guidelines. Cochrane's Risk of Bias tool [11] was used to assess potential bias of the RCT, and the Appraisal of Guidelines for REsearch & Evaluation (AGREE) tool [12] was used to assess the methodological quality of the guidelines. Identified HTA reports and systematic reviews were used to search for additional primary studies. From the included studies and guidelines, the relevant characteristics and results were extracted by one person and checked for accuracy by a second. The results from the primary studies on the efficacy and safety of concomitant mistletoe therapy in breast cancer were presented in evidence tables and figures and described in summary texts. Due to the heterogeneity of the studies, no metaanalyses were performed. 
The recommendations of the clinical guidelines on concomitant mistletoe therapy are described in text form. Efficacy No study on the effect of concomitant mistletoe therapy on overall survival was identified. A three-arm RCT [24], [25] with a small sample size comparing two different mistletoe preparations with a placebo control showed no effect of concomitant mistletoe therapy on progression-free survival after five years: 72.4 and 67.9% with concomitant therapy and with the mistletoe preparations, 78.6% in the control group (p-value compared to the control group 0.551 and 0.746, respectively). Three RCT [19], [20], [21], [22] (719 patients) investigated the change in health-specific quality of life and adverse symptoms of the standard therapy after 15 and 18 weeks. Four different validated quality of life instruments (FACT-G [Functional Assessment of Cancer Therapy -General] [27], [28], [29], GLQ-8 [Global Life Quality] [30], [31], Spitzer analog scale [31], [32], EORTC QLQ-C30 [European Organisation for Research and Treatment of Cancer] [33], [34], [35], [36]) were used. In one study [20] (352 patients), the FACT-G questionnaire was used. The group difference of the total score (range 0 to 80 points, control group score in placebo group at baseline 50 points), after 15 weeks was four points (p-value <0.0001) in favor of the mistletoe group. A value for a minimal clinically relevant difference is not given. Two studies [19], [20] (598 patients) used the GLQ-8 symptom scale, a visual analog scale measured in mm. In one study [19] (272 patients), the change in the GLQ-8 symptom scale (range 0 to 800 mm) after 15 weeks was 30 mm (p-value 0.0121) in favor of the mistletoe group. The values at baseline ranged from 128.9 to 171.5 mm [19] in four study arms. In the second study [20] (352 patients), the difference in favor of the mistletoe group was 40 mm (p-value <0.01). At baseline, the value was 150 mm. The indication of a minimal clinically relevant difference was missing. However, the difference was considered clinically relevant by the study authors. The Spitzer analog scale questionnaire on quality of life (theoretically possible range 0 to 100) showed a change in favor of the mistletoe group after 15 weeks in the same two studies. The difference was 5.8 and approximately 5.0 mm, respectively. The values at baseline varied between 29 and 46.4 mm [20] in the study arms. Again, there is no statement about a clinically relevant minimum difference. One study [21], [22] (95 patients) investigated two mistletoe preparations in parallel versus a control group and used the EORTC QLQ-C30 questionnaire. The EORTC QLQ-C30 consists of five functional scales, one total health subscale and nine symptom scales, which are reported individually (range 0 to 100 points each). A difference of at least five points is considered clinically relevant [33], [34], [35], [36]. After 18 weeks, both groups with mistletoe preparations had statistically significant values (p-value <0.05) above five points difference from the control group in three functional scales (role, social, and emotional) and five symptom scales (nausea and vomiting, insomnia, loss of appetite, diarrhea, and financial problems). The differences compared to the control group ranged from 6.01 to 14.09 points [21], [22]. Risk of bias in studies All studies were adequately randomized and have low data loss in the analysis. For two studies [19], [20], it is unclear whether group allocation was concealed. 
One study [21], [22] was unblinded and two studies [19], [20] were double blinded with a placebo. However, due to the frequent local reactions, unblinding is likely. Because health-related quality of life is a patient-reported outcome, there is a high risk of performance and detection bias. The risk of bias due to attrition was low or unclear, and the risk of bias due to other causes was low. Discussion In summary, there is no evidence on the effect of concomitant mistletoe therapy on overall survival, and little evidence that concomitant mistletoe therapy has no effect on progression-free survival after five years. The improvements in health-related quality of life compared to the control group are small to moderate. Since there was a high risk of bias in all studies due to the lack of or inability to maintain blinding, it is possible that the difference is not caused by the concomitant mistletoe therapy. New randomized studies on health-related quality of life should try to solve the central methodological problem of maintaining double blinding. The present review on clinical efficacy and safety is subject to several limitations. The comprehensive systematic literature search was conducted in 2017. However, automatically updated notifications from one of the bibliographic databases and the review of current systematic reviews [37], [38], [39], [40], [41] allowed us to confirm that no additional recent studies (with a censor date of December 2020) are available. Full texts of two RCT [42], [43] identified in previous reviews were not available, so we did not report the results from those studies. These studies had small case numbers ranging from 17 to 46 patients, poor reporting quality, and a high risk of bias. Accordingly, inclusion or exclusion of these studies would not change the outcome of the evaluation. Due to the heterogeneity of the studies, we did not consider it appropriate to conduct a meta-analysis for efficacy or safety outcomes. However, the statistical uncertainty that can be reduced by a meta-analysis was not the main limitation. Our main limitation was the high potential for systematic bias in the effect sizes. Main inclusion and exclusion criteria for studies For the assessment of costs and cost-effectiveness of concomitant mistletoe therapy, we applied the same inclusion and exclusion criteria as for the assessment of efficacy and safety for study population, intervention, and comparator intervention. Outcome measures were the additional costs of concomitant mistletoe therapy, as well as additional costs per life year gained, or quality-adjusted life year. All health economic study types were included. The evaluations had to relate to the German-speaking region. Literature search, selection, assessment of study quality, data extraction, and evidence synthesis Literature search and selection were conducted analogously to the assessment of clinical efficacy and safety. Standardized extraction forms were used for data extraction, and the criteria catalog for the methodological quality of health economic studies of the German Scientific Working Group Technology Assessment in Health Care [44] was used for quality assessment. Study characteristics and results were summarized in evidence tables. Results In the literature search, 243 references were identified after removing duplicates. After screening of titles and abstracts, five full-text articles [45], [46], [47], [48], [49] were reviewed, and one study [45] was included in the information synthesis. 
A comparative cost analysis of concomitant mistletoe therapy was identified based on 2005 prices and with therapy data from between 1990 and 2000. The underlying multi-center, retrospective cohort study [50] investigated the efficacy and safety of concomitant mistletoe therapy in patients with breast cancer during routine follow-up in 53 randomly selected hospitals and medical practices in Germany. Direct medical costs in inpatient and outpatient settings as well as indirect costs for a loss of productivity of up to 90 days were collected after surgery and the completion of adjuvant chemotherapy or radiotherapy. In total, the data of 741 patients were analyzed -167 patients with concomitant mistletoe therapy, 514 patients with standard therapy without concomitant mistletoe therapy, 60 patients who had switched between standard therapy alone and concomitant mistletoe therapy [45]. The average total costs within five years for a patient with concomitant mistletoe therapy were 4,504 euros, compared to 9,996 euros for a patient with standard therapy. The difference was mainly due to inpatient costs and lost productivity costs. Data on statistical uncertainty were not reported. The study has many limitations in data quality, but the main weakness is that the observational study [50] on which the economic evaluation is based did not control for systematic bias with the study design. Discussion Studies on the cost-effectiveness of concomitant mistletoe therapy were not found. The large cost difference in favor of concomitant mistletoe therapy in the comparative cost analysis cannot be attributed to a causal effect of mistletoe therapy due to the high risk of bias, as the data are based on an observational study [50] without any control for systematic bias due to confounders. However, data on the level of education as well as other prognostic factors that are known confounders were not collected in the study. It therefore remains unclear whether an accompanying mistletoe therapy can reduce the costs of illness in the follow-up care of breast cancer. As long as the question of the clinical effectiveness of concomitant mistletoe therapy is uncertain, the question of cost-effectiveness cannot be answered. Main inclusion and exclusion criteria for studies All published studies on mistletoe therapy in patients with breast cancer that refer to the German-speaking region and have reported results on relevant outcome measures on patients and social aspects were included. Relevant outcome measures are the following: use of mistletoe therapy, knowledge, attitude, acceptance, satisfaction, experiences, expectations of users of mistletoe therapy, access to mistletoe therapy, type and extent of communication and information on mistletoe therapy, and mistletoe therapy evaluation by patients. The outcome measures could be reported from the perspective of the patients, from the perspective of their caregivers or family members, or from the perspective of the treating health professionals. Quantitative and qualitative study types without further restriction could be included. Literature search, selection, assessment of study quality, data extraction, and evidence synthesis For the literature search, in addition to the cross-domain search and the comprehensive search for gray literature described above, a systematic database search was conducted in twelve databases from the fields of medicine, economics, sociology, and psychology. 
All references were selected independently by two authors using the predefined inclusion and exclusion criteria. Differences were resolved through discussion. In addition, a hand search was conducted in the bibliographies of the included primary studies. The checklists of the Critical Appraisal Skills Program [51], [52] were used in original or adapted form to assess the risk of bias. Relevant characteristics and results were extracted from the included studies by one person and checked for accuracy by a second. Results for quantitative primary studies were summarized and presented in evidence tables. The qualitative studies were analyzed using qualitative content analysis. Three studies [57], [58], [60] with 654 professionals (mostly physicians) report results on the frequency of use of CIM and concomitant mistletoe therapy and on characteristics of the professionals who use them. A median of 29.3% (range: 29.3 to 79.2%) use concomitant mistletoe therapy. CIM and concomitant mistletoe therapy users of the treating professionals are typically older and work in private practice or have a higher hierarchical status in the hospital. Seven studies [56], [59], [57], [58], [60], [65], [66] describe the respondents' attitudes towards concomitant mistletoe therapy and what benefits they expect from its use, including two cross-sectional studies [56], [59] with 197 patients with breast cancer, three cross-sectional studies [57], [58], [60] with 654 treating professionals, and two qualitative studies [65], [66] with 43 patients with breast cancer. In one study [56], trust in the counselor and perceived competence regarding CIM was important for 90% of CIM users because it made them feel that the use of CIM was good for them. In another study [59], patients want to leave nothing untried (47%), want an active role in treatment (47%), want to complement conventional therapy (31%), want to have a gentle treatment free of side effects (18%), or did not respond to conventional therapy and mistletoe therapy (3.6%) [59]. Specifically, the use of mistletoe therapy is associated with the expectation of having fewer side effects with conventional cancer therapy and stimulating the immune system [56], [59], [66]. Three studies [57], [58], [60] investigated the reasons given by health professionals for or against the use of concomitant mistletoe therapy. With few exceptions, only the study by Kalder et al. [57] also provides quantitative data. Reasons for the application of mistletoe therapy among healthcare professionals were the patients' wish (82%), the patients' motivation (62%), the expansion of their own range of services (59%), their own conviction (55% [57] and 66% [60]) or the belief in the ineffectiveness of conventional therapy (46%). Reasons against using them include that time is lost (32.9%), unconventional methods are too expensive (30.4%) and the use of conventional methods is discouraged (27.3%). In addition, specific expertise and personnel are lacking [58]. An online cross-sectional study [56] with 80 patients and two qualitative studies with [65], [66] 34 patients report results on the assessment of patient information and doctor-patient communication. Seventy of the patients thought that the consultation time on CIM was not sufficient and only 53% thought that the doctor was well informed about CIM [56]. 
The qualitative studies showed that patients would like personal advice on CIM or mistletoe therapy from their attending physician, and not only advice but also not mentioning CIM or mistletoe therapy can be interpreted as advice. No study reported access restrictions. As a particular challenge in the application of mistletoe therapy, only physicians in the hospital setting reported in one study that CIM is not part of routine care and reimbursement schemes, which makes its application hardly possible and billable [58]. Risk of bias and methodological quality of studies The results of the cross-sectional studies are considered valid. It is questionable whether the recruitment of the samples in the individual study centers was suitable to obtain representative results for all patients with breast cancer in Germany. The respective study setting and the self-selection of the participants could have influenced the results. The transferability of the results from the studies to the target population of the HTA report this article is based on is thus unclear, and is reinforced by deficiencies in data quality, such as a non-transparent description of patient characteristics in an anonymous online survey. The qualitative studies [65], [66], [67] show different methodological deficiencies. For example, the relationship between respondents and interviewers was not adequately considered or described in any of the studies, and the data analysis was not reported sufficiently in many places. Overall, the evidence base for all outcomes from the patient's perspective, except for the frequency of utilization, is low, with 197 patients in cross-sectional studies and 43 in qualitative studies. The evidence base on the perspective of the treating professionals is somewhat better, with three studies [57], [58], [60] and 654 participants. However, two [57], [60] of the three studies are already 20 years old. Discussion Approximately one quarter of patients with breast cancer make use of concomitant mistletoe therapy. In the setting of the identified cross-sectional studies from the treating perspective, about 29% of healthcare professionals offer concomitant mistletoe therapy to their patients. Deficits became apparent in knowledge about mistletoe therapy, communication, and patient information. Patients feel insufficiently informed about mistletoe therapy and would like more and longer personal advice from a competent specialist, preferably from an oncologist or general practitioner. The possibility to obtain information independently on the Internet does not replace this need for counseling. This lack of knowledge was also stated by the treating professionals who commented on their own uncertainty regarding mistletoe therapy due to the unclear evidence on its efficacy and the lack of knowledge about complementary methods in general. Mistletoe therapy or CIM in general should be proactively discussed in doctor-patient consultations. The goal for doctor-patient communication should be to address the need for counseling, determine which need should be or is fulfilled by mistletoe therapy, and address that need in therapy planning. The S3 Guideline of the Association of the Scientific Medical Societies "Complementary Medicine in the Treatment of Oncological Patients" [68] can provide the basis for evidence-based decisions in doctor-patient communication. When developing, using, and recommending information for patients, the Internet should not be disregarded as a source of information. 
There is a lack of high-quality surveys of patients on all patient aspects, except for the use of mistletoe therapy, as well as more recent surveys of treating professionals. Additional high-quality surveys of both patients and healthcare professionals could form the basis for improved patient information and communication processes. Limitations of the present analysis are that it may not have been possible to identify all relevant studies through the literature search. There is too little data available on knowledge, attitudes, acceptance, satisfaction, experiences, expectations, access, type and scope of communication, information on mistletoe therapy available to patients, and evaluation of this information by patients. Whether the included studies are representative of patients with breast cancer and the treating professionals in Germany is questionable. Ethical evaluation Methods The ethical evaluation consisted of three parts. In the first part, a search was conducted to determine whether there are existing recommendations or guidelines for dealing with the ethical aspects of mistletoe therapy in breast cancer. In the second part, a systematic review was conducted with the aim of identifying ethical aspects of mistletoe therapy in breast cancer. For this purpose, the included literature was evaluated using methods of qualitative content analysis. In a third part, the results of the literature review were assigned to the ethical principles relevant to medicine and public health (i.e., benefit, harm, efficiency (costs), justice, self-determination, and legitimacy) using an ethical framework. In addition, further aspects were added based on theoretical reflection. The following databases were used for the literature search: PubMed, PhilPapers, Sowiport, and Ethicsweb. Ethical aspects were identified using the principles of the framework mentioned above (i.e., something is an ethical aspect if the aspect has a relation to benefit, harm, justice, etc.). The ethical aspects extracted from the included literature were categorized using the framework with which they were identified; and by means of inductive categorization (i.e., formation of superordinate categories based on comparison of the material found), they were classified into specific subcategories. Results Seventeen professional articles [69], [70], [71], [72], [73], [74], [75], [76], [77], [78], [79], [80], [81], [82], [83], [84], [85] were included. In the evaluation of these articles, 22 ethical aspects were identified. Through the supplementary theoretical reflection, four additional ethical aspects were identified that were not represented in the included literature. Similarly, the literature search did not yield any specific contributions on professional ethical aspects. No further aspects were added from the other domains. This made it possible to identify a total of 26 ethical aspects that could potentially be relevant for concomitant mistletoe therapy in breast cancer. The 26 ethical aspects were divided into six main categories. These correspond to the ethical principles used (benefit, harm, self-determination, justice, efficiency (cost), and legitimacy). For the principles 'justice' and 'legitimacy', no ethical aspects could be assigned. For a more concrete thematic classification of the aspects, eight subcategories were also formed. 
Of the 26 aspects, 21 were classified as ethical risks (risk of insufficient consideration of an ethical principle) and five as ethical challenges (balancing between ethical principles required). These ethical aspects were concretized and combined into eight criteria (what is ethically required?) and four conflicts (what needs to be weighed?). In summary, six overarching themes can be identified based on the aspects and criteria/conflicts: General topic(s) that may in principle also apply to other therapies in the field of complementary and integrative medicine or to conventional therapies include: 1. Difficulties in informed consent to therapy (e.g., what content is required to ensure sufficient information, or how to ensure freedom of consent from too much influence, especially from outside). 2. A possible danger of a "therapeutic misunderstanding", e.g., an erroneous conviction on the part of a patient that a therapy used only palliatively is part of a causal therapy (i.e., serves to treat the breast cancer). 3. Difficulties in weighing the potential harms and benefits of the therapy, including interactions and side effects, or failure to do so. In addition, there are more specific issues that arise particularly in concomitant mistletoe therapy, including: 4. Problems of quantitatively and/or qualitatively insufficient evidence to assess potential benefits and harms and underlying problems of evidence generation for mistletoe therapy. 5. The possible lack of communication or information between physicians, complementary/integrative medicine practitioners, and/or patients as to whether mistletoe therapy is or has been started and with which modalities. 6. Difficulties in dealing with possible placebo or nocebo effects in mistletoe therapy. Discussion A pivotal point in the ethical evaluation of palliative as well as adjuvant mistletoe therapy for breast cancer is the problematic evidence situation regarding efficacy and safety. On the one hand, some ethical aspects or criteria/conflicts relate directly to this. For example, all aspects that have to do with the assessment and weighing of potential benefits and harms are directly related to efficacy and safety of mistletoe therapy. On the other hand, other ethical aspects or criteria/conflicts are dependent on the evidence situation, such as the conditions for successful informed consent to a therapy (e.g., information about potential benefits and harms, information about the evidence situation or the state of science) or the communication between doctors and complementary/integrative medicine practitioners due to possible discrepancies in the assessment of effectiveness. In these decisions -at least as long as the potential for harm is considered minimal and financing is unproblematic -, the autonomy of the patients will be ethically decisive to a particular extent in each individual case. Furthermore, in the case of a possible adjuvant (non-palliative) mistletoe therapy, which must be paid out of pocket by the patients, additional questions could arise regarding the fairness of the legal regulation since the statutory health insurance only reimburses mistletoe therapy as a palliative, not as an adjuvant therapy. Beyond the individual case, decisions must also be made at the political level as to whether reimbursement of palliative mistletoe therapy for breast cancer should continue in view of the uncertainties surrounding its effectiveness. 
At the institutional level, it is unclear whether this uncertainty can be resolved in the medium and long term through several new and/or better studies. However, in the case of palliative as well as adjuvant mistletoe therapy for breast cancer, special emphasis must be placed on the danger of both an unfair assessment of the potential benefits and harms and/or insufficiently neutral education about the therapy due to prejudices based on its affiliation with complementary or integrative medicine. In this context, both possible negative and positive prejudices (e.g., due to ideological convictions) must be considered, as an uncritical and overlyoptimistic assessment of the benefit potentials and a distorted explanation of the benefits to the patient could be judged as ethically problematic. Summary discussion, conclusions, and recommendations The HTA report on which this article is based contains a systematic literature review on efficacy and safety, costs and cost-effectiveness, patients and social aspects, and ethical evaluation. Within the scope of the HTA report, no randomized controlled trials on the clinical efficacy of concomitant mistletoe therapy regarding overall survival in patients with breast cancer could be identified. One study [24], [25] with a small sample size showed no difference in disease-free survival after five years between patients with and without concomitant mistletoe therapy. There is evidence from three randomized controlled trials [19], [20], [21], [22] that side effects of chemotherapy -as measured by symptom scales -are reduced, and health-related quality of life -as measured by functional scales -is increased. However, the effects are rather small to moderate. It is uncertain whether these effects could be due to systematic bias of the only subjectively measurable outcome measures due to insufficient blinding in the studies. Known side effects of mistletoe therapy, such as mild and moderate local reactions, that were recorded in these three RCT [19], [20], [21], [22] and four other non-randomized studies [13], [14], [15], [16], [17], [18], [23] are common but of low magnitude. Possible interactions between anticancer drugs and mistletoe extracts, which could be due to immune stimulation, were not investigated in the included studies. There are no sufficiently valid studies on the costs and cost-effectiveness of concomitant mistletoe therapy. Given this overall uncertain evidence, the extension of the funding of mistletoe therapy to adjuvant therapy by statutory health insurance cannot be recommended. Beyond individual cases, decisions must also be made at the political level as to whether reimbursement of palliative mistletoe therapy for breast cancer should continue in view of the uncertainties surrounding its efficacy. Although mistletoe therapy is approved and available without prescription, it is hoped that further randomized trials will be conducted to reduce the uncertainty regarding efficacy in improving health-related quality of life and to capture possible interactions between anticancer drugs and mistletoe extracts as possible side effects. Efficacy in terms of overall survival and progression-free survival should also be investigated. A review on the methodological challenges of randomized trials of mistletoe therapy and corresponding approaches to solving them could then be used to develop an adequate study design. 
Representative surveys should also be conducted on knowledge, attitudes, acceptance, satisfaction, experiences, expectations, access, type and extent of doctor-patient communication, and information on mistletoe therapy.
Theoretical study of NH and CH acidities of toluidine isomers – dependence on their oxidation states

Introduction Thanks to the availability of a variety of modern analytical and physicochemical methods, such as short-time spectroscopy [1], emission spectroscopy [2] and spectroelectrochemistry [3], it has recently become possible to elucidate the mechanisms of many organic chemical reactions and to detect and characterize certain short-lived species occurring therein. An important role in this research is played by highly sophisticated quantum-chemical methods [4,5], which enable the theoretical description of molecular chemical and electronic structure as well as the evaluation of thermodynamic properties. The theoretical calculations can be performed for the gas phase, and the role of the solvent environment is usually covered using implicit solvent models [6]. One example of such interesting problems in organic chemistry is the formation of acidic and basic species in different electronic excited states. For these species, the corresponding dissociation constants and thermodynamic quantities generally differ from those in the electronic ground states [7]. In this context of experimental and theoretical research activities, only little information exists on the estimation of such values for species in their reduced or oxidised states [8]. Similar to electronically excited species, in most cases such species are, owing to their open-shell nature, highly reactive and therefore also elude direct detection. The situation becomes more complicated if the corresponding acids or bases possess more than one acidic or basic centre in their molecular framework. This is the case, e.g., for the toluidines, which bear protons at both their amino and methyl groups that can in principle be abstracted by bases, so that these compounds can exhibit acidic properties in addition to their usual basic properties. Insofar as these acidic properties are related to the protons at the N-atoms, they are called NH acidity, whereas the acidic properties related to the protons of the methyl groups are called CH acidity. Although a number of synthetic studies have focused on unravelling the products of the oxidation of toluidine molecules under different conditions, the potential reactivity of the oxidised toluidine isomers has been discussed only minimally. There is a lack of published experimental and theoretical knowledge about the reactive positions in the N-centred or C-centred species leading to the CN- or CC-coupled toluidine products.
With respect to these facts, we decided to present a systematic theoretical analysis of the parent and deprotonated toluidines and their oxidized species. The specific aims of this study are: (1) to calculate the optimal geometries of the studied compounds in the gas phase and in two model environments; (2) to evaluate the reaction Gibbs free energies for selected reaction steps; and (3) to estimate the corresponding acidity constants (pKa). The obtained theoretical trends and results will be confronted with the results published for various coupling reactions of toluidine isomers. Thermodynamics of Toluidine Acid-Base Reactions Aniline (ANH2), as the simplest representative aromatic amine, has basic as well as acidic, i.e. amphoteric, properties, which can be quantified by pK constants (Scheme 1) [9]. Compared with ammonia, aniline solvated in water is a relatively weak base. The corresponding experimental basicity constant (pKb) of aniline is 9.4 [10], and that of ammonia in water is 9.2 [11]. On the other hand, the acidity constant (pKa) of the positively charged anilinium ion, the conjugate acid of the base aniline, at 298.15 K is 4.6 in water and 3.7 in DMSO (measured for aniline hydrochloride) [12]. Aniline is also a weak acid which can be deprotonated only by very strong bases, e.g. by organolithium compounds [13]. The available experimental pKa for the heterolytic dissociation of aniline to the deprotonated anion (ANH-) in DMSO is 30.6 [14]. The ammonia molecule represents an extremely weak acid due to its high pKa value of 41.0 (in water) and 38 (in DMSO) [15]. As depicted in Scheme 2, proton abstraction from the amino or methyl groups can occur in seven possible acid-base reactions. Scheme 2 The mutual comparison of the G4 reaction Gibbs energies for reactions 1, 2 and a, b indicates minimal energy differences. The effect of the methyl group position relative to the amine group is also negligible; the maximal energy differences in solvents are up to 6 kJ mol-1. The results for reactions 4 and 5 clearly show that the CH acidities of toluidines are, as expected, much weaker than their NH acidities and are nearly independent of the substitution pattern. The energetically least preferred dissociation is the proton abstraction from the CH3TNH- anion (reaction No. 5). Interestingly, the CH acidities of the zwitterionic species CH2TNH+ in solvents are maximal for the ortho isomer (reaction No. 3) and are comparable with the NH acidity of aniline (reaction No. 2). Next, the deprotonation of the ammonium group in the zwitterionic species is also an endothermic process (reaction No. 2). Comparable reaction Gibbs energies in solvents are found for the ortho isomer. Finally, our calculations predict that proton abstraction from CH2TNH2- is also possible. The calculated reaction Gibbs free energies range from 220.2 kJ mol-1 to 234.5 kJ mol-1 in DMSO and from 234.4 kJ mol-1 to 266.0 kJ mol-1 in water. The acid-base reactions of arylamines connected with the chemical oxidation of the molecules are very often used in the preparation of poly-arylamines as materials with high electrical conductivity [18] or as starting materials for certain important organic dyes, such as Mauveine or Fuchsine [19]. It is of practical interest that the NH and CH acidities of toluidines depend not only on the substitution position of their methyl groups at the aromatic ring but also on the oxidation state of the corresponding compounds. In this context, four possible acid-base reactions (Nos.
8, 9, 10 and 11) initiated upon electron abstraction were theoretically investigated for toluidines (see …). The calculations show that in both cases the -type species are more stabilised than the -type species (see Fig. 2S). This is in agreement with calculations performed by other authors for various nitrogen-centred radicals [20]. Theoretical pKa Values of Toluidine Oxidation States The theoretical pKa values evaluated from the quantum chemical results (see Eq. 1) can differ from experimental values (see Tab. 2S). This error is connected with the insufficient description of solvent effects by the implicit solvent model. Moreover, although the PCM model is employed to calculate solvation energies with high accuracy, it requires parametrization of the shape and size of the dielectric cavity of a molecule [21]. Unfortunately, computational works reported to date have rarely involved extensive parametrization for radical species or charged states. To improve the reliability of the theoretical pKa values, an approach based on the isodesmic reaction is applied [22]. In this work, we have used the available experimental pKa data for aniline, the anilinium ion and the aniline cation radical in DMSO and water (see Tab. …). Oxidation States and Reactivity of Toluidine The calculated acidities of toluidine in its different oxidation states (Tab. 2) show that the toluidine radical cations CH3TNH2•+, as the primary species generated by the oxidation of toluidines, can be deprotonated both at their NH2 and at their CH3 moieties. This deprotonation leads to the transformation into the N-centred radical CH3TNH• and the C-centred radical CH2TNH2•. On the other hand, the toluidine dications CH3TNH2 2+, as secondary species generated by the oxidation of toluidines, can be deprotonated exclusively at their methyl moieties and are thereby transformed into the C-centred cationic species CH2TNH2+. Owing to the highly negative pKa values of reaction 10, the cationic species CH2TNH2+ can also be formed directly from the radical cations CH3TNH2•+ in the course of a so-called proton-coupled electron transfer (PCET) process [25]. We would like to note that under the usual pH conditions (0 < pH < 9) applied in the course of the oxidation of aniline and its derivatives, there is no chance for the formation of the dicationic species CH3TNH2 2+, although certain authors argue for its existence [25,26,27]. From the derived data it follows that, starting with p-toluidine (Scheme 4), an oxidatively mediated coupling of the N-centred radical p-CH3TNH•, e.g. with a further p-toluidine molecule, is expected. Indeed, such a coupling occurs at the N-atom and gives rise to the formation of a coupling product of the general structure D1. This compound is able to react with a further p-toluidine molecule, yielding the so-called Barsilowsky's base T1 [26]. In the second case, namely by reaction of the radical cation CH3TNH2•+, an oxidatively mediated coupling with p-toluidine is expected to occur at the CH2 moiety and gives rise to the formation of a coupling product of the general structure D2 [27]. This compound can subsequently be transformed by further oxidation into the azomethine compound D3, which is able to yield Fuchsine, e.g. by further reaction with aniline [28]. […] by which the quinone iminium salt D5 is formed [29].
The C-centred cation o-CH2TNH2 + is highly reactive [30] and can be transformed either by reaction with certain nucleophiles into the adducts X2 or by reaction with further o-toluidine or aniline (ANH2), via the dimer D6, into the acridines D7 [31] or the Chrysaniline D8 [32]. Scheme 5 Although the oxidation of m-toluidine is also studied rather intensively, in contrast to the oxidation of o-and p-toluidine, there is only less information on the structure of products and on the mechanism of their formation. Thus, it was stated that by the electrochemical oxidation of mtoluidine, performed in acidic solution, in course of a CN coupling reaction a polymer is formed its structure D10 is similar to the one which is formed by the oxidation of aniline (Scheme 6) [33]. Moreover, similar to the aniline oxidation certain intermediates, such as the 1,4-phenylenediamine derivative D9 and the quinoneimine D10 with n = 1 [34], and the corresponding benzidine derivatives D11 and D12 [35] have been identified. Moreover, similar to the aniline oxidation, a corresponding azobenzene derivative, which can be formed from the radical species m-CH3TNH . in course of a NN coupling, has been identified also. However, there is no information at yet on the formation of products which could be generated from the zwitterion product m-CH2TNH2 +formed by a deprotonation at the methyl group in meta-position. Conclusion In this theoretical study, we have suggested possible acid-base reaction steps occurring during the oxidation of toluidine isomers. The reaction Gibb's energies were calculated using G4 and M062x approaches for the gas-phase, water and dimethylsulfoxid environments. The theoretical pKa values were evaluated for mono-and bi-cationic states using the isodesmic reaction approach with respect to the reference experimental data available for the aniline molecule. The comparison of these values showed that the transformation of toluidines into oxidised states significantly increases the acidity of methyl group. This study indicates that the presence or absence of these deprotonated species in reaction mixture will determine the CN or CC coupling toluidine products. The qualitatively similar theoretical results were also obtained using the reference density functional theory and M062x functional. Computational methods The quantum chemical calculations based on the Density Functional Theory were performed using the Minnesota M062x hybrid functional [36] and 6-311++G** basis set of atomic orbitals [37]. First the optimal geometries of the studied species were found in gas-phase and these optimal geometries were used as the starting geometries for the calculations using Gaussian-4 theory [38]. This theory involves the introduction of an extrapolation scheme for obtaining basis set limit Hartree-Fock energies and it was developed for the calculations of thermochemical properties. The solvent effects contributions in dimethylsulfoxide (DMSO) and in water (WAT) were described using the integral equation formalism version of PCM (IEF-PCM) [39]. Frequency analysis showed no imaginary frequencies confirming the real geometry of the energy minima. All Gibb's free energies were estimated for temperature T = 298.15 K and pressure p = 101325 Pa. These thermodynamic energies are calculated from the combination of energy contributions from various B3LYP and ab initio energies. All calculations were carried out using the Gaussian 16 program package [40]. 
The molecules and spin densities were visualised using the Molekel program package [41].
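As a rough, self-contained illustration of how reaction Gibbs free energies of the kind computed above are turned into pKa values, both by the direct conversion pKa = ΔG/(RT ln 10) and by the isodesmic anchoring to an experimentally known reference such as aniline, the following Python sketch may help; the numerical inputs are placeholders for illustration only and are not results of this work.

```python
import math

R = 8.314462618e-3   # gas constant in kJ mol^-1 K^-1
T = 298.15           # temperature in K

def pka_direct(dG_solution_kj_mol: float) -> float:
    """Direct conversion of a deprotonation Gibbs free energy (kJ/mol) to pKa."""
    return dG_solution_kj_mol / (R * T * math.log(10))

def pka_isodesmic(dG_target_kj_mol: float, dG_reference_kj_mol: float,
                  pka_reference_exp: float) -> float:
    """Isodesmic-reaction estimate: anchor the computed value to an
    experimentally known reference acid (e.g. aniline/anilinium)."""
    ddG = dG_target_kj_mol - dG_reference_kj_mol
    return pka_reference_exp + ddG / (R * T * math.log(10))

# Placeholder numbers for illustration only (not values from this study):
print(pka_direct(230.0))                   # hypothetical NH deprotonation energy in DMSO
print(pka_isodesmic(230.0, 225.0, 30.6))   # anchored to the experimental pKa of aniline in DMSO (30.6)
```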
Effect of Chitosan Coating Incorporated with Artemisia fragrans Essential Oil on Fresh Chicken Meat during Refrigerated Storage The present study was conducted to assess the impact of chitosan coating (1%) containing Artemisia fragrans essential oil (500, 1000, and 1500 ppm) as antioxidant and antimicrobial agent on the quality properties and shelf life of chicken fillets during refrigerated storage. After packaging meat samples, physicochemical, microbiological, and organoleptic attributes were evaluated at 0, 3, 6, 9, and 12 days at 4 °C. The results revealed that applied chitosan (CH) coating in combination with Artemisia fragrans essential oils (AFEOs) had no significant (p < 0.05) effects on proximate composition among treatments. The results showed that the incorporation of AFEOs into CH coating significantly reduced (p < 0.05) pH, thiobarbituric acid reactive substances (TBARS), and total volatile base nitrogen (TVB-N), especially for 1% CH coating + 1500 ppm AFEOs, with values at the end of storage of 5.58, 1.61, and 2.53, respectively. The coated samples also displayed higher phenolic compounds than those obtained by uncoated samples. Coated chicken meat had, significantly (p < 0.05), the highest inhibitory effects against microbial growth. The counts of TVC (total viable counts), coliforms, molds, and yeasts were significantly lower (p < 0.05) in 1% CH coating + 1500 ppm AFEOs fillets (5.32, 3.87, and 4.27 Log CFU/g, respectively) at day 12. Organoleptic attributes of coated samples also showed the highest overall acceptability scores than uncoated ones. Therefore, the incorporation of AFEOs into CH coating could be effectively used for improving stability and shelf life of chicken fillets during refrigerated storage. Introduction Chicken meat with low amount of lipids and low cost of production not only is a rich source of essential amino acids with high biological value but also is an excellent origin of unsaturated fatty acids and minerals for human body [1]. Its high pH and moisture content make it so that, at aerobic conditions, chicken meat is susceptible to lipid and protein oxidations and microbial growth, leading to a decrease in shelf life [2,3]. Moreover, chicken meat is highly perishable by pathogenic bacteria, such as Listeria monocytogenes, Escherichia coli, Campylobacter jejuni, and Salmonella spp. [4]. Today, the major challenge of meat industry is to increase the stability, shelf life, and overall acceptability of the chicken meat by delaying lipid oxidation and preventing microbial growth. The negative health effects associated with the use of sodium nitrate, benzoic acid, and potassium sorbate as chemical preservatives have recently led researchers and meat AFEOs Isolation The gas chromatographic-mass spectrometric (GC-MS) apparatus was used for AFEOs composition (Varian, mod. Saturn 2100T, San Fernando, CA, USA). A fused-silica capillary column (50 m × 0.22 mm, 0.25 µm film thickness) and helium was used as the carrier gas (1 cm 3 /min) were used for compounds separation. Injector and detector temperatures were 280 • C (splitless 20 cm 3 /min) and 260 • C, respectively. Oven condition was 50 • C increased to 250 • C at a rate of 2 • C/min and held for 60 min. The fatty acid methyl ester (FAMEs) were identified by comparison of peaks retention time with standard FAMEs (Sigma-Aldrich, Steinheim, Germany), and the peaks area reported as component percentage [29]. 
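The relative composition obtained from the GC-MS analysis, with peak areas reported as component percentages, amounts to a simple normalisation of the integrated areas. The short sketch below illustrates this step; the peak areas and the compound list are made-up placeholders, not the measured values.

```python
# Hypothetical integrated GC-MS peak areas (arbitrary units); values are placeholders.
peak_areas = {
    "thujone": 4021.0,
    "1,8-cineole": 2104.0,
    "l-camphor": 1187.0,
    "isobornyl alcohol": 349.0,
    "other components": 2285.0,
}

total_area = sum(peak_areas.values())
percentages = {name: 100.0 * area / total_area for name, area in peak_areas.items()}

# Report components sorted from most to least abundant.
for name, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18s}: {pct:5.2f} %")
```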
Preparation of Meat Samples The whole experiment was repeated with a separate source of skinless and boneless chicken breast in five batches during three successive days (5 treatments × 5 time periods × 3 repetitions × 3 runs). The raw material (chicken meats) was bought (weighted between 2.5-5 kg) from a local slaughterhouse and transported directly to laboratory in ice boxes. Ten g chitosan (95% deacetylation degree) was dissolved in 1% acetic acid, reached to 1000 mL. Then, AFEOs was mixed at different concentrations (500, 1000, and 1500 ppm). After that, Tween 80 as a surfactant agent was added to treatment solutions and mixed for 1 min. Based on the previous data on 1% chitosan concentration [32], the chicken breast meats were randomly divided into five groups as follows: T1: Negative control; T2: Treated with distilled water; T3: 1% chitosan (CH) coating + 500 ppm AFEOs; T4: 1% CH coating + 1000 ppm AFEOs; T5: 1% CH coating + 1500 ppm AFEOs. All meat samples, cut with a sterile knife (1 × 3 × 6 cm), were immersed in prepared solutions for 1 h at 4 • C, and, finally, the samples were drained for 2 min and packaged in low density polyethylene bags for evaluation of chemical composition, pH, phenolic compounds, total volatile base nitrogen (TVB-N), thiobarbituric acid reactive substances (TBARS) values, color parameters, organoleptic attributes, and microbial counts at 0, 3, 6, 9, and 12 days of refrigerated storage. Proximate Composition and pH Proximate composition of chicken fillet samples, including lipid, ash, protein, and moisture, were determined in triplicate according to Karsli et al. [32]. For evaluation of pH, chicken fillets were homogenized in proportion of 1:10 (w/v) with distilled water and analyzed with a pH meter (Hanna, Methrom, Switzerland). Measurement of Thiobarbituric Acid Reactive Substances (TBARS) The TBARS values of chicken fillets were analyzed according to methodology of Liu et al. [33]. The reactions of thiobarbituric acid with the oxidation products lead to the production of compounds which was measured in a spectrophotometer (Hitachi, Ltd., Tokyo, Japan) at 532 nm. 1,1,3,3-tetraethoxypropane (TEP) was used to prepare the standard curve at concentrations between of 0 to 10 ppm, and the data were expressed as mg malondialdehyde/kg (mg MDA/kg) of chicken meat samples. Determination of Total Volatile Nitrogen (TVB-N) Total volatile nitrogen (TVB-N) of meat samples were evaluated by Kjeldahl method with a vapor distillation according to Goulas and Kontominas [34]. The data were reported as mg/100 g of chicken meat samples. Total Phenolic Content (TPC) According to Liu et al. [33], total phenolic contents of chicken fillets were evaluated using Folin-Ciocalteu reagent. Firstly, 50 g of chicken meat and 100 mL of boiled distilled water were mixed together and left at room temperature for 20 min. After cooling, the obtained solution was filtered and mixed with Folin-Ciocalteau reagent (2.5 mL) and saturated sodium carbonate solution (5 mL) in test tubes. Finally, the solution was vortexed and held in a dark place (1 h). UV-vis spectrophotometer Hitachi U-3210 (Hitachi, Ltd., Tokyo, Japan) was utilized for the evaluation of TPC at 700 nm. Standard curve was prepared with Gallic acid, and the data was reported as mg/100 g of Gallic acid equivalents (GAE). Determination of Color Parameters Color indices (L*: lightness, a*: redness, b*: yellowness) on the surface of the chicken samples were evaluated according to the method proposed by Leon et al. 
[35] using a simple digital imaging system. The chicken fillets were sized into 1 × 3 × 6 cm thickness to analyze the color. Digital camera with 16 mega-pixels under suitable light at 25 • C and standard plates for instrument calibration were used for capturing the image. Photoshop software was used to analyze the pictures and report the data. Sensory Properties The effects of CH in combination AFEOs on sensory attributes of chicken fillets were evaluated at the end of refrigerated storage. Seventy-two consumers (twenty-four male and forty-eight females) were selected as panelists, all of whom had prior experience about sensory attributes of many kinds of fresh meats. The sensory evaluation consisted of six sessions with twelve panelists for each sitting. A randomized (complete) block design was conducted. The sausage samples were cut into 3-mm thick cubes at room temperature, individually labeled with aleatory numbers and randomly served. Overall acceptability, odor, color, texture, and freshness of chicken fillets were analyzed using hedonic scale (1: really dislike, 5: really like). For increasing accuracy of sensory analysis, between each testing, crackers (unsalted) and water were utilized. Overall acceptability scores were also obtained by average of odor, color, texture, and freshness scores [37]. Statistical Analysis The experimental data resulted from 5 treatments × 5 time periods × 3 repetitions × 3 runs were analyzed using the statistical software SAS (v.9, SAS Institute Inc., Cary, NC, USA). Normal distribution and variance homogeneity had been previously determined (Shapiro-Wilk). Random block design, considering a mixed linear model, including replicate as a random effect and chicken meat treatment and storage time as fixed effects, were used for the evaluation of pH, TVB-N, and TBARS values, phenolic content, color indexes, sensory characteristics, and microbiological counts. ANOVA (p < 0.05), followed by Tukey's test, was used for moisture, protein, fat, and ash contents. Panelists and sessions were used as random effects for the sensory characteristics. All data were expressed as mean values ± standard error in tables and figures, but the results of chemical properties were expressed as mean values ± standard deviation. Gas Chromatography-Mass Spectrometry Analysis The volatile chemical components of AFEOs are shown in Table 1. The data showed that thujone (40.21%) had the highest content and followed by 1,8-Cineole (21.04%), lcamphor (11.87%), and isobornyl alcohol (3.49%). All of the identified volatile component indicated 99.46% of total AFEOs. The results of the present research were similar by Baldino et al. [29] findings on camphor (14.63%) as one of the main component of AFEOs. Other studies reported that carvacrol was a volatile component of AFEOs [38]. These disagreements maybe caused by climate conditions, soil composition, genetic, stage of maturity, cultivars, plant organs, and extraction conditions, as well as the variations in cultivation [38]. Effect of CH-AFEOs Coating on Proximate Composition and pH The proximate composition among treatments showed similar values for ash, fat, protein and moisture contents, which indicates that chitosan and AFEOs had no significant (p > 0.05) effects on chicken fillets composition ( Table 2). The results of present research are in agreement with those observed by Alirezalu et al. [21]. 
The authors showed that the inclusion of natural antioxidants in ε-polylysine, chitosan, and nisin had no significant effects on frankfurter-type sausage proximate composition. Agregán et al. [39] also reported similar results in the chemical composition of pork patties by applying natural antioxidant (macroalgae Fucus vesiculosus extract). In the same way, de Carvalho et al. [40] evaluated the impact of guarana (Paullinia cupana) seed and pitanga (Eugenia uniflora L.) leaf extracts on lamb patties and reported no significant differences in chemical compositions among treatments. On the other hand, pH values in meat and meat products can highly affected microbial balance and function of bacteriostatic, which can lead to a low shelf life. These values are usually under 6 in fresh meat [41]. The changes in pH values of chicken meat between coated treatments during refrigerated storage are showed in Figure 1. As expected, the pH of the chicken fillet samples increased among refrigerated storage. The production of lactic acid bacteria and the accumulation of alkaline components produced by psychrotrophic bacteria and the autolytic activity of the autochthonous enzymes may be the main reason for the change of pH during storage [42,43]. This aforementioned increase was significantly (p < 0.05) higher in uncoated samples (negative control and treated with distilled water). At day 12, treated samples with distilled water displayed higher values than those obtained in fillets coated with 1% CH + 1500 ppm AFEOs (7.01 vs. 5.55, respectively). The antibacterial properties of chitosan and AFEOs could be responsible for the lower pH values observed in coated samples. This impact of chitosan films on pH of meat and meat products are in agreement with the results found by other authors in chilled meat [44]. In the same way, Vaithiyanathan et al. [45] and Berizi et al. [46] reported similar behaviour in chicken meat and other food model systems treated with chitosan in combination with natural preservatives. Effect of CH-AFEOs Coating on TBARS and TVB-N Shelf life and quality attributes of meat and meat products are highly affected by oxidation reactions, particularly lipid and protein [47]. TBARS are used as an important indicator for the measurement of secondary products of oxidation, especially aldehydes, which resulted from the lipid oxidation of polyunsaturated fatty acids [48]. The effects of chitosan-based coating with AFEOs are displayed in Table 3. TBARS values increased continuously during refrigerated storage, being samples coated with chitosan and AFEOs (T4 and T5) those that displayed significantly (p < 0.05) lower values at the end of storage (1.61 and 1.64 vs. 1.92 and 2.10 mg MDA/kg for T5 and T4, vs. negative control and samples treated with distilled water, respectively). Similar results were reported by Liu et al. [44], who evaluated the impact of chitosan films incorporated with natural preservatives on chilled meat. Jonaidi Jafari et al. [49] studied the effect of chitosan coating with ethanolic extract of propolis on the quality of chicken fillets. The authors reported a less increase of TBARS values in treated samples (<0.6 mg MDA/kg in samples coating with chitosan and 2% of ethanolic extract of propolis) compared to those observed in control (>0.8 mg MDA/kg). These lower TBARS values in coated samples may be related to low availableness of oxygen on meat surfaces or chelating impact of chitosan with metal ions [50]. 
Furthermore, the high antioxidant properties of AFEOs observed by Orhan et al. [28], would also lead to a less increase in TBARS values during storage. Therefore, as expected, chitosan coatings incorporated with AFEOs allowed to extend the shelf life of meat samples by their antioxidative properties. Similar results were observed by Pabast et al. [51] and Fang et al. [52] in lamb meat and fresh pork using chitosan-based coatings with natural antioxidants. TVB-N value, which mainly includes amines and ammonia, is one of the most important indicators in meat and meat products shelf life [53]. The TVB-N results of chicken samples during refrigerated time are presented in Table 3. In this study, the initial TVB-N values were between 8.7 and 17.9 mg/100 g for treated samples (T5) and samples treated with distilled water (T2), respectively. These values indicate the allowable situation for applied chicken meat. During storage the TVB-N values in all meat samples increased exponentially, with a rate significantly (p < 0.05) higher in untreated samples (182.3 vs. 25.3 mg/100 g for T2 and T5, respectively). According to permitted limit of TVB-N values (25 mg/100 g) in meat and meat products, related to loss of freshness and microbiological contamination, control samples (T1 and T2) exceeded this level on day 3. However, treated samples with CH and AFEOs can effectively reduce the production of volatile nitrogen bases under acceptability limits until day 9 (18.3, 19.7, and 18.3 mg/100 g for samples coated with CH and 500, 1000, and 1500 ppm of AFEOs, respectively). The results of the present work are in agreement with those found by Mojaddar Langroodi et al. [54]. The authors showed that CH coating in combination with other natural antioxidants (Sumac extract and Zataria multiflora Boiss oil) could significantly reduce TVB-N formation. In addition, it can be observed that by increasing the EOs concentration, TVB-N values increased more slowly. At day 12, the coated samples containing 1500 ppm AFEOs displayed significantly lower TVB-N values (25.3 vs. 28.2 and 54.3 mg/100 g for samples coated with CH and 500, 1000, and 1500 ppm of AFEOs, respectively). The results of TVB-N values are in paralleled with microbiological results. In fact, the TVB-N results observed among treatments are in agreement with the changes observed in pH, since the antibacterial properties of chitosan and AFEOs could be responsible for the lower pH values in coated samples. Therefore, the lower microbial growth observed in treated samples would lead to lower TVB-N values [49,55]. Effect of CH-AFEOs Coating on TPC Phenolic compounds, which have potential techno-functional, antioxidant, and antimicrobial properties, are highly present in natural sources like plants extracts and EOs [56]. The effects of chitosan coating in combination with AFEOs on phenolic content of chicken meat are shown in Figure 2. At day 0 of storage, phenolic content in chicken samples coated with chitosan and AFEOs ranged from 30.10 to 41.70 mg GA/100 g, whereas the phenolic content in negative control samples was significantly (p < 0.05) lower (28.20 mg GA/100 g). The highest phenolic content in treated samples is related to the fact that phenolic compounds are one of the main components of EOs [10]. During the storage period, phenolic compounds in all meat samples decreased significantly (p < 0.05). 
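The TBARS values discussed above are derived from absorbance readings at 532 nm against the TEP standard curve (0 to 10 ppm) described in the methods. The sketch below shows one plausible way to perform that conversion; the standard-curve readings, sample mass, and extract volume are assumptions for illustration and do not reproduce the authors' exact protocol.

```python
import numpy as np

# Hypothetical TEP standard curve: concentration (ppm) vs absorbance at 532 nm.
std_conc = np.array([0, 2, 4, 6, 8, 10], dtype=float)
std_abs = np.array([0.02, 0.11, 0.21, 0.30, 0.41, 0.50])

# Least-squares fit of absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, deg=1)

def mda_mg_per_kg(absorbance: float, sample_mass_g: float = 5.0,
                  extract_volume_ml: float = 25.0) -> float:
    """Convert a sample absorbance into mg MDA per kg of meat.

    Assumes the curve is expressed in mg/L of MDA equivalents and that
    `extract_volume_ml` of extract was obtained from `sample_mass_g` of meat;
    both defaults are illustrative assumptions.
    """
    conc_mg_per_l = (absorbance - intercept) / slope
    mg_mda_in_extract = conc_mg_per_l * extract_volume_ml / 1000.0
    return mg_mda_in_extract / (sample_mass_g / 1000.0)

print(round(mda_mg_per_kg(0.18), 2))   # hypothetical sample reading
```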
However, treated samples continued to be those that showed the highest contents at day 12, displaying values between 22.20 and 25.20 mg GA/100 g, while negative control and meat treated with distilled water reached to 20 and 20.60 mg GA/100 g, respectively. The decrease in phenolic compounds observed in chicken samples could be attributed to oxidation reactions that take place during storage period [47]. Similar results were found with the use of type of coating materials and natural extracts in meat products. In this regard, Alirezalu et al. [20] evaluated the effects of εpolylysine in combination with natural plant extracts (olive leaves, green tea, and stinging nettle) in frankfurter-type sausage. The authors observed that the samples treated with mixed plant extracts showed significantly higher amounts of phenolic contents compared to control (9.80 vs. 0.07 mg GA/100 g for treated sausages samples and control samples on day 45 of storage, respectively). Similar results with natural plant extracts (rosemary or Chinese mahogany) in fresh chicken sausage were reported by Liu et al. [33]. Effect of CH-AFEOs Coating on Color Parameters Color is one of the most important parameters in meat and meat products quality, since its stability could compromise the sensory properties of the product and therefore the consumer acceptance [57]. The color indexes (L*: Lightness, a*: Redness and b*: Yellowness) of chicken meat samples were significantly (p < 0.05) affected by both coating and refrigerated period (Table 4). L* values of all samples decreased during refrigerated period (Table 4); however, the rate of this reduction was significantly (p < 0.05) lower in coated samples. The antioxidant and antimicrobial properties of CH and AFEOs would lead to higher L* in coated samples. At day 12, chicken samples coated with CH + 1500 ppm AFEOs and treated with distilled water showed the highest (36.38) and lowest (25.83) values, respectively. These results are in agreement with those found by Alirezalu et al. [21], who reported a similar trend for lightness in sausages treated with chitosan in combination with other natural antioxidants. CH coating + 1500 ppm AFEOs. a-e Mean values in the same row not followed by a common letter differ significantly (p < 0.05). A-D Mean values in the same column not followed by a common letter differ significantly (p < 0.05). All meat samples revealed a reduction in a* during refrigerated period. The formation of free radicals from lipid oxidation and met-myoglobin may be the main reasons for the reduction of a* values [14,58]. Higher a* values were observed in coated samples compared to those found in negative control, which as mentioned above may be due to the high antioxidant properties of CH and AFEOs. A similar trend in the reduction of a* value in lamb burgers treated with natural extracts was reported by De Carvalho et al. [40]. Regarding yellowness, this parameter is highly affected by the enzymatic browning reactions that occur during the refrigerated storage of meat samples [59]. However, samples coated with CH and high concentration of AFEOs showed significantly (p < 0.05) higher b* values than those found by negative control samples at the end of storage (23.66 vs. 20.38 for T5 and T1, respectively). Effect of CH-AFEOs Coating on Microbiological Analysis The results of TVC, coliform, molds, and yeasts are shown at Table 5. 
At day 0, TVC counts in treated samples ranged between 2.27 and 2.33 Log CFU/g, which is significantly lower than those obtained for negative control (4.48 Log CFU/g). These initial bacterial numbers reflect the high antimicrobial properties associated with the use of CH coating and AFEOs in meat samples. Chitosan coating containing AFEOs led to approximately 3 Log CFU/g reduction in TVC from those obtained by control. Increase in the thickness of the chitosan coating not only have inhibitory effects against microbial growth but also could maintain the quality and stability of samples. However, it had been proved that 1% chitosan could also have efficient impacts on meat quality and shelf life. Considering the acceptable limitations of TVC counts (6 Log CFU/g) in fresh poultry meat [60,61], the samples coated with CH in combination with the highest dose of AFEOs displayed acceptable levels at the end of storage time (Table 5), which reflects the possibility of using this coating to extend the shelf life of a highly perishable product, such as fresh chicken meat, ensuring its safety. The results of the present study are in agreement with those found by Jonaidi Jafari et al. [49] on chicken fillets coated with chitosan and ethanolic propolis extract. Bazargani-Gilani et al. [62] also evaluated the effects of chitosan edible coating with plant EOs on chicken breast meat, also reporting the possibility of using the combination of chitosan and EOS to extend the storage time by 10 or 15 days the storage time, which are in agreement with the results found in the present work. Cationic property of chitosan allows to electrostatic interaction between NH3 group (as a positive charges) on of the glucosamine monomer in chitosan molecules and microbial cell membrane (negative charges) led to the leakage of intracellular components could be the reason of antimicrobial properties of chitosan coating, which has been described by Duan et al. [63]. In the other hand, the selective permeability of chitosan [58], which decrease the oxygen transfer to the meat and meat products might be the main reason of extended stability and shelf life. The meat and meat products surfaces are highly susceptible for molds and yeasts growth, which can lead to spoilage and negative impacts on safety and organoleptic attributes. The chicken meat samples coated with CH + 1500 ppm AFEOs displayed significantly (p < 0.05) higher inhibitory effects against molds and yeasts during storage. At the beginning of storage, molds, and yeasts ranged between 1.0 and 3.66 Log CFU/g for samples coated with CH + 1500 ppm AFEOs and distilled water, respectively, which increased significantly (p < 0.05) reaching values between 4.27 and 8.02 Log CFU/g at day 12, respectively. In the case of coliforms, a group of microorganisms known as hygienic quality indicators in meat and meat products [64], the counts increased during storage. The rate of this increase was significantly (p < 0.05) lower in coated samples with CH + AFEOs (especially in 1500 ppm AFEOs), displaying values after 12 days of storage of 3.87 Log CFU/g compared to values of 8.58 and 8.84 Log CFU/g observed in negative control samples. To sum up, CH coating + 1500 ppm AFEOs showed the highest antimicrobial activities against TVC, coliforms, molds, and yeasts. The results of present work are in agreement with those reported by Alirezalu et al. [21], who support the use of chitosan (1%) in combination with plant extracts as antimicrobial ingredients in frankfurter-type sausage. 
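Since the shelf-life conclusions in this section hinge on the 6 Log CFU/g acceptability limit for TVC, a small helper makes the comparison explicit. The generic plate-count conversion below is only a sketch of common practice, not the authors' counting protocol; the first example value is the day-12 TVC quoted for the coated fillets, while the second is a hypothetical count above the limit.

```python
import math

ACCEPTABILITY_LIMIT_LOG_CFU_G = 6.0   # TVC limit cited for fresh poultry meat [60,61]

def log_cfu_per_g(colonies: int, dilution_factor: float, plated_volume_ml: float = 0.1) -> float:
    """Generic spread-plate conversion: CFU/g ~ colonies * dilution factor / volume plated (mL).

    Assumes a homogenate prepared at 1 g of meat per mL equivalent; this is a common
    convention and not necessarily the protocol used in the study.
    """
    cfu_per_g = colonies * dilution_factor / plated_volume_ml
    return math.log10(cfu_per_g)

def within_shelf_life(log_cfu: float) -> bool:
    return log_cfu <= ACCEPTABILITY_LIMIT_LOG_CFU_G

print(within_shelf_life(5.32))   # day-12 TVC reported for 1% CH + 1500 ppm AFEOs -> True
print(within_shelf_life(7.0))    # hypothetical spoiled sample -> False
```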
Similar results were obtained by Berizi et al. [46] with the combination of a chitosan edible coating and pomegranate peel extract. Effect of CH-AFEOs Coating on Sensory Properties The effects of the CH coating with AFEOs on the organoleptic properties of the meat samples are illustrated in Figure 3a,b. At day 0, coating with CH and AFEOs had a slightly negative effect on the sensory attributes: the highest and lowest scores were obtained by the negative control and by the CH coating containing 500 ppm AFEOs, respectively. This ranking reversed as storage progressed, since the samples coated with CH and AFEOs displayed significantly (p < 0.05) higher scores for all of the attributes evaluated. This can be attributed to the greater microbial growth and oxidation occurring in the negative control, which caused a sharp decrease in its sensory quality during storage compared with the coated samples. Again, the samples coated with CH + 1500 ppm AFEOs gave the best results, so this coating can effectively preserve the sensory attributes of fresh chicken meat during storage. These results corroborate those previously reported by Kanatt et al. [65], who found that CH coating has no negative effects on the organoleptic characteristics of meat and meat products, and are also consistent with the findings of Petrou et al. [66] for chicken breast meat coated with chitosan and oregano oil. Figure 3. Sensory properties of chicken meat coated with chitosan containing AFEOs at day 0 (a) and day 9 (b) during storage at 4 °C. T1: Negative control; T2: Distilled water; T3: 1% CH coating + 500 ppm AFEOs; T4: 1% CH coating + 1000 ppm AFEOs; T5: 1% CH coating + 1500 ppm AFEOs. a-d Mean values among meat samples not followed by a common letter differ significantly (p < 0.05). Conclusions The results of the current research revealed that chitosan-based coatings with AFEOs slow down the microbial growth and the undesirable chemical reactions that occur in meat during storage and can therefore extend the shelf life of chicken fillets. The natural antioxidant and antimicrobial components present in AFEOs and chitosan are mainly responsible for this effect. Coated samples remained within the acceptable range of chemical quality factors, such as TBARS, TVB-N, and pH, for a longer time. The outcomes of this study showed that the chitosan-based coating with 1500 ppm AFEOs had the strongest inhibitory effect on oxidative activity and microbial growth. The results also revealed that chitosan coating incorporated with 1500 ppm AFEOs can significantly prolong the stability of chicken breast meat and could be suggested as a potential coating material for meat and meat products. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Stone's theorem for distributional regression in Wasserstein distance We extend the celebrated Stone's theorem to the framework of distributional regression. More precisely, we prove that weighted empirical distribution with local probability weights satisfying the conditions of Stone's theorem provide universally consistent estimates of the conditional distributions, where the error is measured by the Wasserstein distance of order p $\ge$ 1. Furthermore, for p = 1, we determine the minimax rates of convergence on specific classes of distributions. We finally provide some applications of these results, including the estimation of conditional tail expectation or probability weighted moment. Introduction Forecast is a major task from statistics and often of crucial importance for decision making. In the simple case when the quantity of interest is univariate and quantitative, point forecast often takes the form of regression where one aims at estimating the conditional mean (or the conditional quantile) of the response variable Y given the available information encoded in a vector of covariates X. A point forecast is only a rough summary statistic and should at least be accompanied with an assessment of uncertainty (e.g. standard deviation or confidence interval). Alternatively, probabilistic forecasting and distributional regression (Gneiting and Katzfuss, 2014) suggest to estimate the full conditional distribution of Y given X, called the predictive distribution. In the last decades, weather forecast has been a major motivation for the development of probabilistic forecast. Ensemble forecasts are based on a given number of deterministic models whose parameters vary slightly in order to take into account observation errors and incomplete physical representation of the atmosphere. This leads to an ensemble of different forecasts that overall also assess the uncertainty of the forecast. Ensemble forecasts suffer from bias and underdispersion (Hamill and Colucci, 1997) and need to be statistically postprocessed in order to be improved. Different postprocessing methods have been proposed, such as Ensemble Model Output Statistics (Gneiting et al., 2005), Quantile Regression Forests (Taillardat et al., 2019) or Neural Networks (Schulz and Lerch, 2021) among others. Distributional regression is now widely used beyond meteorology and recent methodological works include deep distribution regression by Li et al. (2021), distributional random forest by Ćevid et al. (2022) or isotonic distributional regression by Henzi et al. (2021). The purpose of the present paper is to provide an extension to the framework of distributional regression of the celebrated Stone's theorem (Stone, 1977) that states the consistency of local weight algorithm for the estimation of the regression function. The strength of Stone's theorem is that it is fully non-parametric and model-free, with very mild assumptions that covers many important cases such as kernel algorithms and nearest neighbor methods, see e.g. Györfi et al. (2002) for more details. We prove that Stone's theorem has a natural and elegant extension to distributional regression with error measured by the Wasserstein distance of order p ≥ 1. Our result covers not only the case of a one-dimensional output Y ∈ R where the Wasserstein distance has a simple explicit form, but also the case of a multivariate output Y ∈ R d . The use of the Wasserstein distance is motivated by recent works revealing that it is a useful and powerful tool in statistics, see e.g. 
the review by Panaretos and Zemel (2020). Besides this main result, we characterize, in the case d = 1 and p = 1, the optimal minimax rate of convergence on suitable classes of distributions. We also discuss implications of our results to estimate various statistics of possible interest such as the expected shortfall or the probability weighted moment. The structure of the paper is the following. In Section 2, we present the required background on Stone's theorem and Wasserstein spaces. Section 3 gathers our main results, including the extension of Stone's theorem to distributional regression (Theorem 2), the characterization of optimal minimax rates of convergence (Theorem 3) and some applications (Proposition 2 and the subsequent examples). All the technical proofs are postponed to Section 4. Stone's theorem In a regression framework, we observe a sample (X i , Y i ), 1 ≤ i ≤ n, of independent copies of (X, Y ) ∈ R k × R d with distribution P . Based on this sample and assuming Y integrable, the goal is to estimate the regression function Local average estimators take the form with W n1 (x), . . . , W nn (x) the local weights at x. The local weights are assumed to be measurable functions of x and X 1 , . . . , X n but not to depend on Y 1 , . . . , Y n , that is For the convenience of notation, the dependency on X 1 , . . . , X n is implicit. In this paper, we focus only on the case of probability weights satisfying Stone's Theorem states the universal consistency of the regression estimate in L p -norm. Theorem 1 (Stone (1977)). Assume the probability weights (3) satisfy the following three conditions: ] for all n ≥ 1 and measurable g : Then, for all p ≥ 1 and (X, Conversely, if Equation (4) holds, then the probability weights must satisfy conditions i) − iii). Remark 1. Stone's theorem is usually stated in dimension d = 1. Since the convergence of random vectorsr n (X) → r(X) in L p is equivalent to convergence in L p of all the components, the extension to the dimension d ≥ 2 is straightforward. Furthermore, more general weights than probability weights can be considered: condition (3) can be dropped and replaced by the weaker assumptions that W ni (X) → 1 in probability. Such general weights will not be considered in the present paper and we therefore stick to probability weights. The reader can refer to Biau and Devroye (2015) for a complete proof of Stone's theorem together with a discussion. Example 1. The following two examples of kernel weights and nearest neighbor weights are the most important ones in the literature and we refer to Györfi et al. (2002) Chapter 5 and 6 respectively for more details. • The kernel weights are defined by if the denominator is nonzero, and 1/n otherwise. Here the bandwidth h n > 0 depends only on the sample size n and the function K : R k → [0, +∞) is called a kernel. In this case, the estimator (1) corresponds to the Nadaraya-Watson estimator of the regression function (Nadaraya, 1964;Watson, 1964). We say that K is a boxed kernel if there are constants R 2 ≥ R 1 > 0 and M 2 ≥ M 1 > 0 such that Theorem 5.1 in Györfi et al. (2002) states that, for a boxed kernel, the kernel weights (5) satisfy conditions i) − iii) of Theorem 1 if and only if h n → 0 and nh k n → +∞ as n → +∞. • The nearest neighbor (NN) weights are defined by where the number of neighbors κ n ∈ {1, . . . , n} depends only on the sample size. 
Recall that the κ n -NN of x within the sample (X i ) 1≤i≤n are obtained by sorting the distances X i − x in increasing order and keeping the κ n points with the smallest distances -as discussed in Györfi et al. (2002) Chapter 6, several rules can be used to break ties such as lexicographic or random tie breaking. Theorem 6.1 in the same reference states that the nearest neighbor weights (6) satisfy conditions i)−iii) of Theorem 1 if and only if κ n → +∞ and κ n /n → 0 as n → +∞. Example 2. Interestingly, some variants of the celebrated Breiman's Random Forest (Breiman, 2001) produce probability weights satisfying the assumptions of Stone's theorem. In Breiman's Random Forest, the splits involve both the covariates and the response variable so that the associated weighs W ni (x) = W ni (x; (X l , Y l ) 1≤l≤n ) are not in the form (2). Scornet (2016) considers two simplified version of infinite random forest where the associated weights W ni (x) do not depend on the response values and satisfy the so call X-property, that is they are in the form (2). For totally non adaptive forests, the trees are grown thanks to a binary splitting rule that does not use the training sample and is totally random; the author shows that the probability weights associated to the infinite forest satisfy the assumptions of Stone's theorem under the condition that the number of leaves grows to infinity at a rate smaller than n and the leaf volume tends to zero in probability (see Theorem 4.1 and its proof). For q-quantile forest, the binary splitting rules involves only the covariates and the author shows that the weights associated to the infinite forest satisfy the assumptions of Stone's theorem provided the subsampling number a n satifies a n → +∞ and a n /n → 0 (see Theorem 5.1 and its proof). Wasserstein spaces We recall the definition and some elementary facts on Wasserstein spaces on R d . More details and further results on optimal transport and Wasserstein spaces can be found in the monograph by Villani (2009), Chapter 6. For p ≥ 1, the Wasserstein space W p (R d ) is defined as the set Borel probability measures on R d having a finite moment of order p, i.e. such that It is endowed with the distance defined, for Q 1 , Q 2 ∈ W p (R d ), by where Π(Q 1 , Q 2 ) denotes the set of measures on R d × R d with margins Q 1 and Q 2 . A couple (Z 1 , Z 2 ) of random variables with distributions Q 1 and Q 2 respectively is called a coupling. The Wasserstein distance is thus the minimal distance Z 1 − Z 2 L p = E[ Z 1 − Z 2 p ] 1/p over all possible couplings. Existence of optimal couplings is ensured since R d is a complete and separable metric space so that the infimum is indeed a minimum. Wasserstein distances are generally difficult to compute, but the case d = 1 is the exception. A simple optimal coupling is provided by the probability inverse transform: for i = 1, 2, let Q i ∈ W p (R), F i denotes its cumulative distribution function and F −1 i its generalized inverse (quantile function). Then, starting from an uniform random variable U ∼ Unif(0, 1), an optimal coupling is given by . Therefore, the Wasserstein distance is explicitly given by When p = 1, a simple change of variable yields 3 Main results Stone's theorem for distributional regression We now present the main result of the paper which is a natural extension of Stone's theorem to the framework of distributional regression. 
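Before the main result, the two ingredients introduced so far, namely the local probability weights of Example 1 and the explicit one-dimensional Wasserstein distance of Equations (9)-(10), can be made concrete in a few lines. The following is only an illustrative sketch: it uses the uniform boxed kernel, breaks nearest-neighbour ties by sort order, and assumes equal sample sizes in the W_p helper so that the optimal coupling simply pairs order statistics.

```python
import numpy as np

def kernel_weights(X: np.ndarray, x: np.ndarray, h: float) -> np.ndarray:
    """Probability weights (5) with the uniform kernel K(u) = 1{||u|| <= 1};
    falls back to 1/n when no sample point lies in the ball B(x, h)."""
    inside = (np.linalg.norm(X - x, axis=1) <= h).astype(float)
    s = inside.sum()
    return inside / s if s > 0 else np.full(len(X), 1.0 / len(X))

def knn_weights(X: np.ndarray, x: np.ndarray, k: int) -> np.ndarray:
    """Nearest-neighbour probability weights (6): 1/k on the k closest points."""
    order = np.argsort(np.linalg.norm(X - x, axis=1))   # ties broken by sort order
    w = np.zeros(len(X))
    w[order[:k]] = 1.0 / k
    return w

def wasserstein_1d(a: np.ndarray, b: np.ndarray, p: float = 1.0) -> float:
    """W_p between two empirical measures on R with equal sample sizes,
    via the quantile coupling of Equation (9) (pair sorted values)."""
    a, b = np.sort(a), np.sort(b)
    return float(np.mean(np.abs(a - b) ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
x0 = np.array([0.5, 0.5])
print(kernel_weights(X, x0, h=0.1).sum())   # 1.0: probability weights sum to one
print(knn_weights(X, x0, k=25).sum())       # 1.0
print(wasserstein_1d(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)))  # close to the mean shift 0.5
```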
Given a distribution (X, Y ) ∼ P on R k × R d , we denote by F the marginal distribution of Y and by F x its conditional distribution given X = x. This conditional distribution can be estimated on a sample (X i , Y i ) 1≤i≤n of independent copies of (X, Y ) by the weighted empirical distribution where δ y denotes the Dirac mass at point y ∈ R d . For probability weights satisfying (3),F n,x is a probability measure and can be viewed as a random element in the complete and separable space W p (R d ). We recall that the Theorem 2. Assume the probability weights satisfy conditions i) − iii) from Theorem 1. Then, for all p ≥ 1 and (X, Conversely, if Equation (12) holds, then the probability weights must satisfy conditions i) − iii). It is worth noticing that so that Theorem 2 implies Theorem 1 in a straightforward way. The proof of Theorem 2 is postponed to Section 4. It first considers the case d = 1 where the Wasserstein distance is explicitly given by formula (9). Then, the results is extended to higher dimension d ≥ 2 thanks to the notion of max-sliced Wasserstein distance (Bayraktar and Guo, 2021) which allows to reduce the convergence of measures on R d to the convergence of their uni-dimensional projections (a precise statement is given in Theorem 4 below). Rates of convergence We next consider rates of convergence in the minimax sense. Note that similar questions and results have been established in Pic et al. (2022), where the second order Cramér's distance was considered, i.e. We focus here on the Wasserstein distance W p (F n,X , F X ) and consider only the case d = 1 and p = 1 which allows the explicit expression (10). The other cases seem harder to analyze and are beyond the scope of the present paper. Our first result considers the error in Wasserstein distance when X = x is fixed. Then, The first term corresponds to an approximation error due to the fact that we use a biased sample to estimate F x . The more regular the model is, the smaller the approximation error is. The second term is an estimation error due to the fact that we use an empirical mean to estimate F x . This estimator error is smaller if the distribution error has a lower dispersion (as is exactly equal to κ so that this quantity is often referred to as the effective sample size and the estimation error is proportional to the square root of the expected reciprocal effective sample size. In view of Proposition 1, we introduce the following classes of functions. The definition of the class together with Proposition 1 entails that the expected error is uniformly bounded on the class D(H, L, M ) by As a consequence, Proposition 1 allows to derive explicit bounds uniformly on D(H, L, M ) for the kernel and nearest neighbor methods from Example 1. For the sake of simplicity, we consider the uniform kernel only. Corollary 1. LetF n,X be given by the kernel method with uniform kernel K(x) = 1 { x ≤1} and weights given by Equation (5). Corollary 2. LetF n,X be given by the nearest neighbor method with weights given by Equation (6) and assume P ∈ D(H, L, M ). Then, wherec k depends only on the dimension k and is defined in Biau and Devroye (2015, Theorem 2.4). One can see that consistency holds -i.e. the expected error tends to 0 as n → +∞ -as soon as h n → 0 and nh k n → +∞ for the kernel method and κ n /n → 0 and κ n → +∞ for the nearest neighbor method. The next theorem provides the optimal minimax rate of convergence on the class D(H, L, M ). 
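To make the estimator (11) concrete before the minimax statement announced above, the following self-contained sketch builds the weighted empirical conditional distribution with nearest-neighbour weights on a synthetic model where the true conditional law is known, and evaluates its W_1 error through the CDF formula (10). The model, sample size and number of neighbours are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k_neighbors = 2000, 50

# Synthetic regression model: Y | X = x ~ N(sin(2*pi*x), 0.3^2), X ~ Unif(0, 1).
X = rng.uniform(size=n)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(n)

# Nearest-neighbour probability weights at the query point x0 (Equation (6)).
x0 = 0.3
order = np.argsort(np.abs(X - x0))
weights = np.zeros(n)
weights[order[:k_neighbors]] = 1.0 / k_neighbors

# Weighted empirical CDF of the estimate (11) and the true conditional CDF on a grid.
z = np.linspace(-3.0, 3.0, 2001)
F_hat = np.array([weights[Y <= t].sum() for t in z])
F_true = stats.norm.cdf(z, loc=np.sin(2 * np.pi * x0), scale=0.3)

# W_1 error via Equation (10): integral over z of |F_hat(z) - F_x(z)| (Riemann sum on the grid).
dz = z[1] - z[0]
w1_error = float(np.sum(np.abs(F_hat - F_true)) * dz)
print(round(w1_error, 3))   # small for large n and a reasonable number of neighbours
```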
We say that two sequences of positive numbers (a n ) and (b n ) have the same rate of convergence, noted a n ≍ b n , if the ratios a n /b n and b n /a n remain bounded as n → +∞. Theorem 3 is the counterpart of Pic et al. (2022, Theorem 1) where the minimax rate of convergence for the second order Cramér's distance has been considered. The strategy of proof is similar: i) we prove a lower bound by considering a suitable class of binary distributions where the error in Wasserstein distance corresponds to an absolute error in point regression for which the minimax lower rate of convergence is known; ii) we check that the upper bound for the kernel and/or nearest neighbor algorithm has the same rate of convergence as the lower bound, which proves that the optimal minimax rate of convergence has been identified. In particular, our proof shows that the kernel method defined in Equation (5) reaches the minimax rate of convergence in any dimension k ≥ 1 with the choice of bandwidth h n ≍ n −1/(2H+k) ; the nearest neighbor method defined in Equation (6) reaches the minimax rate of convergence in any dimension k ≥ 2 with the number of neighbors κ n ≍ n H/(H+k/2) . Remark 2. Our estimate of the minimax rate of convergence holds only for d = p = 1 and we briefly discuss what can be expected in other cases. When p = 1 and d ≥ 2, one may hope to use the strong equivalence between the max-sliced Wasserstein distance and the Wasserstein distance (Bayraktar and Guo, 2021, Theorem 2.3.ii). This requires to estimate the expectation of a supremum over the sphere and this line of research is left for further work. When p > 1, even in dimension d = 1, it seems difficult to obtain bounds for the Wasserstein distance of order p without very strong assumptions. Bobkov and Ledoux (2019) consider the rate of convergence of the empirical distributionF n = 1 n n i=1 δ Y i for an i.i.d. sample Y 1 , . . . , Y n with distribution F on R. A first consistency result (Theorem 2.14) states that E[W p p (F n , F )] → 0 as soon as F has a finite moment of order p ≥ 1. Regarding rates of convergence, they show (Corollary 3.9) that for p = 1 the standard rate of convergence holds, i.e. On the other hand, rate of convergences for higher order p > 1 require the condition where f is the density of the absolutely continuous component of F . They show (Corollary 5.5) that the standard rate holds, i.e. E[W p p (F n , F )] = O(n −p/2 ), if and only if J p (F ) < ∞. However, this condition is very strong: it does not hold for the Gaussian distribution or for distributions with disconnected support. Applications We briefly illustrate Theorem 2 with some applications and examples. In statistics, we commonly face the following generic situation: we are interested in a summary statistic S with real values, e.g. quantiles or tail expectation, and we want to assess the effect of X on Y through S, that is we want to assess S Y |X=x . Assuming that S is well-defined for distributions on R d with a finite moment of order p ≥ 1, it can be seen as a map S : In this generic situation, our extension of Stone's theorem directly implies the following proposition. Recall that M p (µ) is defined in Equation (7). denotes the continuity set of the statistic S : W p (R d ) → R. Then weak consistency holds, i.e. S n,X −→ S Y |X in probability as n → +∞. If furthermore the statistic S admits a bound of the form then consistency holds in L p/q , i.e. . 
For a distribution G on R, we define the associated quantile function It is well-known that the weak convergence G n d → G implies the quantile convergence G −1 n (α) → G −1 (α) at each continuity point α of G −1 . Equivalently, considering P(R) endowed with the weak convergence topology, the α-quantile statistic S α (G) = G −1 (α) is continuous at G as soon as G −1 is continuous at α. In view of this, we let C = {G ∈ P(R) : G −1 continuous on (0, 1)} and assume that the conditional distribution satisfies P(F X ∈ C) = 1. Then weak convergence holds for the conditional quantiles, i.e. F −1 n,X (α) → F −1 X (α) in probability. Note that no integrability condition is needed here because we can apply Proposition 2 on the transformed data (X i ,Ỹ i ) 1≤i≤n , whereỸ i = tan −1 (Y i ) is bounded so that convergence in Wasserstein distance is equivalent to weak convergence. If furthermore Y is p-integrable, then the bound implies the strengthened convergencê Example 4. (tail expectation) The tail expectation above level α ∈ (0, 1) is the risk measure defined for G ∈ W 1 (R) by The name comes from the equivalent definition which holds when G −1 is continuous at α. One can see that so that S α is Lipschitz continuous with respect to the Wasserstein distance W 1 . As a consequence, the conditional tail expectation S α (F x ) can be estimated in a consistent way by the plug-in estimator S α (F n,x ) since Example 5. (probability weighted moment) A similar result holds for the probability weighted moment of order p, q > 0 defined by ( Greenwood et al. (1979)). The name comes from the equivalent definition which holds when G −1 is continuous on (0, 1). One can again check that the statistic S p,q is Lipschitz continuous with respect to the Wasserstein distance W 1 since Example 6. (covariance) We conclude with a simple example in dimension d = 2 where the statistic of interest is the covariance between the two components of Y = (Y 1 , Y 2 ) given X = x. Here, we consider Considering square integrable random vectors Y = (Y 1 , Y 2 ) and Z = (Z 1 , Z 2 ) with distribution G and H respectively, we compute were the last line is a consequence of Cauchy-Schwartz inequality. We have the upper bounds and, choosing an optimal coupling (Y, Z) between G and H, Altogether, we obtain, This proves that S is locally Lipschitz and hence continuous with respect to the distance W 2 . Taking H = δ 0 , we obtain |S(G)| ≤ M 2 (G) 2 and the bound (14) holds with q = 2. Thus Proposition 2 implies that the plug-in estimator is consistent in absolute mean for the conditional covariance Proof of Theorem 2 Proof of Theorem 2 -case d = 1. We first consider the case when Y is uniformly bounded and takes its values in [−M, M ] for some M > 0. Then, it holds and the generalized inverse functions (quantile functions) are bounded in absolute value by M . As a consequence, In this lines, we have used Equations (9) and (10) and the local weight estimator associated with the sample ( An application of Stone's theorem with p = 1 yields E |F n,X (z) − F X (z)| −→ 0, as n → +∞, whence we deduce, by the dominated convergence theorem, The upper bound (15) finally implies We next consider the general case when Y is not necessarily bounded. We define similarly Y M 1 , . . . , Y M n the truncations of Y 1 , . . . , Y n respectively. 
The conditional distribution associated with Y M is The local weight estimation built on the truncated sample iŝ By the triangle inequality, By the preceding result in the bounded case, for any fixed M , the second term converge to 0 as n → +∞. We next focus on the first and third term. For fixed X = x, there is a natural coupling between the distribution F n,x andF M n,x given by (Z 1 , Z 2 ) such that Clearly Z 1 ∼F n,x and Z 2 ∼F M n,x and this coupling provides the upper bound Let us introduce the function g M (x) defined by Using the fact that, conditionally on X 1 , . . . , X n , the random variables Y 1 , . . . , Y n are independent with distribution F X 1 , . . . , F Xn , we deduce The condition i) on the weights in Stone's Theorem then implies Because |Y −Y M | p converges almost surely to 0 as M → +∞ and is bounded by 2 p |Y | p which is integrable, Lebesgue's convergence theorem implies We deduce that the first term satisfies where the convergence is uniform in n. We now consider the third term. Since Y M is obtained from Y by truncation, the distribution functions and quantile functions of Y and Y M are related by As a consequence We deduce where the convergence is uniform in n. We finally combine the three terms. The sum can be made smaller than any ε > 0 by first choosing M large enough so that the first and third terms are smaller than ε/3 and then choosing n large enough so that the second term is smaller than ε/3. This proves Equation (12) and concludes the proof. In order to extend the proof from d = 1 to d ≥ 2, we need the notion of sliced Wasserstein distance, see Bayraktar and Guo (2021) for instance. Let S d−1 = {u ∈ R d : u = 1} be the unit sphere in R d and, for u ∈ R d , let u * : R d → R be the linear form defined by u * (x) = u · x. The projection in direction u of a measure µ on R d is defined as the pushforward µ • u −1 * which is a measure on R. The inequality |u · x| ≤ x implies that µ • u −1 * ∈ W p (R) for all µ ∈ W p (R d ) and u ∈ S d−1 . The sliced and max-sliced Wasserstein distances between µ, ν ∈ W p (R d ) are then defined respectively by where σ denotes the uniform measure on S d−1 and In plain words, the sliced and max-sliced Wasserstein distance are respectively the average and the maximum over all the 1-dimensional Wasserstein distances between the projections of µ and ν. The following result is crucial in our proof. Theorem 4 (Bayraktar and Guo (2021)). For all p ≥ 1, SW p and SW p are distances on W p (R d ) which are equivalent to W p , i.e. for all sequence µ, µ 1 , µ 2 , . . . ∈ W p (R d ) Proof of Theorem 2 -case d ≥ 2. For the sake of clarity, we divide the proof into three steps: 1) we prove that the result holds in max-sliced Wasserstein distance, i.e. E[SW p p (F n,X , F X )] → 0; 2) we deduce that W p (F n,X , F X ) → 0 in probability; 3) we show that the sequence W p p (F n,X , F X ) is uniformly integrable. Points 2) and 3) together imply E[W p p (F n,X , F X )] → 0 as required. Step 1). For all u ∈ S d−1 , the projectionF n,X • u −1 * is the weighted empirical distributionF An application of Theorem 2 to the 1-dimensional sample (Y i · u) i≥1 yields Note indeed that E[|Y | p ] < ∞ implies E[|Y · u| p ] < ∞ and that the conditional laws of Y · u are the pushforward of those of Y , i.e. L(Y · u | X) = F X • u −1 * . We next consider the max-sliced Wasserstein distance. Regularity in the direction u ∈ S d−1 will be useful and we recall that the Wasserstein distance between projections depends on the direction in a Lipschitz way. 
More precisely, according to Bayraktar and Guo (2021, Proposition 2.2), for all µ, ν ∈ W p (R d ) and u, v ∈ S d−1 (recall Equation (7) for the definition of M p (µ), M p (ν)). The sphere S d−1 being compact, for all ε > 0, one can find K ≥ 1 and u 1 , . . . , u K ∈ S d−1 such that the balls B(u i , ε) with centers u i and radius ε cover the sphere. Then, due to the Lipschitz property, the max-sliced Wasserstein distance is controlled by Elevating to the p-th power and taking the expectation, we deduce The first term converges to 0 thanks to Eq. (17), i.e. The second term is controlled by a constant times ε p since (by property i) of the weights) and (by the tower property of conditional expectation). Letting ε → 0, the second term can be made arbitrarily small. We deduce E[SW p p (F n,X , F X )] → 0. Step 2). As a consequence of step 1), SW p (F n,X , F X ) → 0 in probability, or equivalentlyF n,X → F X in probability in the metric space (W p (R d ), SW p ). Theorem 4 implies that the identity mapping is continuous from The continuous mapping theorem implies thatF n,X → F X in probability in the metric space (W p (R d ), W p ). Equivalently, W p (F n,X , F X ) → 0 in probability. Step 3). By the triangle inequality, with δ 0 the Dirac mass at 0. Furthermore, for any µ ∈ W p (R d ), In order to prove the uniform integrability of the left hand side, it is enough to prove that M p p (F X ) is integrable and M p p (F n,X ), n ≥ 1, is uniformly integrable. (18) We have Since the sequence M p p (F n,X ) converges in L 1 , it is uniformly integrable and the claim follows. Proof of Proposition 1, Corollaries 1-2 and Theorem 3 Proof of Proposition 1. The proof of the upper bound relies on a coupling argument. Without loss of generality, we can assume that the Y i 's are generated from uniform random variables U i 's by the inversion method -i.e. we assume that U i , 1 ≤ i ≤ n, are independent identically distributed random variables with uniform distribution on (0, 1) that are furthermore independent from the covariates X i , 1 ≤ i ≤ n and we set Y i = F −1 X i (U i ). Then the sample (X i , Y i ) is i.i.d. with distribution P . In order to compareF n,x and F x , we introduce the random variablesỸ i = F −1 x (U i ) and we definẽ By the triangle inequality, In the right hand side, the first term is interpreted as an approximation error comparing the weighted sample (Y i , W ni (x)) to (Ỹ i , W ni (x)) where thẽ Y i have the target distribution F x . The second term is an estimation error where we use the weighted sample (Ỹ i , W ni (x)) with the correct distribution to estimate F x . We first consider the approximation error. A similar argument as for the proof of Equation (16) implies Introducing the uniform random variables U i 's, we get where the equality relies on Equation (9). Note that this control of the approximation error is very general and could be extended to the Wasserstein distance of order p > 1. We next consider the estimation error and our approach works for p = 1 only. By Equation (10), Applying Fubini's theorem and using the upper bound we deduce Collecting the two terms yields Proposition 1. Proof of Corollary 1. For the kernel algorithm with uniform kernel and weights (5), we denote by the number of points in the ball B(X, h n ) with center X and radius h n . If N n ≥ 1, only the points in B(X, h n ) have a nonzero weight which is equal to 1/N n . If N n = 0, then by convention all the weights are equal to 1/n. 
Thus we deduce because the distance to X for the points with non zero weight can be bounded from above by h n if N n (X) ≥ 1 and by √ k otherwise (note that √ k is the diameter of [0, 1] k ). Next, we use the fact that, conditionally on X = x, N n (x) has a binomial distribution with parameters n and p n (x) = P(X 1 ∈ B(x, h n )). This implies where the first inequality follows from Györfi et al. (2002, Lemma 4.1) and the second one from Györfi et al. (2002, Equation 5.1) where the constant c k = k k/2 can be taken. Similarly, In view of these different estimates, Equation (13) entails Proof of Corollary 2. For the nearest neighbor weights (6), there are exactly κ n non-vanishing weights with value 1/κ n whence Furthermore, the κ n nearest neighbors of X satisfy X i:n (X) − X ≤ X κn:n (X) − X , i = 1, . . . , κ n . In view of this, Equation (13) entails where the last line relies on Jensen's inequality. We conclude thanks to Biau and Devroye (2015, Theorem 2.4) stating that so that property b) of Definition 1 is equivalent to Similarly as in Pic et al. (2022, Lemma 1), one can show that a general prediction with values in R can always be improved (in terms of Wasserstein error) into a binary prediction with values in {0, B}. Indeed, for a given predictionF n,x , the binary predictioñ This simple remark implies that, when considering the minimax risk on the restriction of the class D(H, L, M ) to binary distributions, we can focus on binary predictions. But for binary predictions, showing that the minimax rate of convergence for distributional regression in Wasserstein distance is equal to the minimax rate of convergence for estimating the regression function E[Y |X = x] = Bp(x) in absolute error under the regularity assumption (19) . According to Stone (1980Stone ( , 1982, a lower bound for the minimax risk in L 1 -norm is n −H/(2H+k) (in the first paper, we consider the Bernoulli regression model referred to as Model 1 Example 5 and the L q distance with q = 1). Proof of Theorem 3 (upper bound). For the kernel method, Corollary 1 states that the expected Wasserstein error is upper bounded by Lh H n + M (2 + 1/n)c k (nh k n ) −1/2 + Lk H/2 c k (nh k n ) −1 . Minimizing the sum of the first two terms in the right-hand side with respect to h n leads to h n ∝ n 1/(2H+1) and implies that right-hand side is of order n −H/(2H+k) (the last term is negligible). This matches the minimax lower rate of convergence previously stated previously and proves that the optimal minimax risk is of order n −H/(2H+k) . For the nearest neighbor method, minimizing the upper bound for the expected Wasserstein error from Corollary 2 leads to with a corresponding risk of order whence the nearest neighbor method reaches the optimal rate when k ≥ 2. Proof of Proposition 2 Proof of Proposition 2. The first point follows from the fact that composition by a continuous application respects convergence in probability. Indeed, as the estimatorF n,X converges to F X in probability for the Wasserstein distance W p , S(F n,X ) converges to S(F X ) in probability. In order to prove the consistency in L p/q , it is enough to prove furthermore the uniform integrability of |S(F n,X ) − S(F X )| p/q , n ≥ 1. With the convexity inequality of power functions as p/q ≥ 1, Equation (14) entails |S(F n,X ) − S(F X )| p/q ≤ 2 p/q−1 |S(F n,X )| p/q + |S(F X )| p/q ≤ 2 p/q−1 (aM q p (F n,X ) + b) p/q + (aM q p (F X ) + b) p/q ≤ 2 2(p/q−1) a p/q M p p (F n,X ) + a p/q M p p (F X ) + 2b p/q . 
Equation (18) implies the uniform integrability of |S(F̂_{n,X}) − S(F_X)|^{p/q}, n ≥ 1, which concludes the proof.
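To make the one-dimensional quantities used throughout these proofs concrete, the following Python sketch computes the order-p Wasserstein distance between two weighted empirical distributions on the real line through their quantile functions (using W_p^p = ∫_0^1 |F^{-1}(u) − G^{-1}(u)|^p du), together with a Monte Carlo approximation of the sliced Wasserstein distance SW_p in dimension d obtained by averaging the one-dimensional distances over random directions of the unit sphere. This is a minimal illustrative sketch: the weights, sample sizes and number of projections are arbitrary choices and the functions are not taken from the paper.

```python
import numpy as np

def wasserstein_1d(y1, w1, y2, w2, p=2):
    """Order-p Wasserstein distance between two weighted empirical
    distributions on R, computed from their quantile functions:
    W_p^p = int_0^1 |F1^{-1}(u) - F2^{-1}(u)|^p du."""
    # Sort the atoms and build the cumulative (normalised) weights, i.e. the CDFs.
    o1, o2 = np.argsort(y1), np.argsort(y2)
    y1, w1 = y1[o1], w1[o1] / w1.sum()
    y2, w2 = y2[o2], w2[o2] / w2.sum()
    c1, c2 = np.cumsum(w1), np.cumsum(w2)
    # Merge the jump locations of the two quantile functions; both are
    # piecewise constant between consecutive merged levels.
    u = np.unique(np.concatenate([c1, c2]))
    q1 = y1[np.searchsorted(c1, u, side="left").clip(max=len(y1) - 1)]
    q2 = y2[np.searchsorted(c2, u, side="left").clip(max=len(y2) - 1)]
    du = np.diff(np.concatenate([[0.0], u]))
    return np.sum(du * np.abs(q1 - q2) ** p) ** (1.0 / p)

def sliced_wasserstein(Y1, W1, Y2, W2, p=2, n_dir=200, rng=None):
    """Monte Carlo estimate of SW_p: average over random directions u of
    the 1-d Wasserstein distances between the projections u . Y."""
    rng = np.random.default_rng(rng)
    d = Y1.shape[1]
    total = 0.0
    for _ in range(n_dir):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)          # uniform direction on the unit sphere
        total += wasserstein_1d(Y1 @ u, W1, Y2 @ u, W2, p=p) ** p
    return (total / n_dir) ** (1.0 / p)
```

Replacing the average over the sampled directions by a maximum gives a crude lower bound on the max-sliced distance used in Step 1 of the proof of Theorem 2.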
2023-02-03T06:42:49.666Z
2023-02-02T00:00:00.000
{ "year": 2023, "sha1": "74074df4dae5ee3905e8afa697ac6cda976be254", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "74074df4dae5ee3905e8afa697ac6cda976be254", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
244397065
pes2o/s2orc
v3-fos-license
Ageing policy in Poland during the COVID-19 pandemic Introduction and objective. Poland is engaged in the implementation of activation programmes for seniors at governmental as well as non-governmental levels. Among these programmes may be mentioned, ‘Active+’, ‘Senior+’, ‘Care 75+’, and ‘Senior Caritas’. The COVID-19 pandemic highlighted the need for the inclusion of seniors into social life, and concern about their health. An important challenge for social and ageing policy is the provision of proper standards of care and health protection, especially during an increased sanitary regime. The aim of the study was analysis of the ageing policy strategy and the quality of life of seniors before and during the COVID-19 pandemic. Review methods. The study was conducted by the method of analysis of data in the area of national initiatives concerning activation programmes for seniors implemented during 2020–2021. The starting point was the well-established definition of the quality of life by the WHO. Abbreviated description of the state of knowledge. The analysis performed showed a multitude of factors determining the needs of seniors at the time of the pandemic, which often differed from those observed earlier. The latest studies of the quality of life of the elderly in Poland demonstrated that nearly 60% of respondents assessed their psychological condition as worse than before the pandemic. Another problem was the issue of physical activity, which was limited by more than 62% of seniors, and difficulties with access to health care system services. Summary. It seems necessary to implement forms of assistance which could be adjusted to the changing epidemiological conditions, in order to improve the quality of life of persons who, in the near future, will constitute a considerable percentage INTRODUCTION AND OBJECTIVE The past century has provided people with an increase in life span by more than 30 years. Researchers agree on the fact that a profound social revolution is underway -the Age Revolution [1,2]. The synergistic effect of reducing infant mortality, combined with a low or even negative natality rate and an increase in life expectancy, has resulted in an important reconstruction of the demographic structure of many countries. The ageing process is characterized primarily by a decreased physical and psychological fitness performance. During this time, a clear increase in morbidity is observed, there occur problems with general motor skills, mobility and communication, which frequently lead to regression of social roles and, consequently, exclusion and marginalization of the individual. There is then an increased risk of the occurrence of the phenomenon of age discrimination, so-called ageism [3]. Attention should be paid to the fact that the needs of the elderly clearly correlate with the quality of life perceived by them. According to the definition by the World Health Organization (WHO) of 2002, active ageing is: 'the process of optimizing opportunities for health, participation, and security in order to enhance quality of life as people age' [4]. This definition widely deals with activity biased mainly on the goal which is an improvement in the quality of life of the seniors themselves. In this view of active ageing it was emphasized that activity should not be associated exclusively with occupational activity or physical condition, because it refers to any domain of life: social, economic, cultural, spiritual, or civic [5,6]. According to J. Czapiński and P. 
Błędowski, the factors which to the greatest extent decide about the quality of life of seniors are primarily: state of health, family situation, satisfaction of material needs (including accommodation), and provision of social and family support [7]. These factors become of the utmost importance at senior age. This results from hindered access to cultural and care facilities, as well as the lack of information concerning available health programmes, organized cultural and educational events, or transport and economic difficulties, which make it impossible to participate in this type of initiatives outside the place of residence. These problems may concern especially those who are lonely, non-mobile, or are unable to take advantage of ad hoc assistance from their family or acquaintances [8]. The COVID-19 pandemic highlighted the need for inclusion of seniors into social life, and concern about their health. From 14 March 2020, Polish citizens were told not to leave their houses unless absolutely necessary. Later, from 12 -25 March 2020, the government closed all institutions for seniors activities [9]. Quite often, seniors have difficulties with using the developing information technologies. This concerns, among other things, use of the Internet which offers a wide range of information about events organized in the immediate vicinity, or convenient forms of contact with the closest persons by means of instant messengers [10]. While using the research tools assessing the needs of the seniors, it is possible to delineate the direction of intervention related with an effective educational and activation management. An individual approach to elderly persons seems to be the key to the recognition of their actual needs which significantly determine the quality of life, and may also result in the limitation of the phenomenon of escalation of the phenomenon of gerontophobia occurring in society [11]. It is justifiable to analyze the strategy of ageing policy, with consideration of the period of pandemic, in order to assess the accessibility of the seniors to socio-cultural and health promoting initiatives. REVIEW METHODS Analysis of ageing policy programmes and quality of life of seniors. The study was conducted by the method of data analysis -desk research. This study of the social activation of seniors in Poland focuses on analysis of national initiatives concerning the programmes of social activation of seniors before and during the COVID-19 pandemic. In addition, the quality of life of the elderly during the pandemic was examined. Articles from journals qualified for review included those which provided information concerning interventions biased on recognition of the quality of life of Polish senior citizens at the time of the pandemic. The starting point was the already well-established definition of the quality of life by the WHO, according to which the Quality of Life (QoL) is: 'individuals' perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns' [12]. The indicators of the QoL for the elderly are usually individual factors, such as health, physical activity, socio-economic stability and personal control, as well as network factors, such as social life, family relations, care network and support system [13]. 
As a consequence of such an understanding of the concept of the quality of life, the literature review focused on consideration of key words included in the above-mentioned definition, taking into account seniors living in Poland and period of the coronavirus pandemic in 2020-2021. Articles in Polish and English were sought using the terms: 'elderly', 'senior', 'wellbeing', 'satisfaction with life', 'quality of life', 'Poland', 'COVID-19', 'coronavirus", 'pandemic', 'ageing', 'elderly', 'loneliness', and 'social isolation' (key words searched individually and combined with operator 'and'). Studies contained in this literature review were sought in four databases: Web of Science, Google Scholar, PubMed, and Scopus. The inclusion criteria were: articles about the quality of life, articles assessing the impact of the pandemic on the lives of seniors in Poland, and scientific articles published in 2020-2021. The exclusion criteria were: research carried out in countries other than Poland, age groups under 60, unsystematic reviews, letters, comments and editorials, case reports or case series with less than 30 people. STATE OF KNOWLEDGE Ageing policy programmes. Ageing policy is a part of social policy which places the elderly in the focus of interest. It may be assumed that ageing policy is: 'all intentional actions of public administration bodies at all levels, and other organizations and institutions which pursue initiatives shaping conditions for dignified and healthy ageing'. The task of ageing policy is primarily creating such opportunities for the elderly which would support them in an active, independent ageing, with consideration of understood health, care, or rehabilitation needs, as well as social activation or occupational, educational, or cultural activity [14]. The Active Ageing Index (AAI) serves to define directions for constructing policy for an active and healthy ageing. This index has been developed within a joint project carried out by the European Commission's Directorate General for Employment, Social Affairs and Inclusion, and the European Commission and the United Nations Economic Commission for Europe, in close association with a number of political initiatives functioning at the European level. AAI is an index consisting of 22 individual indicators, and is obtained by combining score results from four domains, i.e. employment, participation in social life, independent, healthy and safe life, opportunities, and conditions favouring an active ageing [15]. Based on the four factors included in the index of the quality of life AAI, it is possible to assess Polish ageing policy. The so-called Madrid International Plan of Action on Ageing is also noteworthy, coordinated by the Unites Nations, in which the most important areas of social actions with respect to the elderly were formulated. While synthetizing the provisions contained in the document, the team of P. Błędowski characterized a network of actions on behalf of seniors which included the following areas: construction of a society friendly for people at any age, education of the elderly in order to compensate for disproportions in the development of individual regions, strengthening intergenerational ties, concern about good state of health and wellbeing of the elderly, and provision of care for persons who are not independent [16]. 
The basic legal Act which regulates the necessity for conducting evaluation of Polish ageing policy is the Act in the matter of the elderly of 11 September 2015, which imposes the duty to monitor and issue an annual report concerning the situation of the elderly by public administration bodies, State organizational entities, and other organizations engaged in shaping the situation of the elderly. As indicated in Article 3 of the above-mentioned Act, the monitoring of the situation of the elderly covers: 'demographic situation, income situation, housing conditions, occupational activity, family situation and structure of households, situation of the disabled, social and citizenship activity, educational and cultural activity, sports and recreational activity, state of health, availability and level of social services, equal treatment and counteracting age discrimination' [17]. The key directions of the national ageing policy have been included in the document Social policy with respect to the elderly until 2030. Safety. Participation. Solidarity [18]. Its goal is to provide complex services assisting the elderly with functioning in the social environment. Social policy with respect to the elderly until 2030, is the first government document delineating specified areas of actions and indicates entities directly responsible for their implementation. It AAEM Annals of Agricultural and Environmental Medicine considers solutions in all the most important spheres of life of the elderly, among others, in the field of safety and health, counteracting loneliness, active participation in social life, as well as adjustment of infrastructure to the needs and capabilities of such persons. The document includes implementation of many actions with respect to seniors within several fields, the most important of which are: shaping the positive perception of ageing in society, participation in social life, health promotion, prevention of diseases, access to diagnostics, treatment and rehabilitation, and education for old age. In addition, the Programme contains proposals for actions addressed directly to dependent seniors. Detailed elaborations in the area of ageing policy on the national level have been developed by the Minister of Family and Social Policy. The currently implemented national strategy for ageing policy aimed at the social activation of seniors focuses on the following three programmes. Multi-annual programme on behalf of the Elderly 'Active+' for 2021-2025. This programme was sanctioned by Resolution No. 167 by the Council of Ministers of 16 November 2020 on the establishment of a multi-annual programme on behalf of the Elderly 'Active+' for 2021-2025. The Programme considers the directions of actions resulting from the above-mentioned document adopted by the Council of Ministers entitled: Social policy with respect to the elderly until 2030. Safety. Participation. Solidarity. The main goal of the Programme 'Active+' is provision for the elderly of dignified, safe and active old age by increasing participation of the elderly persons in all domains of social life [19]. The initiative preceding the Programme 'Active+' was the Government Programme on behalf of Social Activity (Social Activity of the Elderly -ASOS). 
The main goal of this Programme was supporting non-government organizations by providing funding for the projects for the elderly implemented by these organizations in four priority areas: education of the elderly, social activity promoting intra-and inter-generational integration, social participation and social services for the elderly [20]. During the Sejm session of 17 March 2021, the Deputy Minister of Family, Labour, and Social Policy, S. Szwed, evaluated that the Programme still enjoys great interest, and assured that the programme would be continued in 2021-2025 [21,22]. Multi-annual programme 'Senior+' for 2021-2025. The programme was established by Resolution No. 191 of the Council of Ministers of 21 December 2020 on the establishment of a multi-annual programme 'Senior+' for 2021-2025, which is the continuation of the multi-annual programme for 2015-2020. The goal of the Programme is to increase the active participation of seniors in social life by providing financial support for local government units. The funds allocated for this purpose serve for the creation or expansion of infrastructure of the support centres in the local environment, and increasing the number of places in 'Senior+' support centres. Programme 'Care 75+'. The strategic goal of the programme 'Care 75+' for 2021 is improvement in the availability of care services, including specialist care services for persons aged 75 and over who live in communes with a population of up to 60,000 inhabitants. This programme is an element of State social policy in the field of provision for the above-mentioned persons of support and assistance adequate to the needs and capabilities resulting from their age and state of health, within care services, including specialist care services, improvement in the quality of life of seniors, and financial support for communes in the area of provision of care services [23]. Ageing policy is an important element of the functioning of social policy. At the time of the COVID-19 pandemic, an important challenge for social policy, including ageing policy, is the provision of appropriate standards of care and health protection for seniors. During the 15th meeting of the Ageing Policy Committee on 13 April 2021, it was indicated that at the time of the pandemic, the ageing policy programmes had been implemented in 94% of the programmes assumed. In association with prevention, counteraction and control of coronavirus infections, the decision was made to suspend activities under government programmes for social activation of seniors, i.e. Daycare Homes Senior+, and Clubs Senior+. Despite closing care facilities for seniors it was possible to implement programmes included in the projects ASOS and 'Senior+' [24]. The project Support Senior is closely adjusted to the pandemic conditions [25]. This Programme assumes financial support for communes in the area of organization and implementation of the support service, consisting especially in the delivery of shopping containing essentials, including food products, and personal hygiene products for seniors in need. The Programme had been planned for implementation until 31 December 2021. Within the Programme, the Ministry of Family and Social Policy also created the Solidarity Assistance Corps for the Seniors [25]. A helpline dedicated to seniors was also launched; by calling the indicated phone number older persons may ask for help with activities requiring leaving the house which are hindered by the presence of pandemic. 
The National Institute of Freedom -Centre for Civil Society Development, supported by sixteen partners from each province, is responsible for the implementation and financing of the Solidarity Assistance Corps for the Seniors. It is noteworthy that within the project steps have been undertaken enabling work during the pandemic, by adjusting adequate safety measures for both volunteers and target recipients. Home quarantine created the need for including seniors in social life and rendered the need for concern about their health even more visible. Within the national ageing policy at the time of pandemic, the Ministry of Family and Social Policy also supported the prophylactic programme of the Ministry of Health: 'Active senior at home'. The Polish Chamber of Physiotherapists, together with the Ministry of Health, developed a set of safe exercises which, on assumption, would facilitate the undertaking of activity by seniors. The exercises are available on the special website: fizjoterapiaporusza.pl, and YouTube channel of the Ministry of Health. Every day on this website there appears one film with an exercise, together with a commentary by an expert, and guidelines concerning the number of repetitions or pace [26]. with frequent going out may obtain help form a younger age group, which offers, e.g. carrying out basic grocery shopping, while young parents who need temporary care for their children, may use their free time and experience of their older neighbours. Such solutions enable making contact and becoming acquainted with unrelated people at various ages, which may result in a reduction in the scale of age segregation, minimization of the phenomenon of loneliness, or debunking numerous stereotypes concerning individual age groups. The intergenerational project, on assumption, should be of an inclusive character which would develop the capabilities of its participants, thus creating conditions to use multiple resources available to representatives of various age generations. Such a solution would be of immediate benefit for all participants in the project, which is one of the main goals of the concept of intergenerational projects [27,28]. Actions in the field of intergenerational cooperation cover several domains of life, including education focused on mutual learning, acquisition of experiences and skills. On the one hand, this may be unique content passed down traditionally from generation to generation, whereas on the other hand, may be technical skills much needed nowadays, related, e.g., with the operation of a computer or smartphone. In such actions may be engaged, e.g., business entities which are biased on the transfer of knowledge, experiences, and skills between experienced and new employees. An important area of actions are also tourism and recreation. Their scope includes, among other things, organization of hiking and walking tours, allowing the learning of history directly from 'living witnesses', i.e. seniors who have been living in a given area for decades and remember important historical events and changing architecture. The subsequent element of intergenerational projects is the sphere of culture, including customs, becoming acquainted with diverse value systems, regional traditions or local cuisine, the recipes of which are often passed down from generation to generation. An example of such actions may be the joint implementation of various projects in the form of 'performance', or making handicrafts characteristic of a given region. 
In the case of large cities, the issue of proper organization of living quarters is very important and in such a way that they do not create intergenerational barriers. A frequent phenomenon is the accumulation of seniors in older residential areas, while the young generation prefers new buildings. It is therefore crucial to create spheres consisting of mixed generations which would enable mutual assistance and support adequate to the needs of its users. A considerable number of currently implemented intergenerational projects connects the above-mentioned spheres of life. An example of such actions is participation together on a trip, which is finalized by electronic documentation in the form of photographs or films. While the seniors are the guides of the trip, the younger participants offer assistance with downloading, processing or publishing electronic material. Such a project combines tourist, cultural and educational elements [22,23,29]. Intergenerational volunteering is most often based on supporting individuals from other age groups who need constant care. These may be actions provided by institutions or private households. An example of correct practices within the range of intergenerational projects, among others, is Senior Caritas, the goal of which is improvement of the quality of life of seniors and volunteers by activation, support, and creation of intergenerational bonds in local communities. During the pandemic, young volunteers offered their assistance with regular supply of meals for the most needy seniors, provided remote tutoring concerning modern technological solutions, created a daily magazine in the form of a blog, as well as organizing weekly meetings on-line. One of the important long-term assumptions associated with the project is maintaining personal relationships between its participants, especially during the period of social isolation caused by the COVID-19 pandemic [30]. The subsequent intergenerational project in line with the assumptions of ageing policy is the project Socially active. The creators of this concept is the Gdańsk Foundation 'You too can do everything', and an Independent Secondary School in Gdańsk. The goal of the project is the creation and maintenance of intergenerational bonds, based on appropriate mechanisms of intergenerational relations, mutual respect, and creation of a positive image of seniors in society. The achievement of this goal is possible due to systematic workshop actions in small groups, or direct cooperation between participants of various ages. The project is practically cost-free. Adolescents attending secondary school play the role of volunteers, while the duties demanding the greatest engagement are organizational and conceptual issues concerning the topics of meeting which would be of interest to both parties [22]. Internet and telephone consultation point: Students for Seniors is a project launched at the beginning of 2020 as the answer to the seniors' demand for drug and activity consultations, resulting from a limited access to health care facilities or activation centres. Students offer their help by phone or instant messengers, conduct a health interview, give advice (previously consulted with a teacher), or arrange the date of the next consultation. If necessary, they also carry out monitoring of the recommended solutions. 
The variety of reports allow the students to acquire comprehensive knowledge of health problems, due to which the project is in line with the assumptions of the intergenerational concept, and the consultations play the role of practical classes which were suspended at universities in 2020 [31]. The study conducted by the IPSOS among beneficiaries of the project: 'For everyday shopping' shows that 84% of seniors evaluate their contact with a volunteer as necessary or very necessary. In turn, 82% of volunteers achieved high or very high satisfaction with their work. In 2021, the project which consists in year-round assistance for the elderly and lonely, annually covers more than 10,000 seniors and 3,300 volunteers, thus by nearly a half as many as in 2018. This results from the fact of an increasing number of people at old age in society, their difficult financial situation, and the demand for contact and activation [32]. a Social Support Scale, and a Scale of Pandemic-Related Difficulties) also demonstrated that anxiety and uncertainty associated with the spread of the COVID-19 pandemic were a significant predictor of fear and depressive symptoms among persons aged 45-59 and 60-85 [33]. However, researchers engaged in a study conducted among Polish and German populations entitled: 'Psychological coping, possibilities of crisis intervention and post-operative care in facilities and institutions for adults, parents, and children' arrived at different conclusions in the context of the effect of pandemic on the psychological status of seniors. The researchers used validated self-report questionnaires and indicated that persons from the oldest age groups coped with the period of pandemic better than those from younger age groups. The results demonstrated that the higher the assessment of the quality of life, wellbeing, and satisfaction with life among the elderly who participated in the study, may be associated with both their education (the majority reported higher education -61.7%, i.e. more than in the total sample), and financial stability (the majority being entitled to oldage pension). Opposite to younger persons, the pensioners were not threatened with the loss of employment. A higher assessment of the quality of life, wellbeing, and satisfaction with life in the examined sample of the elderly was also related with a lower level of psychological fear. Despite the satisfactory results, the researchers also indicated that it is necessary to implement various forms of assistance in improving psychological resources favouring the quality of life of the elderly, including the reduction of stress, as well as methods based on cognitive behavioural therapy. This is because creative and social actions which maintain affiliation to a social group, support a positive ageing process [34]. Another aspect which determined the quality of life of seniors was the issue of physical activity and using public space at the time of the pandemic. According to the results of research conducted using the Computer-Assisted Web Interview (CAWI) method, more than 62% of seniors limited their physical activity, which unequivocally correlated with the general wellbeing of the elderly. It is noteworthy that the lack of physical activity in combination with hindered access to the health sector may have significantly contributed to the increase in mortality in this age group. 
A study conducted between April-May 2020 concerning the needs and concerns accompanying the elderly in the light of changes in social life caused by the presence of coronavirus COVID-19, confirmed an intensification of the feeling of fear while using public space (55% of seniors declared such concerns) [35]. The feeling of fear while using public space was significantly related with the limitation of social contacts. A high percentage of seniors in the study (60%) declared that the most important problem during the pandemic was lack of direct contact with close persons and other people, and longing for nature during the period of social isolation. The oldest seniors (aged over 80) declared that limitations concerning the possibility of going shopping independently created a psychological load resulting from the isolation. Among other problems, the seniors also indicated longing for an additional occupational activity, contact with physicians, and the possibility to travel [35]. The above-mentioned data were also confirmed by a previously mentioned study in which nearly a half of respondents (46%) reported that during the time of the pandemic they reduced their social contacts, which consequently decreased their quality of life. 34.6% of respondents declared that they experienced considerably greater irritation and psychological anxiety, compared to the period before the pandemic. As a result of the COVID-19 pandemic, health care systems worldwide had to reorganize the majority of their services in order to adjust to many unprecedented circumstances. Despite the fact that some European countries expanded the right to medical services financed from public resources, there occurred new barriers in access to health care. A study aimed at establishing whether during the first months of the pandemic it came to discrimination connected with outcome in the area of unsatisfied health care needs among the elderly in Europe, showed the alarming phenomenon of the limitation of seniors' access to health care services. The results of this study carried out using the data from the Survey on Health, Ageing and Retirement in Europe (SHARE) suggest that Poland, Italy, and Greece should pay attention to the possible problems associated with the refusal of medical care, which occurred during the first months of the pandemic. A delay in making diagnosis and treatment may ultimately be translated into adverse health effects, deteriorated quality of life, or even enhancement of social and economic inequalities in health which, especially among the elderly, in the coming decades will affect an increasing number of the European population. As indicated, health policy should continue to guarantee an equitable access to healthcare, and also focus on areas outside the health sector (education, employment, social protection, etc.) in order to provide the healthy ageing of the population [36]. Within the study carried out in Poland in 2020, the degree to which the elderly used the Internet was also investigated. According to the data, 71.2% of persons aged over 60 did not use the Internet, and only every fifth person used the Internet systematically. Those who used the computer regularly described their psychological condition in more positive terms than those who did not use modern technologies. Despite this, the respondents mentioned that the Internet is an ideal way to solve the problem of loneliness, or developing own competences. 
The pandemic affected seniors from many different aspects: in a direct way, due to the risk of infection and death, and also indirectly -considering the barrier which increased the feeling of loneliness and isolation caused by impossibility to use public spaces [35,37]. While analyzing the results of studies carried out during the pandemic, it should be considered how to modify the parameters of ageing policy in order to adjust them to unpredictable situations. The population in Poland by biological age groups (65 and over), as of 30 June 2021, was 7,175,237. It should be considered that in 2050 the number of seniors in Poland will increase from the present 25% up to nearly 40%, which will make Polish society one of the oldest in Europe [38,39]. An important challenge for social and ageing policy is the provision of proper standards of care and health protection, especially during an increased sanitary regime. Recommendations under the auspices of the Polish Psychiatric Association provide very helpful guidelines in the area of adjustment of ageing policy to the current epidemiological challenges. The presented recommendations refer to seniors with dementia, and indicate that social health is the key to the provision of the needs of seniors during pandemic. These recommendations indicate that maintenance of an efficient network of contacts, participation in meeting and group activities, may be the source of higher cognitive performance of seniors. In addition, in persons with dementia, social relations are an important factor exerting an effect on slowing progression of the disease. This is also associated with the tendency towards intensive activation of this group by the organization of interventions and psychosocial programmes aimed at the reduction of phenomena perceived as negative, such as social isolation and stigmatization [40]. In addition to the above recommendations, it was indicated that a psychoeducational management plan adjusted to the group of seniors is also indispensable [41]. SUMMARY The COVID-19 pandemic allowed a critical evaluation of ageing policy. Analysis of eight available studies conducted during the pandemic showed a decline in the perceived quality of life of seniors, and a deepening social isolation, which are important factors determining the state of psychological, social, spiritual, and somatic state of health of seniors [42]. The review of six governmental and three nongovernmental social activation programmes emphasized the need for the continuation of already existing programmes, as well as implementation of new programmes activating seniors who, from year-to-year, constitute an increasingly larger population. In carrying out the strategy of ageing policy it is important to use the approach of Evidence Based Medicine, Evidence Based Public Health and Evidence Based Health Promotion (EBPH, EBHP) [43,44]. Limited financial resources require consideration of the catalogue of best practices, which will contribute to the reduction of programmes with low efficiency and unreasonable costs. Therefore, it will be possible to provide adequacy of the actions associated with the social activation of seniors undertaken. In the implementation of the ageing policy it is worth noting and considering the actual needs of seniors determining their quality of life, which will enable delineation of the direction of an effective intervention adjusted to the current sociodemographic situation.
2021-11-18T16:12:58.161Z
2021-11-16T00:00:00.000
{ "year": 2021, "sha1": "d64beda5d6a63a22150860314860c0ecce51ecad", "oa_license": "CCBYNC", "oa_url": "http://www.aaem.pl/pdf-143559-69785?filename=Ageing%20policy%20in%20Poland.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b0159f034a763949529a184c75cec19ef71daee2", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
17302368
pes2o/s2orc
v3-fos-license
Possibility of reflectionless tunneling crossed transport at normal metal / superconductor double interfaces We investigate one dimensional models (the Blonder, Tinkham, Klapwijk model and a tight-binding model) of non local transport at normal metal / superconductor (NS) double interfaces. We find a negative elastic cotunneling crossed conductance, strongly enhanced by additional scatterers away from the interfaces, suggesting the possibility of reflectionless tunneling non local transport at double NS interfaces with contacts having a sufficiently small extension. Introduction. -Single electron tunneling in a superconductor is prohibited if the applied bias voltage is smaller than the superconducting gap. However, an electron in the spin-up band can be reflected as a hole in the spin-down band, a phenomenon called Andreev reflection [1]. A charge 2e is transmitted in the superconductor at each Andreev reflection, so that the conductance of a highly transparent normal metal / superconductor (NS) contact is doubled compared to the one of the corresponding NN contact. The equilibrium properties of the superconductor (such as the value of the self-consistent superconducting gap) are modified by a normal electrode connected to the superconductor, a phenomenon called the inverse proximity effect. It is expected that most of the inverse proximity effect takes place on a length a if the area of the contact a 2 is much smaller than the superconducting coherence length ξ [2]. The influence of the inverse proximity effect on transport properties can then be neglected, and a single channel, ballistic, one-dimensional model with a step-function variation of the superconducting gap captures the essential physics of localized interfaces, as shown by Blonder, Tinkham, and Klapwijk (BTK) [2]. Moreover, BTK introduce a repulsive potential at the NS interface, characterized by the dimensionless parameter Z, being the strength of the repulsive potential normalized to the Fermi energy. Transparent interfaces correspond to Z = 0 and tunnel interfaces correspond to Z ≫ 1. Disorder in the normal metal modifies strongly subgap transport at a single normal metal / insulator / superconductor (NIS) interface [3,4]. The conductance can be enhanced by orders of magnitude by constructive interferences in which an electron can "try" the tunneling process a huge number of times [3]. This effect due to scattering by disorder is already present in simple double barrier one-dimensional models. Melsen and Beenakker [5] [22]. The current Ia through electrode "a" is determined in response to a voltage V b on electrode "b", with Va = 0. double junction in one dimension, with, in the BTK language, barrier parameters Z 1 (for the NIN interface) and Z 2 (for the NIS interface). The conductance, averaged over the Fermi oscillations, shows a maximum for a value of Z 1 comparable to Z 2 as Z 1 increases while Z 2 ≫ 1 is fixed [5]. This enhancement of the conductance shows that the double barrier model captures multiple scattering as in a disordered system. We address here similar effects for non local transport in NISIN junctions [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] in which the normal electrode "a" is at potential V a , the electrode "b" is at potential V b , and the superconductor is at potential V S (see Fig. 1). The non local conductance G a,b (V b ) contains the information on how the current I a (V b ) in electrode "a" depends on the voltage V b on electrode "b": [9,13]. 
The superconductor is taken as the reference voltage (V S = 0), and we focus on the case V a = 0. Such devices have been realized in two recent experiments, performed in Karlsruhe by Beckmann et al. with ferromagnets [21], and in Delft by Russo et al. with a NISIN trilayer [22]. A sizeable crossed signal is measured in the latter [22], which is surprising in view of lowest order perturbation theory in the tunnel amplitudes predicting an exact cancellation between the electron-electron and electron-hole channel crossed conductances [13]. We take as a working hypothesis that non local transport with normal metals is described by higher order contributions in perturbation theory in the tunnel amplitudes. These were already evaluated in Ref. [17] within microscopic Green's functions for localized interfaces. This approach was continued in Ref. [18] to account for extended interfaces with a large normal metal phase coherence length, giving rise to weak localization. Our task here is to investigate related issues in simple one dimensional models in the spirit of Ref. [5]. Blonder, Tinkham, Klapwijk (BTK) approach to a NISIN junction. -Let us first consider a one dimensional model of NISIN double interface within the BTK approach [16] (see Fig. 2a). The gap of the superconductor is supposed to have a step-function variation: ∆(z) = ∆θ(z − R/2)θ(R/2 − z), and we suppose δ-function scattering potentials at the interfaces: V (z) = Hδ(z + R/2) + Hδ(z − R/2) [2]. The two-component wave-functions are given by where ψ 1 (z), ψ 2 (z) and ψ 3 (z) correspond respectively to z < −R/2, −R/2 < z < R/2 and R/2 < z, and u 2 0 = 1 − v 2 0 = 1 + i √ ∆ 2 − ω 2 /ω /2 are the BCS coherence factors. We introduce the parameter Z = 2mH/h 2 k F . The unknown coefficients a, b, a ′ , b ′ , c, d, c ′ , d ′ are determined from matching the wave-functions and their derivatives [2]. Assuming R ≫ ξ, we expand a ′ and b ′ to first order in exp (−R/ξ), to find the transmission coefficients at ω = 0. We deduce the first non vanishing term in the large-R, large-Z expansion of the non local transmission: having a sign dominated by elastic cotunneling, in agreement with the Green's function approach in which the first term in expansion of the non local transmission appears at order T 4 exp (−2R/ξ), where the large-Z normal transmission coefficient is proportional to Z −2 [2]. In the case of highly transparent interfaces corresponding to Z = 0, we find no crossed Andreev reflection: a ′ (ω) = 0, in agreement with the Green's function approach in Ref. [17]. The elastic cotunneling transmission coefficient for Z = 0 is given by NINISININ junction. -To describe multiple scattering in the normal electrodes, we consider now two additional scatterers at positions z 1 = −L 1 /2 in the left electrode and z 2 = L 2 /2 in the right electrode, described by the potentials V ′ (z) = H ′ δ(z −z 1 )+H ′ δ(z −z 2 ), and leading to the barrier parameter Z ′ = 2mH ′ /h 2 k F (see Fig. 2b for the definitions of Z and Z ′ ). We average numerically the non local transmission coefficient over the Fermi oscillation phases The variations of the crossed conductance at zero bias as a function of Z ′ for a fixed Z are shown on Fig. 3, as well as the corresponding crossed conductance for the NINISIN junction. The integration over the microscopic Fermi oscillation phases for the latter involves a double integral so that the accuracy is larger than for the NINISININ junction involving a triple integral. As it is visible from the curves (a) -(f) on Fig. 
3 corresponding to an increasing precision in the evaluation of the integrals, the crossed conductance for Z 1 = 0 has not converged to the limiting value obtained for the NINISIN junction, meaning that the change of sign in the crossed conductance at small Z 1 for the NINISININ junction is an artifact related to the lack of precision in the evaluation of the triple integral (the crossed conductance at Z 1 = 0 for the NINISIN junction is indeed negative). The variation of the crossed conductance on Fig. 3 shows a strong enhancement by the additional scatterers, suggestive of reflectionless tunneling, as for a NIS interface [5]. Green's functions. -Now we consider the same one dimensional geometry within Green's functions, and first evaluate the normal and superconducting Green's functions with appropriate boundary conditions. In one dimension, the Nambu Green's function of a superconductor at distance R and energy ω is given bŷ with with s = √ ∆ 2 − ω 2 and ξ(ω) =hv F /s, where T is the bulk hopping amplitude of the one dimensional tight-binding model, and v F the Fermi velocity. The Green's functions on the finite segment [α, β] can be deduced from Eq. (9) by introducing a self-energy that disconnects the chain [23]. With the notations on Fig 4, we find Similar expressions are obtained for g 2,2 α,β andĝ α,α . Fig. 5 shows the Green's functions result for the variation of the crossed conductance of the NINISININ junction as a function of t ′ for a fixed t (see Fig. 2d). The numerical convergence is much faster than for the corresponding BTK model calculation because of the reduced dimension of the matrix to be inverted. We find the same feature as for the BTK model: the crossed conductance is enhanced by additional scatterers, as in reflectionless tunneling. Imposing the same normal conductance in the BTK and in the tight-binding models leads to the relation leading to a good (but not perfect) agreement for the crossed conductance when the tightbinding and BTK results are rescaled on each other. Conclusions. -To conclude, we have investigated simple one-dimensional models consisting of NISIN double interfaces, with additional scatterers away from the two interfaces, in the spirit of Ref. [5]. We find a strong enhancement of the crossed conductance by the additional scatterers, suggesting that non local transport at localized double NIS interfaces is enhanced by orders of magnitude, like in reflectionless tunneling at a single NIS interface. The geometry studied here is such that the Thouless energy associated to the dimension of the structure parallel to the interfaces is larger than the bias voltage. This reflectionless tunneling regime is not expected to correspond to the experiment in Ref. [22] because of the extended interfaces in this experiment, but may be probed in future experiments with disordered normal metals and interfaces of reduced extension. Finally, we also evaluated the crossed conductance as a function of energy, and found no sign change when the energy is increased for the BTK model: the crossed conductance is dominated by elastic cotunneling at all energies.
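As numerical background for these results, the standard single-interface BTK probabilities are straightforward to evaluate. The Python sketch below uses the textbook subgap expressions for a single NIS contact with barrier strength Z, namely A(E) = Δ²/[E² + (Δ² − E²)(1 + 2Z²)²] and B(E) = 1 − A(E) for |E| < Δ, together with the order-of-magnitude scaling ∝ Z⁻⁴ exp(−2R/ξ) of the non-local transmission of the tunnel NISIN junction quoted above; the unit prefactor of the latter and the parameter values are illustrative assumptions rather than results of this work.

```python
import numpy as np

def btk_subgap_A(E, Delta=1.0, Z=1.0):
    """Andreev reflection probability A(E) for |E| < Delta at a single NIS
    interface with dimensionless barrier strength Z (standard BTK result)."""
    E = np.asarray(E, dtype=float)
    return Delta**2 / (E**2 + (Delta**2 - E**2) * (1.0 + 2.0 * Z**2) ** 2)

def nis_subgap_conductance(E, Delta=1.0, Z=1.0):
    """Zero-temperature subgap differential conductance of a single NIS
    contact, normalised to the normal-state value:
    G_NS/G_NN = (1 + A - B)(1 + Z^2) with B = 1 - A below the gap."""
    A = btk_subgap_A(E, Delta, Z)
    return 2.0 * A * (1.0 + Z**2)

def crossed_transmission_scale(Z, R_over_xi):
    """Order-of-magnitude scaling of the elastic-cotunneling dominated
    non-local transmission of a tunnel NISIN junction, ~ Z^{-4} exp(-2R/xi)
    (illustrative prefactor set to one)."""
    return np.exp(-2.0 * R_over_xi) / Z**4

if __name__ == "__main__":
    for Z in (0.0, 0.5, 2.0):
        print(f"Z = {Z}: zero-bias G_NS/G_NN =",
              round(float(nis_subgap_conductance(0.0, Z=Z)), 3))
    print("crossed transmission scale (Z = 2, R = 3 xi):",
          crossed_transmission_scale(2.0, 3.0))
```

At Z = 0 the script recovers the conductance doubling G_NS/G_NN = 2 of a transparent NS contact, while for Z ≫ 1 the subgap conductance is strongly suppressed, in line with the single-interface behaviour recalled in the introduction.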
2014-10-01T00:00:00.000Z
2006-01-19T00:00:00.000
{ "year": 2006, "sha1": "82c5e9dda848d6080b71ee2f8a7ce6aabe23389a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "82c5e9dda848d6080b71ee2f8a7ce6aabe23389a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220335142
pes2o/s2orc
v3-fos-license
Education and wealth inequalities in healthy ageing in eight harmonised cohorts in the ATHLOS consortium: a population-based study in key but few studies have examined factors associated with inequalities in trajectories of health and functioning across countries. The aim of this study was to investigate trajectories of healthy ageing in older men and women (aged ≥ 45 years) and the effect of education and wealth on these trajectories. Methods This population-based study is based on eight longitudinal cohorts from Australia, the USA, Japan, South Korea, Mexico, and Europe harmonised by the EU Ageing Trajectories of Health: Longitudinal Opportunities and Synergies (ATHLOS) consortium. We selected these studies from the repository of 17 ageing studies in the ATHLOS consortium because they reported at least three waves of collected data. We used multilevel modelling to investigate the effect of education and wealth on trajectories of healthy ageing scores, which incorporated 41 items of physical and cognitive functioning with a range between 0 (poor) and 100 (good), after adjustment for age, sex, and cohort study. (SD 10·1) age of were women. The earliest year of baseline data was 1992 and the most recent last follow-up year was 2015. Education and wealth affected baseline scores of healthy ageing but had little effect on the rate of decrease in healthy ageing score thereafter. Compared with those with primary education or less, participants with tertiary education had higher baseline scores (adjusted difference in score of 10∙54 points, 95% CI 10∙31–10∙77). The adjusted difference in healthy ageing score between lowest and highest quintiles of wealth was 8∙98 points (95% CI 8∙74–9∙22). Among the eight cohorts, the strongest inequality gradient for both education and wealth was found in the Health Retirement Study from the USA. Interpretation The apparent difference in baseline healthy ageing scores between those with high versus low education levels and wealth suggests that cumulative disadvantage due to low education and wealth might have largely deteriorated health conditions in early life stages, leading to persistent differences throughout older age, but no further increase in ageing disparity after age 70 years. Future research should adopt a lifecourse approach to investigate mechanisms of health inequalities across education and wealth in different societies. Introduction Due to a decrease in health status and an increase in noncommunicable diseases, disability, and care dependence in later life, the rapid growth of the size of the older population is at present set to increase the burden on already stretched health and social care services. 1 To address the potential effect of population ageing, the concept of healthy ageing, defined by WHO as "the process of developing and maintaining the functional ability that enables wellbeing in older age", 2 has become a key topic in policy planning and health research. Functional ability focuses on having the capabilities that enable all people to meet their basic needs; learn, grow, and make decisions; be mobile; build and maintain relationships; and contribute to society. This concept is made up of the interaction between intrinsic capacity, which combines all of an individual's physical, mental, and psychosocial capacities, and environmental characteristics, which form the context of an individual's life. 
This latest concept highlights the need to focus on positive aspects of ageing and the importance of considering both individual and contextual factors that might support health and functioning in later life. By contrast, traditional concepts in medical research (such as frailty, accumulated deficits, e387 www.thelancet.com/public-health Vol 5 July 2020 or multimorbidity) have generally focused on negative aspects of health and the identification of underlying biological and pathological abnormalities in older people. 3,4 Previous research on health inequalities has investigated a wide range of outcomes such as specific chronic diseases, multimorbidity, frailty and disability, mortality, and life expectancy, [5][6][7] and has consistently shown socioeconomic inequalities in these health outcomes associated with factors such as education, occupational class, and income reported. To provide a nuanced under standing of healthy ageing, an assessment of how the process of maintaining health and functioning differs across socioeconomic groups is important. A systematic review has summarised risk and protective factors related to healthy ageing, 8 and several studies were identified that reported a positive effect of education and income on ageing outcomes, suggesting the existence of health inequalities in later life across different socioeconomic positions. However, existing studies have used diverse measures and analytical methods, leading to problems in study comparability and the assessment of factors that could be responsible for variations across countries. To improve understanding of healthy ageing, the Ageing Trajectories of Health: Longitudinal Opportunities and Synergies (ATHLOS) consortium harmonised a wide range of socio demographic, lifestyle, health, and func tioning factors from 17 ageing cohorts across the world. 9 The research team also dev eloped a measure of healthy ageing that incor porated multiple domains of physical and cognitive functioning and provided an indicator for healthy ageing across time and cohorts. 10 Building on the ATHLOS work of data harmonisation and method development, the aim of this study was to investigate the effect of education and wealth on trajectories of healthy ageing and to examine whether health inequalities across education and wealth vary in diverse older populations. Study design and population In this population-based study, we used data from the ATHLOS project. 9 This project gathered 17 ageing studies across the world and harmonised a wide range of lifestyle, social, environmental, physical, and psychological health factors across the different studies. Documentation of the harmonisation process is available online. To estimate longitudinal changes in health status, for the present analysis we excluded cohorts with only one or two survey waves (nine studies, n=192 114) and focused on the remaining eight cohorts with at least three waves of data (n=141 214). This selection comprised the Australian Longitudinal Study of Ageing (ALSA), 11 Aug 15, 2016, with no restrictions on language, time frame, setting, or characteristics of participants, using terms including "healthy ageing" and other relevant terms such as "successful ageing", "positive ageing", "productive ageing", "optimising ageing", "unimpaired ageing", "robust ageing", and "effective ageing", and their review included all longitudinal cohort studies that used "healthy ageing" as a main outcome measure. 
Because healthy ageing is considered a construct incorporating multiple domains of health, studies were excluded if a single component of healthy ageing (such as cognitive function, quality of life, or wellbeing alone) was used. They assessed risk of bias using the Quality in Prognosis Studies tool. The initial search identified 89 905 publications after removal of duplicates and 65 longitudinal cohort studies met the inclusion criteria. Among the 65 included studies, 25 investigated associations between education and healthy ageing and 14 focused on associations between healthy ageing and income and economic status. The risk of bias was low in these studies. Despite the heterogeneity of measurement methods, high levels of education and income were found to be beneficial to healthy ageing. Although previous studies have suggested these positive associations, the strength of association reported from different cohorts might not be comparable due to variation in measurement methods. Added value of this study Here we used a harmonised dataset of eight longitudinal cohorts from Australia, the USA, Japan, South Korea, Mexico, and Europe. We found low levels of education and wealth to be associated with poorer health at baseline relative to higher levels of education and wealth, but with little effect on the rate of decrease in healthy ageing scores. The gradient of health inequalities at baseline differed across populations and the steepest gradient was found in the study from the USA. Implications of all the available evidence To support maintenance of functional ability and reduce health inequalities in older age, public health policies should incorporate a lifecourse approach and address key determinants and risk factors from early life stages. Future research needs to concentrate on how risk of poor health can accumulate over the lifecourse and investigate how variation in life experience and social, environmental, and cultural factors can affect healthy ageing across different societies. Study (HRS), 14 the Japanese Study of Ageing and Retirement (JSTAR), 15 the Korean Longitudinal Study of Ageing (KLOSA), 16 the Mexican Health and Ageing Study (MHAS), 17 and the Survey of Health Ageing and Retirement in Europe (SHARE). 18 All cohort studies have been approved by the relevant local research ethics committees. This is a secondary data analysis project and so specific ethical approval was not needed. Healthy ageing score Based on the WHO healthy ageing framework, researchers from the ATHLOS consortium reviewed measures of functional ability in the ageing cohorts and identified 41 items related to health, physical, and cognitive functioning. The consortium harmonised these 41 items into binary variables and used item-response theory modelling to generate a common measure for healthy ageing across cohorts. 10 Using the baseline data of all individuals, a two-parameter logistic model was fitted to incorporate all the items and estimate a latent trait score reflecting individual health and functioning level. The estimated parameters from baseline data were applied to follow-up waves and used to generate the scores at different timepoints. The scores were rescaled into a range between 0 and 100; with a higher score indicating better healthy ageing. More detailed information on these scores is in the appendix (pp 4-9). Sociodemographic factors In our analysis we focused on five key factors: age, sex, cohort study, education, and wealth. 
Sociodemographic factors
In our analysis we focused on five key factors: age, sex, cohort study, education, and wealth. To align different baseline ages across cohort studies, we centred age at 70 years (ie, calculated as age minus 70) because one of the cohort studies (ALSA) did not have participants aged 70 years or younger. The datasets harmonised by the ATHLOS consortium provide four levels of educational qualification: less than primary education, primary education, secondary education, and tertiary education. Since some cohort studies had very few or no participants with less than primary education, for our study we combined the first two levels, so the three levels of education we used were low (primary education or less), middle (secondary education), and high (tertiary education). In the ATHLOS harmonised dataset, wealth was a harmonised variable indicating the relative position of individuals within specific cohorts. Appropriate measures of personal or household income and finance (such as property, pension, or insurance) were identified and divided into quintiles within cohorts (quintile 1 [Q1] being the most deprived; quintile 5 [Q5] being the most affluent). In the ATHLOS harmonised dataset, comparable information on wealth was not available in Seniors-ENRICA and therefore for this specific analysis we only included the other seven cohort studies. More detailed information on harmonisation is in the appendix (pp 10-11).

Analytical strategy
Since multilevel modelling can be more flexible when incorporating variation in the timing of follow-up waves across different cohort studies, 19 we used a random-effect model within a multilevel modelling framework to investigate trajectories of healthy ageing scores and examine the effect of sociodemographic factors, accounting for non-independence of repeated measures over time. The model was fitted to estimate fixed and random effects of intercept (baseline scores) and slope (change per year) by years of follow-up, allowing an unstructured covariance matrix of intercept and slope. To examine the effect of baseline age and sex on the trajectories, we included linear and quadratic terms of age and the interaction between age and sex in different models. In the first model (model 1), we investigated the effect of age on baseline score and rate of decrease in score; in the second model, we assessed the effect of sex on baseline score accounting for age (model 2A) and the effect of sex on rate of decrease in score accounting for age (model 2B). According to the descriptive information on healthy ageing scores, the gaps in healthy ageing scores increased in older age groups and varied between men and women (appendix p 8). Thus, we fitted a quadratic term of age and an interaction between age and sex to fully account for their effects on the trajectories. We added a variable indicating cohort study to the model including age and sex to investigate potential variation across the eight cohort studies adjusting for these two basic demographic factors (models 3A and 3B; appendix p 12). We also added two socioeconomic factors, education and wealth, to the adjusted model including age, sex, and study, and examined their effects on intercept (model 4A for education, model 5A for wealth) and slope estimates (model 4B for education and model 5B for wealth). To investigate whether education and wealth might have different effects on healthy ageing across different cohorts and sexes, we further included their interaction terms regressing on intercept and slope.
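As a rough illustration of the random-intercept, random-slope specification described above, the sketch below uses Stata's mixed command (the paper reports that Stata 15.1 was used, but the exact syntax is not given, so the variable names and coding are assumptions): ha_score is the healthy ageing score, years is years since baseline, age is baseline age, female, study, and edu code sex, cohort study, and education level, and id identifies participants.

    * Illustrative sketch; variable names and coding are hypothetical.
    gen age70 = age - 70                              // centre baseline age at 70 years

    * Model 1-style specification: age effects on the baseline score, with a random
    * intercept and slope per participant and an unstructured covariance matrix.
    mixed ha_score c.years c.age70 c.age70#c.age70 ///
        || id: years, covariance(unstructured)

    * Model 4A-style specification: add sex, the age-by-sex interaction, cohort study,
    * and education acting on the baseline score.
    mixed ha_score c.years c.age70 c.age70#c.age70 i.female i.female#c.age70 ///
        i.study i.edu || id: years, covariance(unstructured)

    * Model 4B-style specification: let education also act on the slope through an
    * interaction with years of follow-up (the same pattern applies to models 2B, 3B, and 5B).
    mixed ha_score c.years c.age70 c.age70#c.age70 i.female i.female#c.age70 ///
        i.study i.edu i.edu#c.years || id: years, covariance(unstructured)

    estat ic                                          // Bayesian information criterion for comparing fits

Comparing the resulting BIC values, as in the tables, would then favour the specification with the lowest value.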
We also included both education and wealth in one model to test whether their effects on trajectories of healthy ageing scores were independent. To examine whether specific chronic conditions might explain health inequalities, we identified five types of harmonised chronic diseases (including cardiovascular diseases, hypertension, diabetes, chronic respiratory diseases, and joint disorders) at baseline and added them to the best model including demographic and socioeconomic factors. To investigate whether the effect of education varied across birth cohorts, we included interaction terms between birth cohort and education in the modelling.

We used descriptive statistics to present baseline demographic information for the participants. For the results of multilevel modelling, we present estimated intercepts (baseline scores) and regression coefficients with 95% CIs. To visualise the modelling results, we estimated healthy ageing scores for specific values of age, sex, or cohort study and present the scores by age or years of follow-up. We assessed model fit using the Bayesian information criterion (BIC), 20 with lower values indicating better model fit. To contextualise the inequality findings, we obtained country-level Gini coefficients for populations aged 65 years or older from the Organisation for Economic Co-operation and Development to compare with the score differences across education and relative wealth levels, and present the data in scatter plots.

We did several sensitivity analyses. We added quadratic terms of years of follow-up to the mixed models to investigate potential non-linear trajectories. Maximum likelihood estimation should provide unbiased estimates under the assumption of a missing-at-random mechanism. 21 Since the proportions of missing data on education (n=2789 [2.0%]) and wealth (n=4519 [3.3%]) were small in relation to the whole study population, here we report results of analyses related to education or wealth based on participants with complete information on education or wealth. Loss of statistical power was unlikely to be an issue given the large size of the study sample. We also found the distributions of education and wealth levels to be similar across follow-up waves (appendix p 13). To account for potential missing-not-at-random data due to mortality, we fitted a joint model of longitudinal data on healthy ageing scores and survival data on all-cause mortality, combining multilevel modelling and parametric Weibull survival regression. 22 We present the results of the joint models as hazard ratios (HRs) with 95% CIs.
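For readers less familiar with joint models, one common shared-random-effects formulation consistent with the description above couples the longitudinal submodel for the healthy ageing score with a Weibull hazard for all-cause mortality. The exact parameterisation used in the analysis is not reported, so the following should be read as a schematic sketch rather than the fitted model:

\[
Y_{ij} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\,t_{ij} + \mathbf{x}_i^{\top}\boldsymbol{\gamma} + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0,\sigma^2),
\]
\[
h_i(t) = \lambda p\, t^{\,p-1} \exp\!\big(\mathbf{z}_i^{\top}\boldsymbol{\delta} + \alpha_0 (\beta_0 + b_{0i}) + \alpha_1 (\beta_1 + b_{1i})\big),
\]

where \(Y_{ij}\) is the healthy ageing score of person \(i\) at time \(t_{ij}\), \((b_{0i}, b_{1i})\) are the person-specific random intercept and slope, and \(\lambda\) and \(p\) are the Weibull scale and shape parameters. Under this kind of formulation, \(\exp(\alpha_0)\) is the hazard ratio per one-point difference in an individual's underlying baseline score and \(\exp(\alpha_1)\) is the hazard ratio per one-point-per-year difference in the slope, which is how the mortality hazard ratios reported in the Results can be read.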
We did all analyses using Stata (version 15.1), and all analyses were based on the ATHLOS harmonised dataset (version 1.7).

Role of the funding source
The funder had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had the final responsibility for the decision to submit for publication.

Results
Among the eight cohorts (n=141 214), the earliest studies started in 1992 (table 1). The two largest cohorts, SHARE and HRS, recruited over 30 000 participants, while ALSA and Seniors-ENRICA had fewer than 3000. The length and frequency of follow-up varied across studies. Most studies had follow-up every 2 years for a period of 10 years. The median follow-up period was 6 years (IQR 2-11). ALSA had the most waves of data collection, with 13 waves over two decades, whereas JSTAR had only three waves over 4 years.

The associations between trajectories of healthy ageing scores, age, and sex are shown in table 2. For participants aged 70 years (model 1), the baseline score was estimated to be 68.25 points (95% CI 68.13 to 68.37) and the rate of decrease in score was -1.11 points (95% CI -1.13 to -1.09) per year (figure 1, table 2). Older age (model 1) was associated with a lower intercept, with both the linear (-0.65, 95% CI -0.66 to -0.64) and quadratic (-0.02, -0.02 to -0.02) age terms being negative. Men had higher scores than women (model 2A; estimated difference in score between men and women of 4.36, 4.18 to 4.54) and this difference increased with each 1 year increase in baseline age (0.05, 0.04 to 0.07; figure 1, table 2). The rate of decrease in score was slightly greater in men than in women (-0.02, 95% CI -0.04 to 0.00) but the effect size was small (model 2B). After adjusting for age and sex, variation in intercept and slope was found across cohort studies (figure 1; appendix p 12). Compared with HRS, a higher baseline score was found in JSTAR (estimated score difference between cohorts of 8.38, 95% CI 7.92 to 8.83) and a lower baseline score in MHAS (-2.85, -3.15 to -2.56; appendix p 12). Rates of decrease in score were generally higher in HRS and MHAS than in the other cohort studies. The lowest BIC was found in model 2A and the value decreased further when cohort study was added to the model (table 2).

The associations between trajectories of health status, education, and relative wealth are reported in table 3. Both education and relative wealth had a strong influence on baseline scores but had little effect on the rate of decrease in score after adjusting for age, sex, and cohort study. Participants with a middle level (5.66, 95% CI 5.49-5.83) and a high level of education (10.54, 10.31-10.77) had higher baseline scores than those with a low level of education (60.18, 59.96-60.41). A higher level of wealth was associated with higher baseline scores, and the difference between the least and most affluent quintiles was 8.98 points (95% CI 8.74-9.22). The effect of education and relative wealth on baseline scores varied across cohort studies. ELSA, HRS, MHAS, and SHARE had larger variation across education levels (figure 2). In these cohorts, participants with a middle level of education had higher baseline scores (by approximately 6 points) than those with a low level of education, and the difference increased to nearly 10 points for those with a high level of education. In JSTAR, Seniors-ENRICA, ALSA, and KLOSA, the estimated difference in score between those with high level and low level education was less than 6 points. Although most studies showed increasing baseline scores from the least to the most affluent quintiles, ELSA, HRS, and SHARE had steeper gradients than the other cohort studies (figure 2). Because of small numbers of participants in ALSA in the third and fourth wealth quintiles, the 95% CIs were very wide. When we included both education and relative wealth in one model, the effect sizes remained similar across all cohort studies (appendix p 14). Education and relative wealth had similar effects on the trajectory of healthy ageing scores in both men and women, with very clear gradients from lowest to highest levels of education and relative wealth (appendix p 15).
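To make the model 1 estimates reported above concrete, taking the point estimates at face value (and ignoring sex and cohort terms, which are not in model 1), the implied baseline score for a participant aged 75 years (centred age of 5) is approximately

\[
68.25 + (-0.65)(5) + (-0.02)(5^2) = 68.25 - 3.25 - 0.50 \approx 64.5 \ \text{points},
\]

and the expected score after 4 years of follow-up is roughly \(64.5 - 1.11 \times 4 \approx 60.1\) points. This is only an illustrative back-of-the-envelope calculation from the reported coefficients, not a figure given in the paper.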
Adding chronic conditions to the adjusted model did not reduce the gaps across education and wealth levels (appendix pp 16-18). Whether the effect of education varied across birth cohorts is examined in the appendix (pp 19-20). The scatter plot of Gini coefficients and effect sizes of inequalities across education and wealth did not show clear patterns (appendix p 21).

Table 3: Association between education, wealth, and trajectories of healthy ageing score (adjusted for age, sex, and cohort study). Data are estimated intercepts and regression coefficients from multilevel modelling, with 95% CIs in parentheses, unless otherwise stated. For wealth, cohort-specific quintiles range from least affluent (Q1) to most affluent (Q5). Model 4A modelled the effect of education on baseline score; model 4B modelled the effect of education on baseline score and rate of decrease in score; model 5A modelled the effect of wealth on baseline score; and model 5B modelled the effect of wealth on baseline score and rate of decrease in score. Goodness-of-fit BIC values for these four models were 3 778 816, 3 778 487, 3 703 900, and 3 702 573. BIC=Bayesian information criterion.

The results of sensitivity analyses are provided in the appendix (pp 22-24). Although the quadratic model showed increased goodness of fit, the effect sizes of the quadratic terms were small (appendix p 22). The joint modelling showed a slightly greater rate of decrease in score than our main analysis when mortality data were included in the longitudinal analysis (-1.24, 95% CI -1.25 to -1.22; appendix p 24). A higher baseline score (HR 0.96, 95% CI 0.95 to 0.96) and a slower rate of decrease in score (HR 0.57, 0.55 to 0.58) were associated with a lower risk of mortality after adjusting for age and sex.

Discussion
Using a harmonised dataset of eight ageing cohorts from the USA, the UK, Spain, Europe, Australia, Japan, South Korea, and Mexico, we investigated changes in health and functioning over the ageing process and the potential effect of demographic and socioeconomic factors on health trajectories. Baseline scores and the rate of decrease in healthy ageing scores varied across age groups, by sex, and by cohort study. Education and wealth had a strong effect on baseline scores but almost no influence on the rate of decrease in score. Participants with lower levels of education and wealth generally had lower baseline healthy ageing scores, but the effect sizes differed across cohort studies. Among the eight cohorts, the inequality gradients were most pronounced in the HRS.

The ATHLOS consortium harmonised data from different ageing cohorts across the world and provides a large sample size for longitudinal analysis. Here we focused on eight population-based cohorts and included participants from different settings. Compared with harmonised datasets in the Gateway to Global Aging Data platform, the ATHLOS consortium incorporated additional cohort studies from Australia and Spain, and we generated an indicator for healthy ageing that comprises multiple domains of health and functioning measures across cohorts and follow-up waves. The healthy ageing concept highlights what a person can do in older age rather than what kinds of symptoms and pathological abnormalities might be present in an older patient, which has been the focus of other relevant but distinct concepts such as frailty. 4 Although cognitive and motor reserve also focuses on functioning processes and the neural network, reserve is mainly determined by factors in earlier stages of life. 23
Healthy ageing is considered a process of maintaining functional ability, and interactions between individual and environmental factors can modify this process in later life.

Our study had some limitations. Most studies in the ATHLOS consortium from low-income and middle-income countries had only one or two waves of data and could not be included in this longitudinal analysis. Despite the process of data harmonisation, variation in methods of data collection or management across cohort studies might not have been completely eliminated and should be considered when interpreting the findings. We accounted for variation in follow-up waves with multilevel modelling, but only two studies (HRS and ALSA) had 20 years of follow-up and were used to inform trajectories after 10 years of follow-up in the other studies. The linear models might not sufficiently capture changes in the rate of decrease in score, particularly in the final 10-year follow-up period. However, rates of decrease in score seemed to be similar in the first 10-year period across cohorts and sensitivity analyses showed similar results. Another modelling approach could use country as a multilevel factor; however, only SHARE included multiple countries and so generating specific estimates for each cohort study would be difficult. Measures from different studies might collect slightly different information. Using the example of wealth quintiles, some studies included only a single question on household income while others used a series of questions to collect detailed income and financial information. Given such variation, we were not able to obtain a harmonised variable for absolute wealth and focused only on relative levels. The same issue might also affect items of the healthy ageing score. Variation in measurements might affect associations between education, wealth, and trajectories of healthy ageing. However, we adjusted for cohort study in the analysis and these two socioeconomic factors still had important effects on baseline healthy ageing scores. Although multiple imputation could be used to address missing or unavailable data on education and relative wealth, 24 imputing such a large dataset while accounting for the multilevel data structure was too challenging and computationally intensive for this study. However, the effect sizes that we calculated here are unlikely to be overestimates and the statistical power of our study should not be affected given the large study population. Some societal and historical factors such as health systems, welfare policies, or economic crises in different societies might also affect health throughout the lifetime and explain health inequalities in later life. However, these measures were not available in the harmonised dataset. We attempted to include country-level Gini coefficients; however, no apparent associations with health inequalities across education and relative wealth were observed.

Figure 2: Differences in baseline healthy ageing score across education levels (A) and wealth quintiles (B) by cohort study, adjusted for age and sex. Data points are estimated differences in healthy ageing scores, with whiskers showing 95% CIs. For education, low is primary education or less, middle is secondary education, and high is tertiary education. For wealth, cohort-specific quintiles range from least affluent (Q1) to most affluent (Q5); Seniors-ENRICA is not included because it did not measure wealth.
Education and wealth were found to have little effect on the rate of decrease in healthy ageing scores in older people across different cohorts. This finding corresponds with another analysis of SHARE that identified several indicators of early-life socioeconomic circumstances (eg, number of books at home, housing quality, and overcrowding) and reported their consistent associations with baseline levels, but not rates of decrease, in physical, cognitive, and emotional functioning. 25 Given the lack of effect on rates of decrease in healthy ageing scores, cumulative disadvantage due to low socioeconomic status might have largely worsened health in early life stages and led to persistent differences throughout older age. The differences in baseline healthy ageing scores across education and wealth levels can be clinically relevant, with a strong effect on mortality in later life. A 10-point difference in baseline healthy ageing score was associated with an approximately 33% decreased risk of mortality.

Inequalities in healthy ageing across education and wealth levels were apparent, but the scale of the gradient varied across cohort studies. Wider gaps were found in HRS and ELSA than in the other studies, in which the effect sizes of education and wealth were nearly half the magnitude of those seen in HRS and ELSA. This finding might be related to contextual factors in different societies, such as different absolute levels of income and material resources, variation in how education affects income or job opportunities, and systematic differences in the distribution of education groups across the sexes, birth cohorts, and time. Based on the theory of health inequality, 26 education is widely used as a proxy measure for social position or status, while wealth indicates a relative position on the income ladder. The subtle variation between these two measures might imply different pathways via material factors or behavioural and psychosocial factors. Wealth is likely to be related to material factors, such as financial difficulties, poor housing tenure, and little access to health care and insurance, which might have direct effects on poor health across the lifetime and affect functional ability in older age. 27,28 Education is likely to be related to behavioural and psychological factors, such as smoking, diet, and social support. 26 These factors might also affect physical and mental health and the capability to maintain functional ability in later life. 8 Here we found that both education and relative wealth had independent effects on trajectories of healthy ageing scores across cohort studies, and the effect sizes remained similar when we accounted for chronic conditions. Pathways via material, behavioural, and psychological factors might all be important, and the role of environmental factors in supporting healthy ageing should be explored.

Our findings highlight health inequalities in later life across education and wealth, with effects that appear to vary across different contexts. To identify potential mechanisms that explain the differential effect of education and wealth, a lifecourse approach is needed to understand how risk of poor health can accumulate from early life stages and to investigate key material, behavioural, and psychological factors that generate health inequalities in different societies. 26,29
More longitudinal studies are needed in low-income and middle-income countries to enable the comparison of trajectories of healthy ageing across older populations living in various cultural, social, and environmental contexts. Such comparisons will inform policy planning on addressing determinants of healthy ageing across the world and reducing health inequalities in later life.

Contributors
Y-TW, CD, and AMP developed the original idea and designed the study approach. ASN organised data harmonisation and management. Y-TW did the data analysis. GMT supervised the analyses. All authors contributed to report writing and approved the final manuscript.

Declaration of interests
We declare no competing interests.

Data sharing
Documentation and metadata of the ATHLOS harmonisation process can be accessed online. The original cohort data are publicly available for HRS, ELSA, KLOSA, MHAS, and SHARE, or can be accessed by contacting the study management teams of the studies on reasonable request.
Non-Governmental Health Organizations in Palestine from Israeli Occupation to Palestinian Authority

The paper examined the Palestinian Non-Governmental Organisations (PNGOs) from a historical perspective with focus on their roles, the challenges they faced, and their current status after the establishment of the Palestinian National Authority (PNA) in 1994. It also examined their driving motives, their contributions to the advancement and development of the Palestinian society, and the challenges they faced. The role of health in development is highlighted and an introduction to NGOs in general is offered, with emphasis on their characteristics in comparison to the public and private sectors after the establishment of the PNA in parts of the West Bank and Gaza Strip (WBG). It is clear that PNGOs in general, and health NGOs in particular, played an instrumental role in providing much needed health services, but also in paving the road for the establishment of a Palestinian state. The paper showed that there are three main challenges that faced NGOs, namely political challenges, financial challenges, and the unclear role of the PNA. The first two challenges confronted NGOs during the Israeli occupation and have continued to do so after the establishment of the PNA. The third challenge became relevant only after the establishment of the PNA in parts of the WBG.

Introduction
Palestinian non-governmental organisations (PNGOs) have traditionally played a critical role in affecting most aspects of Palestinians' lives. Not only did PNGOs provide desperately needed services to the marginalised and disadvantaged during the Israeli occupation of the West Bank and Gaza Strip (WBG) after 1967, but they also provided an invaluable forum for mobilising and organising the national liberation movement that led to the establishment of the Palestinian National Authority (PNA) in 1994. In the absence of a Palestinian government in charge of the WBG, PNGOs provided almost 60 percent of the primary health care services, 100 percent of kindergartens, as well as a substantial portion of services in agriculture, informal and university education, welfare, and housing in the WBG (Clark & Balaj, 1996; Claudet, 1996).

When the Palestinian Authority took over a weak, underdeveloped and fragmented public health sector from the Israeli authorities in 1994, it faced the task of health sector reform. In its effort to reform the health sector, the PA sought to impose its own vision of health care. The PA initially seemed successful in establishing its vision of health care. However, its attempt to marginalise and control health organisations in this effort has been challenged.

This paper addresses the Palestinian NGOs, their driving motives, their contributions to the advancement and development of the Palestinian society, and the challenges they faced. The role of health in development is highlighted and an introduction to NGOs in general is offered, with emphasis on their characteristics in comparison to the public and private sectors. The Palestinian NGOs are then examined from a historical perspective, with focus on their roles, the challenges they faced, and their current status after the establishment of the PNA.
Health and Development It has long been recognised that the health of individuals and groups is influenced by interaction with their environment (Bracht et al., 1990).It is increasingly recognised that behaviour and condition of life are important determinants of health and illness (Bracht et al., 1990).Accumulated evidence suggests that there is a mutually influencing relationship between health and development.The health status of the labour force is an important determinant of the economic potential of a country (Cornia et al., 1987).Several studies have suggested that poor levels of health are associated with poor worker productivity (Cornia et al., 1987).Life expectancy is relatively shorter in underdeveloped communities, and morbidity and mortality are higher compared to more affluent communities. Global solidarity and concern grew during the twentieth century, as well as the belief that every world citizen has the right to a minimal, basic level of health and well-being.Over the years this has been expressed in several international agreements such as the Alma Ata Declaration "Health for All by the Year 2000" (World Health Organisation & UNICEF, 1978) and the Declaration of Rights of the Child (United Nations, 1959).An increasing amount of international action is allocated to health enhancement in developing countries.Parties involved in this area include local governments, international agencies like the World Health Organisation (WHO), and increasingly, national and local NGOs.In recent years, national and local NGOs have grown significantly in number and influence (Ball and Dunn, 1996).Prominent reasons for the expanding role of national and local NGOs are the failing collaboration between international agencies and local governments, and the increased emphasis on community organisation and empowerment as strategies for health promotion and social development.The NGO explosion of recent years can in part also be seen as a response to regionalisation and globalisation.According to Ball and Dunn, "Between the global trends towards powerful institutions and individualism, NGOs thus represent a third force, for collectivism…NGOs are an expression of the people's belief that through their own initiative they can better fulfil their potential by working together, and in so doing reduce the opportunity gap which exists between the advantaged and disadvantaged in society" (Ball & Dunn, 1996, p. 9). Non-governmental Organisations (NGOs): An Overview NGOs, in general terms, are defined as private, voluntary agencies which fund, implement or actively support development assistance programmes (Frantz, 1987).NGOs emerge when a group of people organise themselves into a social unit that is established with the explicit objective of achieving certain ends, and formulate rules to govern the relations among the members of organisation and the duties of such members (Ball & Dunn, 1996). 
It is becoming more widely recognised that NGOs constitute a large and growing sector, not only in developing countries, but also in more industrialised and developed ones (Sheldon, 1987).Needless to say, although the specific focus of NGOs differs among countries, their fundamental role of championing the disadvantaged and marginalised segments of the population is common to all NGOs.It is the ubiquitous, complex, and multi-faced nature of NGOs that makes it difficult to give them a precise or unified definition.This is particularly true in the light of the instrumental role that NGOs have played at the grass-roots, local, regional, national, and international levels.The World Bank, for example, defines an NGO as "any group or institution that is independent from government, and that has humanitarian or co-operative rather than commercial objectives" (World Bank, 1996, p. 4).Accordingly, there are two main criteria for classifying an organisation as an NGO.The first is independence from government; otherwise it is no longer an NGO but a public organisation.Second, the entity must not be commercially based and, hence, it cannot operate for the sole purpose of making profit. NGOs are also different in terms of objectives, which differ according to each group's conception of development.Some NGOs direct their action towards clearly defined problems of society, while others act with much broader agendas.Some have objectives which are more charitable, while others shape their efforts in a more political fashion, working with other groups in the pursuit of a common goal (Clark, 1995). The international structure of NGOs also varies; some NGOs are complex and hierarchical and others are simple and informal.Some have salaried staff while others only exist because of their members' militant devotion to a cause (World Bank, 1996). In seeking a definition of these NGOs, Uphoff (1993) classified NGOs as elements of the "third sector" as distinct from the first or "public" sector which is governed by bureaucratic regulations and binding policies, and from the second or "private" sector, which is regulated by market mechanisms and the desire to maximise profit.The "grassroots" or third sector, in comparison, typically relies on volunteer services and community efforts associated with the desire to make social changes. History of the Non-Governmental Organisation Sector in Palestine Throughout the years of the Mandate, Jordanian rule over the West Bank and the Israeli Occupation, Palestinians were governed by others.These regimes and occupying powers had their own agenda, and served interests other than those of Palestinians.(Palestinian Human Rights Information Centre, 1997). In this period the only representative system for Palestinians was through the NGOs.It was the sole outlet for Palestinians to express themselves and serve their own needs.This was a matter of preservation of national identity, especially during the Israeli occupation.NGOs provided services for the most marginalised groups and provided essential services that the Israelis failed to make available to the Palestinians (PASSIA, 1998). 
Starting early in the twentieth century, a number of charitable and relief organisations were established on a family, tribal, or religious basis to provide services to marginalised groups in the various locations within Palestinian society (Claudet, 1996). Successive rulers over Palestine, including the Ottomans, British, Jordanians, Egyptians and Israelis, played a limited and often destructive role in the provision of social services and investment in human capital in Palestine. It is under these regimes that civil society organisations, including NGOs, the private sector, the media, and professional and labour unions, were mobilised to fill parts of the gap resulting from such systematic destructive practices against civil society (Nakhleh, 1990; Sullivan, 1996). These organisations were particularly active during the 1948 Israeli-Arab war and the post-war era, providing assistance to oppressed and displaced Palestinians (Nakhleh, 1990).

In essence, it was the absence of an official Palestinian government in the WBG that necessitated, and in many cases facilitated, the establishment of NGOs to fill the void and provide basic community services to rural and needy urban areas and refugee camps (Samara, 1990; Abdulhadi, 1996). Retrospectively, one can identify at least two major roles that Palestinian NGOs (PNGOs) played during the Israeli occupation period. They contributed to resisting Israeli occupation, and they also provided support to Palestinian society to alleviate the impact of occupation (Abdulhadi, 1996).

The Israeli policy towards the Palestinian Territories was to limit their social and political development and to maintain dependence on the Israeli economy. Part of that policy was the limitation, restriction and control of the Palestinian NGOs. The Israeli authority followed several procedures that allowed it to control the activities of the NGOs: the approval of elected boards, the control of registration of new NGOs, and control of the funding for these NGOs (Palestinian Network for NGOs, Newsletter, 1996). In the case of international bodies that offered funds to the local NGOs, such as UNDP, the Israelis controlled the projects to be funded (Palestinian NGOs Network, 1995c).

The Israeli authorities approved no more than half of the economic development projects submitted by these agencies. As a result, co-ordinating committees were established by international third sector organisations working in the Occupied Territories, such as the Association of International Development Agencies (AIDA) and the Network of European Non-Governmental Organisations in the Occupied Territories (NENGOOT). The main role of these networks was to co-ordinate development efforts, which increased during the uprising (Intifada) (Palestinian NGOs Network, 1995c). Despite these limitations, during the long years of the occupation it was the NGO sector that provided services to the Palestinian communities, mainly in health, education, and agriculture. The NGOs were the only institutions that were able to function successfully during a time when a national authority did not exist. They accumulated long and valuable detailed experience during those years (PASSIA, 1997).
Palestinian NGOs' experience is recognised as one of the richest world-wide with regard to the ability of these organisations to preserve the Palestinian social system and to provide services that government did not provide (Sullivan, 1996; Brynen, 1996). This was particularly true during the Israeli occupation period. It is worth noting that the richness of NGOs' experiences is closely related to the types of challenges and adverse conditions that these NGOs had to endure during the Israeli occupation period, especially during the Intifada, with all the uncertainties arising during that period. The instability that Palestinian NGOs faced during the Intifada included a whole set of political, financial, legal, and internal factors. Their activities were distinguished by a high level of community involvement due to the political nature of these programmes. Palestinian local initiatives led to the formation of organised structures in the form of charitable societies, co-operatives, professional associations, youth clubs, women's groups, unions, syndicates and popular movements and committees, which played a vital role in the resistance during the Intifada.

To assist them in their role as the de facto body responsible for all development-related activities, PNGOs received substantial financial support from local charities, the Palestinian Liberation Organisation (PLO), Arab governments and NGOs, as well as foreign donor states and NGOs (Nakhleh, 1990). This financial and moral support was political in nature. But the local NGOs were alone in working on the ground, and they were directly involved in allocating resources and delivering the services (Bird & Lister, 1997).

The NGOs in the WBG were among the most affected by the externally oriented planning process, due to their great dependence on external sources of funding, which limited the NGOs' role in this process very significantly (Hamami, 1998; Samara, 1998).

Palestinian NGOs under the Palestinian Authority
Since the Oslo accord in 1993, the Palestinian arena has witnessed major changes that have left their mark on all aspects of Palestinian society. The NGO sector is not an exception in this sense (Hamami, 1998).

Before the Oslo agreement and the emergence of the Palestinian Authority, the NGO sector was tied to the PLO political factions, particularly in the case of the grassroots organisations, which represented an extension of the communist party. Funding was mainly secured from the Palestinian National Fund, which received donations from Arab governments and a 5% levy on Palestinian wages in the Gulf countries (Jarbawi, 1995).

Due to the several Israeli restrictions on community and development activities and censorship of NGOs and the public, the personal and political agendas of NGOs were frequently masked by the need for security. The need for transparency was ignored on the pretext of security (Abu Sitta, 1998).

The transfer of some power to authority institutions has been followed by attempts to control and regulate the NGOs through registration procedures and by delegating discretionary authority to government employees to control and even close down NGOs (Al-Barghouthi & Lennock, 1997). This was in the absence of a clear distribution of responsibilities between NGOs and the PNA. PNA ministries' relationships with NGOs depended on the people in charge rather than on clear policies (Al-Barghouthi & Lennock, 1997).
NGOs have taken two initiatives to work towards a coordinated strategy.The first was an NGO coordinating committee for the annual UN conferences on the question of Palestine in 1995.This was a purely political body, to which the political parties appointed their representatives.The second initiative was a network started by a group of NGOs dominated mainly by the political left and by non-Fateh NGOs (Palestinian NGOs Network, 1996). Classification of the Non-Governmental Organisations in Palestine As discussed earlier, the term NGOs is broad and encompasses a whole set of institutions, associations and organisations that constitute the so-called third sector (Uphoff, 1993).The Palestinian definition, however, is slightly different from the typical international one.During the Israeli occupation, the term NGO was given to every organisation or institution that was not controlled by the Israeli occupation and did not seek to make profit while fulfilling one or more of its roles.Many of these organisations were not officially registered, due to the "secretive" nature of many of these NGOs during the Israeli occupation, which meant that the real number of existing organisations was higher than the official number (Hamami, 1998). With the establishment of the PNA, however, this definition started to be changed.From its earliest days, the nascent PNA tried to contain or at least regulate the NGO sector operating under its authority.Thus, while Fatah-affiliated NGOs were completely absorbed by the PNA structure, the leftist secular and religious-oriented NGOs, which are mainly affiliated with the opposition, have taken an opposite path.Many of these "opposition" NGOs tried to remain independent from the potential domination of the PNA.Consequently, tension between the PNA and many of these NGOs has escalated to the extent that both sides have exchanged accusations of misuse of public funds and abuse of power (Al-Barghouthi & Lennock, 1997). Palestinian NGOs may be divided into three main types of organisations, based on Korten's typology namely (1) welfare NGOs, which represent the oldest and simplest type of organisations and are typically apolitical in nature; (2) development NGOs, which represent the second generation of Palestinian NGOs that started to appear in the late 1970s with the intention of building the nucleus of the Palestinian institutions while reversing the "de-development" efforts of the Israeli authority in the WBG and lastly ; (3) empowerment organisations, which emerged after the establishment of the PNA in 1994 in order to build the foundation for a democratic government in PNA-controlled areas (Korten, 1987b).In the following subsections, more details will be given to define these types of NGOs. 
Welfare Organisations
Welfare organisations are the oldest and most established type of NGOs in Palestine. Historically, welfare and charitable organisations were established in response to unmet basic needs at the community level. Traditionally, such organisations were largely supported by wealthy families, as well as by internationally based religious organisations, to further their own interests (Korten, 1987b). In the Palestinian context, welfare organisations can be defined as those NGOs whose ultimate purpose is to help poor and marginalised individuals at the grassroots level. Welfare organisations do not interfere in, or aim at alleviating, any of the causes of poverty. Instead, their efforts are focused on providing basic food items and services to meet the needs of poor people. Another distinguishing feature of welfare NGOs is that their scale of operation is typically small and often does not exceed the municipal or village council level (Hilal, 1995).

Welfare organisations continued their work during the Jordanian and Egyptian rule over the West Bank and the Gaza Strip, respectively, between 1948 and 1967. During that time, no significant changes in the size or importance of these organisations took place. The turning point for Palestinian welfare organisations was the Israeli occupation of the West Bank and Gaza Strip in 1967. During the early years of the occupation, the Israeli authorities focused on tightening their control over the newly occupied areas and setting the stage for containing the territories politically, economically, and even socially, whenever possible. Thus, they did not put much effort into countering charitable and politically non-threatening initiatives by welfare NGOs (Hilal & Al-Malki, 1998).

Furthermore, because of the perceived apolitical nature of the welfare organisations, especially in comparison with the more nationalist groups under the patronage of the PLO, the Israeli authorities felt less threatened by welfare organisations. The Israeli authorities wanted to give the impression that they were targeting only radical and "terrorist" organisations and that if an organisation did not interfere in politics and focused on delivery of basic services, it would not be touched. In addition, with the establishment of the PNA in 1994, charitable organisations, unlike other types of NGOs, continued to operate as normal with no noticeable problems, due to their perceived welfare orientation and apolitical, non-threatening nature (Hilal, 1995).

The Community Rehabilitation Centre for Children (CRCDC) is an example of a welfare organisation. CRCDC was established in 1991 by a group of community activists in Jabalia refugee camp in the Gaza Strip. It is dedicated to providing specialised services and enhancing the acceptance of disabled children in the society through a rehabilitation and community education and awareness programme (PHC, 1995).
Development Organisations
Development organisations in Palestine are relatively new, with the majority established in the last twenty years (Claudet, 1996; Abdulhadi, 1996). The establishment of these organisations was a response to systematic Israeli policies and practices aimed at "de-developing" Palestine and eventually making it an integral part of the Israeli economic structure (Roy, 1995). The Union of Health Work Committees (UHWC) is a good example of a Palestinian development organisation in the health sector. UHWC was established in 1984 by a team of medical and paramedical volunteers who started working in outreach clinics, camps and villages under the slogan "the health service is a right for whoever needs it". UHWC focused its work on helping the marginalised groups in the society, not only by providing them with needed services, but also by working with their constituency to develop their skills to become more active citizens (Al-Barghouthi, 1993b).

To understand the nature of the functions of these organisations, we should first briefly examine the "de-development" process that sparked their establishment. Israeli plans for de-developing the West Bank and Gaza were based on three interrelated strategies: (1) expropriation and dispossession, (2) integration and externalisation, and (3) de-institutionalisation (Roy, 1994).

Expropriation and dispossession aimed at destroying the potential comparative advantage of the West Bank and Gaza through the expropriation of large areas of strategically located parcels of land throughout the WBG under various excuses and pretexts. For a traditionally rural society that relied heavily on agriculture, the negative impacts of these Israeli policies were severe. These conditions deteriorated with Israeli blocking of direct export of Palestinian agricultural products to outside markets, as well as obstacles placed in the way of Palestinian efforts to dig water wells on their lands (Ma'an Development Centre, 2003).

The second approach that Israel utilised to de-develop the Palestinian economy focused on increasing Palestinian dependency on Israel through integration and externalisation. Israeli policies supported the integration of the Palestinian economy and its large, youthful, unskilled labour force into the Israeli economy by opening Israel's doors and paying high daily wages to unskilled and semi-skilled Palestinian workers in Israeli factories, farms and the then booming construction industry (Feiler, 1993).

Moreover, integration and externalisation were eased by de-linking the Palestinian areas from each other and creating three separate Palestinian entities in Jerusalem, the West Bank and the Gaza Strip after the outbreak of the Intifada in 1987. Consequently, it became extremely difficult for people, goods and services to move between these areas without prior permission from the Israeli authority (Latendresse, 1995). The forced physical separation between these three areas encouraged duplication of efforts, lack of co-ordination, and economic inefficiency (Coon, 1992).
As for the third part of the Israeli plans to de-develop the WBG, it was focused on de-institutionalisation of the Palestinian organisations.That is, Israel tried to destroy and prevent the establishment of institutions that could threaten or even challenge the legitimacy of Israeli occupations and control over the WBG.Israel restricted the establishment of new associations, as well as professional, labour and student unions, and certainly, professional and development-oriented NGOs, which were politically affiliated for most part (Birzeit University, 1997). Empowerment Organisations Empowerment, power sharing, lobbying, and advocacy were not well -developed concepts during the Israeli occupation.Understandably, the Israeli authorities that occupied the West Bank and Gaza were never considered by Palestinians in the WBG as their representative organisation or as a legitimate authority that they could lobby.On the contrary, the public's perception was that dealing with Israelis was national and moral treason.This was true for individuals and organisations alike.For most Palestinians, this was justifiable, given the great injustice and inequalities inflicted against the Palestinian people by the Israelis (Al- Barghouthi & Daibes, 1993). The Israeli government never permitted the formation of democratic structures for Palestinians in West Bank and Gaza.On the contrary, the Israeli government and its occupation army tried to suppress and crush Palestinian attempts to exercise any democratic principles such as freedom of speech and assembly (Palestinian Human Rights Information Centre, 1994). For these and other reasons, empowerment NGOs, as part of the democratic evolution of civil society, did not develop until the peace negotiations between Israel and the PLO started in Madrid in late 1991, and the Palestinian state started to materialise. With the establishment of the PNA in 1994 and the subsequent elections for the President and the Legislative Council in 1996, a transitional form of national government was formed.To ensure transparency, good governance and pluralism, empowerment and advocacy NGOs were established (Bishara, 1999).The Women's Affairs Technical Committee (WATC) is an example of the empowerment NGOs.WATC was founded by a group of women activists to ensure greater participation of women in the decision-making process at the local and national levels of government within the PNA (Holt, 1996). As a result, PNA has waged a crusade against these NGOs and has placed them under tight control.Two main reasons may have caused this; the first is that these NGOs are playing the role of marginalised political opposition, which is making the PNA's position weaker than it would have been otherwise.The second cause of tension between the PNA and the PNGOs is financial, as the NGOs are receiving foreign funding for their activities. Health Organisations: Actors and Contributors Planning in Palestine has traditionally been a complex process, typically undertaken by outsiders, and health planning is no exception (Bird & Lister, 1997).To illustrate this complexity, after the Israeli occupation of the WBG in 1967, four main actors started to provide health services, with little if any co-ordination between them.In addition to Israeli government run hospitals and health centres, the United Nations Relief and Workers Agency (UNRWA), the private sector, and NGOs operated their own clinics and health centres (Al- Barghouthi & Giacaman, 1990). 
Because each of the four health providers had its own agenda, competition and hostility among them were keen. Health planning was not co-ordinated for the benefit of the Palestinians. Instead, as in most other sectors, health services were used to promote the provider's own agenda and not the recipients' interests. Consequently, despite the apparent improvement in health conditions since 1967, the level of improvement was not as high as it could have been if planning efforts had been co-ordinated. This was one reason why a World Bank report emphasised that "the root causes of these problems are to be found in a lack of coherent policy and an absence of sector planning" (World Bank, 1993, p. 32).

Unfortunately, due to the existing political circumstances, lack of co-ordination, and unproductive competition, duplication of resources and activities continued even after the establishment of the Ministry of Health of the PNA, which was tasked with providing these services (Al-Barghouthi & Lennock, 1997). Given the obstacles faced by the NGO health sector, however, one should be impressed with the overall primary level health conditions in the WBG, which compare favourably with other countries in the region, perhaps with the exception of Israel, which enjoys remarkable health care (UNDP, 1997; World Bank, 1998). In the following sections, a brief introduction to the four main health providers will be presented, with emphasis on the overall contribution of each provider.

When Israel took control over the Palestinian public health sector in 1967, the Israeli authorities placed health care under the Israeli Civil Administration. Health care was run by a coordinator at the Israeli Ministry of Health and by the Ministry of Defence. Prior to the Israeli occupation of the West Bank and the Gaza Strip in 1967, the public health sector's share in health provision was 75 percent. The decline of the public sector is the result of Israel's policy of 'de-institutionalisation'. The Israeli authorities spared no effort to exploit their military power to enforce their control over the Palestinians, their resources, and their development, as discussed in the previous sections (Al-Barghouthi & Giacaman, 1990). All public offices and institutions were run by Israeli officials, either from the military or from the civil administration departments, which were in charge of the day-to-day operation of the West Bank and Gaza (Coon, 1992; Roy, 1994).

The Israeli administration neither expanded the public health sector under its control nor encouraged the development of a Palestinian health sector. Thus the number of hospitals was not increased in accordance with natural population growth. While new clinics were established by the Israeli government, the number of hospital beds remained unchanged, although the population had more than doubled since the beginning of the occupation (Al-Barghouthi & Lennock, 1997). At the same time, the development of the Palestinian health sector was discouraged. The main mechanism employed by the Israeli authorities was denying licences for the establishment of health institutions or imposing high taxes on them (Al-Barghouthi & Giacaman, 1990).

The Israeli authorities also restricted the access of Palestinians to public health care by introducing a government health insurance scheme. As a result, only insured Palestinians could benefit free of charge from government health services (Al-Barghouthi & Lennock, 1997).
Health policy in the Occupied Territories remained Israel's responsibility. Although the majority of the employees in the public health sector in the Occupied Territories were Palestinians, decision-making was confined to a small number of Israeli army officers responsible for public health (USAID, 1993). The result of Israel's attempt to keep the health sector underdeveloped is reflected in the health indicators, especially in the high infant mortality rate (Heiberg & Ovensen, 1994). The high levels of expenditure on health compared with the low outcomes point to a distortion or imbalance in the health sector. These are predominantly related to the effects of the Israeli occupation on the social, economic and political development of the Occupied Territories, but also to inefficiency in health care delivery (Lennock, 1998).

Public Health Sector under Palestinian Rule
When the PA took over the public health sector in May 1994, it inherited a health care system that suffered from both structural and infrastructural underdevelopment. Furthermore, the health care system was fragmented, and health care was provided, without coordination, by four different health care providers: the UN Relief and Works Agency for Palestine Refugees in the Near East (UNRWA), the private sector, the non-governmental sector, and the government sector previously controlled by Israel. Plans to reform the health care system were initiated in the early 1990s (World Development Report, 1993, quoted in Hecht & Musgrove, 1993).

The vision for health sector reform is reflected in the National Health Plan and the Internal Action Plan. Both were developed by the Palestinian Red Crescent Society (PRCS).

Criticism of the plans by health committees has centred around several issues. It has been claimed that the plans fail to design an overall strategy for the rehabilitation of the health-care system (PHC, 1995). The plans have also been criticised for emphasising the rehabilitation of infrastructure without paying sufficient attention to structural problems such as the absence of protocols and standards and the coordination between different health providers (PHC, 1995). Furthermore, the plans' rationale that secondary and tertiary health care form the foundation for a comprehensive primary care system has been challenged (Palestinian Health Council, 1995; Schnitzer & Roy, 1994; Daibes & Al-Barghouthi, 1996). Moreover, studies on the rehabilitation of health care systems in post-conflict situations have indicated the risk associated with strategies focusing on infrastructural development without considering long-term development objectives (Macrae et al., 1996).

The establishment of the PNA in 1994 brought about some, albeit modest, improvement as far as the government contribution to health is concerned. The main increase in the government's health contribution came through additional donor support for building new hospitals and clinics on the one hand, and, on the other, the transfer of ownership of many clinics and hospitals from NGOs to the Ministry of Health following the complete integration of these NGOs into the PNA system (Al-Barghouthi & Lennock, 1997).
But because of the lack of co-ordination and the competition among the various PNA ministries and institutions, together with the increasing reluctance of donors to pay for the operating expenses of these activities, many of the ministries, and especially the MOH, began to face crises (Al-Barghouthi & Lennock, 1997). In addition, many of the NGOs recognised that donors were slowly starting to shift their funds to the PNA and realised the importance of forging strong relations with the increasingly powerful PNA (World Bank, 1996).
United Nations Relief and Works Agency (UNRWA)
UNRWA was established in 1949 to provide relief and social services, basic education and health care to Palestinians who were displaced as a result of the 1948 war. Its ultimate purpose was to improve the living conditions of Palestinian refugees in and outside the West Bank and Gaza, particularly in Jordan, Lebanon and Syria, until a permanent solution to the Palestinian refugee problem is achieved. UNRWA provides basic health services without charge to almost one million refugees in the West Bank and Gaza through a single hospital and through contracts with governmental and privately owned and run hospitals (Al-Barghouthi & Lennock, 1997). Unlike other health providers in the private and NGO sectors, however, UNRWA has established a formal working relationship with all health providers, including those in the public sector, which was run by the Israelis until 1994 (State of Israel, 1994). Moreover, UNRWA has co-ordinated its efforts with other donors, not only in the health sector, but also in the education and welfare sectors. UNRWA's network of offices and its large staff in the West Bank and Gaza have given it a comparative advantage over other health providers. Based on this, UNRWA has become a natural co-ordinating body for many donors with no offices on the ground, as well as for other donors who did not want to be identified due to the sensitive nature of the assistance.
Private, For-Profit Sector
Private health care providers in the WBG remain under-developed compared to their counterparts in the region. By and large, Al-Barghouthi (1993a) argued that Israeli policies were the main cause of such under-development in the private health sector. Other reasons for the under-development of this sector were the availability of cheaper primary health-care alternatives offered by NGOs and by UNRWA for registered refugees. Furthermore, as economic conditions in the WBG worsened in real terms during the Israeli occupation period, fewer people were able to afford private health services. Consequently, demand for more expensive health care decreased, which made it even more difficult for private health providers to continue operating without major financial losses (Clark & Balaj, 1996; Claudet, 1996).
In addition, because private health providers were, for the most part, small clinics, they could not benefit from the economies of scale, or agglomeration, which a larger and planned system would enjoy (Al-Barghouthi & Daibes, 1993). This situation started to change gradually in 1994, however, as more specialised health centres started to open, and as existing ones expanded into various areas of the WBG. At least two interrelated factors contributed to the sudden expansion of private health-care providers in Palestine after 1994. The first factor is the sizeable need for high-quality specialised, secondary, and tertiary medical treatment in certain areas of Palestine, which was not adequately met by the public, UNRWA or NGO sectors. The second factor is the PNA's efforts to reduce dependency on Israeli and other foreign health centres (Al-Barghouthi & Lennock, 1997). Many health professionals left their jobs in the public and NGO sectors and joined private companies to obtain the highly competitive financial packages offered by the private sector. Consequently, the governmental and NGO sectors faced a real "brain drain" to the private sector, which left the public sector professionally weaker than before. Ultimately, the poor and marginalised segments and regions were the biggest losers, because their access to high-quality health care, which is increasingly monopolised by the private sector, is declining, especially in the light of deteriorating economic conditions in the area (Clark & Balaj, 1996).
Non-Governmental, Not-for-Profit Health Sector
In the past, the Israeli occupation and fluctuations in donors' agendas were the main causes of instability facing the Palestinians and their institutions in the WBG. Until 1994, every person and every institution was forced to be on high alert and to be prepared to deal with whatever action or policy might be taken by the Israeli occupation forces to create new facts on the ground before final status negotiations began (Birzeit University, 1997). In addition, because foreign aid to Palestine has traditionally been politically rather than developmentally oriented, Palestinians had to pay close attention to donors' agendas and to remain able to deal with sudden and unpredictable changes in those agendas (Palestinian Authority, 1997). Unlike health NGOs in other parts of the world, PNGOs have had a busier and more complex agenda that could not be adequately realised solely by providing health services (Abdulhadi, 1996). In addition to providing desperately needed health services, PNGOs had to become involved in national struggles in their own ways. These factors contributed to the uniqueness of Palestinian NGOs (Claudet, 1996).
Major Challenges for Health Planning in Palestine
The challenges that face health NGOs include both external and internal forces. As explained below, these challenges are closely related to the unstable, uncertain and evolving political conditions in Palestine (Hamami, 1998). During full Israeli occupation of the WBG, many NGOs were not permitted to register due to alleged illegal political affiliations, forcing many to operate without a licence or registration (Mohana, 1996).
In dealing with the PNA, which has little government experience, Palestinian health NGOs were forced to face additional challenges. The following sub-sections will examine three of the main challenges that faced NGOs, namely, political challenges, financial challenges and the unclear role of the PNA. The first two challenges faced NGOs during the Israeli occupation and continue to do so after the establishment of the PNA. The third challenge, by definition, became relevant only after the establishment of the PNA in parts of the WBG (Clark & Balaj, 1996).
Political Factors
More than three decades after the fact, it is becoming widely agreed that the Israeli occupation of the WBG in 1967 and the annexation of Jerusalem in 1980 were the most important factors that prevented the Palestinians from living normal lives and establishing their own civil society and public institutions. Israeli policies and military orders were especially tailored to legitimise and normalise the restrictions imposed on the Palestinian people and their nascent institutions. Non-governmental organisations were put in an awkward position (Craissati, 1997). This dilemma forced NGOs to be creative enough to circumvent all the obstacles they faced, and to continue their quest to achieve their missions. In response to this pressure, most of the PNGOs were forced to operate underground, all the while taking on the risk of closure, torture, or jail, and were forced to 'plan' as a way to deal with the surrounding challenges and to meet the unexpected (Al-Barghouthi, 1993b). Although these challenges were relevant during the Israeli occupation, a similar trend continues even after the establishment of the PNA in 1994, albeit to a lesser extent. Because most successful PNGOs were affiliated with leftist political parties, such as the Palestinian Communist Party (now known as the Palestinian People's Party), or with religious parties such as the Islamic Resistance Movement (known as Hamas), tension between these NGOs and the PNA grew significantly (Abu-Amr, 1997).
Two main reasons may be identified for the escalation of the tension between the PNA and the NGOs. The first reason is that the NGOs continued to play a political role that was not in harmony with the PNA's agenda (Claudet, 1996). The second reason for the tension is the competition among the NGOs themselves on the one hand, and between the NGOs and the PNA on the other, over donors' assistance. Competition became particularly keen in light of the strong relationships that leaders of various NGOs have forged with donors and the stronger capacities of NGOs to attract donors' assistance. A second way in which donors could weaken ties among NGOs was by giving funds only to established or strong NGOs, which further widened the gap between established NGOs and disadvantaged ones (Sourani, 1996; Silsby, 1996).
Despite the fact that the signing of the Oslo accords allowed the Palestinian leadership to impose its authority on certain aspects of people's lives, the nature of the accords prevented any improvement in the economic or social conditions of Palestinian residents of the WBG. On the contrary, statistics show that there was a general decline in most economic and social indicators after 1992 (UNSCOT, 1998, p. 19). With the establishment of the PNA, most donors started to shift a large percentage of their funding from the traditional NGOs to PNA ministries and institutions. The changing situation forced many NGOs to close their operations or reduce their size to cope with the financial problems.
In mid-1997, the first report on PNA performance by the State Comptroller's Office, which made headlines worldwide, exposed substantial incompetence, mismanagement and misuse of funds by members of the PNA (Sayigh & Shikaki, 1999). As a result, many donors started to re-think their positions, and some began to re-channel funding to NGOs, at least for specific projects. Due to political pressure from the PNA and the donors' own interest in having a stable PNA, however, few such changes were made. Therefore, NGOs still suffer from these politically motivated factors, and more of them are closing operations. As an official of one donor agency said, "Only the strong, the slick and well-connected organisations will survive under such a competitive environment" (personal communication from a major grant donor, interview with author, Gaza).
Limitation of Funding and the Donors' Perspective
During the Israeli occupation of the West Bank and Gaza, assistance to Palestinians came from a variety of sources, including local donations and fees, the PLO, Palestinian expatriates, Arab governmental and non-governmental sources, American and European sources (mainly through foreign NGOs based in the respective countries), and, finally, multilateral agencies such as UNRWA and its subsidiary organisations. The peak years for PNGOs were between 1990 and 1992, during which time they received 170-240 million U.S. dollars per year. It has been estimated that 70-80 percent of the total funding was received by only 30-40 NGOs (Clark & Balaj, 1996). Although the Intifada did not directly influence the work of health NGOs, it had a positive impact on the environment surrounding the organisations, and consequently on the organisations themselves, as Arab funding sources were available even to smaller organisations.
The transfer of authority was accompanied by an increase in the engagement of the international community and its support to the PNA, since the peace process was possible only with the help of this community. The goal of large international donors, both bilateral and multilateral, was to strengthen the PNA, so large amounts of funding were pledged to support it. Some of these donors, such as the European Union, had in the past channelled support to the Palestinians through NGOs because they had networks and links in their communities. The transfer of authority meant a shift in funding from NGOs to the PNA, which was thought to be the appropriate structure for carrying out vital services for the Palestinian population (Abu-Sitta, 1998).
As discussed earlier, assistance to PNGOs suffered a sharp decline after 1994 because most assistance was channelled to the PNA and its institutions. It is for this reason that a World Bank report examined the 'financial crisis' faced by PNGOs and decided to establish an NGO trust fund to support PNGOs in overcoming the crisis (Claudet, 1996). Another way of dealing with this crisis came from the PNGOs themselves, as many of them started to establish profit-making operations within their organisations to cover part of their costs and potentially to reduce dependency on external sources of funding over the long term (Al-Barghouthi & Lennock, 1997).
In spite of the important role played by these NGOs, it is worth noting that most assistance was political in nature, and it is only in the last decade that developmental thinking started to influence PNGOs' actions. For example, before the Iraqi invasion of Kuwait in 1991, Kuwait was one of the largest sources of assistance to Palestinians in the West Bank and Gaza. After the Iraqi invasion, Kuwait stopped its financial support of the Palestinians and immediately terminated work contracts with the vast majority of Palestinians working there (Al-Barghouthi & Gene, 1997).
As the Palestinian Ministry of Health becomes stronger and more structured, it is able to start new activities to cover most needs. In addition to, and perhaps because of, the shifting of financial resources from NGOs to the PNA, NGOs are increasingly forced to lay off workers and to close certain branches. This has created a high staff turnover at these NGOs, due to their inability to pay salaries for their staff and the overall uncertainty of their future (Palestine, Ministry of Health, 1998). In addition, another requirement imposed on NGOs by the PNA was to acknowledge the right of the PNA to monitor their income.
Ambiguity of the Palestinian Authority's Vision
Since its establishment in 1994, the nascent PNA has had to deal with a whole set of fundamental and challenging issues that have direct impacts on Palestinians' present and future. The PNA has undertaken a lengthy and painstaking negotiation process with the Israelis on issues ranging from the withdrawal of Israeli soldiers from certain areas to determining the exact type of beans that Palestinians are permitted to export. In addition, the PNA, as the acting government in the WBG, has had to establish a police force to ensure the safety and security of its citizens and to address the PNA's security commitments with the Israelis. Furthermore, the PNA has had to deal with a deteriorating services infrastructure and attempt to improve it in preparation for Palestinian statehood in the near future (Bird & Lister, 1997). The inexperience of the PNA, combined with the politically motivated growth of the work force in the PNA-created public sector, has contributed to the lack of focus and the narrowness of the PNA's vision. It has become hard, if not impossible, for anyone, including well-connected leaders, to anticipate the PNA's position on any matter of interest (Sayigh & Shikaki, 1999).
In addition, the absence of a legal framework to govern and regulate the NGO sector has made it more difficult for these NGOs to operate and plan. Under these ambiguous circumstances, many NGOs have had to work without licences, as they did during the Israeli occupation, and to reduce or even close down their operations completely (Al-Barghouthi & Lennock, 1997). Several officials highlighted the importance of legal codes for regulating the relationship between NGOs and the PNA. Given the current social and political circumstances, it is unknown if and when these circumstances will change.
Summary
This paper has presented information on the role of health NGOs, in general and in Palestine in particular.
Health is widely recognised to be related to development and, for this reason, NGOs are increasingly stepping in to fill gaps in government provision, as well as to empower local communities. NGOs have varying affiliations and objectives, but they have in common an independence from government and a non-profit orientation. With their advantages of practical, relevant grassroots experience and greater flexibility than governments, they play a valuable role which attracts a large proportion of international aid and, increasingly, they are seen by governments as cooperative partners in service provision.
In Palestine, NGOs are an important focus of local representation, in a context where most social service provision has been governed by the agendas and interests of successive occupying powers. Nevertheless, they are constrained by the political and legal environment. They receive financial and moral support from a variety of sources, local, regional and international, but this often has political strings attached. They also have a limited role to play because of their political affiliations. This situation has extended into the recent period under the PNA, which first saw the NGOs as competitors and tried to regulate and control them, although policies have often been unclear or inconsistently applied. Empowerment-oriented NGOs, in particular, have had a difficult relationship with the authority.
Health care provision in Palestine has been seriously disrupted by the occupation, both by policies limiting the NGOs' scope of action and by the destruction of infrastructure. The PNA, in the areas where it assumed control, inherited a weak public health system. Its response was to focus on infrastructure rehabilitation; however, primary health care was comparatively neglected, and serious distortions remain in service provision, for example, in the disparity between urban and rural areas. Moreover, the public health sector was left with the less successful and less experienced doctors, when their more successful counterparts opened private clinics. In this context, a major role of the NGOs has been to fill unmet needs in the health sector, for example, providing services for marginalised population groups and regions. Other roles include implementing the political agenda of the PNA and the affiliated bodies, and establishing the foundation for Palestinian statehood by building basic health infrastructure and preparing a cadre of professionals.
Palestinian health NGOs have faced a variety of challenges: political pressure from Israel, the PNA, and donor countries and organisations; financial challenges, due to heavy reliance on fluctuating outside aid; and organisational challenges, related to the ambiguity of the PNA's vision and the changing priorities of influential bodies, making organisational objectives difficult to establish and sustain.
7.1 Public Health Sector Care
7.1.1 Public Health Sector under Israeli Occupation
Before the Israeli occupation in 1967, health care in the West Bank and the Gaza Strip was provided by the UN Relief and Works Agency for Palestine Refugees in the Near East (UNRWA), the private sector, charitable organisations and the government health sector, i.e. the Egyptian government in the Gaza Strip and the Jordanian one in the West Bank (Al-Barghouthi & Giacaman, 1990).
ForageGrassBase: molecular resource for the forage grass meadow fescue (Festuca pratensis Huds.)
Abstract
Meadow fescue (Festuca pratensis Huds.) is one of the most important forage grasses in temperate regions. It is a diploid (2n = 14) outbreeding species that belongs to the genus Festuca. Together with Lolium perenne, these are the most important genera of forage grasses. Meadow fescue has very high yield quality with good winter survival and persistency. However, extensive genomic resources for meadow fescue have not become available so far. To address this lack of comprehensive publicly available datasets, we have developed functionally annotated draft genome sequences of two meadow fescue genotypes, 'HF7/2' and 'B14/16', and constructed the platform ForageGrassBase, available at http://foragegrass.org/, for data visualization, download and querying. This is the first open-access platform that provides extensive genomic resources related to this forage grass species. The current database provides the most up-to-date draft genome sequence along with structural and functional annotations for genes that can be accessed using the Genome Browser (GBrowse), along with comparative genomic alignments to the Arabidopsis, L. perenne, barley, rice, Brachypodium and maize genomes. We have also integrated the homology search tool BLAST for users to analyze their own data. Combined, GBrowse, BLAST and downloadable data give user-friendly access to meadow fescue genomic resources. To our knowledge, ForageGrassBase is the first genome database dedicated to forage grasses. The current forage grass database provides valuable resources for a range of research fields related to meadow fescue and other forage crop species, as well as for plant research communities in general. The genome database can be accessed at http://foragegrass.org.
Introduction
Grasslands cover 36% of the earth's surface, and they are important as feed sources and pastures for livestock (1). Among several forage crops, meadow fescue is one of the most important forage grass species in temperate regions of the world (2). Meadow fescues in general have better adaptation to winter survival, whereas the closely related perennial ryegrass (Lolium perenne L.) has better nutritive value with high yield quality but lacks persistency and adaptation to winter survival. The Lolium-Festuca species complex is useful in plant breeding, since it is possible to make intergeneric hybrids (Festulolium) by combining Lolium and Festuca genomes (3). Thus, the complementation of traits in Festulolium hybrids for developing novel cultivars with improved quality and adaptation to winter survival is crucial for sustainable forage production. However, only modest genomic resources have been developed for meadow fescue compared with other grass species like perennial ryegrass (4,5). In order to develop molecular tools that might enhance the development of better Festulolium hybrids, we initiated and have now developed high-quality genomic resources for meadow fescue, taking advantage of the close comparative relationships with other species such as Arabidopsis, perennial ryegrass (5), barley (Hordeum vulgare), rice (Oryza sativa), Brachypodium distachyon and maize (Zea mays).
This brings the published resources for meadow fescue up to the level available for other plant species in databases such as Gramene (http://www.gramene.org/), PlantGDB (http://www.plantgdb.org/), Oryzabase (https://shigen.nig.ac.jp/rice/oryzabase/), the Arabidopsis genome database (https://www.arabidopsis.org/) and the Medicago truncatula genome database (http://www.medicagogenome.org/). Compared with Gramene, more genetic resources such as gene expression, annotation and comparative genomics are available in databases specifically developed for individual plant species. Hence, we took the initiative to develop ForageGrassBase, dedicated only to forage grass genomics, where researchers and breeders can readily get access to all the necessary information. High-quality annotated Festuca genomes are now available. As a first step, the genome sequences and genome annotations for two meadow fescue genotypes are made available through ForageGrassBase (http://foragegrass.org). ForageGrassBase was developed to make these substantial amounts of genomic data accessible through visualizations and analytic tools in a common framework. Integrating resources for other forage grass species into ForageGrassBase is in progress, and new forage grass genomes will be added when they become available.
Materials and Methods
Bootstrap (HTML, CSS), Javascript, PHP and Python were used to develop ForageGrassBase. The Generic Genome Browser (GBrowse) (6) and BLAST (7) were also installed. R packages are used for visualization of BLAST results. The database was organized in a similar way as we developed and described in SalmoBase (8). De novo sequencing of the meadow fescue genomes was performed using Illumina mate pair sequencing, and assembly was performed with the SOAPdenovo2 assembler (9). Furthermore, gene annotation was performed by in-house developed annotation pipelines and Python scripts (Supplementary files). Briefly, Illumina reads were mapped to the assembly using STAR v2.3.1z12 (9). Cufflinks v2.2.180 (10) was used to assemble the reads into transcript models for all alignments. Gene models were tested by performing open reading frame (ORF) prediction using TransDecoder (https://github.com/TransDecoder/TransDecoder), using both the pfamA and pfamB (11) databases for homology searches, a minimum length of 30 amino acids for ORFs without pfam support, and BLASTP (12) analysis (e-value < 1e-10) for all predicted proteins.
Genome browser
GBrowse is simple and one of the most widely used genome browsers for visualization of genomes. We installed GBrowse to visualize and share genomic data of meadow fescue (Figure 1). Though two browsers are available for the closely related perennial ryegrass genome (4,5), the gene annotation and comparative genomics tracks are missing, and moreover, they are not integrated with other grass genomes. Currently, ForageGrassBase contains molecular data of two meadow fescue genotypes: Festuca HF2/7, a Norwegian genotype originating from a population selected for high frost tolerance, and a Yugoslavian genotype, B14/16, which is used by our group to develop a mapping family for linkage map construction (13). Further, a comparative genome analysis was performed against Arabidopsis, perennial ryegrass (5), barley, Brachypodium, rice and maize. These comparative genomics tracks, consisting of gene names and chromosome positions, were added to the genome browsers (Figure 1).
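As a concrete illustration of how the gene annotation described above could be used, the short Python sketch below counts gene models per scaffold from a GFF3 annotation file of the kind made available for download (see Database access and feedback below). It is only a sketch: the file name is a placeholder, and only the standard nine-column GFF3 layout is assumed.

    # Sketch: summarise gene models per scaffold from a downloaded GFF3 annotation.
    # "fescue_annotation.gff3" is a placeholder file name, not an actual database file.
    from collections import Counter

    genes_per_scaffold = Counter()
    with open("fescue_annotation.gff3") as gff:
        for line in gff:
            if line.startswith("#"):          # skip GFF3 header/comment lines
                continue
            cols = line.rstrip("\n").split("\t")
            if len(cols) == 9 and cols[2] == "gene":   # seqid, source, type, start, end, ...
                genes_per_scaffold[cols[0]] += 1

    for scaffold, n in genes_per_scaffold.most_common(10):
        print(scaffold, n)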
More data and tracks will be added in the near future for other economically important forage grass species like timothy (Phleum pratense) to expand the forage grass genomics resources in ForageGrassBase.
BLAST server
We have installed a BLAST server to search for homologous regions in the meadow fescue genome. Users with unknown sequences can use the BLAST search to find the homologous regions in Festuca and their corresponding homologous genes and physical locations in Arabidopsis, perennial ryegrass, Brachypodium, barley, rice and maize (Figure 2A). After the search, our algorithm chooses the best hits and plots them in a unique way. Briefly, our BLAST output formatting algorithm combines all the hits for the query sequence on each target, displays a horizontal bar for each hit based on the length of the hit, and assigns color codes based on the similarity. In this way, it is easier to interpret the results based on similarity and query coverage. BLAST results are connected to GBrowse, so users can view the homologous regions, nearby genes and other genomic features in all these species (Figure 2B).
Future plans and integrations
ForageGrassBase was developed based on high interest in the molecular data of meadow fescue. Genetic variation and gene expression data will be added using the Genetic variation browser (GVBrowser) and Gene expression browser (GEBrowser) in the very near future. Due to rapid developments and lower costs of high-throughput sequencing technologies, we expect more forage grass genome sequence data to become available soon, and these resources and new tools will be added under ForageGrassBase.
Database access and feedback
All the data used in developing this database are available through the 'Download' menu in ForageGrassBase. Genome sequences and gene annotation files for the two Festuca genotypes are available in 'fasta' and 'gff3' file formats to download and re-use. Users can send their questions and comments through the 'Contact form' under the 'Contact' menu.
Conclusions
To the best of our knowledge, ForageGrassBase is the only online database to access, visualize and download data for the forage grass species meadow fescue and its homologous sequences/genes in rice, barley, Brachypodium and maize.
Supplementary data
Supplementary data are available at Database online.
Availability of data and materials
This work does not contain additional data.
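As a complement to the web BLAST server described above, a comparable homology search can be run locally against the downloadable genome FASTA using standard NCBI BLAST+ tools. The sketch below is illustrative only: it assumes BLAST+ (makeblastdb, blastn) is installed and uses placeholder file names; it is not an interface provided by ForageGrassBase itself.

    # Sketch: local homology search against a downloaded ForageGrassBase genome FASTA.
    # Requires NCBI BLAST+ on the PATH; file names are placeholders.
    import subprocess

    subprocess.run(["makeblastdb", "-in", "fescue_genome.fasta",
                    "-dbtype", "nucl", "-out", "fescue_db"], check=True)
    subprocess.run(["blastn", "-query", "my_sequences.fasta", "-db", "fescue_db",
                    "-outfmt", "6", "-evalue", "1e-10", "-out", "hits.tsv"], check=True)

    # hits.tsv is tab-separated: query, subject, % identity, alignment length, ...
    with open("hits.tsv") as hits:
        for line in hits:
            print(line.rstrip())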
A Newcastle Disease Virus (NDV) Expressing a Membrane-Anchored Spike as a Cost-Effective Inactivated SARS-CoV-2 Vaccine A successful severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) vaccine must not only be safe and protective, but must also meet the demand on a global scale at a low cost. Using the current influenza virus vaccine production capacity to manufacture an egg-based inactivated Newcastle disease virus (NDV)/SARS-CoV-2 vaccine would meet that challenge. Here, we report pre-clinical evaluations of an inactivated NDV chimera stably expressing the membrane-anchored form of the spike (NDV-S) as a potent coronavirus disease 2019 (COVID-19) vaccine in mice and hamsters. The inactivated NDV-S vaccine was immunogenic, inducing strong binding and/or neutralizing antibodies in both animal models. More importantly, the inactivated NDV-S vaccine protected animals from SARS-CoV-2 infections. In the presence of an adjuvant, antigen-sparing could be achieved, which would further reduce the cost while maintaining the protective efficacy of the vaccine. Introduction A severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) vaccine is urgently needed to mitigate the current coronavirus disease 2019 (COVID-19) pandemic worldwide.Numerous vaccine approaches are being developed [1][2][3][4].However, many of them are not likely to be cost-effective and affordable for low-income countries and under-insured populations.This could be of concern in the long run, as it is crucial to vaccinate a larger population than the high-income minority to establish herd immunity and effectively contain the spread of the virus.Among all the SARS-CoV-2 vaccine candidates, an inactivated vaccine might have the advantage over live vaccines of having a better safety profile in vulnerable individuals.In addition, inactivated vaccines could be combined with an adjuvant to obtain a better protective efficacy and dose-sparing to meet the large global demand.However, the current platform for producing the inactivated whole virion SARS-CoV-2 vaccine requires propagation of the virus in cell culture under biosafety level 3 (BSL-3) conditions and only very few BSL-3 vaccine production facilities exist [3], limiting the scaling.Excessive inactivation procedures might have to be implemented to ensure complete inactivation of the virus, at the risk of losing antigenicity of the vaccine.Many viral vector vaccines against coronaviruses have been developed, most of which are used as live vaccines [4][5][6][7][8][9].In addition, the efficacy of certain viral vectors could be dampened by pre-existing immunity to the viral backbone in the human population.Most recombinant protein vaccines require cumbersome manufacturing procedures that would make it difficult to conduct inexpensive mass manufacturing.Genetic vaccines (mRNA and DNA vaccines) display great promise, but as they have only recently been developed, their performance in humans is uncertain. 
We have previously reported the construction of Newcastle disease virus (NDV)-based viral vectors expressing an uncleaved spike protein, whose transmembrane domain and cytoplasmic tail were replaced with those from the NDV fusion (F) protein (S-F chimera) [10].We have shown that these NDV vector vaccines grow well in embryonated chicken eggs, and that the SARS-CoV-2 spike (S) proteins are abundantly incorporated into the NDV virions [10].The NDV vector, based on a vaccine virus strain against an avian pathogen, overcomes the abovementioned limitation for viral vector vaccines and allows the manufacturing of the vaccine under BSL-2 conditions, prior to its inactivation.The construct used as the inactivated vaccine in this study expresses an S-F chimera and has a mutation (L289A) in the F protein of NDV (NDV-S), which was shown to facilitate hemagglutinin-neuraminidase (HN)-independent fusion of the virus (Figure 1A).This mutant NDV (F L289A) is currently being used as an oncolytic agent in a Phase I trial (NCT04135352).To develop an NDV-based inactivated SARS-CoV-2 vaccine, the existing global influenza virus vaccine production capacity could be employed, with minor modifications to the manufacturing pipeline of inactivated influenza virus vaccines.As egg-grown influenza virus vaccines are inactivated by formalin or beta-propiolactone (BPL) treatment, we chose BPL inactivation for the NDV-S vaccine because it is believed to be a less disrupting process.Such inactivated NDV-S vaccines will display SARS-CoV-2 spike proteins, together with HN and F NDV proteins, on the surface of the whole inactivated virions.The inactivated NDV-S vaccine could be administered intramuscularly, with an adjuvant for dose sparing.This approach should be suitable for safely inducing spike-specific protective antibodies (Figure 1B). 
In this study, we investigated NDV-S as an inactivated SARS-CoV-2 vaccine candidate with and without an adjuvant in mice and hamsters. We found that the S-F chimera expressed by the NDV vector is very stable, with no measurable loss of stability after 3 weeks of 4 °C storage in allantoic fluid. The beta-propiolactone (BPL)-inactivated NDV-S vaccine is immunogenic, inducing high titers of S-specific antibodies in both animal models. Furthermore, the effects of a clinical-stage investigational liposomal suspension adjuvant (R-enantiomer of the cationic lipid DOTAP (R-DOTAP)) [11-14], as well as an MF-59-like oil-in-water emulsion adjuvant (AddaVax), were also evaluated in mice. Both adjuvants were shown to achieve dose sparing (>10-fold) in mice. The vaccinated animals displayed less weight loss and significantly reduced viral loads in the lungs compared to the vector-only control animals after being challenged with a mouse-adapted SARS-CoV-2 strain. This is encouraging as the existing global egg-based production capacity for inactivated influenza virus vaccines could be utilized immediately to rapidly produce an egg-based NDV-S vaccine with minimal modifications to their production pipelines. Most importantly, this class of products is amenable to large-scale production at a low cost [15-17]. Alternatively, the NDV-S and other chimeric NDV vaccines could also be manufactured in cultured cells such as Vero cells [18].
Ethics Statement
Animal studies were performed in accordance with protocols (PROTO202000098 and CEIRS program, 13-0386 PRYR II-IACUC-2013-1408) approved by the Institutional Animal Care and Use Committee (IACUC) at the Icahn School of Medicine at Mount Sinai. All animals were housed in a temperature-controlled biosafety level 2 (BSL-2) animal facility in the Annenberg building and Icahn building. All efforts were made to minimize animal suffering.
(B) The concept overview of an inactivated NDV-based SARS-CoV-2 vaccine.The NDV-S vaccine could be produced using the current global influenza virus vaccine production capacity.Such an NDV-S vaccine displays abundant S proteins on the surface of the virions.The NDV-S vaccine could be inactivated by beta-propiolactone (BPL).The NDV-S vaccine could be administered intramuscularly (i.m.) to elicit protective antibody responses in humans. Ethics Statement Animal studies were performed in accordance with protocols (PROTO202000098 and CEIRS program, 13-0386 PRYR II-IACUC-2013-1408) approved by the Institutional Animal Care and Use Committee (IACUC) at the Icahn School of Medicine at Mount Sinai.All animals were housed in a temperature-controlled biosafety level 2 (BSL-2) animal facility in the Annenberg building and Icahn building.All efforts were made to minimize animal suffering. Plasmids The construction of the NDV_LS/L289A_S-F rescue plasmid has been described in a previous study [10].Briefly, the sequence of the ectodomain of the S without the polybasic cleavage site ( 682 RRAR 685 to A) was amplified from the pCAGGS plasmid encoding the codon-optimized nucleotide sequence of the S gene (GenBank: MN908947.3) of a SARS-CoV-2 isolate (Wuhan-Hu-1/2020) by a polymerase chain reaction (PCR) [19], using primers containing the gene end (GE), gene start (GS), and Kozak sequences at the 5' end [20].The nucleotide sequence of the transmembrane domain (TM) and the cytoplasmic tail (CT) of the NDV_LaSota fusion (F) protein was codon-optimized for mammalian cells and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA) (gBlock).The amplified S ectodomain was fused to the TM/CT of F through a GS linker (GGGGS).Additional nucleotides were added at the 3' end to follow the "rule of six" of the paramyxovirus genome.The S-F gene was inserted between the P and M gene of the pNDV_LaSota (LS) L289A mutant (NDV_LS/L289A) antigenomic cDNA by in-Fusion cloning (Takara Bio USA Inc., Mountain View, CA, USA).This NDV_LS/L289A mutant is currently being used in a Phase I oncolytic NDV trial (NCT0413532).The recombination product was transformed into NEB ® Stable Competent Escherichia.coli (New England Biolabs Inc., Ipswich, MA, USA) to generate the NDV_LS/L289A_S-F rescue plasmid.The plasmid was purified using the PureLink TM HiPure Plasmid Maxiprep Kit (Thermo Fisher Scientific, Waltham, MA, USA). 
Rescue of NDV LaSota Expressing the Spike of SARS-CoV-2 To rescue NDV_LS/L289A_S-F, six-well plates of BSRT7 cells were seeded 3 × 10 5 cells per well the day before transfection.The next day, 4 µg of pNDV_LS/L289A_S-F, 2 µg of pTM1-NP, 1 µg of pTM1-P, 1 µg of pTM1-L, and 2 µg of pCI-T7opt were re-suspended in 250 µL of Opti-MEM (Gibco, Gaithersburg, MA, USA).The plasmid cocktail was then gently mixed with 30 µL of TransIT LT1 transfection reagent (Mirus) [10].The mixture was incubated at room temperature (RT) for 30 min.Toward the end of the incubation, the growth medium of each well was replaced with 1 mL of Opti-MEM.The transfection complex was added dropwise to each well and the plates were incubated at 37 • C with 5% CO 2 .The supernatant and cells from transfected wells were harvested at 48 h post-transfection, and briefly homogenized by several strokes using an insulin syringe.Two hundred microliters of the homogenized mixture was injected into the allantoic cavity of 8-to 10-day-old specific-pathogen-free (SPF) embryonated chicken eggs.The eggs were incubated at 37 • C for 3 days, before being cooled at 4 • C overnight.The allantoic fluid was collected and clarified by centrifugation.The rescue of NDV was determined by a hemagglutination (HA) assay using 0.5% chicken or turkey red blood cells.The RNA of the positive samples was extracted and treated with DNase I (Thermo Fisher Scientific, Waltham, MA, USA).A reverse transcriptase-polymerase chain reaction (RT-PCR) was performed to amplify the transgene.The sequences of the transgenes were confirmed by Sanger Sequencing (Genewiz, South Plainfield, NJ USA).Recombinant DNA experiments were performed in accordance with protocols approved by the Icahn School of Medicine at Mount Sinai Institutional Biosafety Committee (IBC). Preparation of Concentrated Virus Before concentrating the virus, allantoic fluids were clarified by centrifugation at 3441× g using a Sorvall Legend RT Plus Refrigerated Benchtop Centrifuge (Thermo Fisher Scientific, Waltham, MA, USA) at 4 • C for 30 min to remove debris.Live virus in the allantoic fluid was pelleted through a 20% sucrose cushion in NTE buffer (100 mM NaCl, 10 mM Tris-HCl, 1 mM EDTA, pH 7.4) by ultra-centrifugation in a Beckman L7-65 ultracentrifuge at 25,000 rpm for two hours at 4 • C using a Beckman SW28 rotor (Beckman Coulter, Brea, CA, USA).Supernatants were aspirated off and the pellets were re-suspended in PBS (pH 7.4).The protein content was determined using the bicinchoninic acid (BCA) assay (Thermo Fisher Scientific, Waltham, MA, USA).To prepare inactivated concentrated viruses, one part of 0.5 M disodium phosphate (DSP) was mixed with 38 parts of the allantoic fluid to stabilize the pH.One part of 2% beta-propiolactone (BPL) was added dropwise to the mixture during shaking, which gave a final concentration of 0.05% BPL.The treated allantoic fluid was mixed thoroughly and incubated on ice for 30 min.The mixture was then placed in a 37 • C water bath shaken every 15 min for two hours.The inactivated allantoic fluid was clarified by centrifugation at 3441× g for 30 min.The loss of infectivity was confirmed by the lack of growth (determined by the HA assay) of the virus (1:1000 dilution in PBS) in 10-day-old embryonated chicken eggs that were inoculated.The inactivated viruses were concentrated as described above. 
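As a quick check of the dilution arithmetic described above: one part of 2% BPL is added to 38 parts of allantoic fluid plus one part of DSP, giving 40 parts in total, so the final BPL concentration is

\[ 2\% \times \frac{1}{38 + 1 + 1} = \frac{2\%}{40} = 0.05\%, \]

which matches the stated working concentration of 0.05% BPL.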
Evaluation of the Stability of the S-F in the Allantoic Fluid The allantoic fluid containing the NDV_LS/L289A_S-F virus was harvested and clarified by centrifugation.The clarified allantoic fluid was aliquoted into 15 mL volumes.Week (wk) 0 allantoic fluid was concentrated immediately after centrifugation as described above, through a 20% sucrose cushion.The pelleted virus was re-suspended in 300 µL phosphate buffered saline (PBS) and stored at −80 • C. The other three aliquots of the allantoic fluid were maintained at 4 • C to test the stability of the S-F construct.Week 1, 2, and 3 samples were collected consecutively on a weekly basis, and concentrated virus was prepared in 300 µL PBS using the same method.The protein content of the concentrated virus from wk 0, 1, 2, and 3 was determined using the BCA assay after one freeze-thaw from −80 • C. One microgram of each concentrated virus preparation was resolved in 4-20% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE; Bio-Rad, Hercules, CA, USA).The S-F protein and the NDV hemagglutinin-neuraminidase (HN) protein were detected by Western blot. Western Blot Concentrated live or inactivated virus samples were mixed with Novex™ Tris-Glycine SDS Sample Buffer (2X) (Thermo Fisher Scientific, Waltham, MA, USA) with NuPAGE™ Sample Reducing Agent (10X) (Thermo Fisher Scientific, Waltham, MA, USA).One or two micrograms of the concentrated viruses was heated at 95 • C for 5 min, before being resolved on 4-20% SDS-PAGE (Bio-Rad, Hercules, CA, USA), using the Novex™ Sharp Pre-stained Protein Standard (Thermo Fisher Scientific, Waltham, MA, USA) as the protein marker.To perform Western blots, proteins were transferred onto a polyvinylidene difluoride (PVDF) membrane (GE healthcare, Chicago, IL USA).The membrane was blocked with 5% non-fat dry milk in PBS containing 0.1% v/v Tween 20 (PBST) for 1 h at room temperature (RT).The membrane was washed with PBST on a shaker three times (10 min at RT each time) and incubated with an S-specific mouse monoclonal antibody 2B3E5 (provided by Dr. Thomas Moran at ISMMS) or an HN-specific mouse monoclonal antibody 8H2 (MCA2822, Bio-Rad, Hercules, CA, USA) diluted in PBST containing 1% bovine serum albumin (BSA) overnight at 4 • C. The membranes were then washed with PBST on a shaker three times (10 min at RT each time) and incubated with secondary sheep anti-mouse IgG linked with horseradish peroxidase (HRP) diluted (1:2000) in PBST containing 5% non-fat dry milk.The secondary antibody was discarded and the membranes were washed with PBST on a shaker three times (10 min at RT each time).Pierce™ ECL Western Blotting Substrate (Thermo Fisher Scientific, Waltham, MA, USA) was added to the membrane, and the blots were imaged using the Bio-Rad Universal Hood Ii Molecular imager (Bio-Rad, Hercules, CA, USA) and processed by Image Lab Software (Bio-Rad, Hercules, CA, USA). 
Immunization and Challenge Study in BALB/c Mice
Seven-week-old female BALB/cJ mice (Jackson Laboratories, Bar Harbor, ME, USA) were used in this study. Experiments were performed in accordance with protocols approved by the Icahn School of Medicine at Mount Sinai Institutional Animal Care and Use Committee (IACUC). Mice were divided into 10 groups (n = 5) receiving the inactivated virus without or with an adjuvant at three different doses intramuscularly. The vaccination followed a prime-boost regimen with a two-week interval. Specifically, group 1, group 2, and group 3 received 5, 10, and 20 µg inactivated NDV-S vaccine (total protein) without the adjuvant, respectively; group 4, group 5, and group 6 received low doses of 0.2, 1, and 5 µg inactivated NDV-S vaccine, respectively, combined with 300 µg/mouse of R-DOTAP (PDS Biotechnology); group 7, group 8, and group 9 received 0.2, 1, and 5 µg inactivated NDV-S vaccine, respectively, with 50 µL/mouse of AddaVax (Invivogen) as the adjuvant; and group 10 received 20 µg inactivated wild type (WT) NDV as the vector-only control. The SARS-CoV-2 challenge was performed at the University of North Carolina by Dr. Ralph Baric's group in a biosafety level 3 (BSL-3) facility. Mice were intranasally (i.n.) challenged 19 days after the boost using a mouse-adapted SARS-CoV-2 strain at 7.5 × 10^4 plaque forming units (PFU) [2,21]. Weight loss was monitored for 4 days.
Immunization and Challenge Study in Golden Syrian Hamsters
Eight-week-old female golden Syrian hamsters were used in this study. Experiments were performed in accordance with protocols approved by the Icahn School of Medicine at Mount Sinai Institutional Animal Care and Use Committee (IACUC). Four groups (n = 8) of hamsters were included. The inactivated vaccines were given intramuscularly following a prime-boost regimen with a two-week interval. Group 1 received 10 µg of inactivated NDV-S vaccine, group 2 received 5 µg of inactivated NDV-S vaccine combined with 50 µL of AddaVax per hamster, and group 3 hamsters received 10 µg of inactivated WT NDV as the vector-only control. A healthy control group receiving no vaccine was also included. Twenty-four days after the boost, hamsters were challenged intranasally with 10^4 PFU of the USA-WA1/2020 SARS-CoV-2 strain in a biosafety level 3 (BSL-3) facility. Weight loss was monitored for 5 days.
Lung Titers
The inferior lung lobes of mice were collected and homogenized in 1 mL PBS. Upper right (UR) and lower right (LR) lung lobes of hamsters were harvested at day 2 and day 5 post-infection. Each lung lobe of hamsters was homogenized in 1 mL PBS. A plaque assay was performed to measure the viral titer in the lung homogenates, as described previously [2,10,21]. Geometric mean titers of plaque forming units (PFU) per lobe (mice) or per mL (hamsters) were calculated using GraphPad Prism 7.0.
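The geometric mean titers reported in this study were computed in GraphPad Prism; an equivalent calculation simply averages the titers on a log scale, as in the sketch below. The titer values shown are made-up placeholders, not data from the study.

    # Sketch: geometric mean of plaque titers (PFU), equivalent to the GraphPad calculation.
    import numpy as np

    titers_pfu = np.array([2.1e4, 5.5e3, 1.3e4, 8.0e3, 3.2e4])   # e.g. PFU per lung lobe (placeholders)
    geometric_mean = np.exp(np.mean(np.log(titers_pfu)))
    print(f"geometric mean titer: {geometric_mean:.2e} PFU")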
Enzyme Linked Immunosorbent Assays (ELISAs) Mice were bled pre-boost and 11 days after the boost.Hamsters were bled pre-boost and 26 days after the boost.Sera were isolated by low-speed centrifugation.ELISAs were performed as described previously [19].Briefly, Immulon 4 HBX 96-well ELISA plates (Thermo Fisher Scientific, Waltham, MA, USA) were coated with 2 µg/mL of recombinant trimeric S protein produced in insect cells (50 µL per well) in coating buffer (SeraCare Life Sciences Inc., Milford, MA, USA) overnight at 4 • C [19].The next day, all plates were washed three times with 220 µL PBS containing 0.1% (v/v) Tween-20 (PBST) and blocked in 220 µL blocking solution (3% goat serum, 0.5% non-fat dried milk powder, 96.5% PBST) for 1 h at RT.Both mouse sera and hamster sera were three-fold serially diluted in blocking solution starting at 1:30, followed by a 2 h incubation at RT. ELISA plates were washed three times with PBST and incubated in 50 µL per well of sheep anti-mouse IgG-horseradish peroxidase (HRP) conjugated antibody (GE Healthcare, Chicago, IL USA) or goat anti-hamster IgG-HRP conjugated antibody (Invitrogen, Carlsbad, CA, USA) diluted (1:3000) in blocking solution.Plates were washed three times with PBST and 100 µL of o-phenylenediamine dihydrochloride (SigmaFast OPD; Sigma, St. Louis, MO, USA) substrate was added per well.After developing the plates for 10 min, 50 µL of 3 M hydrochloric acid (HCl) was added to each well to stop the reactions.The optical density (OD) was measured at 492 nm on a Synergy 4 plate reader (BioTek, Winooski, VT, USA) or equivalents.An average of OD values for blank wells plus three standard deviations was used to set a cutoff for plate blank outliers.A cutoff value was established for each plate that was used for calculating the endpoint titers.The endpoint titers of serum IgG responses were graphed using GraphPad Prism 7.0. Micro-Neutralization Assay All neutralization assays were performed using Vero E6 cells in the biosafety level 3 (BSL-3) facility following institutional guidelines, as described previously [19,22].Serum samples were heat-inactivated at 56 • C for 60 min prior to use.Pooled sera in technical duplicates were serially diluted three-fold, starting at 1:20 dilution.The cells were fixed with 100 µL 10% formaldehyde per well for 24 h, before being taken out of the BSL-3 facility.A cell-based ELISA using an anti-NP antibody (1C7), kindly provided by Dr. Thomas Moran at ISMMS, was performed in a BSL-2 biosafety cabinet, as previously described [19,22].The OD of 492 nm was measured on a Biotek SynergyH1 Microplate Reader.Non-linear regression curve fit analysis (the top and bottom constraints were set at 100% and 0%, respectively) over the dilution curve was performed to calculate 50% of inhibitory dilution (ID 50 ) of the serum using GraphPad Prism 7.0. Statistics The statistical analysis was performed using GraphPad Prism 7.0.The statistical difference in lung viral titers was determined using the Kruskal-Wallis test with Dunn's correction for multiple comparisons. The Spike Protein Expressed on NDV Virions Is Stable in Allantoic Fluid The stability of the antigen could be of concern as the vaccine needs to be purified and inactivated through a temperature-controlled (~4 • C) process.The final product is often formulated and stored in liquid buffer at 4 • C. To examine the stability of the S-F protein, allantoic fluid containing the NDV-S live virus was aliquoted into equal volumes (15 mL) and stored at 4 • C. 
Samples were collected weekly (wk 0, 1, 2, and 3) and concentrated through a 20% sucrose cushion. The concentrated virus was re-suspended in equal amounts of PBS. The total protein content of the four aliquots was comparable among the preparations (wk 0: 0.94 mg/mL; wk 1: 1.04 mg/mL; wk 2: 0.9 mg/mL; wk 3: 1.08 mg/mL). The stability of the S-F construct was evaluated by Western blot with the anti-S monoclonal antibody 2B3E5. Compared to the stability of the NDV HN protein, the spike protein remained stable when kept in allantoic fluid at 4 °C (Figure 2A). The inactivation by 0.05% BPL was confirmed by the lack of HA activity following inoculation of the inactivated virus into embryonated chicken eggs (Figure 2B). Importantly, the inactivation procedure using 0.05% BPL did not cause any loss of antigenicity of the S-F, as evaluated by Western blot (Figure 2C). These observations demonstrate that the membrane-anchored S-F chimera expressed by the NDV vector is very stable, without degradation at 4 °C for 3 weeks or when treated with BPL for inactivation.
Inactivated NDV-S Vaccine Induced High Titers of Binding and Neutralizing Antibodies in Mice
For a pre-clinical evaluation of the inactivated NDV-S vaccine, the immunogenicity and dose-sparing ability of the adjuvants were investigated in mice. A dose-ranging study of the vaccine in the presence or absence of an adjuvant was evaluated based on the ability to induce antibody/neutralizing antibody responses. After partial purification, the vaccine preparation was administered intramuscularly, following a prime-boost regimen with a 2-week interval. Specifically, for the three unadjuvanted groups, mice were intramuscularly immunized with increasing doses of inactivated NDV-S vaccine at 5, 10, or 20 µg per mouse. Two adjuvants were tested here: a clinical-stage adjuvant, a liposomal suspension of the pure R-enantiomer of the cationic lipid DOTAP (R-DOTAP), and the MF59-like oil-in-water emulsion adjuvant AddaVax. Each adjuvant was combined with low doses of NDV-S vaccine at 0.2, 1, and 5 µg. Mice receiving 20 µg of inactivated WT NDV were used as vector-only (negative) controls (Figure 3A). Mice were bled pre-boost (2 weeks after prime) and 11 days post-boost to examine antibody responses by ELISAs and micro-neutralization assays (Figure 3A) [19,22]. After one immunization, all vaccination groups developed S-specific antibodies. The boost greatly increased the antibody titers of all NDV-S immunization groups. Immunization with R-DOTAP combined with 5 µg of vaccine induced the highest antibody titer. Immunization with one microgram of vaccine formulated with R-DOTAP or AddaVax, and with 5 µg of vaccine with AddaVax, induced comparable levels of binding antibody, which were also similar to the titers induced by 20 µg of vaccine without an adjuvant. As expected, immunization with the inactivated wild-type NDV virus did not induce S-specific antibody responses (Figure 3B). We performed micro-neutralization assays to determine the neutralizing activity of serum antibodies collected from vaccinated mice. Except for mice immunized with the WT NDV, sera from all mice immunized with the NDV-S vaccine showed neutralizing activity against the SARS-CoV-2 USA-WA1/2020 strain. The neutralization titers induced by the immunization of 1 µg of vaccine with R-DOTAP (ID50 of ~476) and 5 µg of vaccine with AddaVax (ID50 of ~515) appeared to be the highest and were comparable to each other. These levels are also in the higher range of human convalescent serum
neutralization titers, as measured in our previous studies [19,23]. Interestingly, although the group receiving 5 µg of vaccine with R-DOTAP developed the most abundant binding antibodies detected by ELISA, these sera were not the most neutralizing ones, suggesting that R-DOTAP might have a different impact on immunogenicity compared to AddaVax. It is possible that with more antigen combined with R-DOTAP, the immune responses were skewed towards the induction of non-neutralizing antibodies (Figure 3C). In any case, these results demonstrated that the inactivated NDV-S vaccine expressing the membrane-anchored S-F was immunogenic, inducing potent binding and neutralizing antibodies. Importantly, at least 10-fold dose sparing was achieved with an adjuvant in mice. The 50% inhibitory dilution (ID50) of serum samples showing no neutralizing activity (WT NDV) was set as 10 (LoD: limit of detection).
The Inactivated NDV-S Vaccine Protects Mice from Infection by a Mouse-Adapted SARS-CoV-2 Virus
To evaluate vaccine-induced protection, mice were challenged 19 days post-boost using a mouse-adapted SARS-CoV-2 virus (Figure 3A) [2,10,21]. Weight loss was monitored for 4 days post-infection, at which point the mice were euthanized to assess pulmonary virus titers. Only the negative control group, receiving the WT NDV, was observed to lose notable weight (~10%) by day 4 post-infection, while all of the vaccinated groups showed no weight loss (Figure 4A). Viral titers in the lung at 4 days post-challenge were also measured. As expected, the negative control group given the WT NDV exhibited the highest viral titer of >10^4 PFU/lobe. Groups receiving 5 µg of unadjuvanted vaccine or 0.2 µg of vaccine with R-DOTAP exhibited detectable but low viral titers in the lung, while all of the other groups were fully protected, showing no viral loads (Figure 4B). These results are encouraging as immunization with 0.2 µg of vaccine adjuvanted with AddaVax conferred a level of protection that was equal to that induced by immunization with 10 µg of vaccine without an adjuvant. Although 0.2 µg of vaccine with R-DOTAP did not induce sterilizing immunity, approximately a 1000-fold reduction of viral titer in the lungs was achieved.
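The ID50 values quoted above come from fitting a dose-response curve to the percent inhibition measured across the serum dilution series, with the top and bottom of the curve constrained to 100% and 0%. The sketch below reproduces that kind of fit on made-up data using scipy; it is illustrative only and is not the analysis pipeline used in the study, which relied on GraphPad Prism.

    # Sketch: estimate ID50 from a serum dilution series with a constrained logistic fit.
    # Dilutions and % inhibition values below are illustrative placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def inhibition_curve(log_dilution, log_id50, slope):
        # Top fixed at 100%, bottom at 0%, as in the constrained fit described above.
        return 100.0 / (1.0 + 10.0 ** (slope * (log_dilution - log_id50)))

    dilutions = np.array([20, 60, 180, 540, 1620, 4860])   # three-fold series starting at 1:20
    inhibition = np.array([97, 90, 62, 28, 11, 4])          # % inhibition (placeholder values)

    params, _ = curve_fit(inhibition_curve, np.log10(dilutions), inhibition,
                          p0=[np.log10(300), 1.5])
    print(f"ID50 ~ 1:{10 ** params[0]:.0f}")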
The Inactivated NDV-S Vaccine Confers Protection against SARS-CoV-2 Infection in a Hamster Model

Golden Syrian hamsters have been characterized as a useful small animal model for COVID-19, as they are susceptible to SARS-CoV-2 infection and manifest SARS-CoV-2-induced disease [24,25]. Here, we conducted a pilot study that assessed the immunogenicity and protective efficacy of the inactivated NDV-S vaccine in hamsters. Female golden Syrian hamsters were immunized by a prime-boost regimen with a 2-week interval via the intramuscular administration route. Twenty-four days after the booster immunization, hamsters were intranasally infected with 10^4 PFU of SARS-CoV-2 (USA-WA1/2020) virus. Animals were bled pre-boost and at 2 days post-infection (dpi). Lungs of a subset of animals were harvested at 2 dpi. The lungs of the rest of the animals were harvested at 5 dpi. Four groups of hamsters were included in this pilot study. Group 1 was immunized with 10 µg of inactivated NDV-S vaccine per animal without adjuvants. Group 2 received 5 µg of inactivated NDV-S vaccine with AddaVax as an adjuvant. Group 3 was immunized with 10 µg of inactivated WT NDV as the vector-only negative control group. Group 4, which was not vaccinated and was mock-challenged with PBS, was included as the healthy control group (Figure 5A). Serum IgG titers sampled prior to the booster immunization and at 2 dpi were measured by ELISA. One immunization with NDV-S vaccine with or without the adjuvant successfully induced spike-specific antibodies. Since there was no seroconversion from infection at 2 dpi, as indicated by the baseline level of the WT NDV sera, the increase in titers at 2 dpi compared to titers after vaccine priming most likely represents vaccine-induced antibody levels after the boost. As expected, the boost substantially increased the antibody titers in the NDV-S vaccination groups, whereas the WT NDV sera showed negligible binding signals (Figure 5B). Nevertheless, we cannot exclude a contribution from a rapid production of S antibodies by vaccine-induced memory B cells after exposure to SARS-CoV-2. Hamsters were challenged and
weight loss was monitored for 5 days. The WT NDV group lost up to 15% of its weight by 5 dpi. Animals receiving 10 µg of inactivated NDV-S vaccine lost ~10% of their weight by 3 dpi, at which point body weights started to recover. Animals receiving 5 µg of inactivated NDV-S vaccine with AddaVax only lost weight by 2 dpi, at which point body weights started to recover (Figure 5C). Viral titers in the upper right (UR) lung lobes and lower right (LR) lung lobes were also measured. The lung lobes were homogenized in 1 mL of PBS. Viral titers in the lung homogenates were measured by a plaque assay. Animals vaccinated with NDV-S with or without adjuvant displayed a substantial reduction of viral titers at 2 dpi, while the viral titers of these two groups at 5 dpi were below the limit of detection (Figure 5D).

Discussion

We have previously reported NDV-based SARS-CoV-2 live vaccines expressing two forms of the spike protein (S and S-F) [10]. Since the S-F showed superior incorporation into NDV particles, we investigated its potential to be used as an inactivated vaccine in this study. The NDV-S was found to be very stable when stored at 4 °C for 3 weeks, without degradation of the S-F protein. In mice, we have shown that a total amount of inactivated NDV-S vaccine as low as 0.2 µg could significantly reduce viral titers in the lung when combined with R-DOTAP, by approximately a factor of 1000, while the adjuvant AddaVax conferred even better protection. The NDV-S vaccine at 1 µg with either adjuvant elicited potent neutralizing antibodies and resulted in undetectable viral titers in the lung after SARS-CoV-2 challenge. These pre-clinical results demonstrate that antigen sparing greater than 10-fold can be achieved in a mouse model, providing a valuable input for clinical trials in humans. In a pilot hamster experiment, the inactivated NDV-S vaccine was also immunogenic, inducing high titers of spike-specific antibodies. Since hamsters are much more susceptible to SARS-CoV-2 infection, the group receiving the WT NDV lost up to 15% of its weight by day 5, while both NDV-S vaccinated groups, with or without the adjuvant, showed greatly attenuated weight loss and reduced viral titers in the lungs. The AddaVax adjuvant again enhanced vaccine-induced protection, with weight loss in that group occurring only up to 2 dpi. We did not evaluate the adjuvant R-DOTAP, as the dosing was not well determined for this model by the time of this study. However, R-DOTAP and additional adjuvants will be evaluated in combination with the inactivated NDV-S vaccine in future studies. Moreover, the presented studies were highly focused on humoral responses and protection against the SARS-CoV-2 challenge in mice and hamsters as a
proof of principle for the inactivated NDV-based vaccine, in which T cell and cytokine responses were not analyzed. It will be important to examine CD8+ T cells, CD4+ T cells, and Th1/Th2 responses in future preclinical studies with and without the adjuvant of choice to evaluate the mechanisms of protection and safety.

We have shown promising protection by immunization with inactivated NDV-S in both mouse and hamster models. Even though sterilizing immunity might not always be induced, the trade-off for having an affordable and widely available effective vaccine that reduces the symptoms of COVID-19 should be much preferred over a high-cost vaccine that is limited to high-income populations. Most importantly, the egg-based production of an NDV-S vaccine requires only minor modifications to the current inactivated influenza virus vaccine manufacturing process. The cost of inactivated influenza virus vaccines (trivalent and quadrivalent) is in the low-dollar range. Since NDV grows to similarly high titers as the influenza virus, the cost of goods should be similar to that of a monovalent inactivated influenza virus vaccine (a fraction of the cost of a quadrivalent seasonal influenza virus vaccine), or even lower due to dose sparing with an adjuvant that is inexpensive to manufacture.

Figure 1. Design and concept of an inactivated Newcastle disease virus (NDV)-based severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) vaccine. (A) Design of the NDV-S vaccine. The sequence of the spike-fusion (S-F) chimera (green: ectodomain of S, and black: the transmembrane domain and cytoplasmic tail of the NDV F protein) was inserted between the P and M genes of the NDV LaSota (NDV_LS) strain L289A mutant (NDV_LS/L289A). NDV-S: NDV_LS/L289A_S-F. The polybasic cleavage site of the S was removed (682RRAR685 to A). (B) Concept overview of an inactivated NDV-based SARS-CoV-2 vaccine. The NDV-S vaccine could be produced using the current global influenza virus vaccine production capacity. Such an NDV-S vaccine displays abundant S proteins on the surface of the virions. The NDV-S vaccine could be inactivated by beta-propiolactone (BPL). The NDV-S vaccine could be administered intramuscularly (i.m.) to elicit protective antibody responses in humans.
Figure 2. The S-F chimera is stable. (A) Stability of the S-F chimera at 4 °C. Allantoic fluid containing the NDV-S virus was aliquoted into equal amounts (15 mL) and stored at 4 °C. Virus from each aliquot was concentrated through a 20% sucrose cushion, re-suspended in an equal amount of phosphate buffered saline (PBS), and then stored at −80 °C for several weeks (wk 0, wk 1, wk 2, and wk 3). One microgram of each concentrated virus was resolved on 4-20% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Protein degradation was evaluated by Western blot using the S-specific mouse monoclonal antibody 2B3E5. The hemagglutinin-neuraminidase (HN) protein of NDV was used as an NDV protein control. (B) Inactivation of the virus by beta-propiolactone (BPL). Viruses in the allantoic fluid were inactivated by 0.05% BPL, as described previously. Clarified allantoic fluids with live and inactivated viruses were diluted in PBS (at 1000-fold dilution) and inoculated into 10-day-old embryonated chicken eggs. The eggs were incubated at 37 °C for 3 days. The loss of infectivity of the inactivated virus was confirmed by the lack of growth of the virus determined by a hemagglutination (HA) assay. (C) Stability of the S-F before and after BPL inactivation. Live or inactivated (using 0.05% BPL) NDV-S virus was concentrated through a 20% sucrose cushion, as described previously. Two micrograms of live or BPL-inactivated virus were loaded onto 4-20% SDS-PAGE. Stability loss of the S-F was evaluated by Western blot, as described in A.
Figure 3. Inactivated NDV-S vaccine elicits high antibody responses in mice. (A) Immunization regimen and groups. BALB/c mice were given two immunizations via the intramuscular administration route with a 2-week interval. Mice were bled pre-boost and 11 days after the boost for in vitro serological assays. Mice were challenged with a mouse-adapted SARS-CoV-2 strain 19 days after the boost. Ten groups described in the table were included in this study. Groups 1, 2, and 3 were immunized with 5, 10, and 20 µg of vaccine, respectively; groups 4, 5, and 6 were immunized with 0.2, 1, and 5 µg of vaccine formulated with the R-enantiomer of the cationic lipid DOTAP (R-DOTAP), respectively; groups 7, 8, and 9 were immunized with 0.2, 1, and 5 µg of vaccine combined with AddaVax, respectively; and group 10 was immunized with 20 µg of WT NDV virus as the vector-only control. (B) Spike-specific serum IgG titers. Serum IgG titers from animals after prime (pattern bars) and boost (solid bars) toward the recombinant trimeric spike protein were measured by an enzyme-linked immunosorbent assay (ELISA). Endpoint titers are shown as the readout for ELISA. (C) Neutralization titers of serum antibodies. Microneutralization assays were performed to determine the neutralizing activities of serum antibodies from animals after the boost (D26) using the USA-WA1/2020 SARS-CoV-2 strain. The 50% inhibitory dilution (ID50) of serum samples showing no neutralizing activity (WT NDV) was set as 10 (LoD: limit of detection).
Figure 4. Inactivated NDV-S vaccine protects mice from SARS-CoV-2 infection. (A) Weight loss of mice infected with SARS-CoV-2. Weight loss of mice challenged with a mouse-adapted SARS-CoV-2 strain was monitored for 4 days. (B) Viral titers in the lung. Lungs of mice were harvested at day 4 post-infection. Viral titers of the lung homogenates were determined by a plaque assay. Geometric mean titer (PFU/lobe) is shown (LoD: limit of detection). Statistical analysis was performed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons. P-values between groups are shown.
Figure 5. Inactivated NDV-S vaccine attenuates SARS-CoV-2 infection in hamsters. (A) Immunization regimen and groups. Golden Syrian hamsters were vaccinated with inactivated NDV-S following a prime-boost regimen with a 2-week interval. Hamsters were challenged 24 days after the boost with the USA-WA1/2020 SARS-CoV-2 strain. Four groups of hamsters (n = 8) were included in this study. Group 1 received 10 µg of inactivated NDV-S vaccine without any adjuvant. Group 2 received 5 µg of inactivated NDV-S vaccine adjuvanted with AddaVax. Group 3, receiving 10 µg of inactivated WT NDV, was included as the vector-only (negative) control. Group 4 animals, receiving no vaccine, were mock-challenged with PBS as healthy controls. (B) Spike-specific serum IgG titers. Hamsters were bled pre-boost and a subset of hamsters were terminally bled at 2 days post-infection (dpi). Vaccine-induced serum IgG titers towards the trimeric spike protein were determined by ELISA. Endpoint titers are shown as the readout for ELISA. (C) Weight loss of hamsters challenged with SARS-CoV-2. Weight loss of SARS-CoV-2-infected hamsters was monitored for 5 days. (D) Viral titers in the lungs. Viral titers in the upper right (UR) and lower right (LR) lung lobes of the animals at 2 and 5 dpi were measured by a plaque assay (LoD: limit of detection). Statistical analysis was performed using the Kruskal-Wallis test with Dunn's correction for multiple comparisons. P-values between groups are shown.

Work with infectious SARS-CoV-2 was performed in the biosafety level 3 (BSL-3) biocontainment facility of the Global Health and Emerging Pathogens Institute at the Icahn School of Medicine at Mount Sinai, in accordance with institutional biosafety requirements.
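The figure legends above state that group comparisons of lung titers were made with the Kruskal-Wallis test followed by Dunn's correction for multiple comparisons. The sketch below is only an illustration of how such a comparison can be set up in Python; the titer values are hypothetical, the use of the third-party scikit-posthocs package is an assumption, and this is not the study's analysis script.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical lung titers (PFU/lobe); values below the limit of detection set to the LoD.
lod = 50.0
titers = {
    "WT_NDV":            [2.1e4, 3.5e4, 1.8e4, 4.0e4, 2.7e4],
    "NDV-S_5ug":         [3.2e2, 1.1e2, lod,   2.4e2, lod  ],
    "NDV-S_5ug_AddaVax": [lod,   lod,   lod,   lod,   lod  ],
}

# Global non-parametric comparison across groups
h_stat, p_value = kruskal(*titers.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Dunn's post-hoc test with correction for multiple comparisons
df = pd.DataFrame(
    [(g, v) for g, vals in titers.items() for v in vals],
    columns=["group", "titer"],
)
pairwise_p = sp.posthoc_dunn(df, val_col="titer", group_col="group", p_adjust="holm")
print(pairwise_p)
```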
Comparison of the Physical Structure of Traditional Iranian Neighborhoods Based on Christopher Alexander's Living Center Theory (Case Study: Haji and Kolapa Neighborhoods in Hamedan)

In Christopher Alexander's approach to understanding the order of nature, the pattern of living structures is described through the concepts of wholeness and strong centers, articulated in fifteen interrelated properties. The purpose of this paper is to analyze the physical wholeness of two traditional neighborhoods based on Alexander's living center theory and its applicability to their built form. The paper addresses the following questions: What is the theory of living centers proposed by Alexander? Based on living center theory, what is the geometrical structure of traditional neighborhoods in Hamedan? How do the structures of the two neighborhoods compare when examined through Alexander's theory? The results suggest that positive space, an essential feature of Alexander's theory, does not fully apply to the geometric structure of the elements of the traditional neighborhoods of Hamadan. Comparison of the geometric structure of the neighborhoods with the properties of Alexander's pattern of living structures suggests that the most important contributions to life in these neighborhoods come from strong centers, levels of scale, boundaries, not-separateness, roughness, the void, and contrast. The findings can be used by urban planners, urban designers, and architects in designing new neighborhoods.

Introduction

Change and crisis in the economic, social, and physical dimensions of cities have marked negative effects on cities, and in particular on neighborhoods, reducing urban neighborhood quality and eroding their wholeness. It is therefore necessary to pay attention to the creation of living urban spaces with an integrated wholeness in architecture and urbanism. Starting from observed patterns, Alexander described order and deep geometrical relations in nature through the concepts of the phenomenon of life and living structures, arguing that strong centers and integrated wholeness help to bring out the internal life of things (Alexander, 2002). The traditional neighborhoods of Hamedan were formed in the Qajar and Safavid eras around elements such as the mosque, the bath, and the square. Haji and Kolapa are traditional neighborhoods whose historic fabric has been preserved. The purpose of this paper is to analyze the physical wholeness of these two neighborhoods based on Alexander's living center theory and its applicability to their built form. The questions of this paper are: What is Alexander's living center theory? How do the structures of the two neighborhoods compare when examined through Alexander's theory? The paper shows that living center theory cannot be applied in full to the structure of these traditional neighborhoods, and that the Haji neighborhood has more living centers than the Kolapa neighborhood.

Material and Methods

This study was carried out with a descriptive-analytic method based on a qualitative approach, in which data were collected from library sources and field observation.
Literature Review Research on traditional neighborhoods has been studied from different views by researchers including the latest models of patterning and renewal of framework of quarters by Mohsen Habibi (2003), the capacity of the neighborhood growth of the urban development by: Mojtaba Rafeian (2010), public participation in neighborhood organization and development; Marjan Sharafi (2009).Haji district in a paper entitled group belonging to the place, the fulfillment of social housing in traditional neighborhoods was studied by Hasan Sajjadzadeh (2011).Architecture from Alexander`s view recognizing the ability of feeling to create a living architecture (Rahmani, 2013).Mohajery analyzes the design ideas by Alexander from the book notes on the composition of form to nature of order (new concepts of complexity theory.He believes Alexander`s theory could establish a relationship between different parts of design and planning (Mohajery, 2008).Sabry and Akbari points out that Alexander started his work criticizing the modern world and scientific worldview and thought of architecture related to worldview.Relying on the nature and tradition system, he asked for termination of destruction through the modern urbanization (Sabry & Akbari, 2013).Hedayat nia introduces fifteen features of Alexander`s theory for Ghourtan castle and concludes that the architecture of desert in Iran matches the criteria of Alexander`s (Hedayat, 2013).Life phenomenon and living structures pattern in traditional neighborhoods as an urban space in local scale is a new issue that has not been studied before. Explaining the Alexander's Theory A key concept in Alexander's theory is that order is a dynamic concept rather than a static one.Although humans often experience a large number of components or relationships as complexity, multiplicity does not disorder (Alexander, 2010).Through his writings about architecture, Alexander also refers to philosophy of life and nature.He believes that there is a deep connection between nature and human's mind.He sees the world as an integrated and ordered generic which contains living and non-living creatures (Alexander, ibid).The idea of order indicates that all beings can be divided into living and non-living structures (Alexander, 2002).The important concepts for Alexander's theory are life, integrated totality strong centers and pattern of living structures which are to be briefly explained in order to survey and analyze different options. Life Alexander defines life as it is manifest in physical architecture, its measurable characteristics, and the stepwise transformations that make up any process that is capable of producing a living structure (Alexander, 2010).Alexander focuses the phenomenon of life and defines it as a quality existing in the nature of space and everything such as functional spaces of living system.Life is general concept exists in every area of contiguous space in geometric, structural, social or formal aspect (Alexander, 2002). 
Integrated Wholeness Alexander's conception of wholeness and centers is grounded in the geometry of space and its physical attributes of position and distance.To apply Alexander's concepts of physical structure to information systems, they must first be translated from a language of physical space to a language of cognitive space where physical position and distance correspond to concepts and consonance in "fields" populated by abstractions rather than shapes (Alexander, 2010).Each spatial structure consists of integrated system shaping the ground of geometrical relationships and complex structures along with human activities and events associating it (Alexander, 2010).Alexander believes life as a subtle structure containing different parts the life of spaces comes from this totality in a way that supporting the life comes from this reality that thing acts as an integrated totality which means that we see it as a part of continuous chain (Alexander, ibid). Strong Center Strong center is a structured field through the space which includes separated set of points in space which represents a kind of centralization due to its structure which is caused by internal coherence and its connections to the context it is present at (Alexander, ibid).There are found patterns in the universe their interaction causes the world to become balanced and compensate the destructive forces in the nature (Tahouri, 2002). Patterns of Living Structures Integrated totality containing powerful centers and life makes patterns of living structures.For Alexander,the presence of these living structures in creation process is so important that the city or building can be evaluated based on the presence or absence of these patterns.He focuses the experimental methods to understand these living structures although expressed in mathematics language (mohajery et al., 2008). Levels of Scale Places where levels of scale big, small and very small are shaped in a beautiful spectrum with bounded levels make a deeper sense of levels and centers are created according to them (Alexander, 2002). Strong Centers The most important feature of a living creature based of which totalities are shaped is the existence of strong centers presented as totality's pillars.Centers can be various and symmetric since each center is represented as a square which is beyond a local symmetry.By strong centers we do not necessarily mean geometric centers because if a center is single dimensional which is only appeared as geometric shape not a strong center, it makes a poor power (Alexander, ibid). Boundaries Living centers are shaped by boundaries.The aim of creating a round boundaries is a dual aim.boundaries act for separating and linking, boundaries makes attention for the center and on the other hand the limited center is integrated by linking to the beyond world (Alexander, ibid). Alternating Repetition One way for center to support its life is alternating repetition by which we mean a kind of repeated tone parallel and alternating which are intensified through primary centers' rhythm (Alexander, ibid). Positive Space The most simple and necessary feature through living structures is certain space which is Prominence of each particle to the outside.If he center is certain and well formed the certain space helps it being more powerful (Alexander, ibid). 
Good Shape

The feature of good shape depends on centers, each part of which has a definite and well-defined form. In order to have a good shape, all forms must be definite after analysis and characterization (Alexander, ibid).

Local Symmetries

There is a two-way relationship between local symmetry and the living center. Local and overall symmetry complement each other in sustaining a whole. On the one hand, the most continuous and coherent patterns have the most local symmetries; on the other hand, symmetric parts are necessary in order to turn a plan into a whole. In general, local symmetry should govern the overall structure in order to create strong centers, while in the formation of a coherent space, overall symmetry helps the understanding of the space as a whole (Alexander, ibid).

Deep Interlock and Ambiguity

The connection of centers, and the difficulty of separating them from adjacent centers, creates a deep solidarity between them. Ambiguity and interlock appear as interconnectedness and binding with neighboring centers, and also as the creation of an important point that belongs at once to its own center and to the surrounding centers (Alexander, ibid).

Contrast

Contrast in a living structure gives it stability, and it can be created in different forms such as full and empty, black and white, and so on. The important point in creating such centers is that the integration and cohesion of the spaces should be preserved (Alexander, ibid).

Gradients

Movement through space and hierarchy, with gradual change of distance, size, intensity, and character, provides a proper ground for the creation of strong centers. Hierarchy produces a variety of centers and reveals their internal wholeness (Alexander, ibid).

Roughness

Roughness is achieved when uniform designs are not repeated identically in the same place. The important point in creating a heterogeneous space is that the designer is unselfconscious and does not deliberately set out to create strong centers (Alexander, ibid).

Echoes

Echoes appear when the smaller elements and centers that make up larger centers are formally members of one family, and this produces their coherence and unity (Alexander, ibid).

The Void

Another element effective in the formation of living centers is the empty space between them. In addition to the peace and silence created by empty space, it draws more energy to the center, empowers it, and forms a geometric and regular space (Alexander, ibid).

Simplicity and Inner Calm

The wholeness of a living structure is simple, so that in most cases it can be created through simple geometric forms. However, inner simplicity and calm are not achieved merely in appearance; rather, they are reached by preserving the essential elements and omitting the others (Alexander, ibid, 226-229).

Not-Separateness

The last, and also the most important, feature is not-separateness, which is realized when a living whole appears as a part of the world, inseparable from it, so that it melts into its surrounding context (Alexander, ibid, 230-235).
Hamadan History The concept of city: "city in ancient Persian was called "Kheshtar" meaning kingdom ".Also, this term in Avesta is called as "Kheshtareh" The Persian word" city "has come from this Avestan term, while in Sanskrit the word was called "Keshtar" In Sassanian era "city" was called in modern concept."State", or province is the second component of city, coming from Old Persian and Avestan word "Astan" to mean a place (Qadakchy, 2008).The destruction of the neighborhoods of Hamadan, and the physical texture changing began in Reza Shah era.In this period between 1932 to 1937 the central square of the city was made by German Karl Frisch with pre-planned radial design, and six streets radiating from it interrupted the spine, alleys and old markets and paved the way for Western culture (ibid, 2008).Many cities have been created by emerging numbers of villages.Hamadan is perhaps the best known example.In the surrounding neighborhoods at first were villages which had been agricultural centers, and with the expansion of neighborhoods these centers were formed among neighborhoods.Some of the gardens and agricultural centers were in area of city which include Mir Aqeel garden around the dome Alavian, Nazar Beyg garden in Nazar Beyg neighborhood, Kolapa garden inside Kolapa area and some others.The structure of neighborhood was in a manner that the streets were isolated from a great area called grass with many twists and turns so that only the local people knew them and, at times, crossed dark roofed transits.Houses were around these passages and lords' houses were at the end twists with high walls were hidden from the view of passers.Therefore, the neighborhoods were independent units of city, including mosques, baths, aqueducts and multidisciplinary malls or residential areas (ibid, 2008). The Introduction of the Samples Haji neighborhood with residential use is intermediate between shohada and Shewerin Street and has area named as the lawn Haji.This neighborhood is attributed to Mr. Mohammad Ismail Isfahani.In the second half of the thirteenth century AH, 120-130 years ago, this tribe came from Isfahan to Hamadan and called the neighborhood after their own name; their descendants and their relatives lived in the same place and the traces of their life were remained until 40-50 years ago (AZkaei).This neighborhood is one of the most vibrant neighborhoods while the centre of area is created by combining the two rectangle and triangle-shaped spaces.thecommunication roads are led from the center to different parts of neighborhood, downtown and other neighborhoods, Green trees, water, mosques, schools, male and female bathrooms, café and infirmary and large shop have given special effects to this small square (Biglari, 1977).Kolapa neighborhood with residential use is in East side of Booali St. located between Ayatollah Taleghani Street and Ayatollah Madani Boulevard.According to doctor Azkaei appellation of the neighborhood is not clear and the words can be meant small foot (AZkaei).Neighborhood center form is like a small-square and its nearby space is occupied by mosque, grocery, school, bakery, butchery, demolished baths and blocked fountains.The grass also has endowed a special beauty to the neighborhood..In south and southwest of the neighborhood three uses are located: Bu Ali hotel, the Islamic Revolution Court and the Ekbatan Hospital (Consulting Engineers, 1993). 
Boundaries Living centers in neighborhoods that are symbolized by the boundaries of vernacular architecture are formed in all dimensions with different layers, narrow passages and arches of the mud wall and are all to protect lively space in creating a neighborhood centers, and each has its boders. Kolapa Haji Figure4.Analysis of boundaries in typical neighborhood of Hamadan Source: Consulting Engineers, 1993, author's analysis. Alternating Repetition In Haji neighborhood centers, repetitive centers within a whole in parallel with the alternating rhythm with the second center are interlocked and intertwining, but iterative structure in the Kolapa neighborhood tends to be inexact and diverse, with this difference and variety, creating a beautiful oscillation. Kolapa Haji Figure 5. Analysis of alternating repetition in typical neighborhood of Hamadan Source: Consulting Engineers, 1993, author's analysis. Positive Space To the extent any space is consisted of simple parts, and their spaces form well-defined areas,they will be more lively (Alexander, ibid).But neighborhood center in both samples ensures with indefinite form, a strong center for each part of space which as a social and public open space is full of meaning and purpose free from irrelevancy. Kolapa Haji Figure 6.Analysis of positive space in typical neighborhood of Hamadan Source: Consulting Engineers, 1993, author's analysis. Good Shape Elements of Haji neighborhood center which are made of consistent and simplest forms, elementary and novel, have beautiful shape.In contrast, neighborhood center which is made of strong elements, because of its noncentrality inside itself has a bad shape and although it hasn't an inner symmetry or two-side symmetry ; spaces have been created around it which are clear, consistent, and compact, with a sense of maturity and enclosure. .Kolapa Haji Figure 7. Analysis of good shape in typical neighborhood of Hamadan Source: Consulting Engineers, 1993, author's analysis. Local Symmetres Local symmetry should be dominant in the total structure of neighborhood to create strong centers, and in formation of coherent space.Global symmetry of the single space helps the observer to understand the space better, but in each of two neighborhoods, centers can be seen including mosques, marketplaces and grass that have local symmetry. Kolapa Haji Figure 8. Analysis of local symmetry in a typical neighborhood of Hamadan Source: Consulting Engineers, 1993, author's analysis. Deep Interlock and Ambiguity Roofed narrow passes with lots of twists and turns in neighborhoods are as bridge that at the same time belong to the neighborhood and the surrounding areas, and both are connected to the irrefrangible node. Kolapa Haji Figure 9. Analysis of deep interlock and ambiguity in typical neighborhoods inHamedan Source: Consulting Engineers, 1993, author's analysis. Contrast Life in neighborhood can not occur without difference and diversity and the most popular contrast that creates life is full and empty space, moving space and pause space.In practical situations, various residential areas, commercial (market), religious (mosques), social (grass) work better together, and these differences will allow centers to get their pleasant nature. Kolapa Haji Figure 10.Analysis of contrast in typical neighborhoods of Hamedan Source: Consulting Engineers, 1993, author's analysis. 
Gradients Quality changes gradually, gently and delicately from one extreme to another in neighborhood spaces, including lawns with cultural function, houses with residential function, mosques with religious functions and malls with economic function. Roughness As in the structure of the neighborhood size and location of such spaces have made the most beautiful and most appropriate spaces, it has caused heterogeneity.Furthermore, the heterogeneity seen in the neighborhood elements can be seen in original form of neighborhood. Kolapa Haji Figure 11.Analysis of roughness in typical neighborhoods of Hamedan Source: Consulting Engineers, 1993, author's analysis. Echoes Regarding structure of neighborhood, heterogeneity in neighborhoods is seen more than reflection, and few places have been created with this template. Kolapa Haji Figure 12. Analysis of echoes in typical neighborhoods of Hamedan Source: Consulting Engineers, 1993, author's analysis. The Void The empty space in the neighborhood center brings about peace and quiet in the heart of the space, through which a major hub gives life for smaller centers. Kolapa Haji Figure 13.Analysis of the the void in typical neighborhoods of Hamedan Resource: Consulting Engineers, 1993, author's analysis. Simplicity and Inner Calm Simplicity and inner calm is quality that is necessary in completion of a "whole" of the elements of the neighborhood in contrast, relaxation is relevant with slowness, superiority, peace and quiet in the neighborhood is often associated with non-geometric forms. Not-Separateness Neighborhood is intermingled harmoniously and humbly in its surroundings and the most important element is the center of the neighborhood as a consistent center for being connected with what is around and there is no separation between them. Kolapa Haji Figure 14.Analysis of not-separation in typical neighborhoods of Hamedan Source: Consulting Engineers, 1993, author's analysis. 
Discussion What comes next is the result of studies, surveys and analysis of comparison between neighborhood of Hamadan concept and Alexander's Theory.It is explained at table 1: Conclusion Life space is originated from an integrated whole that is the result of a strong center and this means that we see it extended as a part of a continuous chain.According to Christopher Alexander, life phenomenon is formed geometrically in space from 15 characters which can be seen in living and non-living structures.According to studies on the traditional neighborhoods of Haji and Kolapa in Hamadan, we can express geometric structure of space based on theory of Alexander does not comply fully with the phenomenon of life in the traditional neighborhood because structures of traditional neighborhoods is based on local and organic patterns and certain spaces of the neighborhood play a less important role.In theory of Alexander, positive space has important role in making space live but neighborhood center as an important and integral member of the neighborhood, although without having good shape.Haji neighborhood with the greatest number of characters in the theory of Alexander has more life originated from strong centers in the neighborhood center like aqueduct and baths.As a public space and second character, the existence of more contrast among functions of its elements, and other character the positive space in neighborhood center elements including shops, bath, and mosque take place.three important elements in Kolapa neighborhood including hotels, university and hospital due to their location have no role in creating life in neighborhood center.Due to comparison of geometry structures of the neighborhoods and features of Alexander's living structures pattern, it seems most important role in studied neighborhoods for creating more life arises from strong centers, Levels of scale, Boundaries, Not-separateness, Roughness, The void and Contrast. Footnote Christopher Alexandre was born in Vienna, Austria, in 1936 and grew up in England.He obtained his bachelor degree in architecture and master degree in mathematics from Cambridge university.Alexandre got his Ph.D in architecture from Havard in 1963 and became the lecturer in berkley univercity of cALIFORNIA from 1963.He is the forefather of pattern language movement in computer science and the author of pattern language in 1977. Table 1 . Comparative analysis of the theory of Alexander and the representative structure neighborhood of Hamadan Comparative analysis of the theory of Alexander and neighborhood of Hamadan
A procedure for calculating the many-particle Bohm quantum potential In a recent work, M.Kohout (M.Kohout, Int.J.Quant.Chem. 87, 12 2002) raised the important question of how to make a correct use of Bohm's approach for defining a quantum potential. In this work, by taking into account Kohout's results, we propose a general self-consistent iterative procedure for solving this problem. I. INTRODUCTION Bohm's formulation of quantum mechanics in terms of single-particle trajectories ([1, 2, 3, 4, 5, 6]) has been, and still is, a continuous matter of dispute, often in rather philosophical terms above all ( [5,6,7,8,9,10,11,12,13,14]) concerning the fundamental meaning of quantum mechanics. At a concrete level such a theory has been used as a theoretical tool for understanding and interpreting several processes in different fields, from molecular physics to plasma physics, from scattering theory to simulation of quantum wires [15,16,17,18,19,20,21], to name a few. Recently M.Kohout [22] raised the important question of how to treat the more rigorous and realistic 3N-dimensional formulation of the quantum potential generated by N electrons. Rigorously speaking, in an N-particle system Bohm's potential is a 3Ndimensional function and the one-particle Bohm's potential, usually considered in literature, is the simplest 3-dimensional reduced form which systematically does not take into account the effects due to the presence of other particles. He proposed a formal interpretation of the wavefunction as a product of a one-particle marginal function and a conditional many-particle function where somehow the effects due to the other particles are taken into account, then a formal expression of the quantum potential is obtained. However, as the author underlines, for the conditional many-particle function an explicit expression is required and this is a rather difficult problem . In this work, by taking into account Kohout results we build a reasonable initial guess for the many-particle potential and, following this choice, we develop an iterative self-consistent procedure for obtaining a general (numerical) expression of the potential. We restrict our analysis to a spinless system, in any case there exists the possibility to extend the procedure to wavefunctions which explicitly consider the spin variables. To conclude, we must underline that the intention of this work is simply to show that the widely used Bohm potential (used in the approximation of one particle) can actually be reasonably treated in its true form of manyparticle and we propose a method to do so. It is not our intention to prove that such a procedure is preferable to other quantum approaches, such as Hartree-Fock or Density Functional Theory, for determining general many-body effects in electronic systems. II. A BRIEF ACCOUNT OF BOHM'S THEORY Bohm's formulation of quantum mechanics in terms of single-particle trajectory is based on the assumption that the wavefunction determines the dynamics of more fundamental variables (hidden variables). The essence of the theory can be summarized by quoting Bohm's original work "The first step in developing this interpretation in a more explicit way is to associate with each electron a particle having precisely definable and continuously varying values of position and momentum" [1]. 
Within this assumption, the system is described by its wavefunction $\psi(\mathbf x_1,\dots,\mathbf x_N,t)$ and by the positions of its particles $\mathbf x_i$; these two fundamental quantities are governed, respectively, by the Schrödinger equation
$$ i\hbar\,\frac{\partial\psi}{\partial t}=H\psi , $$
where $H$, as usual, is the Hamiltonian of the system, and by the dynamical (guidance) equation
$$ \frac{d\mathbf x_i}{dt}=\frac{\nabla_i S(\mathbf x_1,\dots,\mathbf x_N,t)}{m}, $$
where $S$ is the phase of the wavefunction. This procedure leads to a non-Newtonian equation of motion which becomes Newtonian in the classical limit $\hbar\to 0$; the connection of such an approach with quantum mechanics is provided by the fact that the quantum formalism automatically emerges from Bohm mechanics. In simple terms, this approach "...implies however the particle moves under the action of a force which is not entirely derivable from the classical potential, V(x), but which also obtains contributions from the quantum mechanical potential..." [1].

Given the N-particle electronic wavefunction, whose general form can be written as $\psi(\mathbf R,t)=\chi(\mathbf R,t)\,e^{\frac{i}{\hbar}S(\mathbf R,t)}$ with $\mathbf R\in\Re^{3N}$, substituting it into the time-dependent Schrödinger equation and separating the real and imaginary parts, with the velocity $\mathbf v=\frac{1}{m}\nabla S$ and $\rho=|\psi|^2=\chi^2$, one obtains the equations
$$ \frac{\partial\rho}{\partial t}+\nabla\cdot\Big(\rho\,\frac{\nabla S}{m}\Big)=0 $$
and
$$ \frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m}+V(\mathbf R,t)+Q(\mathbf R,t)=0 , $$
where $m$ is the mass of the particle, $V(\mathbf R,t)$ is the potential characterizing the system (e.g. electrostatic for interacting fermions), $Q(\mathbf R,t)=-\frac{\hbar^2}{2m}\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}$ is the Bohm potential, and $\nabla=\sum_{i=1}^{N}\nabla_i$ is the sum of the gradients of the N particles. We restrict our analysis to the stationary case; thus we can write, in terms of the wavefunction phase factor,
$$ \frac{(\nabla S)^2}{2m}=E-V(\mathbf R)-Q(\mathbf R). $$

III. KOHOUT'S FORMULATION OF THE MULTI-PARTICLE POTENTIAL PROBLEM

Bohm's potential in three dimensions is used in many applications; within this framework $\rho(\mathbf r)$, the one-particle electron density, is defined as
$$ \rho(\mathbf r)=N\int_{\Omega^{N-1}}|\psi(\mathbf r,\mathbf r_2,\dots,\mathbf r_N)|^2\,d\mathbf r_2\cdots d\mathbf r_N \qquad (6) $$
($\Omega$ is the domain of definition of the system in real space), and the single-particle wavefunction is written as
$$ \psi(\mathbf r)=\phi(\mathbf r)\,e^{\frac{i}{\hbar}s(\mathbf r)},\qquad \phi(\mathbf r)=\sqrt{\rho(\mathbf r)/N}. \qquad (7) $$
The wavefunction form of Eq. 7 corresponds to the procedure of separating Bohm's dynamical equations into independent and indistinguishable single-particle equations; it describes a set of identical particles moving in an average potential in which specific mutual interactions are neglected. In simple terms, it is sufficient to describe only one particle, embedded in an average potential generated by the other particles, in order to automatically describe the whole system. In this case $S(\mathbf r_1,\dots,\mathbf r_N)=s(\mathbf r_1)+\dots+s(\mathbf r_N)$. It must be noticed that, within this approximation, what we called the one-particle Bohm potential does not correspond to the Bohm potential of a system composed of only one particle. If this were the case, Bohm's equations could not be defined at the nodes of the electron wavefunction; instead, the definition of $\psi(\mathbf r)$ given in Eq. 7, with $\rho(\mathbf r)$ defined by Eq. 6, suggests that such a $\psi$ is unlikely to contain zeros, at least for dense systems. The one-particle potential is the simplest approximation that can be used for describing an electronic system. Recently M. Kohout proposed another approach, redefining the wavefunction $\psi(\mathbf r,\mathbf r')$ ($\mathbf r'$ indicates the remaining N − 1 particles of an N-particle system when one focuses attention on particle $\mathbf r$):
$$ \psi(\mathbf r,\mathbf r')=\chi(\mathbf r,\mathbf r')\,e^{\frac{i}{\hbar}S(\mathbf r,\mathbf r')}=\phi(\mathbf r)\,\beta(\mathbf r'|\mathbf r)\,e^{\frac{i}{\hbar}S(\mathbf r,\mathbf r')}. $$
By using this factorization, he obtains a formal expression of the potential $Q(\mathbf r,\mathbf r')$ as a sum of the one-particle Bohm potential and a conditional multi-particle potential $Q_{\rm cond}(\mathbf r|\mathbf r')$ determined by $\beta(\mathbf r'|\mathbf r)$. This expression bears a rather difficult problem: finding a reasonable expression for $\beta(\mathbf r'|\mathbf r)$.
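Before turning to that problem of determining β(r′|r), it may help to fix ideas with the simplest object involved. The following is a minimal numerical sketch, in units with ℏ = m = 1 and with a toy Gaussian density (not any system treated in this paper), of how the one-particle quantum potential Q = −(ℏ²/2m)∇²√ρ/√ρ can be evaluated on a 1D grid by finite differences; for this Gaussian density the analytic result Q = (1 − x²)/2 provides a check.

```python
import numpy as np

# One-particle quantum potential Q = -(hbar^2 / 2m) * (d^2 sqrt(rho)/dx^2) / sqrt(rho) on a 1D grid.
# Units with hbar = m = 1; the density is a toy example.
hbar, m = 1.0, 1.0

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
rho = np.exp(-x**2) / np.sqrt(np.pi)       # normalized toy density
chi = np.sqrt(rho)                         # amplitude chi = sqrt(rho)

# Second derivative of chi by central finite differences (interior points)
d2chi = np.empty_like(chi)
d2chi[1:-1] = (chi[2:] - 2.0 * chi[1:-1] + chi[:-2]) / dx**2
d2chi[0], d2chi[-1] = d2chi[1], d2chi[-2]  # crude copy at the boundaries

Q = -(hbar**2 / (2.0 * m)) * d2chi / chi

# Analytic check for rho ~ exp(-x^2): Q = (1 - x^2) / 2
Q_exact = 0.5 * (1.0 - x**2)
print("max |Q - Q_exact| on interior points:",
      np.max(np.abs(Q[1:-1] - Q_exact[1:-1])))
```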
In this work we will circumvent this problem by using a suitable expression for χ(r, r ′ ) which recovers the properties of the factorized wavefunction introduced by Kohout, and leads to an iterative procedure for Q(r, r ′ ); this is reported in the next sections. IV. MANY-PARTICLE DENSITY AND BOHM'S WAVEFUNCTION. As anticipated in previous sections our final aim is to develop a procedure for obtaining a Bohm's potential where many-particle effects are somehow incorporated. For this purpose, we redefine the wavefunction of the system by extending the form ψ(r) = φ(r)e i s(r) for one-particle to ψ(r 1 , r 2 ...r M ) = φ(r 1 , r 2 ...r M )e i S(r 1 ,r 2 ...r M ) , an M-particle wavefunction in an N-particle system, with M ≤ N. We should also require φ to be antisymmetric and S to be symmetric with respect to the M! possible pair permutations of the M particles; this requirement will preserve the antisymmetry of ψ. Later we will show that the symmetry of S corresponds to the fundamental physical property of indistinguishable particles. In the next sections, to remind the analogy with the one-particle case we will identify r with r 1 , thus φ(r 1 , r 2 ...r M ) = φ(r, r 2 ...r M ). This basically means that given N particles, our system is characterized by (or alternatively, we are interested in considering) M-particle interactions which produce observable effects on the average behavior of the system, thus M particles must be treated explicitly; we can describe those effects by considering the M-th approximation, where M = 1, ...N, the case M = 1 is the trivial one-particle case, M = 2 counts two-particle effects etc.etc.. For the moment we are not interested in S(r 1 , r 2 ...r M ) whose role will be clear later on. Let us focus on φ(r, r 2 ...r M ). In the one-particle case N · |φ(r)| 2 = ρ(r) = N Ω (N−1) |ψ(r, r 2 , r 3 ......r N )| 2 dr 2 dr 3 ....dr N = ρ(r), i.e. the average electron density of N indistinguishable particles projected on the real (3-dimensional) space. In analogy we can define an M-particle electron density as : where, in analogy to the one-particle case, one has: This form of φ(r, r 2 ...r M ) satisfies the requirements of the factorization in marginal and conditional part as defined by Kohout: In fact the conditional function of Eq.8, β(r ′ |r), can be written as: and (as can be easily verified) one obtains This kind of factorization was already considered by Hunter [23] who, once more, underlines that the nature of φ(r), as a marginal probability density function, makes unlikely the existence of zeros. In terms of probability φ(r, is interpreted as the square root of the classical expression for the conditional probability density. The conditional probability density, i.e. the probability density for M − 1 particles given the position of the particle r, is written as the probability density of the particles r, r 2 .....r M , (|φ(r, r 2 ....r M )| 2 ), divided by the probability density of particle r, (|φ(r)| 2 ). In general the mathematical structure of quantum mechanics leads to a non-commutative probability theory which coincides with the classical one only in case we treat commuting spaces [24]. Here for a commuting space we intend a set of variables representing physical quantities, such as positions, which do commute. In the language of quantum mechanics this means for example that position operators of the M-particles commute, i.e. a measurement of the position of particle 1 does not influence the measurement of the position of particle 2. 
The same example does not hold in the space of spins, however since we have restricted our analysis to spinless systems, we can apply the rules of classical probability. In Ref. [24] is reported a study about the possibility of defining a quantum correction which takes into account a non-commutative probability for the conditional function. Once we have formally defined the many-electron wavefunction via Eq.13, we can determine a reasonable initial guess for Bohm's potential, by defining a reasonable φ(r, r 2 ....r M ) : In the previous section we defined, in analogy to the one-particle case, an M-particle wavefunction for an N-particle system. Clearly the larger the value of M the more difficult the determination of the wavefunction, although in many cases two or three-particle effects may be enough for a basic understanding of some physical properties. So far by defining the many-particle wavefunction we simply gave a first approximation for the potential Q, where −V (r, ...r M ) is the electrostatic potential experienced by each single particle. In this form, Eq.15 is not very useful thus we should find a way for a simplification which can be based on physical well founded hypothesis. The first thing to notice is that ∇ r i S(r, ...r M ) = v r i (r, ...r M ) represents, within Bohm's framework, the velocity field of particle r i which depends upon (or alternatively, is influenced by) the positions of the other M − 1 particles as well. We can also notice that −V (r, ...r M ) can be represented as the electrostatic potential per particle and can be expressed in good approximation as: In Eq.16, ρ(w) is the one-particle electron density, obtained by integrating the M-particle density over M − 1 variables. The meaning of Eq.16 is that in a system of indistinguishable particles the average electrostatic potential experienced by one-particle is the same as that experienced by another particle. The "average" character is obtained by considering the one-particle electron density instead of the M-particle one. The use of a simpler rather than a a more complicated expression for V can be justified by the fact that we know how to express electrostatic properties and we know that Eq.16 is reasonable. What is unknown are the quantum effects represented by Q, for this reason we can take known quantities in their simplest approximation as far as they are known to be reasonable. In any case the important fact is that following Eq.16 one can write: The expression for V (r, ..r M ) given in Eq.17 is particularly useful for simplifying Eq.15. In fact Eq.15 can be written as : where 1 M Q(r, r 2 ..., r M ) is the quantum potential per interacting particle. At this point we can proceed having in mind the following points: Eq.18 is a 3M-dimensional non linear partial differential equation and we would like to have a solution which somehow recovers the dynamical physics contained in Bohm's approach. The natural simplification of Eq.18 is to decompose it in a system of M one-particle equations coupled through S(r, r 2 ...r M ) and Q(r, r 2 ..., r M ), where each equation describes the squared modulus of the velocity field of a particular electron ∇ i S(r, ...r M ) subject to a potential per particle V (r i ) + 1 M Q(r, r 2 ..., r M ) . Clearly the sum of solutions of single equations is also a solution for the initial one. 
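The decomposition just described can be written compactly. Since the numbered equations (Eqs. 15-19) are not legible in this copy, the following restatement is a plausible reconstruction based on the stationary quantum Hamilton-Jacobi equation and the per-particle splitting described in the text, not a verbatim reproduction of the original equations.

```latex
% Plausible reconstruction (not verbatim): stationary Hamilton-Jacobi equation for M particles
% split into M coupled one-particle equations, each with the potential per particle
% V(r_i) + Q(r_1,...,r_M)/M.
\begin{align}
  \sum_{i=1}^{M}\frac{\bigl(\nabla_i S(\mathbf r_1,\dots,\mathbf r_M)\bigr)^{2}}{2m}
    &= E - \sum_{i=1}^{M} V(\mathbf r_i) - Q(\mathbf r_1,\dots,\mathbf r_M), \\
  \frac{\bigl(\nabla_i S(\mathbf r_1,\dots,\mathbf r_M)\bigr)^{2}}{2m}
    &= \frac{E}{M} - V(\mathbf r_i) - \frac{1}{M}\,Q(\mathbf r_1,\dots,\mathbf r_M),
    \qquad i = 1,\dots,M .
\end{align}
```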
In mathematical terms this kind of solution represents only one possible choice (a particular solution), while in physical terms it represents the direct extension of Bohm's one-particle dynamical description to a system of M particles. In the next section we show that, by using Eq. 19, a self-consistent procedure for Q(r, ..., r_M) can be determined.

S must be symmetric under any permutation of the N particles. This fundamental property is a direct consequence of the fact that the particles are indistinguishable, and it implies that the velocity field of one particle can be obtained from that of another via particle permutation (see Appendix). The symmetry of the total wavefunction can be preserved by taking φ(r, r_2, ..., r_M) antisymmetric with respect to any pair exchange of the M particles. The symmetry of S is what makes the iterative procedure possible and physically reasonable. In fact, we solve Eq. 20 with respect to r and obtain S(r) as in Eq. 21, formally neglecting, for the moment, the part G(r_2, ..., r_M) of Eq. 21. Then, in order to obtain a global solution which satisfies the permutation (symmetry) criterion, we write the solution for S at the first iterative step as a combination symmetrized over particle permutations. Numerical methods for equations of this eikonal/Hamilton-Jacobi type are well developed (see [25,26,27,28,29,30,31,32,33] and references therein). In particular, the fast sweeping algorithms of Ref. [30] and the robust algorithms for multidimensional Hamilton-Jacobi equations of Refs. [31,32,33] represent an extremely useful approach for a computer implementation of this procedure (a minimal one-dimensional illustration of the fast-sweeping idea is sketched after the Appendix). Although these latter algorithms deal with the more general non-stationary problem, in principle they can be adapted to the stationary case and would also make it possible to extend the procedure to non-stationary cases.

VII. CONCLUSIONS

We propose a general method to treat Bohm's potential in its 3N-dimensional rigorous form. Inevitably, there are several physical approximations that one must accept: we consider a spinless system, so that the concept of classical probability can be applied to a quantum system in a stationary state. This choice offers technical advantages but is not physically obvious; if the spin variables are considered explicitly, the basis of our procedure remains valid, but one should find an appropriate way to define a more general wavefunction in which the conditional function is determined by non-commutative probability principles, as suggested in Ref. [24]. The separation of the many-electron equation into single but mutually dependent one-particle equations is not unique from a rigorous mathematical point of view, but it is based on a reasonable physical approximation. From a technical-mathematical point of view, the major problem is the solution of the nonlinear partial differential equation for S (the eikonal equation). As noted in several quantum mechanics textbooks (see for example [34]), solving this equation is not an easy task; however, the literature offers several methods, and in sufficiently simple cases the eikonal equation reduces to independent ordinary first-order differential equations which can easily be solved. Such simple cases can be used as a first approximation of more complicated systems. As stated before, it is important to underline that the intention of this work is to propose a procedure to properly treat and use the Bohm potential in fields where it is currently (and extensively) employed for practical applications.
We do not claim that this procedure is computationally or methodologically more convenient than others for solving general many-electron problems; however, the method may represent a complementary theoretical approach to the standard ones used in current research. Owing to the deterministic interpretation of quantum mechanics on which the method is based, effects which cannot be described by standard Hartree-Fock or DFT may be revealed. This issue, however, requires a much deeper analysis which goes beyond the purpose of this work and may be treated elsewhere.

VIII. ACKNOWLEDGMENTS

I would like to thank Dr. M. Kohout, Prof. T. Vilgis and Dr. E. Cappelluti for a critical reading of the manuscript and helpful suggestions.

IX. APPENDIX

For simplicity let us consider a two-particle system (r, r′). Let us suppose that from Eq. 20 for (r, r′), integrated with respect to r, the following solution is found:

f(r, r′) = g(r, r′) + h(r) + l(r′),

where g(r, r′) = g(r′, r) and l(r′) is unknown, since it represents the constant (with respect to r) obtained by solving the equation with respect to r. From Eq. 24 it follows that the formal expression for the velocity field of particle r is

v_r(r, r′) = ∇_r [g(r, r′) + h(r)],

which means v_r(r, r′) = v_1(r, r′) + v_2(r) (Eq. 26), while for particle r′ we have

v_{r′}(r, r′) = ∇_{r′} [g(r, r′) + l(r′)] = v_1(r′, r) + v_3(r′).

Because the particles are indistinguishable, one must obtain v_{r′}(r′, r) from v_r(r, r′) by permutation of (r, r′); thus the unknown function l(r′) must coincide with h(r′). As one can easily see, by taking f(r, r′) = g(r, r′) + h(r) and making it symmetric with respect to the permutation (r, r′) → (r′, r), we reach the same result for l(r′). In the most general case g(r, r′) is not necessarily equal to g(r′, r); however, the same procedure (and principles) can be applied, with the only difference that g must also be made symmetric, i.e. g_fin(r, r′) = [g(r, r′) + g(r′, r)]/2 and h_fin(r, r′) = [h(r) + h(r′)]/2. In the case of more than two particles the same argument can be used by taking into account all possible particle permutations.

(Caption fragment: a STEP denotes going through all of the 3 (or 4) coupled equations once; using the final Q or S obtained at one STEP, a second STEP can be started. S_ij (Q_ij) is the function S (Q) calculated at different stages of the procedure: i is the number of equations solved within the system, i = 0 is the starting (initial-guess) expression for Q, and j is the number of global iterations, i.e. STEPs.)
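The symmetrization step described in the Appendix lends itself to a direct numerical check. The following is a minimal sketch only (not the author's code); the callables g and h and the toy functions are illustrative placeholders.

```python
import numpy as np

def symmetrize_two_particle(g, h, r, rp):
    """Symmetrize a two-particle phase f(r, r') = g(r, r') + h(r) under the
    exchange (r, r') -> (r', r), following the appendix prescription:
    g_fin = [g(r, r') + g(r', r)]/2 and h_fin = [h(r) + h(r')]/2."""
    g_fin = 0.5 * (g(r, rp) + g(rp, r))   # symmetric two-particle part
    h_fin = 0.5 * (h(r) + h(rp))          # symmetric one-particle part
    return g_fin + h_fin                  # f_fin(r, r') = f_fin(r', r)

if __name__ == "__main__":
    # Toy, asymmetric g and simple h (purely illustrative choices).
    g = lambda a, b: np.sum(a * b**2)
    h = lambda a: np.sum(a**2)
    r, rp = np.array([0.1, 0.3, 0.0]), np.array([0.2, 0.1, 0.0])
    f1 = symmetrize_two_particle(g, h, r, rp)
    f2 = symmetrize_two_particle(g, h, rp, r)
    assert np.isclose(f1, f2)  # invariance under particle exchange
```

Likewise, the fast sweeping algorithms cited above (Ref. [30]) for eikonal-type equations can be illustrated purely schematically, and only in one dimension, as follows; the grid, source placement and boundary handling are simplifying assumptions and this is not the multidimensional scheme of the cited references.

```python
import numpy as np

def eikonal_1d_fast_sweep(slowness, dx, src_idx=0, n_sweeps=2):
    """Solve |dS/dx| = slowness(x) on a 1D grid by Gauss-Seidel fast sweeping.

    slowness: positive array (right-hand side of the eikonal equation);
    dx: grid spacing; src_idx: index where the boundary value S = 0 is set.
    Two alternating sweeps suffice in one dimension.
    """
    S = np.full(len(slowness), np.inf)
    S[src_idx] = 0.0
    for _ in range(n_sweeps):
        for i in range(1, len(S)):            # left-to-right sweep
            S[i] = min(S[i], S[i - 1] + slowness[i] * dx)
        for i in range(len(S) - 2, -1, -1):   # right-to-left sweep
            S[i] = min(S[i], S[i + 1] + slowness[i] * dx)
    return S

if __name__ == "__main__":
    # Constant slowness gives S(x) = |x - x_src|.
    dx = 0.01
    S = eikonal_1d_fast_sweep(np.ones(101), dx, src_idx=50)
    assert np.isclose(S[60], 10 * dx)
```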
Biology of Aging ; properties . Some students who have not declared a discipline, and have obtained the approval of their academic advisor and the Senior Associate Dean of the GSBS, may sign up for INTD 6090-1GEN.Grading will be Satisfactory or Unsatisfactory.A list of seminars from all disciplines will be posted on the Graduate School Web site.Each Section Director will determine, for the relevant IBMS-6090 section, the policy for tracking student's attendance and participation in seminars. IBMS 6097.Research.0.5-12 Credit Hours.This course is required of all students in the IBMS program, except of those who have signed up for Final Hours.Students are required to attend a minimum of 16 seminars per semester and to complete a requirement to demonstrate their attendance and participation. To fulfill the minimum number of seminars, students may include seminars offered by disciplines other than their own in which they are enrolled.However, to enroll, students should obtain permission from the course Section Director affiliated with the appropriate discipline.The course numbers of the individual course sections are INTD 6090-1GEN, 6090-2BA, 6090-3CB, 6090-4CGM, 6090-5MIM, 6090-6MBB, 6090-7NS and 6090-8PP for the IBMS Disciplines: Biology of Aging (BA), Cancer Biology (CB), Cell Biology, Genetics & Molecular Medicine (CGM), Molecular Biophysics & Biochemistry (MBB), Molecular Immunology & Microbiology (MIM), Neuroscience (NS), and Physiology & Pharmacology (PP).Some students who have not declared a discipline, and have obtained the approval of their academic advisor and the Senior Associate Dean of the GSBS, may sign up for INTD 6090-1GEN.Grading will be Satisfactory or Unsatisfactory.A list of seminars from all disciplines will be posted on the Graduate School Web site.Each Section Director will determine, for the relevant IBMS-6090 section, the policy for tracking student's attendance and participation in seminars. Registration for at least one term is required for M.S. candidates.Prerequisite: Admission to candidacy for the Master of Science degree is required. The objective of the Qualifying Examination (QE) is to determine if a student has met programmatic expectations with regard to: i) Acquiring a level of scientific reasoning and a knowledge base in his/her field of study appropriate for a graduate student at the current stage of training; ii) Demonstrating skills of problem-solving and development of experimental strategies designed to test hypotheses associated with a specific scientific problem; and iii) Demonstrating the ability to defend experimental strategies proposed for solving scientific problems.Successful completion of the QE is required for Advancement to Candidacy and continuation in the IBMS Ph.D. 
program.During the Spring semester of Year 2 (4th semester overall) of the student's program, the QE will be administered by a faculty committee approved by a student's Discipline leadership.Each IBMS discipline will administer the QE process for its students so as to achieve the goals of the discipline while satisfying the expectations of the IBMS graduate program.In general, the QE requires the solving of a relevant unsolved problem in the biomedical sciences by writing a research proposal based on an idea conceived and developed by the student, followed by an oral defenseof-proposal to explore the student's problem-solving process, and the soundness of the student's experimental design.Following the QE, a report will be submitted by the chair of the examination committee to the student's discipline leadership indicating the outcome of the exam and any recommendations that may be required to foster further academic progress by the student.IBMS 7001 is divided into 7 modules overseen by the 7 IBMS Disciplines, each that is responsible for providing its students with a detailed description of the examination process, and for ensuring that the programmatic expectations and goals of the QE are met. IBMS 7010. Student Journal Club & Research Presentation. 1-2 Credit Hours.This course is designed to provide graduate students with experience in critical reading of the primary literature, seminar preparation and presentation, data analysis and interpretation, and group-based learning as they relate to the graduate program in Integrated Biomedical Sciences.This course is required of all students in the IBMS program starting in their second year except of those who have signed up for Final Hours. Students are required to attend a minimum of 16 total presentations per semester (journal club or research presentations) and to complete a requirement to demonstrate their attendance and participation.Students are also required to present one journal club presentation per semester until they are Advanced to Candidacy.Once Advanced to Candidacy, the student will present one journal club presentation per academic year and one research presentation per academic year such that the student is giving at least one presentation in each semester.To enroll, students should obtain permission from the course Section Director affiliated with the appropriate discipline.A list of journal clubs from all disciplines will be posted on the Graduate School Web site.Each Section Director will determine, for the relevant IBMS 7010 section, the policy for tracking student's attendance and participation and will be responsible for assigning a final grade. IBMS 7099.).Some students who have not declared a discipline, and have obtained the approval of their academic advisor and the Senior Associate Dean of the GSBS, may sign up for INTD 6090-1GEN.Grading will be Satisfactory or Unsatisfactory.A list of seminars from all disciplines will be posted on the Graduate School Web site.Each Section Director will determine, for the relevant IBMS-6090 section, the policy for tracking student's attendance and participation in seminars.Registration is only permitted following a student's admission to candidacy for the PhD degree, approval of the dissertation research proposal and approval of the membership of the candidate's Supervising Committee. Students will work directly with a faculty advisor or assistant dean to develop an independent plan of study. 
Students will work with the course director and Assistant Director of Global Health to identify an appropriate international elective site, using established sites/programs or one that the student discovers on their own.All rotations must be vetted and approved by the course director and will adhere to a community service-learning model that is a structured educational experience combining community service with preparation and reflection.Students are expected to help shape the learning experience around community-identified needs and advance insight related to the context in which service is provided, the connection between service and academic coursework, and students' roles as citizens and professionals.Students will spend 4 weeks living and working at an international service site.Sites may allow for a range of experiences, such as participating in patient care, conducting clinical or public health research, and/or participating in a language immersion program.There may also be opportunities for patient education and emphasis on efforts of local empowerment, aiming to build up the communities in a sustainable way.Regardless of the focus, all sites must be supervised by qualified health care providers.Students are encouraged to integrate themselves into the health care delivery system, to explore community needs that they could address, and when possible, to strive to make an impact through community education, home visits, and research.This is a longitudinal 4th-year elective to support and nourish the inherent altruism of our students.This elective will bring together like-minded students and faculty who have a passion for caring for the medically underserved in their communities.The students will take a leadership role in managing and directing the student-run clinics at the Alpha Home, SAMM Transitional Living and Learning Center, Haven for Hope, Travis Park Dermatology (under faculty supervision).Clinical experiences will be at these clinics.This elective will include a few evening seminars throughout the year in which students and faculty meet to discuss social justice, how to start a free clinic, homelessness and topics chosen by the students.Every student will complete a project of their choice over the year. In this longitudinal course, students will be required to undertake an independent study into a specific issue in medical ethics or medical humanities.Students will be required to read on research methods in medical ethics as well as literature in their issue of interest, and then to propose and conduct an original study project, a literature review, a position paper, or an ethical analysis of a particular topic or case. Students will be expected to write an academically rigorous final research report of 10 to 15 pages.Students will be encouraged to produce a final paper that can be submitted for publication in a peer-reviewed bioethics or medical humanities journal.Students will be required to meet with the instructor and/or chosen faculty advisor over the course for assistance, guidance, and discussion.(Center for Medical Humanities and Ethics). INTD 4019. Clinical Ethics. 2 Credit Hours. 
Students in this two-week course will have the opportunity to focus on work in clinical ethics consultation.The student will be required to participate in rounds as an ethicist, do in-depth reading on clinical ethics consultation, observe clinical ethics consults, attend ethics committee meetings, and provide an educational seminar to hospital staff on an issue of ethical significance.This is a 2-week, in person course for 4th-year medical students who are planning future work in marginalized communities either locally or globally.This preparatory course uses a multidisciplinary, assetbased approach to provide a foundation of practical knowledge in community engagement to optimize the students' experiences, facilitate their adaptation to working in diverse settings, and maximize their impact in the communities where they serve.Topics include community partnerships and responsiveness to community needs, chronic and infectious illnesses of high burden in marginalized communities, prioritizing community resources, advocacy, health equity, ethical dilemmas, cultural humility, and professionalism.Course material is presented through a variety of approaches, including lectures, smallgroup case discussions, laboratory sessions, and online learning modules. Students will be introduced to the novel coronavirus SARS-CoV-2 and the disease it causes, COVID-19.They will review emerging information pertaining to the virus and disease including virology, epidemiology and pathophysiology.They will also be engaged with material covering leadership principles, communication and social determinants of health.They will participate in online activities and discussions to further facilitate learning.This elective is completely online.Prerequisites: Completed MS1 and MS2 curriculum. It is an interactive, inter-professional course that engages students in music listening sessions to teach students active listening skills. 
Through various forms of music, students will learn how to actively listen for specific details to gain insight on meaning, become comfortable with ambiguity and interpretation, and develop pattern recognition skills to quickly recognize deviation.Students will also develop stronger methodology for writing patients notes through conceptual practice of SOAP format notes for music pieces.Taught jointly by UTHSCSA faculty and professional musicians, this strategy of applying practical skills to an abstract concept such as music will refine these skills for students in clinical settings.Specifically, this course aims to improve interpersonal communication skills, and organizational note writing.This is also an opportunity for students to practice problems solving with other healthcare professionals.No longer can medical students wait until their senior academic year to prepare for USMLE Step 2 and discern their chosen specialty.In this course, which is to be delivered during the spring immediately prior to their senior year, medical students will be given instruction on specialty discernment and trained in test preparation techniques.Specialty discernment requires various forms of advising and mentoring.In this course, students will receive general instruction on the process of specialty discernment and will use online resources to prepare for residency applications in the context of academic metrics, specific program requirements, and specialty-based resources.The transition from undergraduate to graduate medical education is one of intense focus.While the transition seems as if it has a marked delineation, it exists on a continuum.In order to support the active process of creating goals, students need to reflect on their experiences as a clerkship student and create expectations of themselves in the context of their chosen specialty and career.Goal orientation in the context of mastery orientation defines success in terms of how well the knowledge, skills, and abilities have been demonstrated.(Cutrer, et al.)This type of goal orientation requires reframing and additional advising depending on the phase of the learner.Test preparation does not have to be separate and dedicated from the medical curriculum.In fact, directing learners to recognize opportunities to use exam preparation to build and apply more clinically-minded strategies, even when the content of the exam may not focus on clinical reasoning or diagnosis, might better prepare them to learn from their patients and to apply similar strategies later on.(Swan Sein, et al., 2021) The goal of this 8-hour course is to help senior medical students, who will be residents in a few months, develop teaching skills that will enhance the quality of their interactions with students.The course will be conducted in an interactive workshop format to allow participants to practice important teaching skills for residents.These include 1) orienting and priming students to their responsibilities and roles and accepting the personal role of teacher and role model, 2) giving feedback to improve student performance, 3) helping students to improve their patient presentations-the use of questioning, and 4) coaching procedural and technical skills.The participants will practice these skills and receive feedback from their course peers and instructors based on the guidelines for clinical teachers in action with students and provide critiques.Large and small group discussions and role plays will be used to reinforce teaching principles.This course, the second 
component of our broad survey of the basics of neuroscience, begins at the level of the neural circuit, and guides the students through an understanding of increasingly complex levels of organization and function in the brain.Topics include neurotransmitter systems, sensory and motor function, motivated behavior, regulation and integration of autonomic, behavioral, and emotional responses in the limbic system, higher order cognitive processes, and the neurobiological basis underlying some important psychiatric disorders and their treatment. INTD 5046. Metanalysis In Cognitive Neuroimaging. 2.5 Credit Hours. The objective of this course is to familiarize students with human functional brain imaging methods, experimental designs, statistical analyses, inferential strategies, and content.Students are guided through a literature-based research project that culminates in a quantitative metanalysis of a set of studies using similar tasks. INTD 5047. Neuroanatomy. 2 Credit Hours. The purpose of this course is to provide students with a practical working knowledge of the structure of both the peripheral and central nervous system.The emphasis will be on the organization of the human brain, although the brains of other species may also be included if appropriate for a specific brain region.The course will look at each of the individual components of the central nervous system in some depth but will also emphasize the complex integration of these various components into a functional brain.It is crucial to understand the intricate process of translating basic research into market driven products, navigate the complex pathways of intellectual property management and the regulatory affairs of agencies such as the FDA.This course will offer students in biomedical sciences the opportunity to integrate industry-relevant training and experience with their basic science education.The course will explore the marketing and regulatory process by which a biomedical product is developed and brought to commercialization. 
INTD 5075.Complementary Healthcare for the Clinician.0 Credit Hours.The goal of this elective is to introduce future doctors to practices outside of the classical medical school curriculum that promote an evidence-based approach to wellness.This is so that the medical students of the UTHSC School of Medicine are informed about the reality, evidence and rumor surrounding a variety of commonly used alternative and supplementary healthcare practices.The of this class is not to make the student an expert in areas such as acupuncture or yoga, but to be well informed of the role of such practices as it relates to patient treatment and wellness.To this end, all the classes will have a practical component which will allow the students to experience the alternative modalities in a structured setting.This course provides an in-depth learning experience that instructs students on the fundamentals of cell biology as well as prepares the student to evaluate and design new research in the cutting-edge areas of modern cell biology.The course combines a didactic program of lectures along with a small-group discussion format in which students interact closely with a group of faculty who have active research programs.The course focuses on active areas of research in cell biology: Cell Signaling and Communication, Cell Growth, and Cell Death.Each week, the faculty jointly discuss key publications that serve to bridge the gap between the fundamental underpinnings of the field and the state of the art in that area.Students and faculty will then jointly discuss key publications that serve to bridge the gap between the fundamental underpinnnings of the field and the state of the art in that area. INTD 6008. Mitochondria & Apoptosis. 1 Credit Hour. This course will focus in depth on Mitochondria and Apoptosis.Topics will include: Mitochondria and Respiration; Mitochondria and Reactive Oxygen Species; Mitochondria and Apoptosis.It will provide an opportunity for a unique learning experience where the student can prepare to evaluate and design new research in the cutting-edge areas of modern cell biology and molecular biology.Instead of a didactic program of lectures, the entire course comprises a small-group format in which students interact closely with a group of faculty who have active research programs.Each week, faculty will provide students with a brief overview of the research area.Students and faculty will then jointly discuss key publications that serve to bridge the gap between the student's prior understanding of the field and the state of the art in that area. INTD 6009.Advanced Molecular Biology. 2 Credit Hours.This course will provide an in-depth learning experience on the fundamentals of molecular biology as well as prepare the student to evaluate and design new research in the cutting-edge areas of modern molecular biology.The course combines a didactic program of lectures along with a small-group discussion format in which students interact closely with a group of faculty who have active research programs. The course focuses on active areas of research in molecular biology: Chromatin structure, Transcription, DNA Replication and Repair, Recombination, RNA processing and regulation, Protein processing, targeting and degradation.Each week, the faculty provide students with didactic lectures on a current research area.Students and faculty then jointly discuss Key publications that serve to bridge the gap between the fundamental underpinnings of the field and the state of the art in that area. 
INTD 6011. Introduction To Science Of Teaching. 1 Credit Hour. This course will provide insight into the basic skills of learning and teaching.Faculty from the Academic Center for Excellence in Teaching and the Graduate School will provide the opportunity to learn the skills, strategies, and experiences for a future in academia and teaching.Topics include lecture presentations on why scientists choose to teach, planning a student learning experience in addition to developing a lecture syllabus, curriculum and teaching portfolio and philosophy.and some cost (airfare and small project fee), and is available October, January, and April, 3) Programs in Nicaragua, Mexico, Panama, and Guatemala, and 4) Other sites available through online directory: http:// www.globalhealth-cc.org/GHEC/Resources/GHonline.htm.All rotations share a commitment to service learning -medical education and selfreflection that arises out of service to needy populations.Students spend up to 4 weeks (or possibly longer) living in an international site and participating in the care of patients, under the supervision of local and visiting health care providers.The clinical settings and caseload will vary based on the location.There may be opportunities for patient education and emphasis on efforts of local empowerment, aiming to build up the communities in a sustainable way.Students will be expected to integrate themselves into the health care delivery system, and when possible, to strive to make an impact through community education and home visits.For certain Latin American sites, fluency in Spanish is a prerequisite.Students are encouraged to seek similar service learning experiences with underprivileged populations in San Antonio and Border communities prior to or after the rotation.End of rotation "reflection essays" are required and will serve to process student experiences.In this course you are required to read short stories, poems, and a book of nonfiction.While many of the stories or poems directly address medical or ethical issues, the primary purpose is not to enhance your store of knowledge in these areas, but to promote your appreciation of these works through discussions with other students (online via Canvas discussions and in class) and with authors and lecturers.Your own contributions to the course -not just the insights you've gained as medical students but the wisdom you bring to the class as human beings -will be critical to its success.We hope that the readings will help you prepare for and process your clinical experiences, furthering your development as a person as well as physician.There will be no "right" or "wrong" answers in this course; rather, our goal is to encourage thoughtful and serious responses to the readings and a lively and fulfilling conversation about them and the issues they raise.MSIV students will receive two credits for completion of this longitudinal elective.All students are expected to participate in class discussions.Grades are earned by reading assignments, attendance at class meetings, and posting primary and secondary responses to posted discussion questions.Open for Cross Enrollment on Space Available Basis. INTD 7020.Clinical Patient Management. 
5 Credit Hours.This course is designed to help students develop skills in clinical behavioral dentistry through small group discussions, lectures, and routine patient treatment by application of the principles of coordinating patient care; communicating effectively with colleagues, staff, and faculty; and managing time, records, and environment.The students are required to manage their comprehensive care patients in the Junior Clinic following the principles presented in this course. INTD 7074. Topics In Translational Medical Product Development. 1 Credit Hour. It is crucial to understand the intricate process of translating basic research into market driven products, navigate the complex pathways of intellectual property management and the regulatory affairs of agencies such as the FDA.This course will offer students in biomedical sciences the opportunity to integrate industry-relevant training and experience with their basic science education.The course will explore the marketing and regulatory process by which a biomedical product is developed and brought to commercialization. Students will have the opportunity to use this course to study for the National Board, Part II examination, according to their own need.This course also will serve as a framework for a student returning from a leave of absence or from other protracted time away from classes or clinic. At the conclusion of the course, the enrolled student must demonstrate knowledge and/or skills and/or values consistent with the expectations for entering the level of course study from which the student left.An individualized course of study will be developed once the student is enrolled. INTD 4025 . Healthcare Practice and Policy Elective.0.5 Credit Hours.The Healthcare Practice Elective is an introductory-level, discussionbased, eight-hour course targeted to fourth-year medical students.The course focuses generally on practice and policy issues of payment methodologies, cost-effectiveness, and access to care.INTD 4030.Serving Marginalized Communities: From local to global. 2 Credit Hours. This course is required of all students in the IBMS program, except of those who have signed up for Final Hours.Students are required to attend a minimum of 16 seminars per semester and to complete a requirement to demonstrate their attendance and participation.To fulfill the minimum number of seminars, students may include seminars offered by disciplines other than their own in which they are enrolled.However, to enroll, students should obtain permission from the course Section Director affiliated with the appropriate discipline.The course numbers of the individual course sections are INTD 6090-1GEN, 6090-2BA, 6090-3CB, 6090-4CGM, 6090-5MIM, 6090-6MBB, 6090-7NS and 6090-8PP for the IBMS Disciplines: Biology of Aging (BA), Cancer Biology (CB), Cell Biology, Genetics & Molecular Medicine (CGM), Molecular Biophysics & Biochemistry (MBB), Molecular Immunology & Microbiology (MIM), Neuroscience (NS), and Physiology & Pharmacology (PP INTD 3058. Hospice and Palliative Medicine. 0 Credit Hours. The purposes of this completely online course are to: 1. Prepare early clinical students to increase knowledge in clinical settings including: a. Exposure to healthcare team members, b.Exposure to roles on clerkship (H&Ps, orders, SOAP notes, prescriptions, etc.), c.Interpretation of EKGs and radiographs, d.Interpretation of normal/abnormal lab values, e. 
Reflection essays serve as a way to process experiences, including clinical cases, new perspectives gained, and analysis of health care disparities, and strategies for the overcoming poverty-related health problems.Students are encouraged to share their experiences upon return through a formal presentation.INTD 3002.School of Medicine Research Elective.0 Credit Hours.Students will participate in basic or clinical research projects under the supervision of university faculty.The goal of this elective is to immerse students in a rich research environment and provide an opportunity to work with research mentors to fully engage in the research process from writing the proposal to collecting the data to disseminating research results.This elective is open to students who already have an established working relationship with a faculty member and who wish time to continue their work, students who wish to establish a new project, and for students who are in the MD-MPH degree program and MD with Distinction in Research Program.Interested students must contact the course director prior to the enrollment date to express interest in the elective and receive further instructions on the application process for the research and identification/ confirmation of the faculty mentor.INTD 3030.Clinical Foundations.3 Credit Hours. INTD 4012. Capstone II: Machine Learning and Artificial Intelligence for Health and Medicine. 4 Credit Hours. select mentors from UTHSCSA and UTSA.Completion of INTD 4011: Capstone I; Machine Learning and Artificial Intelligence for Health and Medicine.INTD 4015.Humanism in Medicine Fellowship. 2 Credit Hours. INTD 4058. Hospice and Palliative Medicine Elective. 4 Credit Hours. This The course will center on the Texas Medical Practice Act and applicable federal laws. 4048.Art Rounds. 2 Credit Hours.This is an interactive, interprofessional course that takes students to the McNay Art Museum to learn physical observation skills.Studies demonstrate that increased observational skills translate to improved physical examination skills.Using artwork as patients, students will have the opportunity to learn how to observe details and how to interpret images based on available evidence.Taught jointly by Health Science Center faculty and McNay museum educators, students will have the opportunity to develop and hone their observation, problem solving, and assessment skills.They will also observe, interpret, and give case reports on the original works of art to teach them the skill of verbalizing descriptions of what is seen, and not to accept assumptions made with a first impression.Open for Cross Enrollment on Space Available Basis.rotationoffers clinical experience in Hospice and Palliative Medicine (HPM).Palliative care provides treatment for seriously ill hospitalized and ambulatory patients and focuses on symptom management, enhancement of function, physical comfort, quality of life, psychosocial support, and communication about the goals of medical care for the patients as well as their families.presentingresults and evaluating peers.The course objectives include facilitating systems thinking, exposing students to the ACGME general competencies (with emphasis on practice-based learning and improvement and systems-based practice), increasing understanding of health care economics and working in teams.INTD 4105.Medical Jurisprudence.0.5 Credit Hours. The Skin Around Us: A View of Skin Disease from a Humanities Perspective. 4 Credit Hours. 
This elective is for fourth year medical students with a special interest in learning about skin diseases through a humanities perspective.Throughout the four week course, students will attend daily clinics, create a project and write an essay on activities encountered during the elective.The students will also complete brief writing assignments each week after watching videos, movies, and/or reading books. INTD 4108.Bridging the Gap: Transition from UME to GME. 4 Credit Hours.Medical education is changing with the introduction of a United States Licensure Medical Examination (USMLE) Step 1 scored on a pass/fail basis, increasing focus on the Undergraduate Medical Education to Graduate Medical Education transition, and changes to the residency application process. 4115. Advanced Electronic Health Record Training (EPIC Based). 4 Credit Hours .The primary learning objective of this elective is to prepare students for advanced use of the EPIC EMR in clinical and research environments.Successful completion of this course provides a formal certification as a Physician Builder in EPIC.That designation will permit students to take advantage of advanced features in EPIC as they advance in their careers.The course is broken down into two sections: Physician Builder-Basic and Physician Builder-Advanced.This course is a requirement for students enrolling in the MD/MS in AI dual degree program but is available to all medical students in good standing at the LSOM.Students must have a working familiarity with the EPIC EMR.One way to establish this familiarity is to have completed a clinical rotation in which EPIC EMR was utilized as a part of the assigned clinical work.Course fees: If the student is not part of the MD/MS in Artificial Intelligence dual-degree program, fee for the EPIC training course will need to be paid by student.INTD 4205. Veritas Mentors in Medicine Longitudinal Elective. 2 Credit Hours. This is longitudinal elective and the course work requirements will be for 2 week credit and must be complete by March 1st.Evaluation of MiM performance will include feedback from faculty mentors and students. INTD 4210. School of Medicine Research Elective Level 1. 4 Credit Hours. 
Medical research is multidisciplinary and broad in scope.Students will participate in basic, clinical research, quality improvement, or patient safety research projects under the supervision of faculty in the Health Science Center.The goal of this elective is to immerse students in a rich scholarly environment and provide an opportunity to work with research/ faculty mentors to fully engage in a scholarly research process from writing the proposal to collecting the data to disseminating results.This elective is open to students who already have an established working relationship with a faculty member and who wish time to continue their work, students who wish to establish a new project, and for students who are in the MD-MPH degree program and MD with Distinction in Research Program.Interested students must submit a research elective application which includes the faculty mentor the student will work, to the office of UME, no later than 12 weeks before the research elective is to begin.Applications will be reviewed and confirmed or declined no later than 8 weeks prior to the proposed start date of the elective.Students will be able to 1) Formulate a research question and identify a research methodology to answer that question; 2) understand research ethics and apply an ethical approach to research design, implementation, and dissemination 3) design a research study and gather quality data; 4) apply and interpret basic biostatistics relevant to the individual research project; 5) write scientific reports.The supervising faculty member will evaluate the performance of the student using a standard, research specific, medical student evaluation form.Students will receive a Pass or Fail summative grade at the conclusion of the 4 week elective.Faculty will be expected to give the student formative feedback after two weeks to assist the student in meeting all expectations to pass the elective.Medical research is multidisciplinary and broad in scope.Students will participate in basic, clinical research, quality improvement, or patient safety research projects under the supervision of faculty in the Health Science Center.The goal of this elective is to immerse students in a rich scholarly environment and provide an opportunity to work with research/ faculty mentors to fully engage in a scholarly research process from writing the proposal to collecting the data to disseminating results.This elective is open to students who already have an established working relationship with a faculty member and reflects their increasing experience with the research process.INTD 4210 Level 1 elective or evidence of past experience knowledge and/or skills is a prerequisite.The expectation is that enrolled students will continue with research experiences begun in INTD 4210 Level 1 including students pursuing the MD-MPH degree and MD with Distinction in Research.Interested students must submit a research elective application which includes the faculty mentor the student will work, to the office of UME, no later than 12 weeks before the research elective is to begin.Applications will be reviewed and confirmed or declined no later than 8 weeks prior to the proposed start date of the elective. INTD 4211.School of Medicine Research Elective Level 2. 4 Credit Hours. INTD 4212. School of Medicine Research Elective Level 3. 4 Credit Hours. 
Medical research is multidisciplinary and broad in scope.Students will participate in basic, clinical research, quality improvement, or patient safety research projects under the supervision of faculty in the Health Science Center.The goal of this elective is to immerse students in a rich scholarly environment and provide an opportunity to work with research/ faculty mentors to fully engage in a scholarly research process from writing the proposal to collecting the data to disseminating results.Students enrolled in this course will have prior experience with research and ongoing research activities.As such, this elective is open to students who already have an established working relationship with a faculty member and reflects their increasing experience with the research process.INTD 4211 Level 2 electives is a prerequisite.As with INTD 4211 Level 2, the expectation is that enrolled students will continue with research experiences begun in INTD 4210 Level 1 and INTD 4211 Level 2 including students pursuing the MD-MPH degree and MD with Distinction in Research or produce evidence of past experience knowledge and/or skills which are deemed equivalent to these prerequisites.Interested students must submit a research elective application which includes the faculty mentor the student will work, to the office of UME, no later than 12 weeks before the research elective is to begin.Applications will be reviewed and confirmed or declined no later than 8 weeks prior to the proposed start date of the elective.Students will be able to formulate a research question and identify a research methodology to answer that question; understand research ethics and apply an ethical approach to research design, implementation, and dissemination; design a research study and gather quality data; apply and interpret basic biostatistics relevant to the individual research project; write scientific reports.The supervising faculty member will evaluate the performance of the student using a standard, research specific, medical student evaluation form.Students will receive a Pass or Fail summative grade at the conclusion of the 4 week elective.Faculty will be expected to give the student formative feedback after two weeks to assist the student in meeting all expectations to pass the elective. INTD 5005.Core Course 1: Biochemistry.2 Credit Hours.Topics to be covered include: protein structure; properties of enzymes; structure, biosynthesis, and function of lipids; pathways and regulation of carbohydrate metabolism and biosynthesis and regulation of amino acids, nucleotides, and related compounds.Prerequisites: consent of instructor. INTD 5007. Advanced Cellular And Molecular Biology. 4 Credit Hours. on a current research area.Students and faculty will then jointly discuss key publications that serve to bridge the gap between the fundamental underpinnings of the field and the state of the art in that area.INTD 5043.Fundamentals Of Neuroscience 2: Systems Neuroscience.3 Credit Hours. Laughter is the Best Medicine: An Interdisciplinary Elective about Humor, Healing, and Healthcare. 1 Credit Hour. 
The topics covered in the course are specifically designed to mesh in time with those covered in Fundamentals of Neuroscience 2 describing the function of these areas.For this reason, it would be best if these two courses were taken concomitantly.The course will be didactic with digital images, models, and wet specimens included in the course.This class is a serious look at humor!The physiological and psychological benefits of humor, as well as its therapeutic use with patient interactions, will be explored.Students will learn how to develop and improve their personal use of humor to combat burn out, through techniques to enhance coping skills and stress reduction.Student participation and interaction is integral to the content delivery. INTD 5067.Introduction to Programming for Biologists. 3 Credit Hours.This course covers fundamentals of computer programming.It is designed and tailored for biologists in three ways: 1) students can pass it with minimal mathematical background, 2) when possible, examples and exercises are based on biological data analyses, and 3) it prepares students for other courses that are focused on bioinformatics techniques and tools.The topics are similar to the first introductory course that a student would take in a computer science program including: An introduction to Unix operating systems (i.e., Linux and macOS), Research. 1 Credit Hour.This course is designed to familiarize students with the current literature related to cardiovascular disease.Each week a different research topic selected from the recent literature is presented and discussed.Students are expected to attend and participate in the discussions.In addition, students are required to prepare and present once during the semester.A list of previous and current course presentations will be available online.This elective allows for detailed in-depth study in a specific area of study.The area and mode of study are to be agreed upon by the student and instructor.The course may be repeated for credit when the area of study varies.Clock hours are to be arranged.Prerequisites: Graduate standing and consent of instructor.This course covers topics relevant to ethics in scientific research.The course is taught on a case-study basis, dealing with real and hypothetical situations relevant to the conduct of scientific research.Topics discussed will include, but will not be limited to: data management, peer review, recognizing scientific misconduct, authorship, and The University of Texas regulations relevant to human and animal research.This course is required of all doctoral graduate students. The purpose of this course is to determine the impact of the IPE course on developing IPE teams/teamwork and communication competencies relative to environmental health knowledge and its intersection with health equity.UTHSA students will complete IPE competencies pre-post surveys, a course evaluation and conduct a community service learning (CSL) activity to evaluate their understanding of IPE and environmental health and inequities.Open for Cross Enrollment on Space Available Basis.INTD 6002.Ethics In Research.0.5 Credit Hours. Resident Lecture Series in Psychiatric Disorders and Psychopharmacology. 1 Credit Hour. 
The course is recommended for Supervised Teaching Course INTD 6071.Computational biology is a rapidly emerging subfield of biomedical science.Acquiring basic computational skills will enable biologists to better understand and analyze "big data" and use novel approaches to answer biological questions.In addition, it will improve communication with computational scientists and bioinformaticians, thereby enhancing collaborations.The course consists of two modules.The first 5-week module is designed to gain familiarity with R coding.The second 3week module covers working in the Unix/Linux environment and the use of shell scripts.This course will be taught in the form of interactive hands-on computer classes in combination with homework assignments.No prior knowledge of programming or coding is required.This course is designed to prepare students for more advanced computational biology course work, such as INTD 6062 and CSAT 6095.Open for Cross Enrollment on Space Available Basis.By the end of this course, a student should be able to: explain in-depth the topics covered during the course, describe and discuss research publications in a wide variety of disciplines within the life sciences, critically analyze, interpret and evaluate scientific publications or presented research updates, identify and present emerging topics in their field of interest (as defined by the research of their mentor).The course is for PREP-UT Health Link students.This is an interdisciplinary advanced elective in which students attend 17 lectures from the Psychiatry Year One Residents' lecture series.These lectures focus on the psychopathology, epidemiology, and pharmacological treatments for illnesses such as schizophrenia, anxiety disorders, trauma related disorders, eating disorders, and sleep disorders.This is an interdisciplinary advanced elective in which students attend 20 lectures, selected from the full offering of daily one-hour lectures comprising the Neurology Residents' Basic Sciences lecture series.These lectures cover a range of topics, such as Epilepsy, Movement Disorders, the Thalamus, Parkinson's Disease, Alzheimer's Disease, Stroke, Sleep, etc., all given from a clinical perspective.In addition, graduate students will have the opportunity to observe or participate in at least two enrichment activities related topically to the lectures they attend, which may include such settings as case presentations, diagnostic training sessions, or clinical observations, again selected from the list of offerings included in the "Neurology Residents" series. INTD 6046. Resident Lecture Series in Psychiatric Disorders and Psychoparmacology II. 1 Credit Hour. This course, designed to assist graduate students and faculty in acquiring teaching skills, is composed of four modules, each covering a range of topics from lecture and clinical teaching to instructional development to assessing student achievement.The final problem set will be a capstone project where the students implement methods of their own choosing and compete to achieve the best model performance.Open for Cross Enrollment on Space Available Basis. Flow Cytometry: Principles and Applications. 2 Credit Hours. 
This course will cover the principles of flow cytometry, the components of cell analyzers and cell sorters, the applications of different assays in flow cytometry and the interpretation of flow cytometry data.Flow cytometry plays an essential role in helping to elucidate cell phenotype characterization and function in both clinical and research settings.The purpose of this course is to bring students up-to-date on the technology of flow cytometry and to help them gain knowledge in how to apply this tool for patient diagnosis as well as basic and translational research.This course will focus on recent findings and topics related to the underlying aspects of the neural basis of learning and memory.Students will have the opportunity to learn about: molecular basis of memory formation, consolidation and retrieval, memory and emotion, associative learning, memory and amnesia, and recognition memory and the medial temporal lobe.The lectures will be interactive and driven by discussions of key journal articles.Each week the first hour will be reserved for lecturing and the second hour will be reserved for a discussion of a journal article. Elective in International Medicine. 4 Credit Hours. This elective serves as a vehicle for students to participate in international medicine rotations.Students will work with a faculty sponsor to identify a program, either a pre-established site or a site discovered by the student which requires faculty approval.This elective includes: 1) The Center for Medical Humanities and Ethics International Scholars Program in India, a competitive program requiring a separate application through the department of Medicine, 2) Shoulder to Shoulder program in Latin America, which requires a separate application process 7005. Indian Health Care Preceptorship. 4 Credit Hours. This elective offers the opportunity for an experience in the health care of Native Americans, coordinated through the Indian Health Service.Most experiences involve both inpatient and outpatient care under direct supervision of board certified family physicians or internists.Educational activities such as conferences, teaching rounds, etc., may vary from site to site.All clinical sites are located outside the state of Texas, including sites in New Mexico, Arizona and Alaska.Early application is recommended.Students completing appropriate application forms may be reimbursed for transportation costs and provided room and board by the Indian Health Service. INTD 7006. Biomarkers in Health Care Research and Delivery. 1 Credit Hour. 
This course provides a broad overview of the rapidly evolving use of biomarkers in health care research and health care delivery.Biomarkers are non-subjective (i.e., not symptom scores, disability scales, or diagnoses) physical or functional measurements that serve as quantitative indices of physiological processes, pathological processes, and responses to exposures or interventions (including therapeutic interventions) that are intended to enhance the rigor and reproducibility of health care research and care delivery.Federal agencies, including the Food and Drug Administration (FDA), the National Institutes of Health (NIH) and the Institute of Medicine (IOM) are deeply engaged in promoting the use of biomarkers, introducing multiple funding opportunities for biomarker development toward FDA qualification and/ or regulatory approval for clinical use.Additionally, opportunities for commercial partnership during biomarker development will be discussed.Examples will be provided of fluid (serum, CSF, urine, etc.), tissue, imaging, and biometric biomarkers (including wearable devices).Course format will emphasize assigned readings/viewings from various sources (IOM white papers, FDA & NIH video and powerpoint presentations, recent biomarker validation publications, current biomarker qualification submissions, relevant regulatory guidance, funded-grant synopses, et cetera) followed by in-class review and discussion.Special topic lectures will be delivered by invited speakers ranging from established biomarker researchers to regulatory experts.Open for Cross Enrollment on Space Available Basis.
Parsing Synthetic Aperture Radar Measurements of Snow in Complex Terrain: Scaling Behaviour and Sensitivity to Snow Wetness and Landcover : This study investigates the spatial signatures of seasonal snow in Synthetic Aperture Radar (SAR) observations at different spatial scales and for different physiographic regions. Sentinel-1 C-band (SAR) backscattering coefficients (BSC) were analyzed in the Swiss Alps (SA), in high elevation forest and grasslands in Grand Mesa (GM), Colorado, and in North Dakota (ND) croplands. GM BSC exhibit 10dB sensitivity to wetness at small scales (~100 m) over homogeneous grassland. Sensitivity decreases to 5 dB in the presence of trees, and it is demonstrated that VH BSC sensitivity enables wet snow mapping below the tree-line. Area-variance scaling relationships show minima at ~100 m and 150-250 m respectively in barren and grasslands in SA and GM, increasing up to 1 km and longer in GM forests and ND agricultural fields. The spatial organization of BSC (as described by 1D-directional BSC wavelength spectra) exhibits multi-scaling behavior in the 100 -1,000 m range with a break at (180-360 m) that is also present in UAVSAR L-band measurements in GM. Spectral slopes in GM forested areas steepen during accumulation and flatten in the melting season with mirror behavior for grasslands reflecting changes in scattering mechanisms with snow depth and wetness, and vegetation mass and structure. Overall, this study reveals persistent patterns of SAR scattering variability spatially organized by land-cover, topography and regional winds with large inter-annual variability tied to precipitation. This dynamic scaling behavior emerges as an integral physical expression of snowpack variability that can be used to model sub-km scales and for downscaling applications. interface. Volume scattering is practically undetectable at C-and L-bands for shallow snowpacks, in which case the backscatter at the snow-ground interface is dominant [34]. The sensitivity of backscattering coefficients to snow cover condition in the Alps was examined by many in the past including Matzler (1996) and Strozzi et al. (1997) [28,35]. In particular, Strozzi and Matzler (1998) [36] demonstrated that C-band radar measurements with an incidence angle of 30° could be used to distinguish wet from dry snow and snow-free areas. The capability to discriminate between dry-refrozen and wet snow at microwave frequencies is likewise well established [19,37]. Nagler and Rott (2000) [24] used ERS-2 (European Remote Sensing satellite 2) and RADARSAT-1 imagery to identify wet snow in the Austrian Alps. This work was followed by an improved approach using dual polarimetric bi-temporal Sentinel-1 SAR data to monitor snowmelt [25]. To integrate across scales from the SAR nominal measurement scale (10's m) to landscape scale, including the scales representative of unambiguous physical processes in state-of-the-science models (100 m -kms), remains however a critical challenge [38]. The goal of this study is to investigate the temporal evolution of the spatial signatures of seasonal snow in SAR observations at different spatial scales and for different physiographic regions using multi-temporal Sentinel-1 dual polarization C-band observations. L-band UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) observations are also available for one of the study regions. 
The focus is on quantifying and interpreting the impact of spatial variability on SAR BSC imagery with an eye toward inferring constraints for physically-based snow models using spectral scaling analysis. In particular, the temporal evolution of the spectral slope (scaling factor) and local changes in spectral slope (scaling breaks) with scale are interpreted in the light of snowpack condition, landcover, and landform. This information can be used to capture (parameterize) scale-aware subgridscale variability in coupled snow hydrology-microwave models, and to downscale snow products (e.g. passive microwave) as illustrated by [68] for soil moisture. In addition, the scaling characteristics can be used to upscale or downscale the results of coupled-snow hydrology-microwave models observing system simulators (OSS) to the desired scale in forward mode and in data-assimilation experiments. The study regions and the data are described in Section 2. Section 3 describes the processing steps followed to derive the BSC from Sentinel-1 measurements, including the Cloud-Pottier (CP) decomposition the dual polarization SAR data to derive the Alpha and entropy parameters. Section 4 presents the results of the multi-temporal analysis of backscattering, Entropy and Alpha parameters for different regions, and the spatial scaling analysis toward elucidating how the SAR BSC intensity changes with topography and land cover, followed by conclusion and discussion concerning the suitability of SAR measurements in Section 5. Supplementary data presented in Tables and Figures are referred to using the notation S# throughout the manuscript. Study Regions Three different study sites characterized by deep seasonal snowpacks (depth > 1 m) were selected to investigate the snowpack properties from Sentinel-1 SAR imagery ( Figure 2). The first study region is Grand Mesa, Colorado (CO), USA (38°54' -39°06'N, 107°42' -108°20'W). This is one of NASA's Snow Experiment (SnowEx) primary field sites, where an intense field campaign was conducted in February 2017, hereafter referred to as SnowEx'17. SnowEx's primary goal is to enable development and, or systematic evaluation of alternative snow remote-sensing technologies, methods and retrieval algorithms using extensive in-situ measurements [38,39]. The Swiss Federal Office of Meteorology and Climatology (MeteoSwiss) maintains climatological stations to monitor weather and snowpack properties at different terrain elevations in this region, most located above the tree-line. The Swiss Alps site is characterized by steep complex topography in the 800-3000 m elevation range and heterogeneous land cover including barren land, grassland, deciduous forests, urban areas, and lakes. In contrast, Grand Mesa is characterized by elevated flat terrain with complex land cover ( Figure 3) including grassland, shrubs, and meadows of closed canopy evergreen forest with deciduous forest in adjacent slopes [39,40]. SNOTEL stations Snow cover is maximum over Grand Mesa from November through March and minimum from June to September for the 2016-2019 period ( Figure 4). June and July images serve as snow-free baseline. Snow cover over the Swiss Alps shows similar seasonality ( Figure 5), although snow accumulation begins in September and decreases in October due to the warmer temperatures and high precipitation in the form of rainfall rather than snow [41]. 
In North Dakota, considerable snow accumulation occurs during February and March when the entire study region is covered with snow until it melts in April ( Figure 6). Land-cover Classification -The spectral variability vegetation index (SVVI) is used to classify vegetation due its refined sensitivity as compared to NDVI (Normalized Difference Vegetation Index) and application to both natural and agricultural land-uses [42]. SVVI is calculated as the difference between the standard deviation (SD) of all Landsat bands (excluding thermal) and the SD of all three infrared bands, as follows SVVI derived from Landsat-8 data over Grand Mesa was used here to distinguish grassland, forest and snow cover features as shown in Figure 7. Nevertheless, despite superior performance as Backscattering Coefficient Estimation The standard framework for processing the Sentinel-1 Single Look Complex (SLC) data is presented in Figure 8. Radiometric and geometric corrections are applied to the Sentinel-1 SAR data to derive the normalized backscattering coefficients. Thermal noise due to the background energy of the SAR receiver is removed from the VV and VH intensity images (this background energy is independent of the received signal of the SAR sensor). Next, radiometric calibration was performed which converts the digital number of the image pixel to the corresponding backscatter intensity for both polarization channels, and phase information is preserved to extract the coherency matrix (https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-1-sar). The Terrain Observation with Progressive Scans SAR (TOPSAR) technique is applied in the IW mode of Sentinel-1 acquisitions to achieve large swath widths with enhanced radiometric performance [43]. The IW mode acquisition consists of three swaths. Each swath has a single image for each polarization channel, thus a single SLC image consists of six images for dual polarization channels. Due to the coherent addition of scattered signals within a pixel, constructive and destructive interference occurs depending on the relative phase of each scattered signal. Speckle is an inherent problem of the SAR system, and the Lee speckle filter [44] with a width of five pixels (75m) was applied to remove the speckles of the backscattered elements. SAR data have different topographical distortions (i.e. layover, foreshortening, shadowing) that depend on acquisition geometry. A geometric terrain correction is necessary to convert the data from slant range geometry into a gridded map. Specifically, the Range-Doppler terrain correction is applied, which is a robust approach that takes into account topography, and orbit and velocity information from the satellite. While computing the Sentinel-1 backscattering coefficient (BSC), local terrain variation and their impact on the BSC is not considered. So the local incidence angle is used to represent the local terrain variation as proposed by Kellendorfer et. al (1998) [45]. This is called radiometric corrected backscattering coefficient. The BSC of an illuminated target area is highly dependent on the incident angle of the signal. At small incidence angles the backscattered intensity is high compared to that at higher incidence angles over the same illuminated area. Thus, the cosine correction [46] is applied to the georeferenced data to minimize the backscatter variation due to the incidence angle. Finally, radiometric and geometric corrected normalized BSC coefficients for VV and VH polarization channels are derived. 
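Returning to the land-cover classification step above, the SVVI reduces to a per-pixel difference of standard deviations across the band stack. The following minimal Python sketch assumes the non-thermal Landsat bands are already co-registered in a single array; the band ordering and the choice of the three infrared bands (NIR, SWIR1, SWIR2) are placeholders for illustration, not taken from the original processing chain.

```python
import numpy as np

def svvi(bands, ir_indices=(3, 4, 5)):
    """Spectral Variability Vegetation Index (SVVI).

    bands      : array of shape (n_bands, rows, cols) holding the Landsat
                 reflectance bands, thermal band excluded.
    ir_indices : positions of the three infrared bands (NIR, SWIR1, SWIR2)
                 within `bands` -- assumed ordering for this sketch.

    SVVI = std(all non-thermal bands) - std(three infrared bands),
    computed per pixel, as defined in the text.
    """
    sd_all = np.std(bands, axis=0)
    sd_ir = np.std(bands[list(ir_indices), :, :], axis=0)
    return sd_all - sd_ir

# Example with a synthetic 6-band stack (e.g. Landsat-8 bands 2-7).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.random((6, 100, 100))
    index = svvi(stack)
    print(index.shape, float(index.mean()))
```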
Coherency Metrics (Entropy - Alpha Estimation)

The CP (Cloude-Pottier) polarimetric decomposition [47,48] is an incoherent decomposition technique based on the eigenvalues and eigenvectors of the coherency matrix, originally developed for fully polarimetric SAR data (intensity and phase at HH, VV, and HV polarizations). The Entropy and Alpha parameters obtained from the CP decomposition reveal scattering characteristics of the SAR signal that can be tied to scattering mechanisms of the snowpack [49][50][51]. Here, a modified CP decomposition is applied to the Sentinel-1 coherency matrix generated from the debursted SLC image (Section 3.1), which includes only the VV and VH channels. The scattering matrix captures the complete scattering characteristics of each pixel in an image, where SVV, SVH and SHV represent the co- and cross-pol BSC at VV, VH and HV polarization respectively. Because of the reciprocity theorem, the cross-polarization elements of the scattering matrix are identical in the monostatic backscattering case, i.e., SHV = SVH. These elements can be represented by the corresponding Pauli vector kp, where the operator T denotes the conjugate transpose. The coherency matrix [T2] is obtained from the product of the Pauli vector and its conjugate transpose. Using the eigenvalue- and eigenvector-based incoherent target decomposition, [T2] can be decomposed into rank-1 coherency matrices weighted by the eigenvalues λi. The normalized eigenvalues Pi can be interpreted as pseudo-probability measures derived from the eigenvalues, and Shannon's Entropy (H) is subsequently estimated from the normalized eigenvalues Pi. Following [44], the eigenvectors u of the averaged coherency matrix can be parameterized as a 2 x 2 unitary matrix, where α1 and α2 represent the target's scattering mechanisms, and β, δ and ϕ are used for the estimation of target orientation angles. The roll-invariant mean dominant scattering parameter ᾱ is calculated in terms of the pseudo-probabilities. In the remainder of the manuscript, ᾱ is referred to as Alpha. Generally, Entropy is expected to be low for fresh snow cover and to increase with wetness; conversely, Alpha is high for fresh snow and decreases with wetness. Entropy and Alpha respectively increase and decrease with surface roughness. Entropy is high (> 0.7) for vegetated areas, whereas Alpha takes values in the intermediate range.

Scaling Analyses

The spatial characteristics of snowpack properties as captured by SAR measurements were examined by quantifying the changes in variance as a function of area, and by tracking changes in the spatial statistics between overpasses based on the slope of the power spectra of individual images, as per Kim and Barros (2002) [52]. To isolate homogeneous areas for the scaling analysis, backscattering images for Grand Mesa, the Swiss Alps, and North Dakota were subset into areas of homogeneous land cover of ~4-16 km². The BSC intensity images were aggregated from 225 m² to ~16 km² using the aggregation scheme summarized in Table 5. Note that the speckle removal described in Section 3.1 to improve the signal-to-noise ratio (SNR) is expected to introduce a scaling break above and below the speckle-filter scale.
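Returning to the Entropy/Alpha estimation described earlier in this section, the dual-polarization computation can be summarized in code. The sketch below is a plausible reading of the modified CP decomposition, assuming the 2×2 coherency matrix is formed by spatially averaging the outer product of the target vector k = [SVV, SVH]^T and that the per-eigenvector alpha angle is taken as arccos of the magnitude of the first eigenvector element; the authors' exact target-vector definition and averaging window are not given in the text, so these choices are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dualpol_entropy_alpha(s_vv, s_vh, win=5):
    """Entropy (H) and mean Alpha from dual-pol (VV, VH) SLC data (sketch).

    s_vv, s_vh : complex arrays of equal shape (single-look complex pixels).
    win        : boxcar window (pixels) used to average the outer products
                 into a 2x2 coherency matrix per pixel (assumed value).
    """
    def smooth(x):
        # uniform_filter does not accept complex input, so filter parts separately
        return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)

    # Elements of T = <k k^H>, with k = [S_VV, S_VH]^T (assumed target vector)
    t11 = smooth(s_vv * np.conj(s_vv)).real
    t22 = smooth(s_vh * np.conj(s_vh)).real
    t12 = smooth(s_vv * np.conj(s_vh))

    h = np.empty(s_vv.shape)
    alpha = np.empty(s_vv.shape)
    for idx in np.ndindex(s_vv.shape):
        t = np.array([[t11[idx], t12[idx]],
                      [np.conj(t12[idx]), t22[idx]]])
        lam, vec = np.linalg.eigh(t)                  # Hermitian eigen-decomposition
        lam = np.clip(lam, 0.0, None)
        p = lam / max(lam.sum(), 1e-12)               # pseudo-probabilities P_i
        h[idx] = -np.sum(p * np.log2(p + 1e-12))      # Shannon entropy (max 1 for 2 terms)
        alpha_i = np.arccos(np.clip(np.abs(vec[0, :]), 0.0, 1.0))
        alpha[idx] = np.degrees(np.sum(p * alpha_i))  # pseudo-probability-weighted mean alpha
    return h, alpha
```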
The 1-D power spectrum of the BSC field in the zonal (x) and meridional (y) directions is calculated for the scaling analysis following [53,54]. If k_x and k_y are the wavenumbers (wavelengths) corresponding to the x and y directions, then in the range of scales where the power spectrum exhibits power-law behaviour, |F′(k_γ)|² = C · k_γ^(−β_γ), where γ can be x or y and C is a constant. The spectral slope β_γ along direction γ is estimated by applying the log transform to this relation, log|F′(k_γ)|² = log C − β_γ · log k_γ, and fitting a straight line. The spectral slope is the metric (scaling factor) that describes the transfer of backscatter energy across scales. A change in slope between adjacent ranges of scales (a scaling break) is indicative of a change in scaling behavior. Here, the underlying premise is that scaling breaks and changes in the scaling factor can be attributed to physical changes in the snowpack that impact backscattering mechanisms.

Snow wetness mapping

The BSC difference between wet and dry snow or snow-free surfaces has long been explored to detect and map wet snow [22]. The threshold polarization-ratio algorithm proposed by [25] to map wet snow was applied to the Sentinel-1 SAR data for the three study regions. Both VV and VH polarization BSC ratios with respect to a reference image (e.g. summer conditions), as a function of the local incidence angle, were used to determine the appropriate threshold values to detect wet snow, guided by Landsat-8 visible imagery and SCA-NDSI maps. Different threshold values were estimated for each of the three study regions due to differences in topography and SAR viewing geometry. Following [25], the reference image is the average of multiple SAR images from summer and early-winter snow-free conditions, which reduces noise. Results from the analysis of BSC variance with scale (see Section 4) suggest a minimum is reached at ~250 m, and therefore the wet snow detection algorithm is applied to the SAR data at 240 m resolution to strike a balance between spatial resolution and accuracy. BSC image pairs are co-registered based on the SRTM 30 m DEM and a multichannel intensity filter is applied [55]. Finally, weighted averages of RVV = VVwinter/VVsummer and RVH = VHwinter/VHsummer at different times were determined based on the local incidence angle following Nagler et al. (2016) [25]. The most commonly used wet snow mapping algorithm [24] was successfully applied previously to map wet snow at high elevations above the treeline, and it worked well in this study for the North Dakota and Swiss Alps areas, but it failed in Grand Mesa due to the presence of evergreen forest. To address this limitation, the approach from [25] was modified to take advantage of the VH BSC sensitivity (~5 dB for snow on-off in the forest). The backscattering coefficients (VV and VH) derived for summer (reference) and for all winter Sentinel-1 overpasses with the same acquisition geometry (descending mode in Grand Mesa) are used to calculate the ratios RVV = VVwinter/VVsummer and RVH = VHwinter/VHsummer. A weighted average polarization-ratio image RAvg is then estimated as a weighted combination of RVV and RVH, with weights k = 0.5, w1 = -0.5 and w2 = 2. Wet snow is discriminated from dry snow and snow-free areas over forest and other land covers using a threshold of -1.2 dB on the average polarization-ratio image (RAvg). This value is selected based on the histograms of the polarization-ratio images over the different land cover classes, from analyses of snow cover during the accumulation (October-March), melting (April-May) and snow-free (July-August) seasons.
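A compact sketch of the weighted polarization-ratio test, together with the per-land-cover wet-snow-covered-area statistic used later in the analysis, is given below. The VH weight and the land-cover codes are placeholders; the calibrated, incidence-angle-dependent weighting of Nagler et al. (2016) [25] is not reproduced, and only the -1.2 dB threshold is taken from the text.

```python
import numpy as np

def wet_snow_mask(vv_winter, vv_ref, vh_winter, vh_ref,
                  weight_vh=0.5, threshold_db=-1.2):
    """Wet-snow mask from dual-pol backscatter ratios (sketch).

    vv_winter, vh_winter : linear-power BSC for the winter acquisition.
    vv_ref,    vh_ref    : linear-power reference BSC (multi-image
                           summer / early-winter snow-free average).
    weight_vh            : weight of the VH ratio in the combined image
                           (placeholder, not the calibrated value).
    threshold_db         : wet snow where the combined ratio (dB) falls
                           below this value.
    """
    r_vv = vv_winter / vv_ref
    r_vh = vh_winter / vh_ref
    r_avg = weight_vh * r_vh + (1.0 - weight_vh) * r_vv
    r_avg_db = 10.0 * np.log10(np.clip(r_avg, 1e-12, None))
    return r_avg_db < threshold_db

def wsca_by_landcover(wet_mask, landcover, classes=None):
    """Fraction of wet-snow pixels per land-cover class (zonal statistic)."""
    if classes is None:
        classes = np.unique(landcover)
    out = {}
    for c in classes:
        sel = landcover == c
        out[int(c)] = float(wet_mask[sel].mean()) if sel.any() else float("nan")
    return out
```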
Temporal Variability of SAR Measurements over Complex Terrain

The temporal variability of backscatter measurements over snow-covered areas has been documented for many different geographic regions, with a focus on sensitivity to snow condition and snow mass [26,28,[33][34][35][56][57][58]. Generally, BSC increases slightly over the course of the (dry) snow accumulation season [66]. The change of snow BSC relative to snow-free BSC depends strongly on (1) local weather, including wind and precipitation regimes, (2) local soils and vegetation, and (3) the timing of the snow and snow-free observations. Here, the temporal evolution of Sentinel-1 BSC sensitivity for Grand Mesa, the Swiss Alps, and North Dakota is shown in Figure 9, taking advantage of multiple overpasses of Sentinel-1 C-band dual-polarization (VV and VH) acquisitions. Seasonal BSC intensity is significantly different over grassland for both polarizations in Grand Mesa. VH BSC is 3 dB lower for dry snow vis-à-vis snow-free conditions, but VV BSC shows no difference, although VV BSC differs by 5 dB between fresh snow (Dec-Jan) and melting-season conditions (Apr-May). This contrasts with [28], which reported that the backscattering coefficient for dry snow was 5 dB lower than for snow-free conditions at C-band irrespective of polarization in the Swiss Alps at mid-elevations (~2,500 m); the difference highlights the importance of regional controls on seasonal backscattering. Sentinel-1 data over Grand Mesa were acquired in descending mode at 07:00 LT (Local Time), and thus the surface of the snowpack is always frozen in late winter at the time of overpass. Diurnal melt-refreeze processes (Figures S6(a-f)) result in increased snow surface roughness as well as rough ice-water and/or ice-soil snowpack interfaces due to the refreezing of daytime meltwater that can percolate deep into the snowpack [62]. Melt-refreeze cycles, along with increases in diffuse scattering caused by the larger grain sizes of old snow and by wind sintering, therefore explain the BSC increase from early spring (smooth wet snow) to late spring and early summer conditions (refrozen, crusted wet snow) [63]. In early winter, from December to March, dielectric discontinuities in the snowpack tied to heterogeneous stratigraphy strongly impact the backscattering signal. In particular, if the event-scale snow accumulation is small, heterogeneous layering in the snowpack leads to strong backscattering [63]. In the Swiss Alps, the Sentinel-1 data were acquired in ascending pass at 19:15 LT. Both VV and VH BSC are ~5 dB higher in the accumulation season than in Grand Mesa and North Dakota due to strong scattering at the dry snowpack-ground interface caused by rock surfaces and steep terrain. In April-May, VV and VH BSC decrease relative to both dry-snow and snow-free conditions due to surface melting. Indeed, the Sentinel-1 imagery shows strong spatial organization of BSC behavior with slope and aspect, and thus direct solar radiation, which is indicative of afternoon surficial melting. Note that, in addition to surface radiative effects, wind-driven coarsening and roughening of the snowpack surface should also be influenced by the diurnal cycle of ridge-valley wind patterns, thus introducing persistent spatial variability that varies locally with time-of-day and with landform. Therefore, BSC sensitivity in complex terrain is necessarily regional and even local.
The advantage of satellite revisits is that it is possible to learn a local climatology (time-varying patterns) of BSC sensitivity by tracking its variability in space and time, which can subsequently be interpreted in the light of snow physical condition using a snow physics model or ground-based observations. More extensive discussion follows in Section 4.2 in the context of the scaling analysis.

Space-Time Scaling Behavior

Variance Scaling - To examine the evolution of spatial variability with time, we first focus on the relationship between variance (the 2nd-order moment of the spatial distribution of BSC) and area for three (4×4 km²) areas of homogeneous land-cover identified in Figure 3 (A, B and C). Over grassland (area A), the minimum variance is reached at small spatial scales (on the order of 150-250 m); that is, the variance is minimum at the spatial scales corresponding to uniform land-cover (e.g. grass meadows versus forest-patch length-scales). Finally, the variability observed at A, B and C was maximum in summer (snow-free) and decreases to a minimum in early winter (fresh snow cover). Warm weather episodes during the accumulation season result in surface melt followed by overnight refreeze (e.g. 20 April, 2018). Over forested areas (areas B and C), the minimum variance is reached at larger spatial scales, in the range 1-2 km², although it increases again at larger scales due to changes in topography at the edges of area C (see Figures 2 and 3). The similarity of the scaling behavior between Sentinel-1 C-band and UAVSAR L-band observations, and thus its near-independence of frequency, suggests that the physical basis of BSC scaling behavior is robust for dry snow conditions, and there is therefore potential to explore multi-frequency SAR data for retrieving snow properties without extensive calibration using ground observations [65]. The intra-seasonal persistence of spectral slopes during the accumulation season is present in 2018 as well, but the spectral slopes change dramatically in the melting season (see selected spectra in Figure 14d and Figure S2(a-b)). Figures 15a-b show that there are significant differences in spectral slopes between grassland and mixed/forest areas, with high (low) spectral slope ratios in the y-direction (x-direction) for snow-free conditions in late May (5/31) and before the full greening of the Mesa by the end of June (6/24, see Figure S5 for NDVI maps). Note the increasingly steeper slopes for VV BSC in the x-direction in the forest as snow wetness increases and peaks by the end of April (4/25) for snow-on conditions, in contrast with the behavior for grassland in the y-direction (Figure 15b). The decreases after the onset of the melting season show that attenuation due to snow wetness on the ground counterbalances volume scattering by the canopy above, leading to a significant increase in spectral slope ratios. Note the contrasting scaling behavior between forest (orange, area C) and grassland (blue, area A) in part (a). The Sentinel-1 BSC spectra over the Swiss Alps show multi-scaling behavior with scaling breaks at ~180 m and ~360 m for snow-on conditions (Figure 16, Table S3), with single-scaling behavior for snow-free conditions. Note that the area selected for the scaling analysis (Figure 2) is above the tree-line, and thus vegetation should not be playing a role here. At pixels where meteorological stations exist in the Swiss Alps (Figure S1(a)), the relationship between BSC and snow depth is ambiguous (Figure S2(a-b)). This suggests that snow mass drapes over the terrain, filling (accumulating in) the terrain roughness (depressions) at scales below 360 m.
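The directional spectral-slope estimation underlying the discussion above can be prototyped as follows: average the 1-D power spectra of image rows (or columns) and fit a straight line to log power versus log wavenumber over a chosen range of scales. The pixel size and fitting range below are illustrative defaults, not the exact settings of the study.

```python
import numpy as np

def directional_spectral_slope(image, pixel_size=10.0, axis=1,
                               scale_range=(100.0, 1000.0)):
    """Spectral slope of BSC along one direction (x: axis=1, y: axis=0).

    image       : 2-D array of backscatter (dB or linear).
    pixel_size  : ground sampling distance in metres (illustrative default).
    scale_range : (min, max) wavelengths in metres used for the log-log fit.

    Returns the fitted slope of log10|F(k)|^2 versus log10 k, averaged over
    all transects perpendicular to `axis`.
    """
    # Remove the mean of each 1-D transect before transforming.
    data = image - image.mean(axis=axis, keepdims=True)
    n = data.shape[axis]
    spec = np.abs(np.fft.rfft(data, axis=axis)) ** 2   # 1-D power spectra
    spec = spec.mean(axis=1 - axis)                    # average over transects
    k = np.fft.rfftfreq(n, d=pixel_size)               # cycles per metre
    wavelength = np.full_like(k, np.inf)
    wavelength[1:] = 1.0 / k[1:]
    sel = (wavelength >= scale_range[0]) & (wavelength <= scale_range[1])
    coeffs = np.polyfit(np.log10(k[sel]), np.log10(spec[sel] + 1e-30), 1)
    return coeffs[0]   # the spectral slope beta_gamma is usually quoted as -slope
```

A scaling break can then be looked for by repeating the fit over two adjacent scale ranges (for example 100-360 m and 360-1000 m) and comparing the two slopes.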
The variation of VV and VH BSC with slope ( Figure 17a) is only apparent in the spring (Apr-May) during the melt season. There is however high all-year sensitivity to aspect (sun exposure, ascending and descending pass acquisitions and local incidence angle) with a difference of 7 dB between the North-East and North-West slopes for VV polarization in contrast with the North and South slopes that show the same BSC for all seasons ( Figure 17b). As in Grand Mesa, the sensitivity to aspect captures differences in insolation patterns that are indicative of spatial variability on daytime surface melt followed by nocturnal refreeze cycles that strongly impact the microphysics of the snow surface during the accumulation season. Wet snow attenuation effects consistent with the minimum BSC magnitude independent of slope and aspect (Figures 17a and 17b) explain the distinct spectral slopes at small scales on May 18, 2018 in the y-direction ( Figure 16, bottom row). Nonlinear behavior of heterogeneous snowpacks in the melt season reflects the changing patterns of SCA for scales < 360 m (see also [8,72]). The impact of wind-driven snow redistribution is out of the scope of this work. In particular, it is expected that "snowform" should reflect wind climatology in the planetary boundary layer that is closely modulated by regional Whereas the minimum WSCA occurs by end of May, the time rate of WSCA change in the forested areas is slower compared to other land covers suggesting that light extinction through the canopy plays a significant role in reducing incoming shortwave radiation, and thus preserving the snowpack. Wet Snow Mapping As expected the algorithm detects well wet snow above the tree line in the Swiss Alps ( Figure 19b) capturing weather related variability such as new snowfall and melting events in April and May. Landsat imagery of the region that is mostly snow-free by the end of May. However, the BSC changes during the snow melt [63] can potentially lead to the temporary underestimation of wet snow areas. Using the summer BSC image as a reference for wet snow mapping can lead to overestimation in areas of densely vegetated deciduous forest and underestimation for dry snow in barren areas with high BSC, both relatively small in Grand Mesa. Also, the selection of threshold for the wet snow mapping is highly dependent on local soil surface, vegetation covers and observation period, and thus it should be calibrated locally. Conclusion ESA's spaceborne Sentinel-1 dual-polarization SAR data offers high spatial and temporal polarimetric BSC imagery to monitor seasonal snowpacks globally. This study reports a comprehensive effort to examine the spatial information content of these data in the light of snow physics. First, the data show large ambiguity in the relationship between BSC magnitude and snow mass, with the underlying ground surface dominating the signal during snow accumulation and a difference in BSC magnitude greater than 5 dB attributed to rocky terrain. In GM, BSC exhibit 10 dB sensitivity to wetness at small scales (~100 m) over homogeneous grassland. Sensitivity decreases to 5 dB in the presence of trees, and it is demonstrated that VH BSC sensitivity enables wet snow mapping below the tree-line. VV and VH BSC trends (positive, negative) in the early snow melting season strongly depend on the time of data acquisition (morning, evening), which demonstrates the importance of melt-refreeze cycles on surface roughness and microphysics. 
Parameters Entropy and Alpha derived from the coherency matrix showed little sensitivity to snowpack changes during the accumulation season in all cases, which is attributed in part to the lack of full polarimetric information in Sentinel-1 data. Previously, [25] demonstrated a wet snow mapping algorithm that was adopted here to map wet snow using Sentinel-1 data. The algorithm worked well in the Swiss Alps and North Dakota but failed in Grand Mesa because of the evergreen forest cover; a modified approach exploiting the VH BSC sensitivity was therefore used to map wet snow below the tree-line. The variance-area analysis shows minimum-variance scales of roughly 100-250 m in the barren and grassland areas of the Swiss Alps and Grand Mesa, increasing up to 1 km and longer for forested areas in Grand Mesa and agricultural fields in North Dakota. These scales can be viewed as measurement optima, that is, the scales Lm at which the average of the measured BSC is representative of the local mean value of the field measured at scale l (Lm > l). Spectral analysis reveals two scaling regimes at sub-km scales with scaling breaks around ~180-360 m. These scaling regimes, as measured by the spectral slopes, are persistent within the same accumulation season and exhibit strong sensitivity to snow mass and snow wetness when trees are present. This is in keeping with work in the peer-reviewed literature highlighting the consistency of snowpack melting patterns at regional scale with topography and landform, as discussed in Section 4.2. Nevertheless, the large inter-annual variability illustrated for the case of Grand Mesa suggests that, although snowmelt patterns organized by topography and vegetation can explain the scaling breaks at small scales, the regional spatial variability of snowpack conditions (surface roughness, microphysics, LWC) varies strongly with local weather, including the snowfall that determines snow accumulation, and with wind-driven snow redistribution. Indeed, variance-area and spectral scaling differences between regions of complex topography (Grand Mesa and the Swiss Alps) and the smooth topography of North Dakota suggest the hypothesis that BSC multi-scaling behavior may be attributed to scattering mechanisms controlled by heterogeneous stratigraphy and surface roughness at small mesoscales (100's of m) vis-à-vis snow mass modulated by regional winds (snow-form) at larger mesoscales (kms). This work demonstrates that time-varying spectral slopes are an emergent metric of the overall scattering behavior of heterogeneous snowpacks. More extensive multi-year, multi-site analysis is required to investigate whether the scaling-break positions identified here are fixed conditional on climate, topography and land-cover, that is, on cold-region physiography, or whether they also exhibit inter-annual variability, as the spectral slopes do, for example in response to changes in wind climatology. Further research will focus on elucidating the scattering budget (volume scattering, surface and interface scattering) of heterogeneous snowpacks and on developing model constraints and a data-assimilation framework to capture snow physics heterogeneity at sub-km scale. Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1. Figure S1(a). Spatial distribution of meteorological stations in the Swiss Alps region. Table S1(a). Spectral slopes of Sentinel-1 BSC for the grassland region in Grand Mesa, CO in 2017. Author Contributions: AB conceived the work; SM processed the Sentinel-1 data, conducted data analysis, and produced graphics and quantitative summaries with guidance from AB; SM and AB jointly wrote the manuscript.
Modern analyses on an historical data set: skull morphology of Italian red squirrel populations Abstract Recent molecular evidence suggests that Sciurus vulgaris populations from Calabria (southern Italy) are distinct from those occurring in northern and central Italy. Here, we re-analyzed using multivariate and univariate techniques an historical dataset provided by Cavazza (1913), who documented measurements for the now extinct squirrel population from Campania. Both univariate and multivariate analyses confirmed that the sample from Calabria was homogenous and relatively distinct compared to the rest of the squirrel samples. Introduction The Eurasian red squirrel, Sciurus vulgaris Linnaeus, 1758, is characterized by great variability in fur coloration, which led to the description of more than 40 subspecies throughout its wide geographic distribution across the Eurasian continent (Corbet 1978). Currently only 17 of these subspecies are considered valid (Lurz et al. 2005), with the Italian populations being ascribed to three subspecies (Toschi 1965). These Italian subspecies are: 1) S. vulgaris fuscoater Altum, 1876 (European form occurring in the Alps and in the northern Apennines), characterized by relatively small size and a strong degree of coat-colour polymorphism both within and between populations; 2) S. vulgaris italicus Bonaparte, 1838 (endemic to Central Italy), also characterized by relatively small size, albeit bigger than the previous subspecies. This subspecies shows some degree of coat colour polymorphism, with the dark brown morph dominant in mountainous forests at higher altitudes. The populations of the southern tip of the range are black (subspecies alpinus, sensu Costa 1839); 3) S. vulgaris meridionalis Lucifero, 1907 (endemic to the most southern Apennines), with uniform fur colour, always having black dorsal fur with grey shades on the sides, a black tail, and a contrasting white belly. It is also the largest Italian subspecies (Wauters and Martinoli 2008). Although widespread in Italy, this species' distribution is associated with forested areas, and affected by their fragmentation (Celada et al. 1994, Wauters et al. 1994a, Wauters et al. 1994b, Wauters 1997, Hale et al. 2001. Thus, the European squirrel currently occurs in the whole of the Italian Peninsula with some distribution gaps: the species does not currently occurs in Campania, Apulia and Basilicata (cf. Wauters and Martinoli 2008). However, the squirrel was present in historical times also in the extreme northern part of Campania (i.e. Mt. Somma -Vesuvio) (Costa 1839, Trouessart 1910, Cavazza 1913, where it is now extinct (Capolongo andCaputo 1990, Maio et al. 2000). Recent molecular data (Grill et al. 2009) revealed the presence of two main mitochondrial phylogroups: (i) a clade comprising the individuals from the region of Calabria in southern Italy belonging to the subspecies S. v. meridionalis, and (ii) another including the rest of the Italian populations. Cavazza (1913) studied morphological variability of Italian populations of Sciurus vulgaris, and provided a useful set of skull measurements for squirrels collected throughout Italy. Among various populations, he analyzed specimens from an area where the species is now locally extinct (Campania), which is geographically closer to the populations of the subspecies italicus than to those of meridionalis. 
Cavazza's (1913) data are important for evaluating whether the extinct Campanian squirrels were more similar to those currently inhabiting Calabria, or to those typical of central Italian regions. In this paper, we reanalyzed Cavazza's original dataset using modern statistical multivariate analyses with the aim to evaluate whether morphometric and genetic data agree with respect to patterns of geographic differentiation in Italian squirrel populations. Materials and methods We used the data reported in Cavazza (1913) for skull measurements of adults (Table 1). Cavazza (1913) divided specimens into the following groups: (a) Alps, (b) northern and central Italy including Latium and excluding Abruzzi, (c) southern Italy including Abruzzi and Campania, and (d) Calabria. The localities where Cavazza (1913) collected his specimens are reported in Figure 1. Unfortunately, we cannot re-measure specimens from Cavazza's (1913) paper because several of them have now become lost. Moreover, although it is possible that some of the specimens originarily measured by Cavazza (1913) are still available in private or public collections in Italy, unfortunately there is no labeling indication in Cavazza's paper for any of his specimens, and this fact impeded us from any further analysis of the vouchers. Univariate measurements were log-transformed in order to achieve normality and then compared across groups by one-way Analysis of Variance (ANOVA). In this analysis, the same four groups as defined by Cavazza (1913) were used. Specimens were divided into four Operational Taxonomic Units (hereby OTUs), according to their geographical provenance and corresponding to the Italian subspecies. These four OTUs followed exactly the subdivisions made by Cavazza (1913). We performed a cluster analysis in order to show dissimilarities among all of Cavazza's (1913) specimens in terms of their skull measurements. Skull measurements were logtransformed prior to analysis. Dendrograms were prepared using the single linkage as the algorithm, with Euclidean distances. This method was used because it provided the highest cophenetic index. In the single linkage (nearest neighbour), the clusters are joined based on the smallest distance between the two groups. Branch support was calculated with 10,000 bootstrap replicates. We also used neighbour joining clustering (Saitou and Nei 1987), which is an alternative method for hierarchical cluster analysis. In contrast with ultrametric methods (like the Unweighted Pair Group Method with Arithmetic Mean, UPGMA), two branches from the same internal node do not need to have equal branch lengths. A phylogram (unrooted dendrogram with proportional branch lengths) is given in this paper. We studied the dispersion of specimens in multivariate space with Principal Components Analysis (PCA) using the covariance matrix (Davis 1986, Harper 1999) (PC1 scores serve as a proxy for size, while the other PCs capture shape variation). Results The original dataset reported by Cavazza (1913) is summarized in Table 1. Mean and standard deviations for each measurement considered are reported in Table 2 with all specimens pooled, and in Table 3 with samples divided into OTUs. Using the same categories as in Cavazza (1913), there were among-group statistical differences for skull table 2. Mean and dispersion measures of the five skull variables analyzed in this study (original dataset from Cavazza (1913), for all sampled specimens pooled together. 
length (one-way ANOVA F 3,70 = 14.76, P < 0.00001), skull width (F 3,70 = 13.50, P < 0.00001), skull height (F 3,70 = 18.93, P < 0.00001), and mandible length (F 3,70 = 56.83, P < 0.00001), but not for interorbital length (F 3,70 = 1.92, P < 0.133). Post-hoc Tukey HSD tests revealed that Calabria specimens differed significantly from every other group for mandible length (all P < 0.01), and for skull width (all P < 0.001). For skull length, Calabria specimens differed from Alpine and central Italian specimens (all P < 0.01) but not from Campania specimens (P = 0.088). For skull height, they differed from Campania (P = 0.024) and Alpine specimens (P = 0.018) but not from central Italian specimens (P = 0.43). Principal component scores indicated that there were significant statistical shape differences among the four populational groups (one-way ANOVA: F 3,70 = 30.362, P < 0.0001), and a Tukey HSD post-hoc test revealed that (i) the Calabria population differed significantly from all the others (at least, P < 0.000154), (ii) the Campania population significantly differed, other than from Calabria specimens, also from Alpine specimens (P = 0.022) but not from central Italian specimens (P = 0.470). Both sets of multivariate analyses revealed that the sample from Calabria was homogenous and relatively distinct compared to the rest of the squirrel samples (Figures 2 and 3). In the PCA (variance explained by the first two axes: 56.5%; with axis 1 explaining 28.7% and axis 2 explaining 27.8% of the total variance; see Table 4 for the loadings) there was a trend suggesting clinal variation from the Alps to Campania, with Calabria specimens, while distinct, being more similar to those of Campania than to those of northern Italy (Figure 2). The Campania group showed less variance (Levene's test; F = 6.67, P < 0.03) compared to the rest of the central and northern Italian samples in the PCA than in the neighbor joining analysis (Figure 3). Discussion Both multivariate and univariate tests identified some morphometric differentiation among different squirrel populations that were previously highlighted by the molecular results of Grill et al. (2009). That is: the populations from Calabria differed from the others morphologically (this study) and genetically (D-Loop: Mean genetic distance between groups: 6%, within group: 2%; see Grill et al. 2009). Our analyses also suggest that the currently extinct population from Campania belonged to a central Italian grouping. It may be that patterns of craniometric variation in Italian red squirrels represent a clinal size trend within a formerly contiguous population once occurring from the Alps south to Campania, and, with expectations fitting Bergmann's rule (e.g., Freckleton et al. 2003;Blackburn and Hawkins 2004). On the other hand, Calabria specimens do appear to be quite distinct from the rest of the Italian squirrels in size (Figure 2), though we note that our analyses involve quite small sample sizes (Cardini and Elton 2007). Notably, Calabria populations occur mainly at relatively high altitudes, closely linked to that of extensive high-altitude mixed forest dominated by the native Calabrian black pine Pinus laricio (Cagnin et al. 2000, Rima et al. 2010) and they are characterized both by large size and monomorphic color fur. Overall, our study could neither substantiate nor reject the hypothesis that Sciurus vulgaris meridionalis is a full species, as previously suggested by Gippoliti (2013). 
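For reference, the re-analysis workflow reported above (log transformation, one-way ANOVA with Tukey HSD post-hoc tests, PCA on the covariance matrix, and single-linkage clustering on Euclidean distances) corresponds to the following minimal sketch in standard scientific Python tools. The input layout and group labels are placeholders; they follow Table 1 and the four OTUs only by assumption.

```python
import numpy as np
from scipy.stats import f_oneway
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def reanalyse(measurements, otu_labels):
    """Sketch of the univariate and multivariate analyses.

    measurements : array (n_specimens, n_variables), e.g. the five skull
                   variables tabulated by Cavazza (1913).
    otu_labels   : array of n_specimens group labels (the four OTUs).
    """
    x = np.log(np.asarray(measurements, dtype=float))   # log-transform
    labels = np.asarray(otu_labels)
    groups = np.unique(labels)

    # One-way ANOVA and Tukey HSD, one variable at a time.
    univariate = {}
    for j in range(x.shape[1]):
        samples = [x[labels == g, j] for g in groups]
        f_stat, p_val = f_oneway(*samples)
        tukey = pairwise_tukeyhsd(x[:, j], labels)      # post-hoc pairwise tests
        univariate[j] = (f_stat, p_val, tukey)

    # PCA on the covariance matrix (PC1 ~ size, later PCs ~ shape).
    xc = x - x.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(xc, rowvar=False))
    order = np.argsort(eigval)[::-1]
    scores = xc @ eigvec[:, order]
    explained = eigval[order] / eigval.sum()

    # Single-linkage clustering on Euclidean distances; the cophenetic
    # correlation measures how well the dendrogram preserves the distances.
    dist = pdist(x, metric="euclidean")
    tree = linkage(dist, method="single")
    coph_corr, _ = cophenet(tree, dist)

    return univariate, scores, explained, tree, coph_corr
```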
However, some morphological differentiation is certainly evident also with respect to the extinct Campania population (this study), and remarkable genetic differences are found between Calabria populations and all the remaining European populations (Grill et al. 2009). Indeed, the majority of individuals analyzed by Grill et al. (2009) formed one monophyletic clade without particular differentiation, whereas Calabrian squirrels were clearly separate. The Calabrian lineage appears to have experienced a different history from the rest of European squirrels, probably because it became isolated after the glaciations and never reconnected to central Italian populations (Grill et al. 2009). It should be stressed, however, that the sample sizes available for Campania and Calabria were too small to allow any firm conclusions. Our approach in this paper highlights the lasting value of historical publications on biodiversity, especially when they present data on populations which are now extinct. These often overlooked publications - such as Cavazza's, published in Italian in a regional journal - can be important sources of data that can be re-analysed, for renewed insight, using modern statistical tools.

Figure 3. Neighbor joining dendrogram of skull measurements (10,000 bootstrap replicates) based on Cavazza's (1913) dataset.
SCFSen: A Sensor Node for Regional Soil Carbon Flux Monitoring Estimation of regional soil carbon flux is very important for the study of the global carbon cycle. The spatial heterogeneity of soil respiration prevents the actual status of regional soil carbon flux from being revealed by measurements of only one or a few spatial sampling positions, which are usually used by traditional studies for the limitation of measurement instruments, so measuring in many spatial positions is very necessary. However, the existing instruments are expensive and cannot communicate with each other, which prevents them from meeting the requirement of synchronous measurements in multiple positions. Therefore, we designed and implemented an instrument for soil carbon flux measuring based on dynamic chamber method, SCFSen, which can measure soil carbon flux and communicate with each other to construct a sensor network. In its working stage, a SCFSen node measures the concentration of carbon in the chamber with an infrared carbon dioxide sensor for certain times periodically, and then the changing rate of the measurements is calculated, which can be converted to the corresponding value of soil carbon flux in the position during the short period. A wireless sensor network system using SCFSens as soil carbon flux sensing nodes can carry out multi-position measurements synchronously, so as to obtain the spatial heterogeneity of soil respiration. Furthermore, the sustainability of such a wireless sensor network system makes the temporal variability of regional soil carbon flux can also be obtained. So SCFSen makes thorough monitoring and accurate estimation of regional soil carbon flux become more feasible. Introduction Soil respiration is a process of soil releasing carbon dioxide, which is produced by the oxidation of organic matter and the breath procedure of plant roots, and a little part of which is released by soil animals and chemical oxidation. The changes of the soil respiration rate reflects the sensitivity and tolerance of ecological systems subjected to environmental stress [1][2][3][4]. Soil respiration is an important index of soil quality and soil fertility and to a certain extent reflects the soil oxidation ability [5]. Soil respiration is one of the parameters for the prediction of the response of ecosystem productivity to climate change. In particular, the basic part of soil respiration reflects biological characteristics of soil and the metabolism intensity of soil material [6]. The process of soil releasing carbon dioxide to the atmosphere through soil respiration is a key ecological process leading to global climate change, which has been one of the core problems concerned in global carbon cycle researches [7,8]. Soil carbon flux, also called the intensity of soil respiration, means the rate of carbon dioxide released from soil. The measurement of multi-position soil carbon flux is important to reveal the detail distribution of the soil carbon flux in a region and the estimation of regional soil carbon flux, which can be used to explore the carbon cycle and carbon balance in the atmosphere [9][10][11]. The measurement of soil carbon flux at one or several near locations can be carried out using some existing soil carbon flux measuring systems such as the LI-8100 series of instruments manufactured by LI-COR Company (Lincoln, NE, USA) [12]. In order to monitor soil carbon flux of a wide area, the sampling should be carried out in much more positions within the area [13]. 
However, currently existing instruments are not very suitable for large-area, long-term and continuous monitoring of regional carbon flux of a terrestrial ecosystem. The main reasons include limited measuring positions, inconveniences for synchronized measurements and high energy consumption. Even if a LI-8150 Multiplexer, an accessory for the LI-8100, is used, maximum 16 individual chambers can be connected to LI-8100 analyzer control unit and be controlled and sampled in a field with maximum diameter of 30m, which is not completely sufficient for large-area monitoring. If multiple systems based on LI-8150 is used, besides the cost, the cooperative controlling among them is another issue what needs to be considered for synchronized measurements. If only a few positions are chosen to carry out soil respiration measurements at different time for a short period, the estimation of the regional soil carbon flux in an area is undoubtedly inaccurate for the spatial heterogeneity and temporal variation of soil respiration [14,15]. So, to correctly measure and accurately estimate the soil carbon flux of a given region, there are some requirements that should be met and can't be easily and well met using currently existing devices and traditional methods. (1) Measurements should be taken in multiple positions to dominate the whole monitored region. For the spatial heterogeneity of soil respiration, the sampling positions should be sufficient to express the region based on the spatial correlation. (2) Measurements should be carried out and the data should be kept on gathering for a relatively long time. For the temporal variation of soil respiration, each measurement only denotes the soil respiration situation at that moment. We need measure the soil carbon flux at different time in each sampling position in the monitored region. The wireless sensor network technology can meet these requirements well. A wireless sensor network consists of many sensor nodes with sensing and communicating abilities, and these nodes are deployed in the monitored area and formed an autonomous networking system [16,17]. Besides the three advantages mentioned above, a wireless sensor network has an additional convenience that data can be transferred to remote central server for real-time displaying, storing, processing and analyzing [18]. The wireless sensor network technology has been widely used in environmental or ecological applications, such as precision agriculture [19,20], wild environment monitoring [21], canopy closure sustainable estimation [16], volcano monitoring [22], wildlife monitoring [23], marine environment monitoring [24] and so on. In previous works, the wireless sensor network technology has already been used for the measurement or monitoring about soil. For example, soil property monitoring [25], soil parameters estimation [26], soil moisture [27,28], soil slopes [29], wireless communication in soil [30] and so on. In order to apply wireless sensor network technology to regional soil carbon flux monitoring, a basic requirement that should be met firstly is the corresponding sensor nodes. These nodes should be able to not only measure soil carbon flux but also support communication protocol of wireless sensor networks at least. However, there is no such device available. This is just the main motivation of this paper. The theory, designing and implementation of a kind of soil carbon flux measuring instrument, SCFSen, which can serve as sensor nodes, are illustrated in detail in this paper. 
The exterior of SCFSen is shown in Figure 1. The key contributions of this paper are as follows. (1) The designing and implementation of SCFSen, a new instrument for soil carbon flux measurement, are introduced. From the aspect of functionality, SCFSen can support WSN communication besides soil carbon flux measurement, which make it suitable for regional soil carbon flux monitoring by constructing a sensor network. (2) The energy consumption of SCFSen is analyzed and compared with that of LI-8100. The working time of SCFSen can be about 23 days if three consecutive measurements are taken per hour, which is more than two times longer than that of LI-8100 for the same measurement task. SCFSen can keep working for about 55 days if it is set to take one measurement per hour. Furthermore, SCFSen can be recharged by a solar panel in practice, which leads to much longer working time and the possibility for sustainable monitoring. (3) A grouped calibration method for SCFSen nodes is proposed and tested. After calibration, the mean relative errors of SCFSen nodes can be reduced from over 15% to about 6%, taking the result of LI-8100 as ground truths. The difference between the results from two different instruments is reasonable. The remaining parts of this paper are organized as following. The model of soil carbon flux measurement used in SCFSen is demonstrated in Section 2. The detail of designing and implementation about the mechanical structure, the electrical structure, and the analysis of energy consumption of SCFSen are introduced in Section 3. The calibration method of SCFSen are given in Section 4. Experiments of SCFSen are given and analyzed in Section 5. At last, the paper is concluded by Section 6. Model of Dynamic Chamber Method Soil carbon flux can be measured using chamber methods, including static chamber method and dynamic chamber method [31,32]. The static chamber method means measuring the carbon dioxide contents in the chamber just before and after a period and calculating the carbon flux according to the difference between the two measurements, which are usually carried out manually with subsequent offline analysis of gas chromatograph. The time needed for this method is a little long, usually from dozens of minutes to several hours, and the result is coarse -grained and its accuracy is relatively low. Dynamic chamber method is considered as an ideal soil respiration measurement method [33]. The dynamic measurement method can get soil CO 2 emission values more accurately than the static measurement method, and is more suitable for the determination of the rate of CO 2 emission for a period of time. As to the dynamic chamber method, the rate of CO 2 diffusion into the air is estimated though the measurement of the changing rate of CO 2 concentration in the chamber, which is carried out in an in situ way. The structure of dynamic chamber method is shown in Figure 2. During the measurement, a chamber is covered on the sampling spot. There is a loop back tube with the chamber, one end of which is for the extraction of gas in the chamber and feeding to the module of measurement of concentration of carbon dioxide in the gas, and the other end is for the return of gas after measurements. During the measuring procedure, the concentration of carbon dioxide in sampled gas from the chamber is measured for multiple times periodically. 
In addition, then the amount of soil releasing or absorbing carbon dioxide can be calculated according to the changing rate of the concentration of carbon dioxide in the chamber based on the principle of air diffusion and convection between soil and atmosphere. To keep the balance of pressures between inside and outside of the chamber, there is the third hole with the chamber. In general, the soil layer contains a large number of microorganisms, and continuously releases CO 2 into the air. In addition, there is some water content in the soil, which is being evaporated into the air all the time. Assuming that the total volume of the chamber and the tube in the measurement system is v m 3 , the area of soil surface covered by the chamber is S m 2 , the emission rate of carbon dioxide is f c mol·m −2 ·s −1 , and the emission rate of water vapor from the soil is f w mol·m −2 ·s −1 . The gas in the chamber is mainly composed of dry air, water vapor and CO 2 . Let ρ denote the total gas concentration and its unit is mol·m −3 . So we get Equation (1). In Equation (1), ρ d , ρ c and ρ w mean the concentrations of dry air, CO 2 , and water vapor in the chamber, respectively, and their units are all mol·m −3 . Let c s denote the mole fraction of CO 2 in soil whose unit is mol·mol −1 , c c and w c denote the mole fraction of CO 2 and water vapor in the chamber, respectively, whose unit are all mol·mol −1 , and µ denotes the rate of gas emission for the balance whose unit is mol·s −1 . As to c c and w c , we can get Equations (2) and (3). According to the principle of gas flow balance, we can get Equations (4) and (5). As the content of water vapor is much higher than that of CO 2 in the chamber, we can get Equation (6). According to the ideal gas law, we have ρ = p 0 /(R · T), in which p 0 stands for the pressure of gas and its unit is Pa, R = 8.31441 means the gas constant and its unit is Pa·m 3 ·K −1 ·mol −1 , and T denotes the absolute temperature and its unit is K, so Equation (10) can be rewritten as Equation (11). As Equation (11) shows, we can see that f c can be calculated when p 0 , T, w c and ∂c c ∂t are measured, as v and S are constant once the chamber is determined. p 0 , T and w c can be measured directly using corresponding sensors, so the remaining challenging issue is the measurement of changing rate of carbon dioxide concentration in the chamber. Measurement of Changing Rate of Carbon Dioxide Concentration in the Chamber According to dynamic chamber method, the carbon dioxide emission rate in per unit area of soil is calculated through measuring and analyzing the variation of the carbon dioxide concentration in the chamber. As the concentration of carbon dioxide in the chamber can be measured directly, the changing rate can be calculated by fitting some temporally adjacent measurements of the concentration. However, the changing rate of concentration of carbon dioxide in the chamber is not a constant value during a measuring procedure, because the releasing rate of carbon dioxide from soil is affected by the difference between carbon dioxide concentration in soil and that in the chamber. The carbon dioxide concentration in the chamber is keep increasing along with the emission of carbon dioxide from soil to chamber, which makes the difference become smaller and smaller. Now we analyze the changing pattern of concentration of carbon dioxide in the chamber along with time to find a proper method for the calculation of the changing rate. 
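Equation (11) is not typeset in the extracted text. The function below implements the standard ideal-gas-law form of the dynamic-chamber flux that is consistent with the quantities listed (v, S, p0, T, wc and the changing rate of CO2 mole fraction), namely fc = (v·p0)/(S·R·T)·(1 − wc)·(∂cc/∂t); it should be read as a plausible reconstruction rather than the authors' exact expression.

```python
R = 8.31441  # gas constant, Pa m^3 K^-1 mol^-1 (value used in the paper)

def soil_carbon_flux(dcc_dt_ppm_s, v, s, p0, t_kelvin, w_c):
    """Soil CO2 flux in mol m^-2 s^-1 (assumed form of Eq. 11).

    dcc_dt_ppm_s : initial changing rate of the CO2 mole fraction in the
                   chamber, in ppm per second.
    v            : chamber plus tubing volume, m^3.
    s            : soil surface area covered by the chamber, m^2.
    p0           : pressure in the chamber, Pa.
    t_kelvin     : absolute temperature in the chamber, K.
    w_c          : mole fraction of water vapour in the chamber air
                   (dimensionless), used as a dry-air dilution correction.
    """
    dcc_dt = dcc_dt_ppm_s * 1e-6          # ppm/s -> mol mol^-1 s^-1
    rho = p0 / (R * t_kelvin)             # total molar density, mol m^-3
    return (v / s) * rho * (1.0 - w_c) * dcc_dt
```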
As the concentration of CO 2 released from the soil to the chamber is determined by the difference between the CO 2 concentrations in them, so there is a relationship between f c and (c s − c c ) as Equation (12). The g in Equation (12) denotes the gas conductive coefficient of CO 2 whose unit is m·s −1 . Combine the two Equations (10) and (12), we can get Equation (13). In Equation (13), Assuming that the initial value of c c is c c (0), solve the differential Equation (13), we can get Equation (14). As to a determined implementation, α can be regard as a constant value because g, S and v are all constants during a measuring procedure. In addition, the value of c s − c c (0) can also be regard as a constant during a measuring procedure because c c (0) is a constant and c s does not vary sharply. As a result, the changing rate of concentration of carbon dioxide in the chamber, c c , is mainly affected by t in a measuring procedure. As we know, the curves of α × e −α·t are as Figure 3, in which the values of α for the three curves are 0.1, 0.4 and 0.7, respectively. Here the three values of α are just for the illustration of trends of the curves. From the comparison of the three curves, we can see that the bigger α is, the more sharply the values of longitudinal coordinates change. This can be explained by that the bigger S is or the smaller v is, the more easily the air in the chamber is affected by the soil respiration, and the more quickly the changing rate of carbon dioxide concentration in the chamber varies. There is no chamber in a real environment and the concentration of carbon dioxide above soil do not change sharply. The initial changing rate of carbon dioxide in the chamber should be adopted for the calculation of soil carbon flux, because the air in the chamber in the beginning phase is closer to real environment without a chamber than that in the later phase. To calculate the changing rate of carbon dioxide concentration in the chamber, multiple measurements of carbon dioxide concentration during a period should be linearly fitted. There is a confliction between the duration of time and the accuracy of the changing rate calculation. If the duration of time is too short, the differences among measurements during this period are not obvious and the changing rate cannot be fitted well. If it is too long, the fitting can be easily done, but the accuracy may decrease because the measurements is not linear for the decent of changing rate along with time. As a trade-off based on experiments, we adopt the periodical 60 measurements in the beginning 3 min in every procedure of soil carbon flux measurement as the source data to calculate the changing rate of concentration of carbon dioxide, on the basis of which the value of ∂c c ∂t in Equation (11) can be calculated. Calculation of Soil Carbon Flux Based on the periodical measurements of carbon dioxide concentration in the chamber, the changing rate of concentration of carbon dioxide is get. In addition, then, the values of the measurements can be converted to the standard condition and the soil carbon flux value of unit area and unit time under the standard condition, whose unit is mol ·m −2 ·s −1 , can be obtained according to the ideal gas law. The method for the calculation of soil carbon flux is described as follows. (1) Determine the total volume of the chamber, v (m 3 ), and the soil surface area in the chamber, S (m 2 ). Measure the initial pressure p 0 (Pa), the initial temperature, T 0 ( • C) in the chamber. 
Calculation of Soil Carbon Flux

Based on the periodic measurements of the carbon dioxide concentration in the chamber, the changing rate of the carbon dioxide concentration is obtained. The measured values can then be converted to standard conditions, and the soil carbon flux per unit area and unit time under standard conditions, whose unit is mol·m-2·s-1, can be obtained according to the ideal gas law. The method for calculating the soil carbon flux is as follows.
(1) Determine the total volume of the chamber, v (m3), and the soil surface area in the chamber, S (m2). Measure the initial pressure p_0 (Pa) and the initial temperature T_0 (°C) in the chamber.
(2) The relative humidity, i.e., the ratio between the mole fraction of water vapor in the air and the mole fraction of saturated water vapor at the same temperature and pressure, is measured by SCFSen. Assume that the value of the relative humidity is ϕ and that the mole fraction of saturated steam at the same temperature T_0 and pressure p_0 is w_0, which can be obtained by looking up the saturated steam table. The mole fraction of water vapor in the air can then be calculated using Equation (16).
(3) Determine the changing rate of the carbon dioxide concentration in the chamber by fitting the values sampled periodically during the first three minutes; this gives the value of ∂c_c/∂t, whose unit is ppm·s-1.

The Design and Implementation of SCFSen

In this section, the mechanical structure, the electrical structure and the energy consumption analysis of SCFSen are introduced in turn.

The Mechanical Structure

SCFSen adopts the mechanical structure shown in Figure 4, which reduces the influence of the measurement on the soil environment. The mechanical structure consists of supporting and driving structures. The main supporting structure includes the base, the fixing ring and the measuring chamber cover. The driving structure is composed of the motor, the buffering connecting rod, the track connecting rod, the lock seat and the rocker arm. The motor is fixed on the base, and its shaft links to the motor connecting rod, the track connecting rod and the rocker arm using fasteners. The motor connecting rod is connected with the track connecting rod through a clamping sleeve and can carry out axial relative movement. A spring is mounted on the buffering connecting rod so as to distribute the motor force evenly over the chamber cover. The rotation and the vertical movement of the chamber cover are driven by a single motor along the specific track set by the track connecting rod. The fixing ring and the chamber cover together form a chamber during the measurements. When deploying a SCFSen instrument at a selected position, a narrow circular groove needs to be dug according to the size of the fixing ring, and the bottom of the fixing ring is then plugged into the groove, so that a chamber is formed when the chamber cover is closed. After a SCFSen instrument is deployed, the controlling module controls the transmission structure to open or close the chamber periodically. During the intervals between measurements, the chamber is open and the cover is not above the fixing ring, as Figure 4a shows. The purpose is to let the measuring position, i.e., the field inside the fixing ring, be exposed to sunlight, rain, wind and so on, so as to minimize the impact of the measurement behavior on the measurement results. Before the instrument starts to measure, the controlling module turns on the motor, and the motor connecting rod and the track connecting rod carry out relative rotational movement. The rotation of the chamber cover is realized through the special track on the track connecting rod, which drives the rocker arm to rotate; as a result, the chamber cover rotates with the rocker arm. When the chamber cover has rotated to a position right above the fixing ring, as Figure 4b shows, the rotation stops. The chamber cover then moves in the vertical direction until it covers the fixing ring. The vertical movements of the chamber cover are carried out cooperatively by the motor connecting rod, the track connecting rod and the rocker pin.
There is a specific track on the circumferential surface of the track connecting rod, and a fixed, non-connected pin is embedded in the track. Along with the rotation of the track connecting rod, the rocker arm and the chamber cover move downwards together under the interaction force between the pin and the track. After the motion of the chamber cover stops, a measuring chamber is formed, as Figure 4c shows. The instrument then starts to measure the soil carbon flux. After the measuring procedure, the rocker arm drags the chamber cover in the opposite direction until it returns to the initial position, as Figure 4a shows.

The Control Circuit Structure

The configuration of the control circuit of SCFSen is shown in Figure 5. It includes the processor module, the wireless transceiver module, the carbon dioxide sensor module, the temperature and humidity sensor module, the motor drive module, the human-machine interface module and the power module. The chipsets adopted for these modules of SCFSen are listed in Table 1. There are different wireless communication technologies for constructing a wireless sensor network, such as ZigBee, LoRa, WirelessHART and Z-Wave. LoRa has a transmission range of 5 km in urban areas and up to some 15 km in rural environments, so it can be used for wide-area networks. WirelessHART is based on the highway addressable remote transducer protocol (HART) and is considered suitable for industrial applications. Z-Wave is a low-power RF communication technology primarily designed for home automation products such as lamp controllers. ZigBee is a short-range IoT protocol aimed at connecting several devices in close proximity; it does not have a central controller, and loads are distributed evenly across the network. There are some newer versions of ZigBee, such as ZigBee PRO and ZigBee Remote Control (RF4CE). These versions have significant advantages in complex systems, offering low-power operation, high security, robustness and high scalability with high node counts, and are well positioned to take advantage of wireless control and sensor networks. The latest version of ZigBee is 3.0, which is essentially the unification of the various ZigBee wireless standards into a single standard. For the current version of SCFSen, we used the CC2420 chip, which supports the ZigBee protocol, to construct the wireless sensor network. The main reason is that SCFSen nodes are supposed to communicate and cooperate with the sensor nodes in GreenOrbs [16], a wireless sensor network for ecological monitoring constructed in 2009 and deployed in a forest environment, as Section 5.2 shows. In the future, we will also design new versions of SCFSen using other communication protocols. The processor MSP430F1611 has low power consumption and can work stably under field conditions. As mentioned above, SCFSen uses the CC2420 chip as the wireless transceiver module; it is a radio transceiver conforming to the IEEE 802.15.4 2.4 GHz standard, which ensures efficient and reliable communication with adjacent instruments within 200 m. The processor communicates with the wireless transceiver module through the serial peripheral interface; the MSP430F1611 works in master mode and the CC2420 in slave mode. The carbon dioxide sensor T6615, a dual-channel infrared carbon dioxide sensor, is a small and light non-dispersive infrared (NDIR) CO2 sensor. NDIR is a method based on the theory of gas absorption.
After absorption by the gas whose concentration is to be measured, the spectral intensity of the infrared ray emitted by an infrared light source changes. According to the theory of gas absorption, the decrease of the spectral intensity is proportional to the concentration of the gas, so the concentration of the gas to be measured can be calculated by measuring the decrease of the infrared spectral intensity. T6615 has plug-pins which make it very convenient to connect with other instruments. Furthermore, T6615 has several kinds of output interfaces for transmission and reading; it communicates with other modules via a 19200-baud universal asynchronous receiver transmitter (UART) interface. The digital temperature and humidity sensor SHT15, which belongs to a surface mounted device (SMD) encapsulation series, is suitable for reflow soldering. The sensing element and the signal processing circuit, integrated on a micro circuit board, output a fully calibrated digital signal. SHT15 includes a capacitive polymer humidity-sensitive element and a temperature-measuring component made from band-gap material. These two elements are on the same chip and are connected with a 14-bit A/D converter and a serial interface circuit. The motor drive module uses an H-bridge to realize the forward/reverse rotation of the motor, and the transmission mechanism drives the motion of the chamber cover. The drive motor, a DC motor ZGA17RU877i5600, is driven by a 12 V supply and outputs a speed of 5 r/min, which meets the speed requirement of the system. When the chamber cover moves downwards to its end position, the electric current of the drive motor increases sharply, from which the closed or open state of the chamber cover can be judged. The power module includes a lithium-ion rechargeable battery YSD-12980 with a capacity of 9800 mAh and an ultra-small DC-DC buck converter chip MAX1836, a product of the Maxim company, to obtain the +3.3 V supply voltage. The MAX1836 high-efficiency step-down converter is a micropower regulator and can provide a quiescent current as low as 12 µA. Its input voltage is 4.5-24 V and its rated output voltage is +3.3 V. In order to prolong the working time of the device, a solar panel is added to charge the battery YSD-12980. The human-machine interface module includes some buttons for the configuration of parameters such as the measuring frequency, and an LCD screen, QC12864B, for displaying the working state and the real-time values of the parameters.

Energy Consumption Estimation

For long-term automatic monitoring of soil carbon flux in the wild, low power consumption is an important issue in the control circuit design. The sensor modules, the wireless transceiver module and the human-machine interface module are therefore designed for low power consumption; that is, the power supply of these modules is switched off when they are in standby mode in order to save energy. The electric current consumption of every module per hour is listed in Table 2, assuming that a measurement is carried out once every hour. As can be calculated from the data in Table 2, the energy consumption of a single soil flux measurement using SCFSen is about 320 J when it is set to measure once every hour, i.e., about 320 J per hour of operation.
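A quick back-of-envelope check of the 320 J figure against the battery-life estimates quoted in the next paragraph; the per-hour energy values are taken directly from the text, and everything else follows from the 12 V, 9800 mAh battery rating.

```python
# Stored energy of the 12 V, 9800 mAh battery and the resulting run times.
battery_joules = 12.0 * 9.8 * 3600.0        # ~423,360 J

hours_one_measurement_per_hour = battery_joules / 320.0     # ~1323 h (about 55 days)
hours_three_measurements_per_hour = battery_joules / 752.0  # ~563 h (about 23 days)
print(round(hours_one_measurement_per_hour), round(hours_three_measurements_per_hour))
```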
Because the battery is 12 V DC with a capacity of 9800 mAh, a SCFSen node can keep working continuously for 1323 h in theory, i.e., over 55 days, which by and large meets the requirement of soil carbon flux monitoring in the wild. A SCFSen node can therefore be recharged at intervals of almost one and a half months, and an additional solar panel can be connected to a SCFSen node for recharging. For a LI-8100 system with one chamber, obtaining a soil carbon flux "reading" after the chamber is closed needs at most 60 s for a dead band and 90 s for the observation, so a total of 150 s is needed. SCFSen takes 3 min per measurement, which is more time than LI-8100 needs, in order to obtain abundant raw data for the calculation of soil carbon flux. After every measurement, LI-8100 needs a 2-min observation delay, including a purge time of about 45 s, before the next measurement. According to the LI-8100 manual, if three consecutive measurements per hour are taken, a LI-8100 system with one chamber can keep working for 240 h (10 days). For SCFSen, if the same number of measurements is required, it consumes about 752 J of energy every hour. Because the battery used by SCFSen is 9800 mAh and 12 V, it can keep working for 563 h (23 days) under such a measurement requirement even if no solar panel is attached.

Calibration of SCFSen

The readings of sensors tend to be error-prone. Owing to the instrumental errors of the sensors, the raw measurement of a SCFSen node is often not completely correct, so calibrating every SCFSen node before its first use is very important.

Preliminary Experiment

To explore methods of calibration, we tried to find the correlation between the readings of SCFSen and those of other instruments such as LI-8100. At first, we measured the change of the carbon dioxide concentration at the same place using both SCFSen and LI-8100 over adjacent 180 s periods, and the results at one experimental position are shown in Figure 6. As can be seen in Figure 6, there is a difference of almost forty ppm between the first measurements of the two instruments, which is caused by the absolute error of the carbon dioxide concentration values measured by different sensors. However, the differences between the corresponding readings of SCFSen and LI-8100 are not constant, so we cannot calibrate the readings simply by applying a value compensation. On the other hand, the results of the two instruments are both nearly linear, and the measuring results at the other experimental positions are similar to those at this position. The reason is that the carbon dioxide concentration increases along with the release of carbon dioxide from the soil, and the trend is as Figure 3 shows. The nearly linear property of the carbon dioxide concentration can be used to calculate the changing rate of the carbon dioxide concentration. According to Equation (11), the absolute error of the measurements does not affect the measurement of the soil carbon flux, because the accuracy of the soil carbon flux measurement mainly depends on the changing rate rather than the absolute value of the carbon dioxide concentration. The changing rates of the carbon dioxide concentration measured by the two instruments are also different, which is caused by the different design parameters of the two kinds of instruments. However, the carbon dioxide concentrations measured by both instruments are nearly linear with time in the first three minutes.
Therefore, we adopt the measurements in the first three minutes as the data for calculating the changing rate of the carbon dioxide concentration, which is the value of ∂c_c/∂t in Equation (11). Because of the nearly linear property of the measurements in the first three minutes, the changing rate of the carbon dioxide concentration can be regarded as a constant value when using SCFSen, and this value is used for the calculation of the soil carbon flux. If the error in the changing rate of the carbon dioxide concentration could be eliminated, the measurement of the soil carbon flux would be correct. We therefore intend to calibrate the changing rate of the carbon dioxide concentration rather than the individual measurements of the carbon dioxide concentration. The increment of the carbon dioxide concentration is caused by the release of carbon dioxide from the soil, and the changing rate is the increment of the carbon dioxide concentration per unit time. An intuitive hypothesis then comes to mind: the difference in the sensitivities of different carbon dioxide sensors leads to different changing rates over the same time interval, and the changing rates are related to the sensitivity of the sensors.

Method of Calibration

To calibrate the changing rate of the carbon dioxide concentration measured by SCFSen instruments, we carried out measurements at m different positions using n SCFSen instruments and one reference instrument, such as a LI-8100 or a calibrated SCFSen. Let S_0 denote the reference instrument, S_1, S_2, ..., S_n denote the n randomly selected SCFSen instruments, and P_1, P_2, ..., P_m denote the m different locations. R_ij (0≤i≤n, 1≤j≤m) denotes the changing rate of the carbon dioxide concentration over three minutes measured using instrument S_i at position P_j. After calculating and analyzing the Pearson product-moment correlation coefficients between S_i (i > 0) and S_0 using Equation (18), we found that the changing rates of carbon dioxide measured using SCFSen are nearly linearly correlated with each other and with the results obtained using LI-8100; more details are given in Section 5. Thus we can calibrate a SCFSen by linearly transforming its changing rate according to the result measured by a LI-8100 or another similar device used as a reference. If we know the calibration coefficients a_i and b_i for a SCFSen instrument S_i, the changing rate of the carbon dioxide concentration measured by it, R_ij, can be calibrated to R'_ij using Equation (19). We now focus on the method of finding the calibration coefficients a_i and b_i for a SCFSen instrument S_i (i > 0). Because of measurement error, for a SCFSen S_i we cannot make every calibrated value R'_ij equal to its reference value R_0j at all m positions using the same calibration coefficients a_i and b_i; we can only consider the overall calibration performance over all m positions. Here we set the requirement that the expectation of R'_ij (i > 0) equal that of R_0j, as Equation (20) shows. Furthermore, a_i and b_i should minimize the deviation of R'_ij from R_0j, as Equation (21) shows. Solving Equations (19), (20) and (21), the values of a_i and b_i are obtained as in Equation (22). Once the calibration coefficients a and b for a SCFSen instrument have been determined according to this method, we use them directly in the calculation of the soil carbon flux to calibrate its results. Of course, the calibration coefficients for a SCFSen instrument depend on the parameter m and on the features of the m positions used during the calibration procedure.
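Before discussing how the choice of m and of the calibration positions affects the result, the following sketch shows one way the coefficients could be computed from such paired measurements. An ordinary least-squares line fitted through the pairs (R_ij, R_0j) automatically has zero mean residual, so it satisfies the equal-expectation constraint of Equation (20) while minimizing the deviation of Equation (21); it should therefore agree with the closed-form solution of Equation (22), although that equation is not reproduced in this text and this remains an assumption. All numbers below are made up for illustration.

```python
import numpy as np

def calibration_coefficients(r_i, r_0):
    """Fit a_i, b_i so that a_i * R_ij + b_i tracks the reference rates R_0j
    over the m calibration positions (ordinary least squares)."""
    a_i, b_i = np.polyfit(r_i, r_0, 1)
    return a_i, b_i

def calibrate(r_ij, a_i, b_i):
    """Apply Equation (19): calibrated rate = a_i * R_ij + b_i."""
    return a_i * r_ij + b_i

# Illustrative changing rates (ppm/s) at m = 8 positions.
r_ref = np.array([0.21, 0.35, 0.18, 0.42, 0.30, 0.27, 0.50, 0.24])   # reference S_0
r_i   = np.array([0.25, 0.41, 0.22, 0.49, 0.36, 0.31, 0.58, 0.29])   # uncalibrated S_i

pearson_r = np.corrcoef(r_i, r_ref)[0, 1]    # correlation check of Equation (18)
a, b = calibration_coefficients(r_i, r_ref)
print(round(pearson_r, 3), np.round(calibrate(r_i, a, b), 3))
```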
The bigger m is, and the wider the range of features covered by the m positions, the more accurate the calibration coefficients a and b become, and the closer the calibrated measurements are to the real situation.

Calibration

In our experiment, we used 5 randomly selected SCFSen instruments and a reference instrument (LI-8100) at 8 different positions, which means that n equals 5 and m equals 8 in the equations in Section 4.2. The measured changing rates of the carbon dioxide concentration using these instruments at these positions, R_ij (0≤i≤5, 1≤j≤8), are shown in Figure 7a. We then applied calibration to the measurement results according to the method in Section 4. The coefficients obtained during the calibration are shown in Table 3. The column r_i denotes the correlation coefficient between S_i (1≤i≤5) and S_0, and a_i and b_i denote the calibration coefficients of S_i (1≤i≤5). The values of r_i are all above 0.9, which indicates the linear correlation between R_ij (1≤i≤5) and R_0j. Using the calibration coefficients a_i and b_i, the calibrated changing rates of the carbon dioxide concentration, R'_ij (1≤i≤5), are obtained and are shown in Figure 7b. The calibrated results of the different SCFSen instruments stay nearly consistent with the result of the reference instrument at all experimental positions. The performance of the calibration is shown in Figure 8. As can be seen from Figure 8a, the mean relative errors of every SCFSen node at these positions are all almost 15% before calibration and decrease to near 6% after calibration. Because there are no true soil carbon flux data, the results of LI-8100 at every position are used as the ground truth in our analysis. The difference between the results from two different instruments is reasonable and can be eliminated by appropriate conversion if the user wishes. Besides, the standard deviations of the changing rates measured using the 5 SCFSens at each position before and after calibration are shown in Figure 8b. Obviously, the calibration operation makes the standard deviations smaller at all positions, which indicates the stability of SCFSen after calibration. The situation at the 7th position is a little interesting: the standard deviation before calibration is the biggest, while that after calibration becomes much smaller. This is an occasional case in which the calibration coefficients happen to suit the measurements at this position very well, as can easily be understood by analyzing the data in Figure 7. There are two main reasons why a calibrated SCFSen result may differ from the result of the reference instrument at the same position. The first is the temporal variation of the soil carbon flux. Because we cannot take the measurements using a SCFSen instrument and a reference instrument at the same position at the same time, the true values may also be different. During the calibration procedure, we can only try to minimize the influence of the temporal variation of the soil carbon flux by taking the measurements with different instruments at short intervals. The second reason is the number and the features of the positions selected during the calibration procedure, which may also influence the values of the calibration coefficients and the calibrated results, as mentioned in Section 4.2. To obtain good calibration performance in practical applications, we should try to minimize the influence of these two aspects.
As for the first aspect, the calibration operation should preferably be done in a period when the environmental parameters, such as soil temperature and humidity, are relatively stable, so as to obtain a relatively stable changing rate of the soil carbon flux with both the SCFSen and the reference, because these parameters are influencing factors of the soil carbon flux [34,35]. As for the second aspect, calibrations should be carried out at more positions with different soil conditions.

Deployment and Measurement

After calibration, SCFSen instruments can be used as sensor nodes to construct a sensor network for the measurement and estimation of regional soil carbon flux. By deploying calibrated SCFSens in an area, measurements at each sampling position can be taken synchronously and periodically, and the soil carbon flux values at every position and at different times can be obtained. Based on the sensing data from these SCFSen measurements, the spatial heterogeneity and temporal variation properties of the regional soil carbon flux can be examined. In practice, when we want to monitor the soil carbon flux in an area, the first step is to determine the positions at which to carry out the measurements, which should be done by ecologists based on their domain knowledge of the environment. Once the positions are chosen, we deploy a SCFSen node at each position. Because the selection of positions by the ecologists mainly focuses on the ecological representativeness of the measurement data, and the requirements of network connectivity and robustness are not taken into account by the ecologists, some relaying nodes have to be added at proper locations so that the network is well connected and the measurement data can be collected reliably. Our previous environment monitoring project, GreenOrbs [16], was successfully deployed in the forest. We use a similar type of node to those in GreenOrbs as the relaying nodes; they are more lightweight than SCFSen nodes and need less energy and lower costs. Using SCFSen instruments as sensor nodes in a field of 10 m × 10 m, as Figure 1 shows, we carried out measurements based on the experimental sensor network. Figure 9 shows the deployment scheme of the experimental sensor network, where the pentagrams denote the SCFSen nodes and the circle denotes the relaying node. The arrows between nodes demonstrate the topology of the network for two different rounds of data collection. Figure 10 shows the regional soil carbon flux after interpolation based on one round of measurements from these nodes using the Kriging method, which is commonly used in geostatistics applications.

Conclusions

In this paper we introduced the theory, design, implementation and calibration of an instrument, SCFSen, for the measurement of soil carbon flux. Because of the sensitivity and precision of the sensor chip used in SCFSen, the measurements have certain relative errors compared with the measurements using LI-8100, and these errors can be decreased by calibration. SCFSen will benefit the accurate estimation of regional soil carbon flux and the exploration of the spatial heterogeneity and temporal variation properties of the regional soil carbon flux, because it can be used to measure soil carbon flux persistently at multiple positions and under synchronized control, which cannot be accomplished well using currently existing instruments.
It uses a wireless transceiver chip to support wireless communication, which enables a soil carbon flux sensor network to be built when multiple such instruments are deployed at different positions. The wireless sensor network technology provides the ability to synchronize and coordinate multiple instruments. The instrument has low power consumption and can keep measuring periodically, which satisfies the requirement of outdoor monitoring. In future research, we will focus on effective node placement strategies for the measurement of regional soil carbon flux using SCFSen, which is a very challenging issue because of the high spatial heterogeneity and temporal variability of soil carbon flux. We intend to solve the problem by taking advantage of domain knowledge of soil respiration, according to which the changes of soil carbon flux over time and space exhibit certain regularities depending on the influencing factors. We can then fulfill the purpose of accurate measurement and estimation of regional soil carbon flux using the SCFSen instrument.
Stroke risk related to intentional discontinuation of antithrombotic therapy for invasive procedures

OBJECTIVE Antithrombotic medications pose a challenge for conducting surgical or invasive procedures, because their discontinuation is required to avoid postprocedural hemorrhagic complications but potentially increases the ischemic risk for the patient. This study aimed to estimate the increased risk of developing cerebral ischemic events during hospitalization requiring discontinuation of antithrombotic therapy. METHODS This investigation was a single-center retrospective observational study. Clinical data in patients scheduled for admission between January 1, 2021, and December 31, 2022, were collected. Patients requiring discontinuation of antithrombotic therapy were identified by referring to the admission database. Patients who developed cerebral ischemia were identified by referring to the institution's stroke center database. RESULTS Seven hundred ninety-six patients scheduled for nonneurosurgical procedures and 39 scheduled for neurosurgical procedures underwent discontinuation of antithrombotic therapy. Anticoagulation therapy was prescribed in 40.0%, and antiplatelet therapy was prescribed in 69.1% of the patients. A total of 9.2% of the entire cohort of patients were receiving both anticoagulation and antiplatelet therapy. Bridging therapy was administered in 20.9% of nonneurosurgical patients. No ischemic event was observed in the patients undergoing neurosurgical procedures. Among the entire cohort, 3 patients encountered some kind of thrombotic event, 2 of which were cerebral ischemia, accounting for an incidence of 0.24%, which was significantly higher than incidental in-hospital stroke unrelated to discontinuation of antithrombotic therapy (p = 0.04). Patients undergoing both anticoagulation and antiplatelet therapy harbored a significantly higher risk for cerebral ischemia related to discontinuation of antithrombotic therapy (p < 0.0001). CONCLUSIONS Discontinuing antithrombotic therapy during hospitalization for elective invasive procedures, including neurosurgical procedures, entailed a relatively small risk of developing cerebral ischemic events, but the risk was significantly higher compared to hospitalized patients without discontinuation of antithrombotic therapy.

[9,10] Although there is no nationwide documentation, nearly all academic medical institutions in Japan now strictly demand consideration of the discontinuation of antithrombotic therapy when conducting surgical procedures under general anesthesia. Thus, it is necessary to elucidate the actual ischemic risks involved in discontinuation of antithrombotic therapy from a medical safety standpoint. In order to ensure patient safety, medications of all patients scheduled for admission to our institution are centrally reviewed and registered before hospitalization, and antithrombotic therapy is discontinued for those preparing for elective surgery under general anesthesia. By taking advantage of our admission registration database, this study aimed to estimate the increased risk of developing cerebral ischemic events during hospitalization requiring discontinuation of antithrombotic therapy and to further identify any factor contributing to developing cerebral ischemic events.
Patient Cohort

This investigation was a single-center, retrospective, observational study approved by the institutional review board at our institution. Written informed consent was waived. We collected and analyzed information on all patients scheduled for admission between January 1, 2021, and December 31, 2022, to Asahikawa Medical University Hospital, who are required to check in at the Asahikawa Medical University Hospital Admission Center. Thus, emergency patients were excluded from the analysis. Furthermore, emergency and elective neurosurgical cases were searched during the same period, because neurosurgeons decided to continue or discontinue antithrombotic drugs before referring the patients to the admission center. Patients who developed cerebral ischemia were identified by referring to the Asahikawa Medical University Hospital's stroke center database and defined according to the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification.11 All patients with cerebral ischemia were identified by the presentation of neurological symptoms, and no patients with radiological stroke (silent stroke) were included, because the primary goal of the study was to elucidate the disadvantage of discontinuation of antithrombotic therapy from a patient's perspective. Given that patients could undergo multiple surgeries, we used patient-level data for analysis.

Protocol for Discontinuation of Antithrombotic Therapy

The institutional protocol for discontinuation of antithrombotic therapy basically follows published guidelines.12,13 Direct oral anticoagulants (DOAC) and warfarin are discontinued approximately 1-3 and 3-5 days, respectively, before surgery. Antiplatelet agents such as aspirin, clopidogrel, cilostazol, prasugrel, and ticlopidine are discontinued approximately 7-14, 7-14, 2, 14, and 7-14 days before surgery, respectively. Heparin or cilostazol bridging therapy was allowed according to each physician's independent decision. Heparin bridging therapy usually consists of 200 U/kg/24 h of intravenous low-molecular-weight heparin administration during discontinuation of antithrombotic therapy. Cilostazol bridging therapy, on the other hand, usually consists of 200 mg/day of oral cilostazol administration during discontinuation of antithrombotic therapy. Although there was no strict institutional guideline for resuming antithrombotic therapy, antithrombotic drugs were restarted approximately 1-3 days after surgery.

Statistical Analysis

Statistical analysis was performed using Prism 9 for macOS (GraphPad Software). Fisher's exact test was used for analyzing contingency tables, and multiple linear regression was used for multivariate analysis. A p value of less than 0.05 was considered statistically significant.
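As a minimal illustration of the contingency-table comparison described above, the sketch below runs Fisher's exact test on a 2 × 2 table built from the event counts reported later in the Results (2 cerebral ischemic events among the 835 patients who discontinued antithrombotic therapy versus 6 events among 18,368 hospitalized patients without discontinuation); the exact denominators are assumptions inferred from the cohort sizes given in the paper, not the authors' analysis dataset.

```python
from scipy.stats import fisher_exact

# Rows: discontinuation cohort vs. other hospitalized patients.
# Columns: cerebral ischemic event vs. no event.
table = [[2, 835 - 2],
         [6, 18368 - 6]]

odds_ratio, p_value = fisher_exact(table)
print(round(odds_ratio, 2), round(p_value, 3))  # a p value in the vicinity of the reported 0.04
```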
Characteristics of Patients Undergoing Discontinuation of Antithrombotic Therapy for Nonneurosurgical Procedures

Seven hundred ninety-six patients scheduled for nonneurosurgical procedures underwent discontinuation of antithrombotic therapy. Detailed patient characteristics are listed in Table 1 and Supplementary Data 1. In summary, the patient age was 72.7 ± 8.7 years (mean ± SD), and 220 patients (27.6%) were female. A total of 607 patients (76.3%), 457 patients (57.4%), and 413 patients (51.9%) had hypertension, hyperlipidemia, and valve/coronary artery diseases as preexisting conditions, respectively. On the other hand, cerebral disease was identified as a preexisting condition in 233 patients (29.3%). Major reasons for antithrombotic therapy were valve disease/atrial fibrillation (34.5%), coronary artery disease (30.9%), and previous stroke (30.4%). Patients were mainly asked to discontinue antithrombotic therapy due to scheduled elective surgery (70.9%) and gastrointestinal endoscopy (18.0%).

Characteristics of Antithrombotic and Bridging Therapy for Nonneurosurgical Procedures

Characteristics of antithrombotic and bridging therapy are listed in Table 2. Anticoagulation therapy was prescribed in 40% of the patients, with 80% of them receiving DOAC. On the other hand, antiplatelet therapy was prescribed in 69.1% of the patients, with more than half (57%) receiving aspirin. Of the entire cohort of patients, 9.2% were under both anticoagulation and antiplatelet therapy. Bridging therapy, either by heparin or cilostazol, was performed in 20.9% of the patients.

Characteristics of Patients Undergoing Discontinuation of Antithrombotic Therapy for Neurosurgical Procedures

During the time of the survey, there were 557 neurosurgical procedures, and 323 procedures were elective. Seventy-seven cases were receiving antithrombotic therapy, and 39 underwent discontinuation of antithrombotic therapy. Those who did not undergo discontinuation of antithrombotic therapy were mostly receiving endovascular treatments. Detailed patient characteristics are listed in Table 3 and Supplementary Data 2. In summary, the mean patient age (± SD) was 71.5 ± 9.7 years, and 11 patients (28.2%) were female. Twenty-eight (71.8%) and 24 (61.5%) patients had hypertension and hyperlipidemia as preexisting conditions, respectively. Cerebral disease was identified as a preexisting condition in 8 patients (20.5%). Major reasons for antithrombotic therapy were previous stroke (38.5%) and coronary artery disease (33.3%). More than half of the patients were asked to discontinue antithrombotic therapy due to elective craniotomy (53.8%).

Characteristics of Antithrombotic and Bridging Therapy for Neurosurgical Procedures

Characteristics of antithrombotic and bridging therapy are listed in Table 4. Anticoagulation therapy was prescribed in 28.2% of the patients, with 91% of them receiving DOAC. On the other hand, antiplatelet therapy was prescribed in 71.8% of the patients, with clopidogrel leading the frequency of prescriptions (38.5%). No patients were receiving both anticoagulation and antiplatelet therapy. Preoperative management was carefully done, as can be seen in Supplementary Data 2.
Thrombotic Event During Discontinuation of Antithrombotic Therapy and Its Risk Factor

There was no thrombotic event among patients undergoing neurosurgical procedures (Supplementary Data 2). When we expand to the entire surveyed cohort, 3 patients encountered some kind of thrombotic event related to discontinuing antithrombotic therapy during hospitalization or right after discharge, 2 of which were cerebral ischemia, accounting for an incidence of 0.24% (Tables 5 and 6). The in-hospital cerebral ischemic event unrelated to the discontinuation of antithrombotic therapy was observed in 6 of 18,368 patients, with 4 of the events presumably related to postoperative surgical complications (Tables 5 and 6, and Supplementary Data 3; incidence rate = 0.03%). The cerebral ischemic event occurred significantly more frequently for patients undergoing antithrombotic therapy (p = 0.04, Fisher's exact test). Both univariate (data not shown) and multivariate linear regression models revealed that patients under both anticoagulation and antiplatelet therapy possessed a significantly higher risk for cerebral ischemia related to discontinuation of antithrombotic therapy (p < 0.0001, Table 7). The introduction of bridging therapy did not reduce the risk of ischemic events related to discontinuation of antithrombotic therapy. The invasiveness of the nonneurosurgical procedure also did not correlate with the risk of cerebral ischemic events (p = 0.51, linear regression analysis).

Discussion

The current study aimed to elucidate the real-world frequency and risk of developing cerebral ischemic events related to discontinuing antithrombotic therapy during hospitalization for elective invasive procedures. We showed that the frequency of cerebral ischemic complications associated with discontinuing antithrombotic therapy is as low as 0.24% for nonneurosurgical procedures. Furthermore, our patients did not experience any symptomatic ischemic complications for neurosurgical procedures. The current guideline for antithrombotic therapy for coronary artery disease recommends discontinuing antithrombotic therapy depending on individual perioperative bleeding and thrombotic risks.12,14,15 However, most elective noncardiac surgeries or treatments requiring general anesthesia, or those targeting deep-seated lesions, demand discontinuation of antithrombotic therapy due to the fear of postoperative hemorrhagic complications. Although the rationale for continuing or discontinuing antithrombotic therapy during the perioperative period has mainly been established by looking into the risks associated with coronary artery disease,16,17 the corresponding risk of cerebral ischemic complications has not been thoroughly investigated.
Although the frequency of cerebral ischemic events related to discontinuing antithrombotic therapy was relatively low, it was significantly higher than that seen for hospitalized patients not requiring discontinuation of antithrombotic therapy (Table 6). This could result from multiple causes, given that patients requiring antithrombotic therapy harbor numerous risk factors that can lead to cerebral ischemia. Nonetheless, clinicians should be aware of a 10-fold increased risk of developing cerebral ischemic events for patients requiring discontinuation of antithrombotic therapy. We would also like to emphasize that the observed extremely low frequency of cerebral ischemic complications for neurosurgical procedures could be the product of meticulous management of antithrombotic therapy discontinuation by neurosurgeons (Supplementary Data 2). Our investigation also revealed that patients receiving both anticoagulant and antiplatelet therapy harbor a significantly higher risk of developing cerebral ischemic events, which could be a valuable observation helping clinicians and patients to understand the overall risk associated with the treatment. Notably, patients receiving anticoagulant therapy showed a stronger correlation with cerebral ischemic development than those receiving antiplatelet therapy (Table 7; p = 0.08 vs 0.34).

Perioperative bridging therapy with unfractionated heparin is commonly used to counteract the risk associated with discontinuing antithrombotic therapy.9,10 According to the evidence established from clinical trials, however, the advantage of bridging therapy is in doubt: it was shown that bridging therapy increased the risk of hemorrhagic complications while not reducing the risk of thromboembolism.7 Despite these negative findings, 20.9% of the patients in our study received heparin or cilostazol bridging therapy (Table 2). Due to the small number of patients who developed some kind of thrombotic complication among the entire cohort, we could not determine with certainty the benefit of bridging therapy. It should be noted, however, that one of the two stroke patients was receiving bridging therapy during discontinuation of antithrombotic therapy (Table 5, case 300), which could align with previous findings demonstrating minimal protective benefit of bridging therapy.

We should note several limitations regarding this investigation. First, this was a single-center retrospective study with a limited number of patients who developed positive findings (i.e., cerebral ischemic events). Thus, the statistical power is limited, and the multivariate linear regression analysis results should be interpreted with great caution. Second, the clinical backgrounds of the analyzed patients were heterogeneous. Ideally, the risks associated with discontinuing antithrombotic therapy should be determined according to the treatment categories. Despite these limitations, our findings should still hold value for those involved in treating patients under similar circumstances, and should provide data for proposing future clinical trials.
Conclusions

Discontinuing antithrombotic therapy during hospitalization for elective invasive procedures, including neurosurgical procedures, entailed a relatively small risk of developing cerebral ischemic events, but the risk was significantly higher compared to hospitalized patients without discontinuation of antithrombotic therapy. Patients receiving both anticoagulant and antiplatelet therapy were significantly associated with a higher risk of developing cerebral ischemia related to discontinuation of antithrombotic therapy.

TABLE 1. Characteristics of patients undergoing discontinuation of ATT for elective nonneurosurgical procedures. AF = atrial fibrillation; ATT = antithrombotic therapy; DM = diabetes mellitus; HL = hyperlipidemia; HTN = hypertension; VC = valve/coronary artery disease. Unless otherwise indicated, values are expressed as the case count.

TABLE 2. Characteristics of ATT and bridging therapy for elective nonneurosurgical procedures. Values are expressed as the case count (%).

TABLE 3. Characteristics of patients undergoing discontinuation of ATT for elective neurosurgical procedures. Unless otherwise indicated, values are expressed as the case count.

TABLE 5. Characteristics of patients having any thrombotic event during hospitalization. Ce = cerebral disease; NA = not available. Case number refers to patient ID in the supplementary data.

TABLE 7. Multivariate linear regression model evaluating factors potentially contributing to stroke during discontinuation of ATT. * Statistically significant at p < 0.05.
Research on Influence of Converter Transformer Grouping and Receiving End Power Grid Structure on Converter Transformer DC Bias

When a ±800 kV UHVDC project is put into operation, the DC bias problem cannot be ignored. In this paper, a calculation model for the DC bias current is established based on data from the Xiluodu-Zhexi UHVDC project. Converter transformer grouping and the receiving end power grid structure are considered. The converter transformer DC bias current is calculated by the node voltage method. The influence of different converter transformer grouping modes, the distance between the grounding electrode and the substations, and the feeder length and number of feeder loops are analysed. The results show that the single-phase converter transformer bias current is maximum at full load with 6 converter transformers under the 1/2 monopole earth mode, and that the bias current level of the converter can be reduced at the planning and design stage.

Introduction

During the operation and debugging of the Chinese Xiluodu-Zhexi ±800 kV HVDC transmission project, a DC bias problem was noticed. When operating in monopole earth mode and bipolar unbalanced mode, the DC bias current at the neutral points in the AC power grid and in the converter transformers is larger. The capacity of a ±800 kV UHVDC transmission project ranges from 5000 MW to 8000 MW. Considering factors such as the manufacturing capacity of the converter transformers, the dual 12-pulse series connection mode is adopted for the two poles of the 800 kV converter station [1]. Under different operation modes, the grounding current and the converter transformer groups change, and the DC bias also changes. When the substation ground potential is constant but the number and length of the AC bus feeder circuits change, the DC resistance of the path carrying the bias current changes [2]. When the feeder circuit number and length are constant, the distance between the AC substations connected by the feeders and the grounding electrode has an impact on the AC substation earth potential [3,4]. This paper provides an equivalent model of the bias current flow path to analyse the influence of converter transformer grouping. An analysis model is then established for the influence of the distance between the grounding electrode and the substations on the bias current. Finally, the influence of the feeder length and the number of feeder loops is considered.

Influence of converter transformer grouping

Because of the different converter transformer grouping modes of the ±800 kV converter station, the grounding current is different, and the degree of DC bias is different [5]. Table 1 lists all possible operation modes of the ±800 kV converter station, together with the number of converter transformers put into operation, the transmission capacity and the grounding current under each mode. There are 8 converter transformer grouping operation modes. The complete monopole earth, 1/2 monopole earth and bipolar unbalanced operation modes have larger grounding currents, and the grounding current under the bipolar unbalanced mode is less than I_r. According to the number of transformers under the three operation modes, the equivalent model of the three-phase flow path between the converter station and the AC substations connected by the feeders can be established, as shown in Figure 1. In Figure 1, E_0, E_1, E_2, ..., E_n are the earth surface potentials of the converter station and the substations at the rated grounding current, respectively, and R_g0, R_g1, R_g2, ..., R_gn are the grounding resistances of the converter station and the substations, respectively.
R_x, R_t1, R_t2, ..., R_tn are the three-phase parallel DC resistances of the transformer windings in the converter station and the substations, respectively, and R_l1, R_l2, ..., R_ln are the three-phase DC resistances of the feed lines of the converter station. Single-phase double-winding transformers are used for the ±800 kV converter transformers, and the DC resistance of a single converter transformer is defined as R_t0. E_0 and E are defined as the potential values at the rated grounding current. According to the three grouping modes above, the grounding bias current and the bias current flowing through a single-phase converter transformer are calculated by the corresponding formulas. As Table 2 shows, under the bipolar unbalanced operation mode, the grounding current is less than I_r and the earth surface potential is less than that at the rated grounding current; α is defined as a coefficient that is greater than 0 and less than 1. Comparing the converter station grounding bias current and the single-phase converter transformer bias current under the three modes, it can be seen that the converter station grounding bias current is the maximum under the complete monopole earth mode, while the single-phase converter transformer bias current is the maximum under the 1/2 monopole earth mode. So the problem of DC bias under the 1/2 monopole earth mode is the most serious.

Parameter selection

The single-phase transformer of the ZZDFPZ-321000/500 model is adopted for the ±800 kV converter transformer. Its winding DC resistance is 0.704 Ω, calculated from the nameplate parameters, and the converter station grounding resistance is 0.1 Ω. A single-phase autotransformer is adopted for the 500 kV AC transformer; its high-voltage winding DC resistance and medium-voltage winding DC resistance are 0.238 Ω and 0.097 Ω, respectively, and its grounding resistance is 0.2 Ω. The 500 kV AC line uses four split (bundled) conductors, and the single-phase line resistance is 0.0187 Ω/km. When the influence of distance and other factors on the bias current of the converter station is analysed, the length of the feeders, the number of loops and the distance between the substations and the grounding electrode have a great influence on the bias current of the converter station. It is advisable to select the mode with more converter transformers in parallel, which reduces the DC resistance in the converter station; therefore, the operation mode with the maximum bias current in the converter station, i.e., the complete monopole earth operation mode, is adopted. According to the six-layer soil structure data of the Jinsi grounding electrode, a horizontally homogeneous layered earth resistivity model is established [7-9]. Based on the ANSYS software, the earth current is set to 5000 A, and the boundary condition of zero potential is applied at a distance of 50 km from the grounding electrode. The earth potential around the Jinsi grounding electrode within the 50 km range is simulated, as shown in Table 4. The distance between the Jinhua converter station and the grounding electrode is 23.5 km, and its earth potential is 40.052 V. Combined with the earth potential of each substation, the node voltage method is used to calculate the potentials and the resulting bias currents. The bias currents of the feeder branches and the converter station in the three cases lead to the two observations listed after the sketch below.
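A minimal sketch of the node voltage calculation for a network of this kind is given below. It assumes the star topology implied by Figure 1, in which each substation branch (grounding, winding and line resistances lumped in series) connects its local earth-potential source to the converter station bus, and the bus returns to the converter station earth through the paralleled converter transformer windings and the station grounding resistance; all numerical values are illustrative and are not the project data.

```python
import numpy as np

# Converter station side: paralleled converter transformer winding resistance plus
# station grounding resistance (loosely based on the 0.704 ohm winding and
# 0.1 ohm grounding values quoted above; illustrative only).
E0 = 40.052                      # converter station earth potential, V
R0 = 0.704 / 12 + 0.1            # ohm

# Substation branches: earth potential sources and lumped branch resistances
# (R_g + R_t + R_l for each feeder path); hypothetical numbers.
E = np.array([30.0, 20.0, 12.0, 8.0])    # V
Rb = np.array([1.2, 1.5, 2.0, 2.4])      # ohm

# Single-unknown nodal equation for the converter station AC bus potential V_b:
# the branch currents into the bus balance the current leaving through R0.
Vb = (E0 / R0 + np.sum(E / Rb)) / (1.0 / R0 + np.sum(1.0 / Rb))

I_branch = (E - Vb) / Rb         # bias current carried by each feeder branch, A
I_station = (Vb - E0) / R0       # total DC bias current through the converter station, A
print(np.round(I_branch, 2), round(I_station, 2))
```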
1) The bias current flowing into the converter station is the maximum for the substation nearest to the grounding electrode, and the bias current flowing out of the converter station is the maximum for the substation farthest from the grounding electrode. 2) When the distance between the substations and the grounding electrode increases gradually, the sum of the bias currents flowing into the converter station decreases gradually; when the distance increases further, the direction of the bias current changes to flowing out of the converter station.

The influence of the feeder length and the number of feeder loops

The ±800 kV converter station is usually connected to three or four 500 kV substations, and the number of 500 kV feeder loops is generally 8 to 10. There are four 500 kV substations, Shuanglong, Danxi, Wanxiang and Ningde, connected to the Jinhua converter station by 10 lines; Shuanglong station is connected by four loop lines, and each of the other stations by two loop lines [10-13]. With the earth potential remaining unchanged, the length and the number of loops of the feeders of the converter station are changed, and the DC resistance of the feeder branches changes accordingly, which affects the bias current. Because increasing (decreasing) the feeder length and decreasing (increasing) the number of loops have the same effect on the branch DC resistance, only the influence of the number of feeder loops on the bias current is analysed; the conclusions are also applicable to the analysis of the influence of feeder length. Case 2 in Table 3 is taken as the object, combined with the model shown in Figure 1. The effect of the number of feeder loops on the bias current is studied in two cases. 1) The number of loops of all the feeders in the converter station is changed to 1, 2, 3, 4; the change of the bias current is shown in Table 6. 2) Only the number of loops of one feeder branch is changed: the feeder branches of substation 1 and substation 4 are changed from 1 to 4 loops; the variation of the bias current is shown in Table 7 and Table 8.

Tab. 6. Effect of feeder loop number variation on bias current (A).

Conclusion

1) According to the study of converter transformer grouping, the results show that the grounding bias current is maximum at full load with 12 converter transformers under the complete monopole earth mode, and the single-phase converter transformer bias current is maximum at full load with 6 converter transformers under the 1/2 monopole earth mode. Therefore, the DC bias under the 1/2 monopole earth mode is the most serious. 2) The DC bias is the most serious when the converter transformers are running at full load under the 1/2 monopole earth mode, the AC substations are close to the grounding electrode, the bus feeders are short and the number of loops is large. It is suggested that, at the planning and design stage, the bias current level of the converter be evaluated according to the above effects. The influence of converter transformer grouping and the receiving end power grid structure should be considered, together with reasonable planning and design of the distance between the receiving end substations and the grounding electrode and of the feeder length and number of loops of the converter station.
Wedge-shaped vertebrae is a risk factor for symptomatic upper lumbar disc herniation

Background: At present, much is unknown about the etiology and pathogenesis of ULDH. However, it is interesting to note that many ULDH patients have a radiographic feature of adjacent vertebral wedge deformation. The purpose of this study is to investigate the relationship between symptomatic upper lumbar disc herniation (ULDH) and wedge-shaped vertebrae (WSV). Methods: This was a retrospective study of 65 patients with single-level ULDH who had undergone surgery at our medical center between January 2012 and December 2016. Clinical data, including clinical and radiological evaluation results, were collected. Results: The incidence of WSV in the ULDH group (44.6%, 29/65) was higher than in the lower lumbar disc herniation group (21.5%, 14/65), and there were statistically significant differences in WSV (χ2 = 7.819, P = 0.005), wedging angle of the vertebrae (WAV) (t = 9.013, P < 0.001), and thoracolumbar kyphotic angle (TL) (t = 8.618, P < 0.001) between the two groups. Based on multivariate logistic regression analysis, WAV (OR = 0.783, 95% CI = 0.687–0.893, P < 0.001) and TL (OR = 0.831, 95% CI = 0.746–0.925, P = 0.001) were independently associated with ULDH. The cutoff values of WAV and TL were 5.35° and 8.35°, which were significantly associated with ULDH (OR = 3.667, 95% CI = 1.588–8.466, P = 0.002). Conclusion: The WSV is an independent risk factor for ULDH. WAV > 5.35° and TL > 8.35° were predictors of ULDH. It should be noted that patients with vertebral wedge deformation combined with thoracolumbar kyphosis have a higher risk of ULDH.

Background

Lumbar disc herniation (LDH) is defined as a prolapse of the nucleus pulposus through a defect in the annulus fibrosus forming the circumferential rim of the disc. Most LDH occurs at the levels of L4/5 and L5/S1 (90-97%). L1/2 and L2/3 disc herniations, which are defined as upper lumbar disc herniation (ULDH), are very rare (< 5%) [1,2]. ULDH may have different clinical signs from ordinary lower lumbar disc herniation (LLDH) at the levels from L3/4 to L5/S1 in clinical practice. A high rate of neurological disability has been noted in patients with ULDH, and its surgical results differ significantly from those of LLDH [3-5]. To the best of our knowledge, at present, much is unknown about the etiology and pathogenesis of symptomatic ULDH. It is generally known that the vertebral shape is a major factor in determining the general configuration of the spinal column. We noted that numerous symptomatic ULDH patients visiting our institution had adjacent vertebral wedge-shaped deformities. Although symptomatic ULDH in the context of wedge-shaped vertebrae (WSV) has been recognized to occur, the relationship between ULDH and WSV is still controversial, and the limited number of reported cases has made it difficult to judge [6-8]. In this study, a retrospective radiographic review was conducted on 65 symptomatic ULDH patients treated from January 2012 to December 2016 to investigate the relationship between ULDH and WSV by examining the incidence of associated WSV and its radiologic signs in the ULDH patients; another group of 65 LLDH patients served as controls. We designed the present study to examine the relationship between predictors and ULDH, particularly the WSV. This exploration of the causes of ULDH provides insight for diagnosis by spine surgeons.

Study population selection

This was a retrospective clinical study.
A total of 79 patients underwent single-level posterior lumbar interbody fusion (PLIF) surgery after a diagnosis of symptomatic ULDH (L1/2 or L2/3) at our department between January 2012 and December 2016. Among them, 14 patients who had previous spinal surgery or incomplete radiographic materials were excluded. Finally, 65 patients were enrolled as the ULDH group. There were 33 males and 32 females with a mean age of 42.2 (23-61) years. All patients had neurologic symptoms that warranted surgery. Furthermore, these patients who developed gradual neurological changes followed 6 months of unsuccessful conservative treatment. However, the patients with spine trauma, tumor spinal pathologies, neoplasm, spinal infections, congenital deformations, and chronic systemic illnesses such as rheumatoid arthritis and neurodegenerative diseases were excluded from this study. Data from these ULDH patients were compared with a group of controls that presented with LLDH. They were randomly sampled patients surgically treated (percutaneous endoscopic lumbar discectomy, PELD) in the same period for single-level symptomatic LLDH (L4/5 or L5/ S1). The sample size was set at 65 cases in the LLDH group because there were 65 patients in the ULDH group. This study had been approved by Ethics Committee of The Third Hospital of Hebei Medical University. There is no need to obtain informed consent from patients because this is a retrospective study and all data were collected and analyzed anonymously. Evaluation criteria Clinical data including clinical and radiological evaluation results were collected by two independent authors pre-and postoperatively. The thoracolumbar kyphotic angle (TL) was measured from the T10 superior endplate to the L2 inferior endplate by the Cobb method, and lumbar lordosis (LL) was measured from the L1 superior endplate to the S1 superior endplate. In this study, the wedge-shaped vertebrae (WSV) show at least 5°of anterior wedging on the lateral X-ray. And wedging angle of the vertebrae (WAV) was defined as the larger angle adjacent to the herniated disc formed between a line drawn parallel to the superior endplate and a line drawn parallel to the inferior endplate ( Fig. 1). In the LLDH group, WAV was measured at each vertebral body from L1 to L3 of every subject and the biggest angle was chosen for study. Two independent radiologists assessed the radiographs. In the event of disagreement about fusion healing, a third independent reading was obtained. Statistical analysis All data were collected, and the software SPSS version 17.0 (SPSS Inc., Chicago, IL) was used for the statistical evaluation. Results were presented as mean ± SD. The independent two-sample t test was used to identify a significant difference between two groups. Categorical data were compared via the chi-square test. Multivariate logistic regression analysis was used to predict the risk factors, and P value < 0.05 was set for univariate analyses. P values of respective predictors were given on the basis of adjusted odds ratio (OR) with 95% confidence interval (CI). The analysis of receiver operating characteristic (ROC) curves was protracted to evaluate the cutoff values for the continuous variables. The relationship between ULDH and the number of risk factors was examined by logistic regression analysis. In all analyses, P value < 0.05 was considered statistically significant. Discussion So far, there is some confusion about the levels of ULDH. 
Although some literature also included the L3/4 and T12/L1 disc levels into ULDH [1,[7][8][9][10], the general consensus considers only L1/2 and L2/3, as does this current study, as ULDH. Many studies have demonstrated that the development of LDH may be influenced by several factors, including the sex, age, trauma, smoking history, chronic cough, obesity, chronic degeneration, and kyphosis [11][12][13]. However, because of the rarity of ULDH, its pathogenesis has not been thoroughly studied. In clinical practice, we noted that the ULDH patients visiting our institution had one significant radiologic feature which is WSV. Moreover, some previous authors have been performed to discuss the function of the WSV contributing to ULDH [6,7]. However, Wu et al. [8] proposed that there are no significant correlative analyses between isolated ULDH and adjacent WSV. In the present study, the incidence of WSV was detected in 44.6% (29/65) of ULDH patients treated, and the average WAV was 11.2°, which were significantly different from the LLDH group; these findings are similar to Xu et al.'s study [6]. We further found that the WSV is an independent risk factor for ULDH, and multivariate logistic regression analysis and cutoff values have shown that the existence of two factors (WAV > 5.35°and TL > 8.35°) was significantly correlated with ULDH. How does WSV affect the formation of ULDH? Firstly, we believed that the WSV can increase the shear and compressive forces of adjacent segments by changing the angle of endplates, thereby accelerating the degeneration of adjacent intervertebral discs and even leading to disc herniation [6,[14][15][16]. Secondly, WSV contributes greatly to the composition of thoracolumbar kyphosis, which is prone to local kyphosis. At present, the relationship between ULDH and local kyphosis remains inconclusive. But Bradford and Garica [17] and Leroux et al. [18] believed that when the kyphosis deformity occurs, the relative local weight-bearing line of the spine moves forward, the pressure on the front of the intervertebral disc increases, and the traction tension on the back increases, which makes the posterior annulus of the intervertebral disc prone to tear, leading to or accelerating the herniation of the intervertebral disc. In our current study, we found the patients with WAV > 5.35°and TL > 8.35°were more likely to suffer ULDH. Finally, previous studies have suggested that the wedge deformation of vertebral body may be related to endplate injury [6,7,19,20]. And the endplate injury is also considered to be one of the main causes of disc degeneration [20][21][22][23]. In the process of injury, the integrity of the endplate was impaired, the blood supply to the intervertebral disc was affected, and its nutritional pathway was damaged, which eventually leads to the degeneration of the intervertebral disc and even the herniated disc. Consequently, from the findings of this study, it should be noted that the patients with vertebral wedge deformation combined with thoracolumbar kyphosis have a higher risk of ULDH. However, there are some limitations to this retrospective study. The number of ULDH in this study is relatively low because of rarity of its incidence. There may be a selection bias resulting in this finding. And there is still a need for a large sample multicenter study to further confirm this result. In addition, many other factors leading to disc herniation need to be investigated in future studies for more accurate evaluation. 
Conclusion In our study, WSV was detected in 44.6% of the ULDH patients treated, and the average WAV was 11.2°. We further found that WSV is an independent risk factor for ULDH; multivariate logistic regression analysis and the derived cutoff values showed that the presence of both factors (WAV > 5.35° and TL > 8.35°) was significantly associated with ULDH. Clinicians should recognize that patients with vertebral wedge deformation combined with thoracolumbar kyphosis have a higher risk of ULDH.
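The Methods above describe multivariate logistic regression followed by ROC analysis to derive the WAV and TL cutoff values. A minimal sketch of how such an analysis might look is given below; the study itself used SPSS, so the scikit-learn calls, the column names and the synthetic case-control data are illustrative assumptions rather than the authors' actual workflow or dataset.

```python
# Sketch: multivariate logistic regression plus ROC-based cutoff selection
# (Youden index) for continuous predictors such as WAV and TL.
# Illustrative only: synthetic data, not the study's 65 + 65 patients.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "WAV":  np.r_[rng.normal(11, 3, 65), rng.normal(4, 2, 65)],   # degrees
    "TL":   np.r_[rng.normal(12, 3, 65), rng.normal(6, 2, 65)],   # degrees
    "ULDH": np.r_[np.ones(65), np.zeros(65)].astype(int),         # 1 = ULDH, 0 = LLDH control
})

# Multivariate logistic regression; exponentiated coefficients approximate odds ratios.
model = LogisticRegression().fit(df[["WAV", "TL"]], df["ULDH"])
print("OR (WAV, TL):", np.exp(model.coef_[0]))

def roc_cutoff(values, labels):
    """Return the threshold maximizing Youden's J, plus the AUC."""
    fpr, tpr, thresholds = roc_curve(labels, values)
    best = np.argmax(tpr - fpr)
    return thresholds[best], roc_auc_score(labels, values)

for col in ("WAV", "TL"):
    cutoff, auc = roc_cutoff(df[col], df["ULDH"])
    print(f"{col}: cutoff ~ {cutoff:.2f} deg, AUC = {auc:.2f}")
```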
Integration of Press-Hardening Technology into Processing of Advanced High Strength Steels Development of high strength or even ultra-high strength steels is mainly driven by the automotive industry which strives to reduce the weight of individual parts, fuel consumption, and CO2 emissions. Another important factor is the passenger safety which will improve by the use of these materials. In order to achieve the required mechanical properties, it is necessary to use suitable heat treatment in addition to an appropriate alloying strategy. The main problem of these treatments is the isothermal holding time. These holding times are technologically demanding which is why industry seeks new possibilities to integrate new processing methods directly into the production process. One option for making high-strength sheet metals is press-hardening which delivers high dimensional accuracy and a small spring-back effect. In order to test the use of AHSS steels for this technology, a material-technological modelling was chosen. Material-technological models based on data obtained directly from a real press-hardening process were examined on two experimental steels, CMnSi TRIP and 42SiCr. Variants with isothermal holding and continuous cooling profiles were tested. It was found that by integrating the Q&P process (quenching and partitioning) into press hardening, the 42SiCr steel can develop strengths of over 1800 MPa with a total elongation of about 10%. The CMnSi TRIP steel with lower carbon content and without chromium achieved a tensile strength of 1160 MPa with a total elongation of 10%. Introduction High-strength steels are promising materials for applications in the automotive industry. As they typically contain a multiphase microstructure and exhibit numerous strengthening mechanisms, they can attain a wide range of mechanical properties [1,2]. Chemical composition is another of their advantages, being very cost effective and comprising relatively few alloying elements. Their mechanical properties are obtained, for the most part, by means of appropriate heat treatment or thermomechanical processing sequences. With their good combination of strength and ductility, TRIP steels fall into this group as well [3][4][5]. They are treated using intercritical annealing which involves isothermal holding in the bainitic transformation region. At this stage, bainite forms and retained austenite becomes stable. Stability of retained austenite (RA) is governed by the carbon content and by the RA morphology and distribution [6]. Higher strength levels are obtained in martensitic steels by Q&P processing. It comprises isothermal holding between the M s and M f temperatures; and leads to a mixture of martensite and foil-like retained austenite between martensite needles [7]. Strengths of more than 2000 MPa combined with up to 10% elongation are obtained. Given their favourable properties and ability to absorb crash energy, these materials could be used for making body-in-white safety components. One of the processes by which such highstrength components can be manufactured is press-hardening. It enables sheets of hardenable materials to be worked using lower forming forces and to have the springback effect reduced [8,9]. Therefore, processing of these steels without isothermal holding needs to be tested or post-forming heat treatment must be added. Experimental Programme For this experiment, several heat treatment routes were proposed. They were based on presshardening and applied to two experimental steels. 
One of them was CMnSi TRIP steel, a typical TRIP steel, and the other was 42SiCr, one of the steels for Q&P processing. The proposed sequences reflected press hardening in a tool at RT and heat treatment. The purpose was to determine and describe, firstly, the effect of varying heat treatment parameters on microstructure and mechanical properties and, secondly, the suitability of these steels for press-hardening. Experimental Materials. CMnSi TRIP is a low-alloy steel with 0.2% carbon, whose only alloying elements are manganese and silicon (Table 1). This chemistry was chosen for stability of retained austenite, solid solution strengthening, and to prevent carbide precipitation during bainite formation [4]. Specimens for heat treatment were made by waterjet cutting from a soft-annealed sheet of 1.5 mm thickness. Its microstructure consisted of ferrite and pearlite; the hardness was 180 HV10. Characteristics of phase transformations were calculated using the JMatPro software (Release 9.0, Sente Software Ltd., 2016). The M s temperature was found to be 370°C and the M f was 257°C. 42SiCr had a higher carbon content than CMnSi: 0.43%. It also contained chromium which greatly strengthens solid solution and improves hardenability ( Table 1). The initial microstructure of the sheet of this steel consisted of pearlite and a small amount of ferrite. Its hardness was 290 HV10. Owing to the higher carbon level, the M s was 290°C and the M f temperature was 178°C. Material-technological modelling of press-hardening process. In order to be able to test press-hardening on these newly-developed high-strength steels, material-technological modelling was employed. Using this technique, thermal and deformation routes which were measured in a real-world forming process can be tested on a chosen material. It is performed in a thermomechanical simulator which uses high-frequency electric resistance heating and offers high heating and cooling rates (up to 200°C/s). The data for developing the model was measured in a realworld process, with the tool either at room temperature or pre-heated to various temperatures. It allowed various cooling profiles to be designed to match the materials' characteristics. In the first sequence proposed, the tool was at room temperature ( Fig. 1). The initial heating to 937°C was followed by soaking for 100 seconds. Then, a 7-second step represented air cooling of the workpiece during transfer to the forming tool. The temperature dropped to 760°C. The next step was a simulation of press-hardening in a tool at room temperature at a cooling rate of 100°C/s (the CMnSi-01 sequence). In other sequences, the impact of cooling rate was tested. The cooling rate was reduced to 12°C/s and 6°C/s (CMnSi-02 and CMnSi-03 sequences, respectively). TRIP steels are intercritically annealed to obtain a mixture of ferrite, bainite and retained austenite. Therefore, isothermal holding was incorporated into cooling in some sequences. In the first one of these, cooling rate changed at 425°C from 51°C/s to 1.5°C/s (the CMnSi-04 sequence). Then, sequences with isothermal holding at a bainitic transformation temperature for 600 seconds and 900 seconds were used (the CMnSi-05 and 06 sequences, respectively). The purpose of the last sequence was to study the effect of the rate of cooling after isothermal holding at 425°C (the CMnSi-07 sequence). The first sequence applied to 42SiCr steel represented cooling of sheet metal in a tool at room temperature (the 42SiCr-01 sequence) ( Table 3). 
Other sequences simulated the typical processing of this steel, the Q&P process. In the second sequence, the cooling rate changed below 200°C (the 42SiCr-02 sequence). In the next sequence, cooling stopped at 200°C and was followed by reheating to a partitioning temperature of 250°C and holding for 600 seconds (the 42SiCr-03 sequence), while the cooling rate prior to placement in the tool was kept at 100°C/s. Lower cooling rates of 50°C/s and 10°C/s were tested as well (42SiCr-04 and 05). The time at the partitioning temperature plays a role in the stability of retained austenite. For this reason, sequences with holding times of 800 s and 400 s were used (42SiCr-06 and 07). The last sequences introduced change partitioning temperatures: 230°C and 270°C (42SiCr-08 a 09). Methods of evaluation. Microstructures were examined by optical (OM) and scanning electron microscopy (SEM). Tescan VEGA 3 and Zeiss EVO MA 25 scanning electron microscopes were employed. The amount of retained austenite was measured by X-ray diffraction. The automatic powder diffractometer AXS Bruker D8 Discover with a HI-STAR position-sensitive area detector and a cobalt X-ray source (λKα = 0.1790307 nm) was employed for this measurement. Measurements were taken in the centres of metallographic sections at diffraction angles in the interval of 25 ÷ 110°2ϑ. Mechanical properties were measured by HV10 hardness testing and tensile testing. Results and Discussion The sequence which represented press-hardening in a tool at RT at a cooling rate of 100°C/s caused the CMnSi TRIP steel to develop a ferritic microstructure with martensite and 3% of retained austenite (Fig. 2). The hardness was 241 HV10. The ultimate strength was 876 MPa and elongation reached 17% (Table 2). After the cooling rate had been decreased from 100°C/s to 6°C/s, neither substantial differences in microstructure nor pearlite formation were detected (Fig. 3). The ultimate strength decreased by 100 MPa and the elongation level was 22%. The sequence which comprised holding at 425°C for 600 s promoted formation of bainite. The resulting microstructure was a mixture of martensite, bainite and a small amount of free ferrite (Fig. 4). The isothermal hold contributed to stability of retained austenite (RA). The amount of RA was 11%. The reduced volume of ferrite and the increase in the amount of hardening microstructure were reflected in a notable increase in UTS to 1160 MPa and in reduced elongation: 10%. Neither an extended holding time of 900 seconds, nor a reduced cooling rate after isothermal holding have further stabilised retained austenite whose amount therefore decreased to 6% and 7%, respectively. In both these schedules, the resulting ultimate strength was in the 744-758 MPa interval. The elongation levels were about 20%. After the 42SiCr steel, which had a higher carbon level and contained chromium, was processed according to the first schedule which represented press-hardening in a tool at RT, its microstructure consisted of a majority of martensite and a small amount of bainite. The volume fraction of retained austenite was a mere 4%. Hardness was 653 HV10. The ultimate strength was 1906 MPa and elongation reached 1%. Elongation did not increase even after the rate of cooling below 200°C had been reduced in the 02 sequence. Improved elongation was obtained by incorporating the Q&P process into cooling. Elongation increased to 10% and high ultimate strength remained: 1850 MPa. 
As in previous cases, the microstructure was martensitic and contained a small amount of bainite. However, the fraction of retained austenite increased to 14% (Fig. 5). After the cooling above 200°C had been slowed down from 100°C to 50°C/s, a small amount of free ferrite formed (Fig. 6). As a consequence, hardness decreased slightly from 575 HV10 to 545 HV10, as did the ultimate strength whose value was 1802 MPa. Elongation was 9%. Yet another reduction in the cooling rate to 10°C/s led to an increased proportion of ferrite, to even lower mechanical properties, and to no increase in elongation (Fig. 7). THERMEC 2018 Extended holding time at the partitioning temperature, from 600 s to 800 s, resulted in reduced ultimate strength of 1798 MPa and elongation of 8%. Reducing the partitioning temperature from 250°C to 230°C caused the ultimate strength to increase from 1850 MPa to 1919 MPa, whereas elongation slightly decreased to 8%. By contrast, increasing the partitioning temperature to 270°C has led to a lower ultimate strength, 1722 MPa, and no change in elongation. In this case, the microstructure was very similar to the others. It consisted of martensite with a small amount of bainite, no free ferrite and no visible precipitates. Summary In this experiment, a press-hardening route was tested which had been designed on the basis of data from a real-world forming process. It was applied to two high-strength steels: CMnSi TRIP and 42SiCr. It was found that in order to obtain the desired mixed microstructure of martensite, bainite, ferrite and retained austenite in the CMnSi TRIP steel, press-hardening must be followed by isothermal annealing at 425°C. By this means, strengths of more than 1100 MPa combined with up to 10% elongation can be obtained. The incorporation of the Q&P process into the cooling process of the 42SiCr steel, which had a higher level of carbon and chromium, was tested successfully. As a result, it became possible to increase elongation to 10% while the ultimate strength was 1850 MPa.
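The schedules tested above are fully described by a few segment parameters (soak temperature and time, transfer step, cooling rates, optional isothermal hold or partitioning step). A small sketch of how such profiles might be encoded as input data for a thermomechanical simulation script is shown below; the data structure and the rates marked as placeholders are assumptions for illustration, not the simulator's actual input format.

```python
# Sketch: encoding press-hardening / Q&P thermal schedules as segment lists.
# Soak, transfer, quench-rate and hold values follow the text; rates marked
# "placeholder" are not specified there and are assumptions.
from dataclasses import dataclass

@dataclass
class Segment:
    target_c: float            # temperature at the end of the segment (deg C)
    rate_c_per_s: float = 0.0  # heating/cooling rate; 0 means isothermal hold
    hold_s: float = 0.0        # hold time used when rate is 0

def segment_time(start_c, seg):
    return seg.hold_s if seg.rate_c_per_s == 0 else abs(start_c - seg.target_c) / seg.rate_c_per_s

def cycle_time(profile, start_c=25.0):
    t = 0.0
    for seg in profile:
        t += segment_time(start_c, seg)
        start_c = seg.target_c
    return t

# Initial heating rate (30 deg C/s) is a placeholder; the simulator allows up to 200 deg C/s.
# CMnSi-05: soak 937 deg C / 100 s, 7 s transfer to 760 deg C, 51 deg C/s to 425 deg C,
# 600 s bainitic hold, then cooling to RT (final rate is a placeholder).
cmnsi_05 = [Segment(937, 30), Segment(937, hold_s=100), Segment(760, (937 - 760) / 7),
            Segment(425, 51), Segment(425, hold_s=600), Segment(25, 1.5)]

# 42SiCr-03: quench at 100 deg C/s to 200 deg C, reheat to 250 deg C (rate placeholder),
# partition 600 s, then cooling to RT (rate placeholder).
qp_42sicr_03 = [Segment(937, 30), Segment(937, hold_s=100), Segment(760, (937 - 760) / 7),
                Segment(200, 100), Segment(250, 10), Segment(250, hold_s=600), Segment(25, 20)]

print(f"CMnSi-05 cycle ~ {cycle_time(cmnsi_05):.0f} s, 42SiCr-03 cycle ~ {cycle_time(qp_42sicr_03):.0f} s")
```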
Shearer parameter optimization and low energy consumption mining based on 3D point cloud characterization of coal wall To achieve efficient and low energy consumption mining under different cutting depths of a shearer, a multiparameter coupling optimization method for the shearer based on three‐dimensional (3D) characterization of the coal wall was proposed. First, a seven‐axis absolute articulated arm measuring machine was used to obtain 3D point cloud data of the coal wall, and then the 3D of the coal wall surface was reconstructed by using segmentation, filtering, and stitching processing, thereby obtaining the average thickness of different coal wall areas. Second, through the quadratic rotation regression orthogonal combination experiment, the optimal combination of drum speed, traction speed, and cutting depth was obtained, further obtaining the order of primary and secondary influences, and the regression model. Moreover, a particle swarm optimization algorithm was used to obtain the optimal drum speed and finally, the laboratory and field test experiments were conducted to verify the effectiveness of the proposed optimization algorithm in reducing the cutting energy consumption of shearer. The experiment results show that the given optimization algorithm can adaptively optimize the traction speed and drum speed based on the corresponding cutting depth, which significantly reduces the cutting specific energy consumption of the shearer. Thus, it provided an important technical means for the shearer to achieve low energy consumption and efficient mining. | INTRODUCTION During the mining process of a shearer in a fully mechanized mining face, the traction speed and drum speed are generally controlled by the operators based on the observed and heard mining conditions.However, the illumination of a fully mechanized mining face is poor, the dust concentration is quite large when the shearer cutting coals, and the mining process is accompanied by obvious mechanical noise.Thus, it is difficult for a shearer driver to determine the actual working condition of mining accurately in real time.Especially, affected by mining and free caving, the surface of the coal wall to be mined is usually uneven, namely, the cutting depth of the shearer will constantly change with the different cutting positions.Thus, to ensure the high efficiency and low energy consumption of a shearer, it is necessary to adjust the traction speed and drum speed in real time according to the actual cutting depth. Among the existing multiparameter coupling optimization methods of a shearer, a variety of qualitative analysis methods were mainly used to study the coupling relationship between drum speed, traction speed, and cutting depth, and the simulation is carried out by means of force analysis, discrete element method and finite element modeling.Qin et al. analyzed the variation law of different coal mining performance indexes with drum motion parameters, established a multi-objective optimization model with different performance indexes as subobjectives, and then obtained the motion parameters with the best comprehensive performance under different coal seam hardness by using particle swarm optimization (PSO) algorithm. 
1Combined with a fuzzy control algorithm, radial basis function neural network, and proportional-integral-derivative control, Liang enables the parameters of the controller to be adjusted adaptively according to different environments, so as to provide a better speed regulation scheme for realizing intelligent, efficient, and high-quality mining. 2 Sun designed a new automatic speed control system of drum shearer based on strain sensor to realize the automatic speed control. 3Chen et al. proposed a speed collaborative optimization control method based on the dual machine energy consumption model, which can effectively reduce the energy consumption of the dual machine system of shearer and scraper conveyor. 4The EDEM, a discrete element simulation software, was used by Zhao et al. to establish the coupling model of spiral drum and coalcontaining gangue.By analyzing the influence of shearer traction speed, drum speed, spiral angle of blade, and cutting depth on the shearer's loading rate, lump coal rate, and specific energy of cutting, the parameters were optimized with genetic algorithm for back propagation neural network, which is capable to realize the efficient mining of a shearer. 5,6Gao et al. used the modeling test method to obtain the coupling relationship of the key factors affecting the loading ratio of drum. 7Ge et al. established a multi-objective optimization model of shearer cutting performance by studying the cutting performance indexes of the shearer when mining and used a genetic algorithm to optimize the optimal traction-cutting motion parameters with changes in cutting impedance under normal coal seam, gangue seam, and rock fault circumstances, realizing safe operation and efficient mining of coal mining equipment under diverse cut-off working situations. 8Zhang et al. used the iterative method to establish a multi-parameter coupled optimization matching model for the optimal control of dust reduction in shearers, combined with the experimentally set final index, to determine the iterative termination conditions and obtain the results of optimizing the multi-cut parameters of shearers based on the lowest coal dust production. 9Feng created a rolling prediction model based on an Elman neural network and used the shearer operating speed, the amount of fallen coal, the scraper conveyor's operating speed, and the maximum load in the experimental process as samples for continuous optimization training to achieve cooperative control of the shearer and scraper conveyor. 10Li established the relationship between cutting depth and cutting load by analyzing the speed regulation mode of shearer-cutting operations and formulating a corresponding speed adjustment control scheme to increase the speed of the mining operation of shearer-cutting operations. 11Liu et al. used wavelet packet decomposition theory and a back propagation neural network to identify cutting resistance in the cut-off process and a PSO algorithm to optimize motion parameters such as traction speed and drum speed to achieve adaptive control of a shearer by establishing the adaptive control system model. 12Hu et al. proposed a drum speed control strategy and a joint traction-drum speed control strategy to adapt to different abrupt load conditions, as well as established and analyzed a cutting drive system model that not only effectively reduces dynamic load but also achieves efficient coal mining under abrupt load conditions. 
13owever, in the above study, the simulation is only carried out under ideal conditions, due to the complex environment of a fully mechanized mining face, it is difficult to obtain an accurate three-dimensional (3D) characterization of the coal wall in time, which will lead to deviations between the simulation model and the actual cutting condition, and then the drum speed and traction speed of the shearer cannot be adjusted effectively and accurately.The research on adaptive speed regulation of shearers using experimental methods combined with actual cutting conditions and mining environments can realize efficient mining of shearers, but it has not yet taken into account the uncertain random coal wall thickness distribution, and it has not yet cut at the optimal traction speed and drum speed WANG ET AL. | 737 before cutting, which will then affect the stability and mining efficiency of the entire shearer.Therefore, it is necessary to study the influence of coal wall thickness characteristics on the shearer when cutting and adjust the drum speed and traction speed of the shearer in realtime according to the average thickness of the coal wall to be cut to achieve mining with low energy consumption, high efficiency, and high-quality shearers.In this paper, a new method was proposed to optimize the coupling of cutting parameters of the shearer based on the 3D reconstruction of the coal wall surface, which is capable to achieve the optimal joint speed regulation of drum speed and traction speed.First, a multiple point cloud data algorithm was used to process the point cloud data, then calculate the average thickness of the coal wall accurately to improve the preperception ability of the coal mining machine on the surface of the coal wall before mining.Next, a regression model was constructed for the relationship between drum speed, traction speed, average coal wall thickness, and cutting energy consumption.Finally, a PSO algorithm was used to obtain the optimal drum speed and traction speed under random average thickness, further improving the mining efficiency and enhancing the stability of the shearer. | Coal cutting test-bed The actual structure and mining principle of the shearer should be comprehensively considered while building the cutting test-bed.In this paper, we mainly study the coupling optimization method of multicutting parameters of a shearer based on coal walls with different thicknesses, thus, the walking and cutting functions of the shearer were mainly considered.The built coal cutting test-bed is composed of the mechanical system, the control system, data acquisition, and the analysis system, as shown in Figure 1.A detailed experimental procedure is shown in Figure 2. 
As seen in Figure 1, the selected cutting motor of the test-bed is the three-phase asynchronous motor, the rated power is 0.75 kW and the rated speed is 1400 r/min.The worm gear reducer was used to reduce the rotating speed of the drum and increase the load torque of the drum, and its reduction ratio is 38:1.The cutting diameter of the drum is 300 mm, and the minimum speed of the drum is 24 r/min.The type of the traction motor is 5IK120RGN- CF, the rated power is 120 W, the rated speed is 75 r/min, and the reduction ratio of the reducer is 20K:1.The length of the slide rail is 1000 mm, and the distance between two slide rails is 800 mm.The RACO-Elektrozylinders are used to provide pressure for fixing the coal wall.Moreover, the three-phase electricity parameter acquisition module and the present acquisition module of the traction motor are used to monitor the electrical signals of cutting motor and traction motor in real time. | Cutting performance of shearer The specific energy of cutting, namely, the value of the energy consumed by the shearer to obtain the unit volume of coal, 14 is the Key economic indexes of shearer in the process of cutting the coal wall, which is capable to characterize the utilization rate of energy during mining.The smaller the value of cutting specific energy consumption is, the smaller the energy loss and the higher the cutting efficiency of the shearer will be.To obtain the specific energy of cutting, the powers of cutting motor and traction motor were acquired to calculate the cutting energy consumption, and then calculate the specific energy of cutting according to Equation (1). where A denotes the cutting resistance, namely, the hardness characteristics of the coal wall, N/mm.K is the coefficient of correction, 2.78.b is the width of the pick's notched edge, mm.φ denotes the angle of break, rad.n is the speed of the drum, r/min, and v is the traction speed, m/min.m is the number of picks on each section of the drum.Obviously, from Equation ( 1), the shearer needs to carry out the simulated cutting experiment at high traction speed and low drum speed, which can obtain the minimum specific energy of cutting and improve the cutting efficiency of the shearer. In this paper, the specific energy of cutting is the total power of cutting motor and traction motor when cutting a unit volume of coal, thus, the mathematical relationship model of cutting energy consumption and specific energy of cutting is shown in Equation ( 2).The cutting energy consumption needs to sum the cutting power and feeding power within a certain period, and then divide it by the volume of the cut coal to finally obtain the value of the specific energy of cutting. where H w is the specific energy of cutting, kW h/m 3 .W H is the total cutting energy consumption of the shearer in a certain time, kWh.P t ( ) is the instantaneous power of cutting drum and traction motor, kW.K b is the loose coefficient of coal wall specimen during crushing, 1.2.B is the cutting depth, m.H is the average mining height of shearer, m. v ¯is the average traction speed of the test-bed, m/min.t is the cutting time, s. From Equation ( 2), the cutting energy consumption is positively correlated with the specific energy of cutting. When the traction speed and cutting depth are constant, the power decreases with the decrease of drum speed.Moreover, when the drum speed and cutting depth are constant, the higher the traction speed is, the smaller the power will be. 
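Equation (2) amounts to summing the cutting-motor and traction-motor powers over the cut and dividing by the volume of coal removed. A minimal sketch of that computation from sampled power signals is given below; the function, the synthetic samples and the exact placement of the loose coefficient K_b are assumptions for illustration rather than the authors' data-processing code.

```python
# Sketch: specific energy of cutting (kWh/m^3) from sampled motor powers,
# i.e. total cutting + traction energy divided by the volume of coal removed.
# Sample data and the placement of the loose coefficient K_b are assumptions.
import numpy as np

def specific_cutting_energy(p_cut_kw, p_trac_kw, dt_s, depth_m, height_m, v_avg_m_per_min, k_b=1.2):
    """p_cut_kw / p_trac_kw: instantaneous powers (kW) sampled every dt_s seconds."""
    p_total = np.asarray(p_cut_kw) + np.asarray(p_trac_kw)
    energy_kwh = p_total.sum() * dt_s / 3600.0                # kW*s -> kWh
    t_s = dt_s * p_total.size
    travel_m = v_avg_m_per_min * t_s / 60.0                   # length of the cut
    volume_m3 = k_b * depth_m * height_m * travel_m           # loose volume of cut coal
    return energy_kwh / volume_m3

# Hypothetical 60 s cut: 50 mm depth, 0.3 m drum (mining height), 3 m/min traction.
rng = np.random.default_rng(3)
p_cut = 0.60 + 0.05 * rng.standard_normal(600)   # ~0.6 kW cutting motor, 10 Hz sampling
p_trac = 0.08 + 0.01 * rng.standard_normal(600)  # ~0.08 kW traction motor
print(specific_cutting_energy(p_cut, p_trac, dt_s=0.1, depth_m=0.05, height_m=0.3, v_avg_m_per_min=3.0))
```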
| Boundary conditions of parameters In the mining process of a shearer, the main parameters affecting the specific energy of cutting are drum speed, traction speed and cutting depth.According to the size parameters of the cutting test-bed and the actual cutting conditions of the shearer, the boundary conditions of drum speed, traction speed, and cutting depth need to be preset. 15) Boundary condition of drum speed The higher the set value of drum speed is, the smaller the cutting amount of the shearer will be, which leads to an increase in the unit energy consumption of the shearer while cutting coal walls in a certain period. 16According to the adjustment range of drum speed in the actual mining process of the shearer, the boundary condition is set as follows: (2) Boundary condition of traction speed The traction speed directly affects the vibration amplitude of the ranging arm when the shearer cuts the coal wall.With the increase of the traction speed, the vibration displacement of each part of the shearer increases significantly.If the vibration is too large, it will affect the stability of the shearer. 17But the cutting efficiency will be affected if the traction speed is too small.Therefore, combined with the actual speed regulation range of the shearer, the boundary condition of traction speed is set as follows: (3) Boundary condition of cutting depth With the increase of cutting depth, the mean stress of the pick will increase continuously, accelerating the cutting energy consumption of the shearer. 18The maximum cutting depth of the test-bed's drum is 80 mm, and comprehensively considering the cutting amount of the drum, the boundary condition of cutting depth is set as follows: (5) 3 | THREE-DIMENSIONAL POINT CLOUD CHARACTERIZATION OF COAL WALL | Three-dimensional point cloud model of coal wall To obtain the average thickness of the coal wall, the 3D point cloud data of the coal wall's surface should be measured first.Thus, a seven-axis absolute articulated arm measuring machine was used to obtain the 3D point cloud data of the coal wall.The parameters of the seven-axis absolute articulated arm measuring machine are shown in Table 1. Two coal specimens with stepwise changes in the average thickness were poured for the verification experiment. 19By using the seven-axis absolute articulated arm measuring machine, the 3D point cloud data were measured as shown in Figure 3 and the final point cloud images of two coal specimens are shown in Figure 4. | Point cloud data processing To obtain the 3D point cloud data in areas with different thicknesses, the 3D point cloud segmentation algorithm based on region growth is used for point cloud segmentation.The segmentation algorithm gathers point clouds with similar properties in the same area, and then uses the growth of seeds to expand and screen the region.First, select the initial seed point of the area, and judge the properties of nearby seeds according to the normal vector or curvature during seed growth.Next, determine whether the seeds have similar properties, if they meet the characteristics of growth, they will be merged into the area where the initial seeds are located.At the same time, the new seeds will continue to grow around and determine the range of seeds that can be merged.While expanding the position that does not meet the set value, the present seed growth area is finally determined. 20The specific steps of the algorithm are as follows. 
Step 1: Select the initial seed point and seed area.average thicknesses as the initial seed point, and set the area where the seed point is located as the initial seed area. Step 2: Search for adjacent areas. Set the initial seed point cloud sequence of the coal wall specimen as an empty set, select the initial seed points from the known point cloud sequence, and add them to the set.Next, search the field points near the area to determine whether the field space meets the point cloud curvature of the coal wall specimen.If so, it can be used as the growth area of the seeds. Step 3: Judgment of similarity criterion. Compare the angle between the neighborhood point cloud data with different average thicknesses and the normal of the present seed point.If the angle is less than the calculated smoothing threshold of the point cloud of the coal specimen, the present area is added to the present seed area.If the curvature of the neighborhood point cloud data is less than the curvature threshold, add that to the seed set. Repeat steps 1-3 until the obtained seed sequence is empty. The segmented 3D point cloud images of the coal wall are shown in Figure 5. 21 | Average thickness calculation of coal specimens In a spatial coordinate system, to calculate the average value of different thicknesses of coal specimens, it is necessary to determine the present number of point clouds and the distance from each point cloud data to the standard plane, namely, the coordinate information in the Z-direction.Accumulate the z-axis coordinates of all point cloud data in each region, and then divide by the total number of point clouds.Finally, the actual average thickness of each coal specimen can be calculated, as shown in Table 2. The error comparison between the calculated coal sample thickness based on the 3D point cloud model and the actual coal sample thickness is shown in Figure 6.Obviously, the maximum error between the calculated thickness of the scanned average coal wall and the actual thickness is merely 0.06 mm, which proves the accuracy of the proposed calculation method in this paper. | Preparation of standard coal specimens To test the cutting energy consumption of the shearer under different cutting depths, drum speed, and traction speed, and combined with the boundary condition of cutting depth, five kinds of coal specimens suitable for different cutting depths were poured, the size of the specimens is 600 mm × 450 mm × 120 mm, as seen in Figure 7.The coal specimens were poured with coals, cement, and adhesive, 22 the proportion of each material is shown in Table 3, and the material properties of the poured coal specimens are shown in Table 4. | Quadratic rotation orthogonal combination experiment The drum speed, traction speed, and cutting depth significantly impact the cutting efficiency of the shearer, thus, the quadratic rotation orthogonal combination method was used to carry out the cutting experiment. Specimen-I (mm) Specimen-II (mm) According to the experimental results, the variance, significance, and main parameter effect were analyzed, meanwhile, the primary and secondary influence and interaction of each parameter were clarified.Finally, the optimal combination of parameters of each influencing parameter was obtained. 
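The quadratic rotation orthogonal combination experiment, detailed in the following paragraphs, ultimately yields a full second-order regression of the specific energy of cutting on drum speed, traction speed and cutting depth. A minimal sketch of fitting such a model by least squares is given below; the synthetic runs stand in for the 23 experimental runs of Table 6 and are assumptions for illustration only.

```python
# Sketch: least-squares fit of a full second-order model (linear, interaction and
# quadratic terms) of specific cutting energy on drum speed x1 (r/min),
# traction speed x2 (m/min) and cutting depth x3 (mm).
# The synthetic runs below are placeholders, not the 23 runs of Table 6.
import numpy as np

def quadratic_design_matrix(x):
    x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

rng = np.random.default_rng(4)
runs = np.column_stack([rng.uniform(24, 34, 23),     # drum speed
                        rng.uniform(1.7, 3.5, 23),   # traction speed
                        rng.uniform(40, 80, 23)])    # cutting depth
# Placeholder response: rises with drum speed, falls with traction speed and depth.
y = (0.3 + 0.012 * runs[:, 0] - 0.06 * runs[:, 1] - 0.002 * runs[:, 2]
     + 0.02 * rng.standard_normal(23))

coef, _, _, _ = np.linalg.lstsq(quadratic_design_matrix(runs), y, rcond=None)
print("fitted coefficients:", np.round(coef, 4))

# Predicted specific energy at a candidate operating point.
candidate = np.array([[28.0, 3.0, 50.0]])
print("predicted H_w:", quadratic_design_matrix(candidate) @ coef)
```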
The cutting experiment uses three factors and five levels.Considering the determined boundary conditions of drum speed, traction speed, and cutting depth, the horizontal factor coding table of quadratic rotation orthogonal combination experiment with three factors and five levels was defined, as seen in Table 5. According to the horizontal factor coding table, 23 combinations need to be tested in total, as seen in Table 6.Next, using Design-Expert mathematical statistical software, as well as taking the specific energy consumption of shearer cutting as the experimental evaluation index, the 23 groups of experiments were carried out.The experimental process of coal specimens with different cutting depths and the test results of the actual cutting depth are shown in Figure 8. Obviously, the actual cutting depth of each specimen is consistent with the set value. | Analysis of experimental results According to the test results in Table 6, by using the quadratic rotation regression analysis, the regression equations of drum speed, traction speed, and cutting depth on the specific energy of cutting are established as below. 23 The variance and significance analyses of the experimental results are shown in Table 7, the mismatch value p of the model is less than 0.0001, which indicates that the regression equation has high significance and a good fitting degree. Combined with the mismatch value p, the influence of drum speed, traction speed, and cutting depth on the specific energy of cutting can be judged.As seen in Table 6, the influence of the shearer's parameters on , x 2 x 3 , x 1 x 3, and x 1 x 2 .Among them, the mismatch values of x 1 x 3 and x 1 x 2 are greater than 0.1, which indicates not significant.Thus, x 1 x 3 and x 1 x 2 , namely, the interaction terms of drum speed and traction speed, as well as drum speed and cutting depth, were merged into residual terms for further analysis of The influence laws of a single parameter on the specific energy of cutting are shown in Figure 7, from Figure 9A, when the traction speed and cutting depth are constant, the specific energy of cutting is positively correlated with the drum speed, namely, the specific energy of cutting increases with the increase of drum speed.From Figure 9B, when the drum speed and cutting depth are constant, the specific energy of cutting is negatively correlated with the traction speed.Similar to Figure 9B, when the drum speed and traction speed are constant, the specific energy of cutting is also negatively correlated with the cutting depth, as shown in Figure 9C.Moreover, processing the data to obtain the response surface of significant interaction among drum speed, traction speed, and cutting depth to the specific energy of cutting, as shown in Figure 10. From Figure 10A, when the cutting depth is constant and the drum speed is in the range of 24-29 r/min, the specific energy of cutting is negatively correlated with the traction speed.When the cutting depth is constant and the traction speed is in the range of 2.6-3.1 m/min, the specific energy of cutting is positively correlated with the drum speed, as shown in Figure 10B.When the traction speed is constant and the cutting depth is in the range of 40-60 mm, the specific energy of cutting decreases with the decrease of drum speed. 
Combined with the influence trend of drum speed, traction speed, and cutting depth on specific energy of cutting in Figure 9 and the interaction between any two parameters in Figure 10, the optimal drum speed range is 24-29 r/min, the optimal traction speed range is 2.6-3.1 m/min, and the optimal cutting depth range is 40-60 mm. According to the analysis results, the constraint conditions of each parameter are determined as 34, 1.7 3.5, 40 60. According to the constraint conditions in Equation ( 8), the actual effects of drum speed, traction speed, and cutting depth on the specific energy of cutting can be obtained, as shown in Figure 11.Obviously, the specific energy of cutting decreased significantly with greater traction speed, smaller cutting depth, and cutting speed. According to the value range of each parameter in Equation ( 8), limit the drum speed, traction speed, and cutting depth, and then obtain the optimal combination of cutting speed, cutting depth, and traction speed with the target of minimum specific energy of cutting, as shown in Table 8. of particle location information is d, the characteristics of each particle are as follows 24 : (1) Present location of particles: (2) Historical optimal location of particles: (3) Velocity of particles: In Equations ( 9)-( 11), i = 1, 2, …, N, compare the present location of particle i with the historical optimal location, if the present location is better than the historical optimal location of particle, a new iteration is realized to complete a self-update. 25Next, combined with the update formulas of velocity and position, calculate the present location of particle i + 1.Moreover, compare and iterate with the historical optimal location, determine each optimal location repeatedly, and record them in the set.The update formulas of velocity and location are as below. The velocity update is given by and the location update is given by where v id k is the present velocity of the ith particle in ddimensional location space, v id k+1 is the update velocity of the ith particle in d-dimensional location space, pbest id k is an individual historical optimal location of the ith particle in d-dimensional location space, gbest d k is the historical optimal location of the entire population in ddimensional location space, x id k is the present position of a particle, c 1 and c 2 are the acceleration factors and a nonnegative constant, the value of them is 1. where w is the weight coefficient.The parameter optimization process of PSO is shown in Figure 12. According to the established regression model and experimental results, we found that the values of drum speed, traction speed, and cutting depth can directly affect the cutting energy consumption of the shearer.However, the change of cutting depth is limited by the actual distribution of the coal wall surface, thus, it is necessary to adjust the drum speed and traction speed according to the change of the thickness of the coal wall.Considering the complexity of the coal wall surface distribution, the average thickness of the coal wall was defined as the independent variable, and the drum speed and traction speed were defined as the dependent variable. 27ouring two specimens with thickness changes as shown in Figure 13, according to the installation method of specimens on the test-bed and the installation position of the drum, the cutting depths of different thickness areas of the two specimens are obtained by using a sevenaxis absolute articulated arm measuring machine, as shown in Table 9. 
According to the cutting depth of two specimens in Table 9, a PSO algorithm based on dynamic weights was used to optimize the drum speed and traction speed corresponding to the minimum cutting specific energy consumption of the shearer, which the constructed mathematical regression model as a fitness function.The optimization results are shown in Table 10, and the corresponding optimization iteration curve is shown in Figure 14. | Experimental analysis of optimal parameters Table 10 shows the theoretical optimal drum speed and traction speed for different cutting depths.However, in actual cutting experiments, the optimal drum speed, traction speed, and cutting depth obtained from the quadratic rotation regression orthogonal combination model need to be used for cutting.Thus, the initial drum speed, traction speed, and cutting depth are determined to be 24.71r/min, 3.07 m/min, and 46 mm.When the cutting depth changes, Compare the optimal cutting depth to calculate the optimal drum speed and traction speed corresponding to the current cutting depth.The cutting experiments of two specimens with different cutting depths are shown in Figures 15 and 16, respectively. To verify the effectiveness of the proposed optimization method, we obtained the cutting specific energy consumption for the following three situations separately. (1) the theoretical optimal cutting specific energy consumption; (2) the cutting specific energy consumption based on optimal parameters; (3) the cutting specific energy consumption using constant speeds, where the drum speed is 24.71 r/ min and traction speed is 3.07 m/min. For the above three situations, the cutting specific energy consumption under different cutting depths is shown in Figure 17. As shown in Figure 17, obviously, regardless of the cutting depth, when the feed speed and drum speed remain constant, the cutting specific energy consumption is significantly higher than that of two parameters optimized F I G U R E 13 Specimens with random thickness. T A B L E 9 Cutting depth of two specimens. Specimen-I (mm) Specimen-II (mm) with changes in cutting depth.Especially, when the cutting depth is 71.986 mm, the maximum deviation in cutting specific energy consumption reached 0.48 kW h/m 3 , with an increase of 85.87%.Although the cutting specific energy consumption using optimized parameters is still higher than that of the theoretical optimal, it is mainly due to losses during the mechanical assembly and transmission process of the test-bed.cutting specific energy consumption under different hardness conditions of coal, and analyzing the impact law of the coal's hardness on the cutting specific energy consumption.In addition, the given method optimized the parameters of drum speed and traction speed based on cutting depth, so further consideration can be given to collaborative optimization of cutting depth, traction speed, and drum speed based on the surface characteristics of the coal wall, ensuring the lowest cutting specific energy consumption during the mining process. T A B L E 1 Parameters of seven-axis absolute articulated arm measuring machine. Calculate the curvature value of point cloud data with different average thicknesses on the same coal wall specimen to determine the point cloud sequence, then set the minimum curvature value of point cloud data with different F I G U R E 3 Point cloud data acquisition process.F I G U R E 4 Point cloud images of specimens.(A) Specimen-I and (B) Specimen-II. 
(Figure and table captions from this portion of the article: Figure 5, segmented 3D point cloud images of Specimens I and II; Table 2, average thickness of coal specimens; Figure 6, error comparison for Specimens I and II; Figure 7, five kinds of coal specimens; Tables 3 and 4, proportion of materials and material properties of the coal specimens; Figure 8, cutting experiments at depths of 43, 50, 60, 70 and 77 mm; Figures 9 and 10, influence of single parameters and of parameter interactions on the specific energy of cutting; Figure 11, influence trend of parameters on specific energy of cutting; Table 8, optimal combination of drum speed, traction speed and cutting depth; Figure 12, parameter optimization process of the PSO; Figures 14 and 15, optimization iteration curves and cutting experiments on Specimen-I at cutting depths from 46 to 71.986 mm.)
The non-significant interaction terms were merged into the residual for a further analysis of variance, which was used to optimize the regression equation; in the final optimized regression equation, x1 denotes the drum speed, x2 the traction speed, and x3 the cutting depth. For the PSO-based analysis of cutting parameters, n particles are generated in a d-dimensional location space, and rand1k and rand2k produce pseudo-random values between [0, 1]. Because a basic PSO can easily become trapped at a locally optimal value, the weight coefficient w is used to balance the global and local search capabilities of the PSO: the global search capability is stronger with a larger w and, conversely, the local search capability is stronger with a smaller w.
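A minimal sketch of the kind of inertia-weighted PSO described above, minimizing a specific-energy model over drum speed and traction speed at a fixed cutting depth within the boundary conditions of Equation (8), is given below. The surrogate objective, the linearly decreasing weight schedule and the acceleration-factor values are illustrative assumptions; in the study, the fitness function is the regression model fitted from the orthogonal experiment.

```python
# Sketch: inertia-weighted PSO over drum speed (r/min) and traction speed (m/min)
# at a fixed cutting depth, within the boundary conditions of Equation (8).
# The surrogate objective stands in for the fitted regression model.
import numpy as np

def surrogate_specific_energy(x, depth_mm):
    n, v = x[..., 0], x[..., 1]
    return 0.3 + 0.012 * n - 0.08 * v - 0.003 * depth_mm + 0.001 * (n - 29.0) ** 2

def pso(objective, bounds, n_particles=30, iters=100, c1=1.5, c2=1.5, w_max=0.9, w_min=0.4):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    rng = np.random.default_rng(5)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(x)
    pbest, pbest_f = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_f)]
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters            # linearly decreasing inertia weight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + vel, lo, hi)                        # keep particles inside the bounds
        f = objective(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

depth = 58.0  # mm; e.g. one of the measured average cutting depths
best, best_f = pso(lambda x: surrogate_specific_energy(x, depth), bounds=[(24, 34), (1.7, 3.5)])
print(f"depth {depth} mm -> drum {best[0]:.2f} r/min, traction {best[1]:.2f} m/min, H_w ~ {best_f:.3f}")
```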
‘Safer opioid distribution’ as an essential public health intervention for the opioid mortality crisis – Considerations, options and examples towards broad-based implementation Highlights • Canada experiences excessive opioid mortality, mainly from toxic opioid exposure. • Many interventions have been implemented, but are limited in reach and impact. • 'Safer opioid distribution' (SOD) is a crucial preventive measure for overdose. • SOD needs to be implemented for a large, 'at-risk' opioid user population. • Other community-based public health interventions may guide SOD organization. Context Canada has been experiencing a long-standing public health crisis from excessive opioid-related mortality (i.e., acute overdose poisonings) [1]. In 2018, the most recent 'peak year', there were 4623 opioid-related deaths in Canada, with mortality rates similar to those in the United States. Opioid deaths, a large proportion of which occur among young adults, have negatively impacted life expectancy in the Canadian population. While earlier in this crisis a large proportion of overdose fatalities were caused by pharmaceutical opioid products, these patterns have recently shifted, also in conjunction with decreasing opioid prescribing, to increasingly involve illicit/synthetic opioid products [2][3][4]. These illicit/synthetic opioid products (e.g., fentanyls and analogues) are highly potent and toxic, and have vastly increased the incidence and fatal outcomes of opioid-overdose incidents [3,5]. In 2018/2019, three-quarters of opioid-related fatalities in Canada involved some element of fentanyl. Crucially, many of the illicit/synthetic opioid products are not recognizable, and often either mimic other (e.g., prescription) drugs in appearance or are mixed in with other psychoactive substances (e.g., cocaine, heroin) [6]. Current interventions In response to the excessive opioid mortality toll, a large menu of both prevention and treatment interventions has been implemented or expanded. These interventions have mainly included supervised consumption services, naloxone distribution (for overdose reversal) and expanded (e.g., oral, injectable) opioid pharmacotherapy options [7,8]. Unquestionably, these measures have prevented a substantial extent of additional opioid-related harm; however, the above, mostly behavioral or environmental, measures have been naturally limited in their reach and mortality-preventive impact among at-risk opioid users [9,10] for multiple reasons, including resource constraints and inherent or practical limitations for scale-up. For example, many opioid users at overdose risk use their drugs alone or in private settings, and so cannot be reached for timely assistance in case of overdose [11]. A fundamental limitation of the above measures is that most are not designed to eliminate users' exposure to the toxic opioid products driving the recent overdose mortality crisis; in fact, many measures (e.g., SCS, naloxone) are 'reactive' and chiefly aim to reduce or revert the adverse consequences of toxic drug exposure [12]. Also important, the 'at-risk' non-medical opioid user population is uncertain in size yet estimated to be large, likely comprising 500,000 or more individuals across Canada [13].
Rationale and community-based models for SOD On this basis, persistent calls have been voiced for 'safer opioid distribution' (SOD) programming as an essential, while currently lacking strategic response to the opioid mortality crisis, specifically to directly and better protect at-risk opioid users from risk for overdose death [14]. From a public health perspective, illicit/synthetic opioids constitute the primary risk vector or pathway for fatal overdose in current circumstances; thus, there is primary reason to aim for replacement of these high-risk productsespecially in contexts of addictive usewith 'safer' (i.e., less toxic, more predictable in quality) opioid products for at-risk users towards reducing overdose and death risks [12,15]. While the SOD concept had long been neglected by key decision-makers, it is increasingly being embraced by user advocates, scientists and service-providers alike. Recognizing the need for SOD as an essential public health intervention to reduce opioid fatalities, arising questions include how to feasibly organize, deliver and scale up SOD especially for large at-risk populations. For this, useful practical experiences and models from both addiction and other public health arenas currently exist, for example, including: Injection opioid agonist treatment (IOAT): Following multiple, international clinical trials demonstrating the effectiveness of injectable diacetylmorphine as a 'last-resort' treatment for severe opioid dependence, a small number of IOAT programs have been implemented in Canada [16]. However, these programs typically are highly-resource-intensive (e.g., specialist clinic-based) and expensive, operate mainly within a 'treatment' (e.g., rehabilitation) paradigm, including (e.g., psycho-social) ancillary measures, and involve small (e.g., 50-60) patient numbers [17]. Present IOAT programs are not realistically scalable towards population-wide SOD provision. For illustration, only about 0.5% of opioid agonist therapy (OAT)-patients in British Columbia received IOAT in 2020 [18]. Local 'safer opioid prescribing': A small number of local, adhoc 'safer opioid prescription' programs providing 'strong opioid' medication to high-risk users operate in Canada. A first one was initiated in Ottawa, providing pharmaceutical-grade hydromorphone ('Dilaudid') to a small cohort of high-risk users with regular toxic opioid use [19]. A handful of similar, locally limited programs were or will be launched in Vancouver, and other Canadian sites [20]. A variation on the concept has been trialled in downtown Vancouver, where hydromorphone pills are distributed to a registered pool of high-risk users through an externally-mounted, biometrically controlled dispensing machine [21]. Community-based OAT provision: Oral (e.g., methadone-or buprenorphine-based) opioid-agonist therapy (OAT) is the 'gold standard' treatment for opioid dependence [22]. While OAT availability was highly restricted in Canada until pre-2000, systematic de-regulation and community-based programming led to major de-thresholding and increased utilization [23]. Concretely, this allowed OAT-prescribing by general practitioners (rather than mainly addiction specialists) and other health (e.g., nurse) professionals, together with community health centres and pharmacies for medication delivery [22,24,25]. Nowadays, an estimated >120,000 patients receive OAT through these structures in Canada. 
Naloxone distribution: Given rising opioid-related overdose deaths, widespread availability of naloxone, the opioid overdose 'antidote' agent, has become increasingly important [8,26]. Facilitated by respective regulatory revisions, naloxone distribution has been substantially broadened in recent years, including provision through community-based health service entities, pharmacies, as well as essential 'first responders' (e.g., ambulance, police, firefighters) [27]. Some 590,000 naloxone kits were distributed through some 8700 sites in Canada by 2019, indicating effective community-based mobilization and distribution.

Influenza vaccination and nicotine replacement therapy: Other exemplary, public health-focused interventions exist that have been implemented through community-based structures. For example, 'nicotine replacement therapy' (NRT) programs to assist tobacco smokers in quitting are available across Canada, mostly through family practices, community health and pharmacy-based (or other remotely organized, e.g., via telephone helplines) distribution structures for eligible individuals [28,29]. Similarly, seasonal 'influenza vaccinations' are regularly delivered to about one-third of general adults, and two-thirds of seniors, in Canada through family practices, community health centres and pharmacies, as well as workplace and other institution-based clinics [30,31]. The above programs are mostly (provincial) government-organized, facilitating access for large target and risk populations [32].

Other organizational considerations
To more effectively reduce the excessive opioid mortality toll in Canada, broad-based SOD programming for high-risk opioid users constitutes an urgent intervention need complementing other measures already implemented [7,8,12,14]. Beyond conceptual acceptance, key issues of practical feasibility and organization warrant consideration. For example, candidate 'drugs' for SOD readily exist in Canada and do not need to be developed or searched for: hydromorphone or slow-release morphine are orally administered, pharmaceutical strong opioids widely used for pharmaco-therapeutic purposes among various opioid-using populations [33][34][35]. Their advantages include that, other than diacetylmorphine, they can be used by different administration routes depending on preference [36]. Key open questions include: 1) who would receive access to SOD, and 2) how would broad-based distribution occur? Given the pharmacological characteristics of strong opioids, including the risk of possible severe adverse outcomes (e.g., overdose, diversion), and despite the public health objectives described, access should probably not be universal or purely 'on demand' but rather involve reasonable, while minimal, 'needs'-based criteria [37]. These, naturally, cannot be overly 'high-threshold', to ensure access by as many at-risk opioid users with risk of hazardous product exposure as possible. Basic 'eligibility' testing, for example, could involve a saliva drug screen for opioids, combined with a brief questionnaire on opioid-related risks (e.g., similar to what is implemented for access to public health interventions like NRT) and registration (e.g., per personal health number) at community-based points of care. This process can be repeated at reasonably regular (e.g., monthly) intervals.
While 'needs-testing' cannot perfectly safeguard against possible risks or misuse of such a public health-oriented intervention, it should assist in gearing SOD delivery mostly towards 'at-risk' users, while screening out those who opportunistically seek access to strong opioid drugs. A second issue concerns the infrastructural organization for comprehensive SOD delivery. Current OAT programs or local 'safer opioid prescription' initiatives are not nearly sufficient nor scalable to serve the estimated 'at-risk' opioid user population [13,14,33]. A much more broad-based, efficient infrastructure for delivery is required for implementation. Building on other public health intervention experiences, a combined system of community-based health care, public health clinics, and pharmacy distribution, combined with shelters and drop-in facilities typically serving marginalized individuals appears to be most feasible and scalable [38]. Individuals eligible for SOD could select a principal SOD care access-point for central registration, with their individual file linked to either an 'open prescription' or other required endorsement to receive their SOD medication. Pharmacists or select other health care providers could be authorized for SOD endorsement. Distribution could be based on regular/daily dose distribution by on-site/over-the-counter provision at designated point-of-care, complemented by 'smart' infrastructure or hardware (e.g., biometrically-controlled distribution machines) already experimentally in use, offering easy control of drug access, frequency, dosing, etc. Conclusions Given the persistent opioid mortality crisis especially in North America, the time has come to move towards providing risk populationwide SOD as an essential public health intervention. While originally a daunting idea to some, partly due to 'addiction'-related fears [12,22], similarly conceived and conceptualized interventions are standard and well-working practice in other areas of public health. These can serve as examples and blueprints for sensible, while comprehensive and effective design and implementation of broad-based SOD programming across Canada towards reducing the massive but certainly unnecessary opioid-death toll. Funding statement Dr. Fischer acknowledges support from the endowed Hugh Green Foundation Chair in Addiction Research, Faculty of Medical and Health Sciences, University of Auckland; the present study was, in part, supported by the Canadian Institutes of Health Research (CIHR), grant #SAF94814. Author contributions BF prepared the original outline and draft, and led the overall manuscript writing. AL and LV provided significant intellectual content, and reviewed and edited several iterative drafts of the paper. All authors read and approved the final version of the manuscript submitted. Declaration of competing interest Other than the funding support stated, the authors have no interests to declare.
ORSA: Outlier Robust Stacked Aggregation for Best- and Worst-Case Approximations of Ensemble Systems\ In recent years, the usage of ensemble learning in applications has grown significantly due to increasing computational power allowing the training of large ensembles in reasonable time frames. Many applications, e.g., malware detection, face recognition, or financial decision-making, use a finite set of learning algorithms and do aggregate them in a way that a better predictive performance is obtained than any other of the individual learning algorithms. In the field of Post-Silicon Validation for semiconductor devices (PSV), data sets are typically provided that consist of various devices like, e.g., chips of different manufacturing lines. In PSV, the task is to approximate the underlying function of the data with multiple learning algorithms, each trained on a device-specific subset, instead of improving the performance of arbitrary classifiers on the entire data set. Furthermore, the expectation is that an unknown number of subsets describe functions showing very different characteristics. Corresponding ensemble members, which are called outliers, can heavily influence the approximation. Our method aims to find a suitable approximation that is robust to outliers and represents the best or worst case in a way that will apply to as many types as possible. A 'soft-max' or 'soft-min' function is used in place of a maximum or minimum operator. A Neural Network (NN) is trained to learn this 'soft-function' in a two-stage process. First, we select a subset of ensemble members that is representative of the best or worst case. Second, we combine these members and define a weighting that uses the properties of the Local Outlier Factor (LOF) to increase the influence of non-outliers and to decrease outliers. The weighting ensures robustness to outliers and makes sure that approximations are suitable for most types. I. INTRODUCTION Ensemble learning is the process of generating and combining multiple, diverse models in order to solve particular Machine Learning tasks. Many applications of ensemble learning focus on classification problems. In this context, ensemble learning is also known as multiple classifier systems or ensemble systems. In addition to ensemble learning, ensemble systems cover the combination of different types of base models [1]. In the following, the term ensemble system is used to define the present task and to highlight the system of estimators, each trained on a specific subset of the data. In this application, the system is limited by the number of subsets. Additional subsets cannot be generated with subsampling strategies because each subset corresponds to a different entity. The primary objective of classification applications aims to improve the classification performance of the overall model. Note that regardless of the application, there is no guarantee that the combination of multiple models will always result in a better performance than the best individual model performs in the ensemble [2]. It is obvious that combining multiple models reduces the bias and variance of a learning algorithm [3], [4]. This reduction effect is a primary reason to use ensemble learning. In [4], Dietterich lists three reasons in favour of using ensembles: (i) a statistical reason: the lack of adequate data results in an improper representation of the data distribution. 
Individual models suffer from high variance, but not their combination; (ii) a computational reason: different models may solve a given problem, but it is unclear which one to select. Combinations can exploit the strengths of multiple models; (iii) a representational reason: the model is not able to represent the data distribution. Individual models suffer from high bias, but not combinations. In general, ensemble learning methods differ in two ways: (i) in the applied procedure to create individual models; (ii) in the strategy combining individual models. In the present task, the generation in (i) is limited, not in terms of different learning algorithms but by the number of subsets. In (ii), a strategy is developed with the objective of approximating underlying functions, e.g., best- or worst-case approximations, instead of improving the accuracy of arbitrary ensembles. Many frequently used ensemble learning methods, e.g., Boosting [5], Bagging [6], or AdaBoost [7], fuse individual models in the combination process and use strategies that rely on (static) algebraic combination rules, e.g., majority voting, maximum, minimum, sum, etc., that are non-trainable [8], [9]. Few methods, e.g., Stacked Generalization [10], learn an additional model on top of the ensemble system and thus use trainable instead of non-trainable combiners. Which combination strategy to use cannot be answered in general, as it strongly depends on the specific problem. Thus, no unique best combiner exists that works well for all problems [8], [11].

Circuits and systems often show large process variations in the context of semiconductor test and manufacturing. A suitable configuration of specific variables and registers, so-called tuning knobs, needs to be computed to guarantee that the circuits stay within their specified limits and meet performance goals like robustness or power consumption. To optimize a configuration, the behavior of devices must be modeled. A possible approach in PSV is to learn the behavior of each device in a supervised regression task and aggregate device-specific models to describe the general device behavior. The aggregation process in PSV often does not have the objective to improve performance. Instead, the interest lies in a robust worst-case approximation that applies to as many devices as possible. Due to additional time constraints, an aggregation method is required that is efficient and flexible at the same time. In this application, a robust approximation of the worst case is particularly challenging because outliers in the given set of devices are expected, e.g., defective devices. Thus, algebraic combination rules such as the maximum or minimum are not suitable due to their sensitivity to outliers, and other algebraic combination rules, such as averaging, are not suitable for worst-case approximations either. To tackle this challenge, we propose a method that uses a trainable combiner, e.g., a NN, to learn a robust combination rule that approximates the worst case over all devices in the presence of outliers. The combination rule, which we call 'soft-max' or 'soft-min', is learned in a two-stage process: (i) the selection of k devices that are representative of the worst case; (ii) the robust combination of selected devices. The weighting in (ii) uses the properties of the LOF [12] to ensure robustness to outliers and an approximation that represents the worst case across devices. Consequently, the subsequent optimization process has a better chance to find a universal tuning law.
II. RELATED WORK To the best of our knowledge, there are only a few approaches that use trainable combiners [10], [13], [14], [15], [16]. The works [13], [14], [15] are variants of the Stacked Generalization method in [10], which was invented for classification tasks. In [13], Breiman used Stacked Generalization in regression tasks. In [14], [15], Smyth and Wolpert deployed Stacked Generalization in unsupervised learning tasks, e.g., non-parametric multivariate density estimation. Stacked Generalization or "stacking" is a technique in which the outputs of an ensemble of models are given as inputs to a second-level learning algorithm. This learning algorithm is trained to optimally combine the model outputs to obtain better final performance. Stacking has been applied successfully on a wide variety of problems, including spam filtering [17], sensor design [18], chemometrics [19], and tasks on other, large collection of data sets, e.g., the Netflix Prize data set of the UCI Machine Learning repository [20]. Recommendation systems, e.g. [20], [21], often use additional meta-features in the stacking process. Moreover, it is common to use multiple levels of stacking. The approach that is most related to this work is [16]. Instead of using meta-features, the stacking algorithm in [16] is based on weighted nearest neighbors, which change the weightings assigned to the individual models depending on the distance between a particular instance and its neighbors. None of the shown approaches has task-specific limitations, e.g., in the size of the ensemble system. Nonetheless, there are still open issues of stacking approaches [22]: (i) which type of model to choose as a high-level combiner?; (ii) which features to use as inputs to the high-level combiner?. To address these issues, we use a NN as a high-level combiner because they are powerful and act as general function approximators. Furthermore, the aggregation NN learns which features to use, and thus there is no need to define an explicit set of features. A. Setup In PSV, we aim for robust performance tuning to compensate impacts of process variations. These effects show as offsets or localized deformations of the output that appear at random and lead to abrupt changes of normal behavior. Due to millions of possible error locations, effects vary a lot, and there is no way to predict how outliers will behave. To determine a robust tuning, a set of N devices is typically given. Thus, the ensemble system is limited to a total size of N members. Each device has to be considered a black-box because the underlying functions of the devices are unknown. To analyze and model the device-specific behavior, devices are exposed to different environmental conditions c, e.g., temperatures or voltages, having various tuning configurations t. Additional metadata x, e.g., information about the manufacturing process, is given for each device, see Fig. 1. B. Data sets In the tuning stage of PSV, typically, 10-100k samples are generated per device. Thereby, input values should cover as much space as possible of the given parameter range. The total size of the data set is N * D, D is the number of samples per device. In this research, two tuning data sets are used: (i) a realworld data set provided by Advantest. It consists of 9 devices with 100k samples per device. In general, inputs are of mixed data types (real number, categorical values, . . . ), and the regression target is a real-valued scalar. 
Moreover, no prior knowledge of outliers is given, e.g., no information which devices are outliers; (ii) an artificial data set which we generate for a given number of devices and outliers, e.g., 30 devices with 4 outliers. To approximate the unknown mapping function of each device, we take the average output values that the devices in (i) produce. For non-outliers, we add a small noise value which we obtain by randomly sampling a normal distribution (mean µ = 0.0, standard deviation σ = 0.1). We either add a constant offset or distort the output values of outliers in random areas of the input parameter space. In the latter case, we define the offset by a smooth offset function that is zero at the boundaries and maximum at the center of the randomly sampled area. To further add offsets with varying sign and amplitude, we include a probability p 1 and a random scaling factor a, e.g., With probability p 1 , we choose the value of the smooth offset function, and with 1 − p 1 , we choose zero. To scale the amplitude, we multiply the offset with a. Thus, the resulting artificial data set contains four different types of outliers with increasing difficulties to detect, see Fig. 2: • Type 1: Outliers with constant offsets • Type 2: Outliers with an offset of smooth function • Type 3: Outliers with an offset of smooth function and probability p 1 • Type 4: Outliers with an offset of smooth function, probability p 1 , and scaling factor a In the case of an artificial data set a prior knowledge of which devices are outliers is given. The artificial data set and the characteristics of different types of outliers are aligned with the expertise of Advantest to be as realistic as possible. C. Preprocessing We convert the different input data types to real values. Due to different ranges of the input parameters, we normalize all input variables in the range -1 to 1 by applying a min-max normalization on the (real-valued) inputs, see (1). In case of device-specific modeling, we ignore metadata x. Therefore, device-specific models learn the mapping ( t, c) → y out . IV. ORSA Similar to most common ensemble system methods, our approach involves two key stages [23]: (i) a generation stage that generates individual ensemble models; (ii) a combination stage that aggregates the models to improve both final output and performance. This stage can involve a selection of individual ensemble members. In our case, it is impossible to generate an arbitrary amount of estimators because the number of devices limits the size of the overall ensemble system. Therefore, we train a single model for each devicespecific subset of the data. The goal is to find a suitable combination strategy for the limited amount of models. Thereby, the models show differences in performance or accuracy due to different behaviors of the devices. Moreover, the models can not simply be made more accurate by an additional amount of effort in training. Altogether, we face a new task with different characteristics compared to ensemble learning. We focus on the model combination stage as there is still room for improvement and challenges [11]: (i) normalization issues that arise due to incomparable output scales. This may cause problems because individual members might be inadvertently favored; (ii) issues finding a suitable function to combine output values. Common functions are maximum or (damped/pruned) averaging. In the following, it is assumed that the generation stage is finished and the individual, trained ensemble members are available. 
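A minimal sketch of how such device-specific ensemble members might be produced is given below. The data layout, the joint min-max scaling to [-1, 1] (cf. Sec. III-C) and the choice of scikit-learn's MLPRegressor are illustrative assumptions, not the authors' implementation; any regressor trained per device would serve the same purpose.

```python
# Minimal sketch of the generation stage: one model per device-specific subset.
# `device_data` is assumed to map a device id to (X, y) arrays of tuning inputs and outputs;
# the MLPRegressor choice and its hyperparameters are illustrative, not from the paper.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor

def train_device_models(device_data):
    """Return per-device regressors f_i and the shared input scaler."""
    # Fit one scaler on all inputs so every model sees the same [-1, 1] range.
    X_all = np.vstack([X for X, _ in device_data.values()])
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_all)

    models = {}
    for device_id, (X, y) in device_data.items():
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
        model.fit(scaler.transform(X), y)   # learn (t, c) -> y_out for this device
        models[device_id] = model
    return models, scaler

def ensemble_outputs(models, scaler, s):
    """Evaluate all device models on a single sample s, giving y_out in R^N."""
    s_scaled = scaler.transform(np.asarray(s, dtype=float).reshape(1, -1))
    return np.array([m.predict(s_scaled)[0] for m in models.values()])
```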
The members can be any model of the underlying function, e.g., a NN or SVM. We consider a single sample s in the device-specific setup, which means that s = (t, c). Each ensemble model i has learned a specific mapping f_i: s → y_out,i. The overall ensemble output is y_out = [y_out,1, ..., y_out,N] = [f_1(s), ..., f_N(s)] ∈ R^N. Depending on whether we aim for a robust approximation of the best or worst output value, we select the k largest or smallest values of y_out, respectively. Thus, we get y_out^k = [y_out^1, ..., y_out^k] ∈ R^k.

The loss function of ORSA, which is used to train the model stacked on top of the ensemble, is the weighted least-squares formulation

L_ORSA = Σ_{i=1..k} w_i (ŷ_out − y_out^i)^2,   (2)

which approximates a robust solution by minimizing the sum of squared errors made between the prediction ŷ_out and the true output values of the selected devices in y_out^k. Note that there is no ground truth for the approximation task; thus, ORSA is an unsupervised method. Furthermore, the weights w_i are calculated as the reciprocal of the LOF and are normalized such that Σ_i w_i = 1.

The definition of the LOF involves the calculation of the k-distance d_k(A) of a point A, the reachability distance rd_k(A), and the local reachability density lrd_k(A). Given the value of the parameter k, which defines the number of neighbors the LOF is considering, the k-distance of any point is its distance to the k-th nearest neighbor. With the k-distance, we can calculate the reachability distance of two points, e.g., A and B, as the maximum of their distance and the k-distance of the second point, see (3). The reachability distance rd_k is used to calculate the local reachability density lrd_k of point A. To get lrd_k(A), we first calculate the reachability distance rd_k to all k nearest neighbors and take their average; to get a density, we finally take the inverse, see (4). The points that lie in or on the circle with radius k-distance and with center at point A are called k-neighbors and are denoted by N_k(A). The LOF calculates the k ratios of lrd_k of each point to its neighbors and takes the average of these ratios, see (5). In general, if a point is an outlier, its density should be smaller than the average density of its neighbors. Thus, the resulting LOF of an outlier is larger than 1. Non-outliers have densities comparable to their neighbors, and therefore their LOF is approximately 1. By using the (scaled) reciprocal of the LOF, our weighting ensures that points in low-density areas contribute less to the total loss in (2). In other words, in contrast to the equally weighted case where w_i = 1/k, we have w_i < 1/k in the case of outliers. Similarly, the influence of non-outliers increases.

With the definition in (2) and a given value of the parameter k, we train a Feedforward NN on top of the individual ensemble members that learns a combination rule by minimizing the loss L_ORSA, see Fig. 3. (Figure 3: Illustration of the proposed method ORSA. It shows the selection and combination process as well as the calculation of L_ORSA, see (2). The loss L_ORSA is used to train the 'stacked' aggregation NN, highlighted in blue.) Thus, our method allows a dynamic and outlier-robust approximation of the best or worst case without any prior knowledge of outliers. A compact code sketch of this selection and weighting step follows below.

A. Artificial data set
In the following experiments, we use an artificial data set that consists of 30 devices. For each device, we generate 10k samples.
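As flagged above, the following sketch shows one way the per-sample selection and inverse-LOF weighting could be computed. It applies scikit-learn's LocalOutlierFactor to the one-dimensional ensemble outputs of a single sample, which is our reading of the description rather than the authors' code; the function and variable names are illustrative.

```python
# Illustrative per-sample computation of the ORSA selection, inverse-LOF weights and loss.
# Computing the LOF over all N one-dimensional device outputs is an assumption for illustration.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def orsa_targets_and_weights(y_out, k_s, k_lof, worst_is_min=True):
    """Select the k_s worst device outputs and weight them by the reciprocal LOF."""
    y_out = np.asarray(y_out, dtype=float)

    # (i) selection: k_s smallest (worst case) or largest (best case) outputs
    order = np.argsort(y_out)
    selected = order[:k_s] if worst_is_min else order[-k_s:]

    # (ii) outlier-sensitive weighting: LOF over the N device outputs (1-D points)
    lof = LocalOutlierFactor(n_neighbors=k_lof)
    lof.fit(y_out.reshape(-1, 1))
    lof_scores = -lof.negative_outlier_factor_   # LOF ~ 1 for inliers, > 1 for outliers

    w = 1.0 / lof_scores[selected]               # reciprocal of the LOF
    w = w / w.sum()                              # normalize so the weights sum to 1
    return y_out[selected], w

def orsa_loss(prediction, y_selected, w):
    """Weighted least-squares loss L_ORSA for one sample, see (2)."""
    return float(np.sum(w * (prediction - y_selected) ** 2))
```

Since the weighted average of the selected outputs minimizes this per-sample loss, it is also the value the stacked aggregation NN is pushed towards: with k_s = 1 the target collapses to the hard minimum, while with k_s = N it approaches a (weighted) average over all devices, which matches the hyperparameter behaviour discussed in Sec. V-C. With this in hand, the description of the artificial data set continues below.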
Moreover, 4 randomly chosen devices are generated as outliers, one for each outlier type described in Sec. III-B. Here, the smooth offset function is a truncated normal distribution that is shifted and scaled in such that it is 0 at the boundaries b l and b u and -1 at the center c = b l + (b u − b l )/2 in a randomly-chosen area of the parameter space. On top of the individual members, we stack an additional Feedforward NN. It consists of two (hidden) layers, the first with 64 and the second layer with 32 nodes. Finally, we use the data of all devices jointly to train the stacked NN in an unsupervised learning task. We do not split the data into a training and a validation set and we train the stacked NN for 25k training steps. Each training step updates the stacked NN for one batch, e.g., 64 samples, of the artificial data set. The qualitative analysis of the results focuses on three properties of our method: (i) the frequency with which ensemble members are selected for the set of k worst devices; (ii) the loss contribution of each member; (iii) the outlier-sensitive weighting. At the end of this section, we discuss the properties of the parameter k and how to choose suitable values for k in the selection process and the calculation of the LOF. The results in Fig. 4 show the properties (i)-(iii) for k = 6. In this configuration, we choose the same k for the selection process and the calculation of the outlier-sensitive weighting, respectively. Moreover, we know that device 21 is a type 1, device 26 is a type 2, device 20 is a type 3, and device 0 is a type 4 outlier. Starting with device 21, we analyze the properties (i)-(iii) for the different outlier types. Type 1: (i) due to the large, constant offset, outliers of type 1 are (almost) always in the set of k worst devices; (ii) consequently, we observe that type 1 outliers have a large contribution to the total loss in (2), especially in case of equal weights. Minimizing the equally weighted loss will result in a correction of the (worst-case) approximation towards device 21. We see that our outlier-sensitive weighting reduces the loss contribution of device 21 by a factor of almost 70 to prevent correction of faulty devices; (iii) the heatmap visualization confirms that weights of type 1 outliers are much smaller in comparison to the equally weighted case. Due to large offsets, the weights are reaching zero, which corresponds to our initial suggestion that the more a device is an outlier, the less their assigned weights. Type 2: (i) since we define the offsets of type 2 outliers by a smooth, Gaussian-shaped function that is non-zero only in a random area of the parameter space, the influence of type 2 is more subtle than of type 1. Still, we can observe that device 26 has the second-largest amount of worst cases; (ii) we expect the total loss contribution to be smaller than that of a type 1 outlier. Even though type 2 outliers are challenging to identify in the first bar plot, the second bar plot shows that our methods identify device 26 as the second most severe outlier. The outlier-sensitive weighting reduces the loss of device 26 by a factor of almost 5; (iii) the heatmap shows that our method does not reduce the weights of type 2 outlier to the same extent for all data points. In the random area of disturbance, we observe that our method reduces the weights almost to the same extent as of type 1 outliers. Outside this area, we see larger weights because, in these regions, outliers of type 2 behave like regular devices. 
Thus, our method considers device 26 to be more trustworthy. Type 3: (i) since we define the offsets of type 3 outliers as probabilistic, the effects of type 3 outliers are even more subtle than those of type 2. Probabilistic signifies that we take the value of the offset function with probability p 1 , and with p 2 = 1 − p 1 , we set the offset to zero. Thus, it is hard to identify type 3 by observing the amount with which our method is selecting it. Although device 20 has the third most amount of worst cases, the bar plot shows that some regular devices have a similar amount; (ii) as a consequence of the probabilistic offset definition, we expect type 3 outliers to show smaller contribution to the total loss than type 2 outliers. The second bar plot shows that device 20, has the smallest contribution of all outlier types. However, the outlier-sensitive weighting reduces the loss of device 20 by a factor of almost 5, similar to type 2; (iii) the heatmap shows the impact of the probabilistic offset definition on the weighting. Because larger offsets appear with probability p 1 , e.g., p 1 = 0.3, only a certain percentage of the weights, e.g., 30%, are reduced to the same extent as weights of type 2 outliers. Since type 3 outliers behave similar to regular types for larger regions of the parameter space, the amount of nonreduced weights increases compared to type 2. Type 4: (i) as we scale the probabilistic offsets of type 4 outliers to random amplitudes, we expect significantly different behavior only in small regions of the parameter space. Thus, identifying type 4 outliers is impossible in the first bar plot because device 0 has fewer worst cases than many regular devices; (ii) the second plot shows that our method considers type 4 as an outlier and thus reduces its loss contribution by a factor of almost 50. The total loss contribution of device 0 mainly depends on the amount of large or small amplitudes that we choose randomly; (iii) similar to type 3 outliers, larger offsets only appear with probability p 1 . Due to the additional random scaling, we expect fewer cases of larger offset values. The heatmap shows that in comparison to type 3 our method increases weights of type 4 outliers. For all regular types, two aspects are valid: Firstly, the second bar plot in Fig. 4 shows that our method successfully minimizes the loss contributions of regular types to a value close to 0. Secondly, the corresponding weights of regular devices are larger than that of outliers, which means that our method puts a big effort into minimizing the loss contributions of the selected non-outliers as they are more trustworthy regarding the whole parameter space. B. Real-world data set The experimental setup for the real-world data set is similar to the setup in V-A. The real-world data set consists of 9 devices with 100k samples per device. Moreover, we have no prior knowledge of the data including outliers and their types. We use the same architecture for the additional aggregation NN, and train it for 25k training steps, with a batch size of 64 samples. The results are shown in Fig. 5. For the real-world data set, we use k = 3 in the selection process and the outlier-sensitive weighting. In the following, we qualitatively evaluate the results. Finally, we compare the final results of both experiments to analyze possible outlier types in the real-world data. The first plot in Fig.5 shows that the real-world data set does not contain devices that are always in the set of k worst devices. 
We observe that device 6 has a significantly higher amount of worst cases than the rest of devices. Regarding the total amount of worst cases, two other devices, namely devices 0 and 7, are noteworthy. In the first plot, it remains unclear whether devices 0, 6, and 7 are outliers or examples with poor performances. The second plot shows that our method reduces the loss contributions of devices 0, 6, 7, and 8, whereas it is slightly increasing the influence of remaining devices. In particular, device 6 has the highest contribution to the total loss and our method significantly reduces that contribution. In comparison to the results of the artificial data set, the significant reduction may be indicating that device 6 is a type 1 outlier, however, having a smaller offset value since the effect is less visible in the real-world data. For the remaining devices, the impact of increasing or reducing the loss contribution is not significant enough to infer if they are outliers and of which type they are. A reason for smaller impacts may be that the distortions of type 2 to 4 are even more subtle in the realworld data, e.g., in smaller areas of distortion. The heatmap in Fig. 5 shows that our method mainly reduces the weights of device 6. In contrast to the results in V-A, it seems that in the real-world data, type 1 outliers have smaller offsets that are not constant, e.g., function of a subset of input variables. By going from top to bottom, we observe that our method assigns increasing weights to the devices in the corresponding worst-case because our method considers them to be more trustworthy. Nonetheless, devices in the second-worst case, e.g. device 7, still show subtle effects that are similar to those of the outlier types 2 to 4 in the artificial data set. In general, we can conclude that our method detects unusual behavior of devices and successfully reduces their influence. Analogically, the influence of non-outliers, which are more similar to other devices, has been increased. Although assumptions about the type of outliers can be made, real-world data often include defective devices that show unusual behavior in various ways. In many cases, the effects of unusual behavior are very subtle and only appear in small areas of the possible parameter space. Thus, detecting and classifying defective devices without any prior knowledge, e.g., about different types of outliers, in a realworld data set remains a challenge. C. Setting of hyperparameter k Section V-A and V-B show the results of experiments in which we use the same value for k in both the selection process and the calculation of the outlier-sensitive weighting. While we found this setup to work well in many experiments, in general, we have to distinguish between k s and k lof . The parameter k s determines the number of devices we select for the worst case, and k lof determines the number of nearest neighbors we use in (3) - (5). For the densitybased definition of the weighting, a suitable rule of thumb is to choose k lof ≥ k s . Due to increasing computational complexity for large k lof , we usually select k s = k lof . Fig. 7 and Fig. 6 are comparing the influence of different values of k s and k lof in experiments realized with both the realworld data set and the artificial data set. For both data sets, we observe that our method predicts the hard minimum in case of minimal values for k s and k lof , here k s = k lof = 1. 
In the case of maximal values for k s and k lof , we observe that our method learns the average function over all devices. Depending on the characteristics of the underlying data set, choosing values between the minimum and maximum value of both k s and k lof , our method predicts outputs that are a trade-off between the hard minimum and the average of all device outputs. Fig. 6 shows the results of different hyperparameter settings for the artificial data set. Because outlier devices behave very differently in comparison to any regular device and due to similar behavior among regular devices, we observe that our method is robust for a large number of possible hyperparameter values. Thus, only for very small values, here k ≤ 3, Fig. 6 shows a significant difference between the output of our method and the average of all devices. Fig. 7 visualizes the results of different hyperparameter settings for the real-world data set. Here, it is observable that the larger k s or rather k lof is, the more our method predicts output values closer to the average. In order to summarize, choosing k s = k lof is a suitable default setting. Moreover, it is advisable to start with lower values and to incrementally increase them if the underlying data set appears to contain multiple outlier devices. Depending on a prior assumption getting a small or larger number of faulty devices, the value of the hyperparameter should be chosen accordingly. VI. CONCLUSION In this paper, we proposed an outlier robust aggregation methodology. The method has been demonstrated for worstcase approximations of device outputs, a typical but challenging task in PSV. It has been shown, through experiments on both artificial and real-world data, that our method is able to detect outliers of different types and to reduce their influence on the (worst-case) approximation. Moreover, the qualitative analysis of the real-world data has revealed which devices behave differently and which type of outliers these devices are. On the one hand, the experiments have shown that the weighting is outlier-sensitive and suitable for the present task. On the other hand, individually calculating the density-based weighting for each sample, which includes the distance calculation to k nearest neighbors, can be computationally complex. Furthermore, it is necessary to find suitable settings for the hyperparameter values. Nonetheless, the proposed method has shown encouraging results and may be well suited for a wider variety of tasks, in which worst or best case approximations are needed, e.g., sensor fusion or weather forecasts based on multiple predictions of different providers. Finally, the approximation properties for different hyperparameter settings are being discussed, and empirical parameter setting instructions are provided. In the future, the project will be to learn an outlier-sensitive weighting instead of individually calculating weights based on the LOF. Furthermore, clustering techniques, visualization analysis, and task-specific selection criteria may help to find suitable values for the hyperparameters.
Employment Status and Well-Being Among Young Individuals. Why Do We Observe Cross-Country Differences? In this paper we analyse why in some countries the difference in subjective well-being between employed and unemployed young individuals is substantial, whereas in others it remains small. The strength of this relationship has important consequences, hence it affects the intensity of the job search by the unemployed as well as the retention and productivity of employees. In the analysis we are focused on youth and young adults who constitute a group particularly exposed to the risks of joblessness, precarious or insecure employment. We expect that in economies where young people are able to find jobs of good quality, the employment–well-being relationship tends to be stronger. However, this relationship also depends on the relative well-being of the young unemployed. Based on the literature on school-to-work transition we have identified macro-level factors shaping the conditions of labour market entry of young people (aged 15–35), which consequently affect their well-being. The estimation of multilevel regression models with the use of the combined dataset from the European Social Survey and macro-level databases has indicated that these are mainly education system characteristics (in particular vocational orientation and autonomy of schools) and labour market policy spending that moderate the employment–well-being relationship of young individuals. Introduction The detrimental effect of unemployment on well-being (usually proxied by the declared level of life satisfaction or happiness) is very well documented in the sociological, psychological and economic literature (for an overview, see Brand, 2015;Clark, 2018). We can distinguish two main channels through which the employment-well-being relationship is established. The first can be referred to as a direct psychological effect of a job loss on life satisfaction. The theoretical underpinning of this effect draws on the 'latent deprivation 1 3 model' of Jahoda (1982), who identified five functions of professional work: time structure, collective purpose, social contact, social status and activity. These functions satisfy basic human needs helping to sustain individual well-being. On the other hand, the unemployed are deprived of the source of earnings which impairs their happiness. Therefore the second, indirect income effect of unemployment can be identified. Parental and marital status are other characteristics mediating the employment-well-being relationship. Labour market status affects the likelihood to establish family which, in turn, affects life satisfaction. Moreover, the detrimental effect of past unemployment persists regardless of current employment status, which in the literature is referred to as the scarring effect (Clark et al., 2001). The core assumption of this paper states that job quality strengthens the employment-well-being relationship. We define this concept following Duncan Gallie who identified its five dimensions: skill use at work, work autonomy, opportunities for professional development, job security and work-life balance (Gallie, 2007, p. 6). 1 Within the so-called 'bottom-up approach' general life satisfaction depends on satisfaction with specific life domains, including professional work which, in turn, is influenced by job characteristics (Viñas-Bardolet et al., 2020). 
2 In the extensive literature review Thomas Barnay (2016) showed that job features characterizing practically all dimensions distinguished by Gallie contribute to mental health or subjective well-being. Similar conclusions are presented in the study of Sonja Drobnič et al. (2010) who indicated that subjective well-being of workers increases with employment security, autonomy at work, good career prospects and task variety. Among all facets of employment quality, the moderating effect of job security has been most thoroughly studied (recent research, Viñas-Bardolet et al., 2020, the literature review, De Witte et al., 2015. On the other hand, life satisfaction of the unemployed also affects the employment-well-being relationship. The labour market policy has been the most recognized macro-level factor influencing their well-being (Vossemer et al., 2017;Wulfgramm, 2014). The aim of the analysis presented in this paper is to verify which factors measured at the country level moderate the relationship between employment status and well-being among young individuals. The strength of this relationship has important consequences, hence it affects the intensity of the job search by the unemployed (see e.g. Mavridis, 2015) as well as the retention and productivity of employees (see e.g. Clark, 2018). It also shows how efficient the labour markets and economies are in providing young individuals with goodquality jobs. We mostly focus on institutional features therefore our paper contributes to a wider group of analyses studying the impact of public policies on well-being (see e.g. Boarini et al., 2013;Vossemer et al., 2017;Wulfgramm, 2014). The contribution of this analysis is twofold. First, we do not focus, as the vast majority of analyses in this field, on all individuals in prime age but on a subsample of people in the age group 15-35. The literature on the employment-well-being relationship is very rich, although rarely focused on young individuals. It is surprising given the fact that youth and young adults constitute a group particularly exposed to the risks of joblessness, precarious or insecure employment (see, e.g. Pastore, 2015;O'Reilly et al., 2015). Some authors claim that disappointments experienced at the beginning of a professional career are responsible for a drop in life satisfaction observed in early adult life (Ferrante 2017). They might have a long-term detrimental impact on well-being (so called 'scarring effect'). Second, macro-level variables were selected based on school-to-work transition (SWT) literature. Therefore we take into account some institutional characteristics and public policies which were not analysed before (e.g. broad characteristics of education systems 3 ). The remainder of this paper is structured as follows: Sect. 2 presents the concept of school-to-work transition and discusses macro-level features potentially moderating the employment-well-being relationship among young individuals, Sect. 3 describes the empirical strategy and the data, and Sect. 4 discusses the results. The last section presents main conclusions. Macro-Level Determinants of Well-Being and the Employment-Well-Being Relationship In this paper we explain the cross-country variation in the employment-well-being relationship of young individuals by the international differences in SWT patterns. Raffe (2014, p. 
177) defines SWT as a 'sequence of educational, labour-market and related transitions that take place between the first significant branching point within educational careers and the point when (…) young people become relatively established in their labour-market careers'. Raffe mentions various factors shaping national SWT patterns: features of the educational system (standardisation, stratification, educational orientation, and institutional linkages), labour market structure (degree of flexibility and regulation, dominant national form of labour market organisation), labour market policy and other relevant policies, the broader economic environment, family and cultural factors (Raffe, 2008, pp. 284-287). In the remaining part of this section we will refer to most of these factors in order to understand the cross-country variation in the employment-well-being relationship among young people. At the end of each subchapter we formulate our hypotheses on the expected moderating impact of those factors. Educational Policy The institutional features of education systems are characterized along several dimensions. Allmendinger (1989) has identified two of them: the standardisation of educational provisions and the stratification of educational opportunity. The first attribute of the educational system concerns the nationwide standards of education quality. Two forms of this dimension are further distinguished. The standardisation of input refers to the degree of freedom schools have with respect to what and how they teach. In highly standardized educational systems the quality and content of training provided by schools is regulated at the national level. The standardization of output refers to the way the educational performance of students is verified. In highly standardized educational systems the competences of graduates are tested through centralized exit examinations. The stratification of educational opportunity characterizes the selectivity of tracking system in education. High level of stratification of educational opportunity describes those education systems in which students are selected into tracks at an early age and the selection is based on abilities or interests, where the tracks differ in terms of curricula and the mobility between tracks is limited. Another attribute of the educational system refers to its vocational orientation (Shavit & Muller, 2000). In vocationally oriented education systems the proportion of students choosing the vocational track is high and the teaching process of occupation-specific skills includes practical training at the workplace (so-called dual apprenticeship system). Some authors refer to this latter aspect as institutional linkages of the education system (Levels et al., 2014). Numerous studies offer evidence showing that stratified and vocationally oriented education systems (those with strong institutional linkages with firms in particular) improve the labour market match 4 (see Scherer 2005, pp. 428-430) contributing to employment quality at least in three dimensions distinguished by Gallie-skill use at work, work autonomy and job security. This is usually explained with the use of a signalling/credential theory (vocationally oriented or stratified education systems send employers relatively precise information about graduates' skills) or human capital theory (vocational education equips graduates with skills required by employers). 
The empirical findings indicate that strong skill-or education-job match of graduates corresponds to a high level of stratification (Andersen & Werfhorst, 2010;Bol & Werfhorst, 2013;Levels et al., 2014). Similar findings with respect to vocational orientation are less conclusive (Andersen & Werfhorst, 2010;Wolbers, 2003). However, the results are stronger if we consider education organized as a dual apprenticeship system (Levels et al., 2014). Further evidence can be found in the analyses of the determinants of employment stability of youth which can be also treated as a measure of match quality. It has been found to correlate positively with vocational orientation (Lange et al., 2014;Shavit & Muller, 2000;Wolbers, 2003Wolbers, , 2007 and stratification (Bol and van de Werfhorst 2013;Shavit & Muller, 2000). Since the relationship between the level of match and employees' well-being is also well documented (Badillo-Amador and Vila 2013; Mavromaras et al., 2013;Wu et al., 2015;Zhu & Chen, 2016), we expect that the high level of vocational orientation or stratification strengthens the employment-well-being relationship. The high standardization of output in the education system should increase educational outcomes since the perspective of an exit examination motivates students, teachers and school authorities. Various studies have confirmed this claim (see e.g. Bishop, 1997;Hanushek & Woessmann, 2010;Woessmann, 2016). On the other hand we can expect a negative correlation between standardization in input and educational outcomes since low school autonomy harms competition between schools decreasing the quality of education (Fuchs & Woessmann, 2007;Horn, 2009;Woessmann, 2016). The level and the quality of education, in turn, increases the likelihood to find employment of a better quality which is well proved in the rich literature on the nonpecuniary returns to schooling (classic studies in economics, see e.g. Duncan, 1976;Lucas, 1977, for the overview, see Gunderson & Oreopolous, 2020;Oreopoulos & Salvanes, 2011). Therefore, we expect that a high level of standardization of output (input) strengthens (weakens) the employment -well-being relationship. Labour Market Flexibility One should expect that strong employment protection increases the difference in wellbeing between labour market insiders and outsiders by boosting the feeling of job security of employees and by diminishing the chances of the jobless to enter employment. Although there are studies confirming the existence of such a moderating effect (Boarini et al., 2013;to some extent Voßemer et al., 2017), there is a number of empirical analyses showing that employment protection has a detrimental effect on employees' well-being (Böckerman, 2004;Clark & Postel-Vinay, 2009). It might happen since the higher level of protection increases a cognitive job security (not expecting dismissal) but diminishes a perceived labour market security (expecting to find a comparable job easily) (Hipp, 2016:3). Depending on the relative importance of these two effects, the total impact of employment protection on employees' well-being can be either positive or negative. However, it is difficult to expect that under high employment protection employees should suffer more than the unemployed. Therefore we expect that employment protection will strengthen the employment-well-being relationship. Labour Market Policies The existing studies suggest that instruments of passive (e.g. unemployment benefits) and active (e.g. 
professional training) labour market policy (PLMP and ALMP respectively) are beneficial to the unemployed. They not only offer a financial cushion but also have significant non-pecuniary effects. A generous unemployment protection system fights the unemployment stigma, and many active labour market policy measures, like apprenticeship schemes, resemble paid employment and offer intangible benefits similar to those offered by professional work. Therefore, it is hypothesized that labour market policies should weaken the employment-well-being relationship (Wulfgramm, 2014;Voßemer et al., 2017). These analyses, however, abstain from the empirical evidence indicating that in countries with developed LMP also employees declare higher life satisfaction (Clark & Postel-Vinay, 2009;Di Tella et al., 2001;Green, 2011;Hipp, 2016). Many ALMP measures help to bridge the competency gap and generous benefits support the unemployed in their search for jobs that will match their skills. Therefore, both types of LMP contribute to a better quality of employment leading to greater well-being of employees. Since the developed LMP should increase the well-being of both unemployed and working individuals, their moderating effect on the employment-well-being relationship depends on the strength of the effect in these two groups. Broader Economic Environment At the macro level GDP per capita is strongly correlated with the average happiness in nations. On the other hand, the seminal study of Easterlin (1974) indicated that in the USA economic growth had not contributed to the increase in well-being since the end of World War II. The recent findings suggest that subjective well-being is affected by economic growth, however this effect depends on various circumstances (e.g. how the nation's 1 3 income growth is divided, Diener et al., 2013;Slag et al., 2019). Inflation and the unemployment rate, i.e. ingredients of the so-called 'misery index', are well recognized determinants of well-being. The latter factor has a much more detrimental influence (di Tella et al., 2001). The unemployment rate reduces well-being of both the unemployed and employees. However, it is not clear which group is hit harder. Social comparisons should soothe the determinantal influence of losing a job in areas where the unemployment rate is high (the so-called 'social norm effect', Clark, 2003Clark, , 2010. On the other hand, the high unemployment rate lowers the perceived employability, decreasing well-being of jobless individuals (Green, 2011). To sum up, the existing literature does not allow to formulate clear predictions how economic conditions moderate the employment -well-being relationship. However, measures of national income and unemployment rate should be considered in the international comparative research on well-being determinants. Cultural Factors-a Social Norm to Work Within this dimension the social norm to work is the most recognized moderator of the employment-well-being relationship. We can expect that in societies which attach a particular value to work, the detrimental effect of unemployment on well-being will be stronger. Jobless individuals in societies with a strong norm to work are more likely to be exposed to informal social sanctions (e.g. gossiping) and experience the feeling of guilt. Stutzer and Lalive (2004) indeed found such moderating effect using the results of a referendum deciding on the level of unemployment benefits as a proxy. 
This effect has been also observed when the social norm to work was operationalized with the use of survey questions concerning the value of work ethic (Eichhorn, 2014;Roex & Rözer, 2018). Transition Regimes and Clustering of macro-level factors Most of the abovementioned features are interdependent and various characteristics of SWT tend to cluster forming the so-called 'transition regimes'. The most prominent typology distinguishes five regimes: employment-centred, liberal, universalistic, sub-protective and post-communist (Pohl & Walther, 2007;Walther, 2006). In the employment-centred transition regime fitting the cases of German-speaking countries as well as France and the Netherlands to a certain extent (see Tamesberger, 2017), the educational system is highly tracked and selective. Vocational education often combines school-and firm-based training (dual-apprenticeship system) offering strong institutional linkages with employers. The features of this education system favour employment of graduates and strengthen education-job match. High employment protection combined with moderately developed (at least with comparison to Nordic countries) ALMP limits the employment prospects of unemployed graduates who have not experienced a smooth school-to-work transition. The liberal transition regime is typical for Anglo-Saxon countries. The schooling system is inclusive, not stratified, offering mostly general education which is not institutionally linked to the labour market. ALMP is not developed and aims at fast employment. The level of unemployment benefits is low and they are conditioned by job-search activities. The low level of employment protection increases employment insecurity of graduates, however does not discourage employers to hire young individuals. It results in dynamic flows between labour market states and instability of employment at the beginning of professional career. Nordic countries (and Belgium to a certain extent) (see Tamesberger, 2017) represent the universalistic transition regime. In this cluster the schooling system is inclusive, not stratified, and offers access to higher education to a vast majority of graduates. The employment protection is relatively low although significantly higher than in countries representing the liberal regime. However, the lower employment security is offset by highly developed active and passive LMP (so called 'flexicurity model'). In the sub-protective regime typical for many Mediterranean countries the schooling system is neither selective nor stratified, with the focus on general education. Vocational training is moderately developed and mostly school-based. Therefore the institutional linkages with employers are limited. High level of employment protection combined with underdeveloped LMP makes it difficult to enter the core segment of the labour market forcing many young individuals to accept peripheral and precarious jobs. The post-communist cluster is represented by the heterogeneous group of CEE countries which has some features of sub-protective and employment-centred regimes. However, the transition systems in CEE countries have little in common. Figure 1. shows patterns of the employment-well-being relationship represented by differences in the average life satisfaction between employed and unemployed young individuals. Countries are sorted in a descending order according to the well-being gap. 
In many cases the size of the gap coincides with the transition regime type confirming that the macro-level factors might moderate the employment-well-being relationship. The biggest gap is observed in countries representing the employment-centred regimes (DE, CH, AT) where the employment quality of young people is high due to the developed vocational system, strong institutional linkages and employment protection. However, for those who are not successful in the school-to-work transition process, the employment entry might be difficult. In countries representing the universalistic regime a well-being gap is smaller and life satisfaction of both employees and the unemployed is high (best seen in DK, NO, FI) which might be a merit of the flexicurity model. The low well-being gap is noticeable also in countries representing the sub-protective regime (in IT, GR, PT in particular). Contrary to Nordic countries, life satisfaction of both the unemployed and employees is considerably lower which might reflect low employment quality of young individuals and underdeveloped LMP measures. Perhaps for the same reason a similar pattern can be observed in GB representing the liberal model, however in IE the well-being gap is much larger. As discussed earlier, the post-communist cluster consists of countries with various transitions systems, which is reflected by heterogeneity in terms of the well-being gap. Micro-Level Variables The micro-level variables come from the European Social Survey (ESS), which is a project offering high-quality comparative data covering a broad range of European countries. The study has been organized on a biennial basis since 2002. Table 1 presents the detailed description of the micro-level variables used in the study. The dependent variable (life satisfaction) reflects only one, cognitive dimension of subjective well-being. The affective dimensions (positive and negative emotions) are excluded from the analysis, which is a common practice in socio-economic research. The theoretical framework of this study does not concern self-employed or inactive individuals, therefore we included only employees (regardless of the contract type or working time) and the unemployed in the sample. Due to numerous missing values and inconsistent coding of income groups in ESS, the proxy for income was created based on the respondents' assessment of financial status. Our focus is on the direct psychological impact of employment status on well-being thus we included the mediating variables proxying other, indirect channels of that relationship (income, parental and civil status). Other independent variables are treated as confounders, which are correlated both with well-being and employment status, and were therefore included in the model to calculate the unbiased effects. Moreover, unemployment in the past reflects the so-called 'scarring effect'-a determinantal influence of the past unemployment on present well-being regardless of the current employment status. Table 2 presents the macro-level variables used in all models as well as the summary of their hypothesised moderating impact on the employment-well-being relationship. The set of employment protection indices comes from the OECD and the database of Avdagic (2012Avdagic ( , 2015 who calculated EPR and EPT indicators for many CEE countries according to the OECD (2020a) methodology. 
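Because the sample construction just described involves several filters (only employees and the unemployed, an income proxy built from subjective financial status, a past-unemployment 'scarring' indicator), a compact, purely illustrative sketch may help. The ESS column names and codes used below (mnactic, stflife, agea, hincfel, uemp3m) and the exact recoding rules are assumptions for illustration, not the authors' actual preparation script; the age filter anticipates the 15-35 restriction described in the Sample subsection below.

```python
import pandas as pd

# Illustrative ESS-style column names (assumed): cntry, essround, agea,
# mnactic (main activity), stflife (life satisfaction, 0-10),
# hincfel (subjective financial status), uemp3m (ever unemployed > 3 months).
ess = pd.read_csv("ess_pooled.csv")

# Keep only employees and the unemployed, aged 15-35 (see the Sample subsection).
employees = ess["mnactic"].eq(1)              # in paid work
unemployed = ess["mnactic"].isin([3, 4])      # unemployed, looking or not looking
sample = ess[(employees | unemployed) & ess["agea"].between(15, 35)].copy()

sample["employed"] = employees.loc[sample.index].astype(int)
sample["past_unemp"] = sample["uemp3m"].eq(1).astype(int)   # 'scarring' proxy
sample["fin_status"] = sample["hincfel"]                    # income proxy

# Drop rows with missing values on the key variables.
keys = ["stflife", "employed", "fin_status", "past_unemp", "cntry", "essround"]
sample = sample.dropna(subset=keys)
print(sample.groupby("cntry")["stflife"].mean().round(2))
```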
In order to increase comparability, LMP spending was expressed as a share of GDP and normalized by the unemployment rate to account for cross-country differences in the demand for such policies.5

Table 3 includes additional macro-level variables which will be analysed in only one group of models (due to data constraints), as well as the summary of their hypothesised moderating impact on the employment-well-being relationship. The scales of the sub-indices characterizing the level of stratification and standardization of input were aligned; therefore, higher values of the STRAT and STANDIN variables indicate a higher level of tracking and of standardization of input, respectively. The STANDOUT index is a dummy variable with limited variance (in general, the lack of central exit exams is typical for some Mediterranean countries and for most federal states, i.e. Belgium, Switzerland, or Austria) and will be treated with caution. The index of a social norm to work is a country-level average score for five questions of the World Values Survey on a 1-5 scale: 'To fully develop your talents, you need to have a job', 'It is humiliating to receive money without working for it', 'People who don't work become lazy', 'Work is a duty towards society', 'Work should always come first, even if it means less free time.' Such a variable is a popular proxy for a social norm to work (for an overview, see Stam et al., p. 315). The study of Eichhorn (2013, p. 1667) confirmed that all these items indeed load on one factor. All variables except STANDOUT (a binary variable) are standardized with a mean value of 0 and a standard deviation of 1.

Table 3 Additional macro-level variables and their expected moderating impact on the employment-well-being relationship

Empirical Strategy

In the empirical part of the analysis we use hierarchical regression techniques. This strategy allows us to avoid the underestimation of standard errors when variables at different levels of aggregation are combined (Moulton, 1986). In particular, we apply two types of models, accounting for a three- and a two-level hierarchy respectively. First, we apply a three-level random intercept model6 where individuals are nested within country-years and countries:

$$y_{ijt} = \beta_0 + \beta_E E_{ijt} + X_{ijt}\beta_p + I_{jt}\beta_q + \beta_{pq}\,(E_{ijt}\times I_{jt}) + u_{jt} + u_j + e_{ijt} \qquad (1)$$

where y_ijt is a well-being proxy: the level of life satisfaction (0-10) of an individual i in country j in year t; E_ijt is the individual employment status dummy; X_ijt is the vector of the remaining micro-level variables; I_jt is a macro-level variable; u_jt and u_j are random intercepts at the country-year and country levels; and e_ijt is the individual-level error term (year dummies are also included; see footnote 6).

5 To adjust for the possible low coverage of LMP measures among young individuals, other variants of the ALMP and PLMP variables were also used in the regressions (the ALMP and PLMP variables were additionally multiplied by the coverage rate in the age group < 25). Regardless of the variant of the LMP variables, the results remain basically the same (see Table S6 and comments in supplementary materials).

6 According to Schmidt-Catran and Fairbrother (2016), optimally, the country-year level should be nested both in a country and in a year level. Such models are often computationally challenging. Therefore, a specification with an additional control for the year of the analysis (as presented in Eq. 1) is recommended (see Voßemer et al., 2017; Wulfgramm, 2014).

Despite the fact that the dependent variable's scale is ordinal, we treat it as an interval one, following the recommendations of Ferrer-i-Carbonell and Frijters (2004), and estimate the linear model, in which β_pq is the most important parameter. A positive value of this coefficient indicates a larger well-being divide between the employed and the unemployed, hence a stronger employment-well-being relationship.
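A minimal sketch of how a model of the form of Eq. (1) could be estimated in Python with statsmodels is given below; it reuses the illustrative dataframe from the previous sketch, and the column names (macro for the standardized macro-level indicator, educ for education, and so on) are assumptions. The country-year level is handled as a variance component nested within countries, which is one common way to approximate the three-level structure.

```python
import statsmodels.formula.api as smf

# Assumed columns in `sample` (see the previous sketch): stflife, employed,
# macro (one standardized macro-level variable, e.g. VOCD), agea, educ,
# fin_status, past_unemp, cntry, essround (survey year).
df = sample.rename(columns={"essround": "year"})
df["cyear"] = df["cntry"].astype(str) + "_" + df["year"].astype(str)

formula = ("stflife ~ employed * macro + agea + I(agea ** 2) + C(educ) "
           "+ fin_status + past_unemp + C(year)")

# Random intercepts for countries (groups) and for country-years
# (a variance component nested within countries).
model = smf.mixedlm(formula, data=df, groups=df["cntry"],
                    vc_formula={"cyear": "0 + C(cyear)"})
result = model.fit(reml=True)

# 'employed:macro' corresponds to the cross-level interaction beta_pq.
print(result.params["employed:macro"], result.bse["employed:macro"])
```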
In the three-level models we analyse only the set of macro-level variables presented in Table 2. The extended set of contextual variables (presented in Tables 2 and 3) was available only for one year (2008). For these data we apply two-level random intercept models where individuals are nested within countries:

$$y_{ij} = \beta_0 + \beta_E E_{ij} + X_{ij}\beta_p + I_{j}\beta_q + \beta_{pq}\,(E_{ij}\times I_{j}) + u_{j} + e_{ij} \qquad (2)$$

The specification described in Eq. (2) is a two-level equivalent of model (1). In this case, the parameter β_pq is again the most important one and concerns the interaction term between individual employment status and a macro-level variable. The interpretation of its values remains the same as in model (1).

The empirical strategy is determined by the limited sample size at the higher levels (country-years, countries). Results of simulations indicate that the minimum number of cases allowing unbiased coefficients and standard errors to be calculated ranges between 15 and 25 for simple models (Bryan & Jenkins, 2016; Stegmueller, 2013). Therefore, we add macro-level variables carefully, applying a step-wise procedure,7 which is a popular empirical strategy in such cases (see, e.g., Chung, 2016). First, we estimate models described by Eq. (1) with a full set of micro-level variables but only one macro-level variable (and its interaction term with employment status) at a time.8 In the next step we repeat this procedure controlling additionally for GDP and UNEMP. In the third step we add a second macro-level variable (with its interaction term) to the specifications, controlling for GDP and UNEMP. Due to the limited number of observations at the macro level, models described by Eq. (2) are estimated even more cautiously. First, we analyse specifications with one macro-level variable at a time (with its interaction term). In the next step we additionally control for GDP or UNEMP. Finally, we analyse models with two macro-level variables (with interaction terms) without controlling for economic conditions (GDP, UNEMP). The applied step-wise procedure allows us to inspect the robustness of the results given the limited number of macro-level variables which can be included in the models.

As another robustness check we estimate a non-hierarchical version of model (1): a pooled linear model with country and year fixed effects and with standard errors clustered at the country level. Such a model emphasizes within-country (over time) rather than cross-country variation in the estimation of effects. The detailed specification of that model is presented in the supplementary materials.

Methodological Challenges

The outlined empirical strategy bears some further methodological issues: reversed causality, omitted variable bias, and overcontrol bias. The first problem is probable in our model since not only does unemployment affect well-being, but unhappy individuals are also less likely to find or maintain employment (Böckerman & Ilmakunnas, 2012; Oswald et al., 2015). Omitted variable bias will occur if we do not control for all differences (affecting well-being) between employees and the unemployed. In studies investigating the employment-well-being relationship, the empirical strategies addressing these two problems are similar and include the instrumental variable (IV) approach (plant closures are popular instruments; see, e.g., Kassenboehmer & Haisken-DeNew, 2009; Marcus, 2013) or panel data regression techniques (Winkelmann & Winkelmann, 1998).
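The step-wise procedure and the pooled fixed-effects robustness check described above could be sketched as follows; again the dataframe and the macro-variable column names (taken from Table 2) are assumed, and this is an illustration of the logic rather than the authors' code.

```python
import statsmodels.formula.api as smf

macro_vars = ["VOC", "VOCD", "ALMP", "PLMP", "EPR", "EPT"]   # names as in Table 2
base = ("stflife ~ employed + agea + I(agea ** 2) + C(educ) + fin_status "
        "+ past_unemp + C(year)")

def fit_three_level(formula):
    return smf.mixedlm(formula, df, groups=df["cntry"],
                       vc_formula={"cyear": "0 + C(cyear)"}).fit(reml=True)

# Step 1: one macro-level variable (and its interaction) at a time;
# Step 2 repeats this with '+ GDP + UNEMP' appended to the formula.
step1 = {m: fit_three_level(f"{base} + {m} + employed:{m}") for m in macro_vars}
for m, res in step1.items():
    print(m, round(res.params[f"employed:{m}"], 3))

# Robustness check: pooled OLS with country and year fixed effects,
# standard errors clustered at the country level.
pooled = smf.ols(f"{base} + VOCD + employed:VOCD + C(cntry)", df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cntry"]})
print(pooled.params["employed:VOCD"])
```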
The dataset used in this analysis does not contain good candidates for instrumental variables, and its cross-sectional structure precludes the application of panel data models. Therefore the estimation is vulnerable to both types of the above-mentioned bias. However, there are at least three arguments supporting our empirical strategy. First, the paper is focused on international differences in the employment -well-being relationship. Even if the estimated impact of employment status on well-being can be biased, we assume that the size of the bias is similar in all analysed countries. Second, it is difficult to compare IV estimates internationally since they represent the effects calculated for a subgroup affected by the instrument, not for the entire sample. These subgroups might differ internationally. Third, currently there are no international longitudinal micro-level datasets allowing to conduct similar analyses applying panel data models. Therefore most comparative studies on macro-level determinants of well-being use cross-sectional datasets (see e.g., Boarini et al., 2013;Calvo et al., 2015;Vossemer et al., 2017;Wulfgramm, 2014). The overcontrol bias arises when the model controls for characteristics lying on the causal pathway between the independent variable of interest (in our case -employment status) and the dependent variable (wellbeing). In our setting income, marital and parental status are such potentially mediating variables, and including them in the model reduces the estimated impact of employment status. Therefore some authors prefer more parsimonious models (e.g. Voessemer et al. 2017). In the theoretical part of the analysis we mainly refer to the psychological influence of employment status on well-being (which we consider the main and direct effect), therefore we decided to control for other indirect effects (income, parental and civil status). Sample The theoretical part of this analysis concerns the unemployed and employees. Studies on transition between other states (e.g. employment-inactivity) are built up on different theoretical grounds. Moreover, as noted by Vossemer et al., (2017Vossemer et al., ( , p. 1236) the anticipated moderating effects of policies (e.g. employment protection) often do not apply to the group of self-employed. For this reason analyses in this field of interest are either focused on differences between employees and the unemployed (Eichhorn 2013(Eichhorn , 2014Vossemer et al., 2017) or include other groups (e.g. inactive) but do not formulate hypotheses nor interpret results referring to them (e.g. Clark & Oswald, 1994;Stam et al., 2016;Wulfgramm, 2014). Therefore we restrict our sample to employees and the unemployed only. 3 Our sample consists of individuals aged 15-35. Such strategy is driven by the thematic scope of the analysis but also reduces the bias resulting from possible changes of education systems in time (micro-level characteristics are not lagged with respect to variables characterizing education systems). The final sample includes countries for which the full set of micro-and macro-level variables was available. In rare cases, at the macro level the missing values were substituted with values from the nearest year (see table S5 in supplementary materials). The final sample used in the three-level models covers 7 waves of ESS (2002,2004,2006,2008,2010,2012,2014) and consists of 39,665 individuals from 27 countries. 
In the estimation of the two-level models we use the data from wave 2008 describing 6990 individuals from 22 countries. Tables 10, 11 in the Appendix and S1-S2 in supplementary materials present the micro-and macro-level descriptive statistics for both samples respectively. Table 4 presents the estimation results of the first-step regressions (where only one macrolevel variable and its interaction term with employment dummy were included). The estimated micro-level effects are in accordance with the theoretical expectations. Working individuals declare on average 0.5-0.6 units higher life satisfaction than the unemployed. This result can be attributed to the direct psychological effect since the models control for characteristics potentially mediating the employment -well-being relationship, income, civil and parental status. The results confirm the existence of the scarring effect -regardless of the current labour market status, those who experienced unemployment in the past declare lower life satisfaction. In accordance with the literature, the relationship between age and life satisfaction is strong and non-linear (a statistically significant coefficient of age and its quadratic term). A significant and positive impact of the level of education is not unusual. However, it could also reflect the imperfect measurement of the income variable (strongly correlated with education). The effects of other mediating variables (parental, civil status) as well as confounders (disability, migrant status) are consistent with the current state of knowledge. Results Results confirm that in countries with vocationally oriented education systems the employment -well-being relationship is stronger, particularly if it is organized according to the dual apprenticeship model 9 (positive and statistically significant coefficients of interaction between the employment status dummy and VOC and VOCD variables). It accords to the expectations since such education systems offering hands-on experience for students and screening opportunities for employers increase the education-job match and employment quality of young individuals. The results indicating the stronger employment -wellbeing relationship in countries with a generous labour market policy can be interpreted in a similar way (both ALMP and PLMP contribute to the education-or skill-job match). However, contrary to the expectations, the general effects of these variables turned out to be insignificant suggesting that the well-being of the unemployed was not affected by LMP spending. It is less surprising with respect to the PLMP since all models control for household income. The insignificance of ALMP among the unemployed does not follow expectations, however, a similar lack of effect was already reported by other authors studying this topic (e.g. Vossemer et al., 2017). It suggests that ALMP measures do not reflect the conditions of professional employment. Their effects might also be reduced by some selection mechanisms or negative stigma effects. The estimates related to the employment protection legislation do not follow the expectations. Theoretically, we could explain why the stronger protection of regular contracts (EPR) negatively influences well-being of employees (it happens when the negative labour market security effect exceeds the positive job security effect). 
It is, however, difficult to explain why it does not affect the group of the unemployed or even has a positive impact on their well-being (positive and statistically significant coefficient of the EPT variable). The variance decomposition of the empty model (not reported) indicated that around 13 percent of (unexplained) differences in life satisfaction can be attributed to the country level -the result which is found in similar studies (Vossemer et al., 2017). The estimates presented in Table 4 show that ALMP, PLMP and VOCD best explain that variation (in models with those variables the unexplained variance at the country level amounts to 8 percent). The unexpected EPT effect disappears once the economic conditions are controlled for (see Table 5) in the second step of the estimations. The other effects remain stable with one noticeable exception. The general effect of VOCD becomes negative indicating decreased well-being of the unemployed in countries with developed dual apprenticeship model of vocational education. In such systems many graduates are hired directly by companies in which they were employed as apprentices. It deteriorates the prospects of the unemployed and might harm their well-being. The economic conditions affect life satisfaction in accordance with expectations. In general, well-being is higher in richer countries and in economies not suffering from unemployment. The economic conditions explain well the international differences in life satisfaction. Adding GDP and unemployment rate to the model reduced unexplained country-level variance to less than 5 percent. Under a robustness check we estimated a non-hierarchical version of models presented in Table 5, i.e. pooled linear models with country and year fixed effects and standard errors clustered at the country level. The estimated results are very similar -the moderating effects of VOCD and ALMP remain statistically significant (although at lower levels), other interaction effects have the same signs but became insignificant (results are presented in Table S7 in supplementary materials). The motivation for the last round of estimations is the phenomenon of institutional complementarity (Hall & Soskice, 2001) which can lead to correlation between macrolevel variables (see Table 6). For instance, the institutional complementarity may explain the correlation between vocational orientation, employment protection 10 and LMP spending. Since vocational education leads to acquisition of specific human capital (productive only in a limited number of sectors), it is considered to be a more risky human capital investment. Therefore it requires some incentives in the form of institutional arrangements. High employment protection secures the return to human capital, generous PLMP gives the opportunity to search for jobs matching specific competences and ALMP covers some costs of retraining. In order to avoid the potential omitted variable bias at the macro level, we analyse two macro-level institutional variables in the same model. We consider only the most correlated pairs of the variables (bolded in Table 6). The results of seven separate regression models with different combinations of institutional variables are presented in Table 7. The most stable coefficients concern the VOCD variable. 
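As a side note on the variance decomposition mentioned above, the share of unexplained variance attributable to the country level can be recovered from an empty (intercept-only) model; the sketch below shows one way to do this with the assumed dataframe, using statsmodels' MixedLM result attributes.

```python
import statsmodels.formula.api as smf

# Empty model: only random intercepts, no predictors.
null = smf.mixedlm("stflife ~ 1", df, groups=df["cntry"],
                   vc_formula={"cyear": "0 + C(cyear)"}).fit(reml=True)

var_country = float(null.cov_re.iloc[0, 0])   # country-level intercept variance
var_cyear = float(null.vcomp[0])              # country-year variance component
var_resid = float(null.scale)                 # individual-level residual variance

share_country = var_country / (var_country + var_cyear + var_resid)
print(f"Unexplained variance at the country level: {share_country:.1%}")
```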
Regardless of the specification of the model, in countries with a developed system of dual vocational education the employment-well-being relationship is stronger. In line with expectations, generous labour market policy (both passive and active) as well as vocational orientation increase the difference in well-being between employees and the unemployed. However, the results with respect to these variables are slightly less stable. Consequently, the hypotheses are not confirmed for the employment protection proxies.

Table 4 (continued). * p < 0.1; ** p < 0.05; *** p < 0.01. Coefficients of the three-level random intercept models; standard errors in the bottom rows. Variables not reported in the table: year dummies.

Table 5 Determinants of life satisfaction (2002-2014), models with one macro-level variable, controls for GDP and unemployment rate. * p < 0.1; ** p < 0.05; *** p < 0.01. Coefficients of the three-level random intercept models; standard errors in the bottom rows. Variables not reported: as in Table 4.

The following tables present estimations of Eq. (2), which included the extended characteristics of education systems as well as proxies for cultural factors (the social norm to work). Those variables, however, were only available for the year 2008. Table 8 presents the estimates of the two-level random intercept models including one macro-level variable (top panel), an additional control for GDP (middle panel), and UNEMP (bottom panel). The effects of the micro-level determinants are practically the same as in the three-level models and have not been reported. The effects of the already considered institutional features (labour market policy, employment protection legislation, and vocational orientation) were also very similar (see Table S3 in supplementary materials). Out of the three additional proxies characterizing the education system, only the effects of input standardization (STIN) follow expectations. In countries where standardization is weaker, i.e. schools have more autonomy with respect to how to teach, the employment-well-being relationship is stronger. The analysis of variance suggests that STIN is the feature of the education system that best explains the differences in well-being at the macro level (the highest reduction in unexplained country-level variance). The coefficients of standardization of output and stratification have the predicted signs but are statistically insignificant. Contrary to expectations, in countries with a stronger social norm to work (NORM) the employment-well-being relationship was weaker. The estimated effect was statistically significant and relatively stable across various model specifications. It is difficult to justify this finding; however, we suggest a potential explanation which could be tested in a separate study. The analysis of Stutzer and Lalive (2004) showed that in countries with a strong social norm to work, the unemployed tend to find jobs more quickly. This accords with the model proposed by Dos Santos Ferreira et al. (2015), in which a strong social norm to work decreases the reservation wage. Therefore, societal pressure can force the unemployed to accept jobs of lower quality. Moreover, a strong social norm to work may also distort the work-life balance. These processes can weaken the employment-well-being relationship. In the last step we analysed models including the correlated pairs of macro-level variables.
We conducted that analysis focusing on the proxy for standardization of input, since only that effect followed the expectations and was stable across various model specifications. The coefficients presented in Table 9 confirm the robustness of the results, indicating that high standardization of input reduces the employment-well-being relationship. All four sub-indices comprising that synthetic variable contributed to that effect (see Table S4 in supplementary materials).

Table 9 Determinants of life satisfaction (2008), models with two macro-level variables. * p < 0.1; ** p < 0.05; *** p < 0.01. Coefficients and standard errors of the two-level random intercept models. Not reported in the table: micro-level characteristics, as in Table 4.

Conclusions

The analysis presented in this paper confirmed that many features of SWT systems explain why in some countries the employment status of young individuals is strongly correlated with their well-being, whereas in others the difference in life satisfaction between the unemployed and employees is small. We assume that it depends on the extent to which the SWT systems provide young adults with access to jobs of good quality. On the other hand, the impact of SWT features on the life satisfaction of the unemployed also matters. Our analysis suggests that vocationally oriented education systems, particularly those organized as a dual apprenticeship model, strengthen the employment-well-being relationship. This result conforms to the allocative function of education. Vocational education, at least in a short-term perspective, strengthens the education-job match, increasing employment quality in at least three aspects: skill use at work, work autonomy, and job security. However, it has to be emphasised that the vocational orientation of education systems, particularly those with strong institutional linkages to firms, might have a negative impact on the life satisfaction of the unemployed. In such systems many graduates are hired directly by the companies which employed them as apprentices. This reduces the chance of finding employment through the open labour market. The level of input standardization, measuring the autonomy of schools with respect to what and how they teach, is another feature of the education system moderating the employment-well-being relationship. In accordance with expectations, in highly standardized education systems the employment-well-being relationship was weaker. This is compatible with other research findings suggesting that more school autonomy boosts the quality of education, which in turn increases the likelihood that graduates find employment of higher quality. Positive moderating effects were also observed with respect to active and passive labour market policy spending. LMP measures perform an allocative function (like education), increasing the education- or skill-job match and contributing to the well-being of employees. We indicated that it was theoretically sound to expect that LMP influences not only the well-being of the unemployed (as hypothesized by Vossemer et al., 2017; Wulfgramm, 2014) but also that of employees. That latter relationship was confirmed empirically. Contrary to our hypotheses, LMP spending was not positively correlated with the life satisfaction of the unemployed. The lack of impact of PLMP measures (e.g. unemployment benefits) might be justified since our models controlled for income. The lack of effects of ALMP might suggest that its measures (e.g.
internships) do not reflect the conditions of paid employment as expected or the selection mechanisms biasing the results are at play: the effective ALMP increases the inflow to employment leaving a subset of those least employable in the population of unemployed. Their specific characteristics, for example the level of disappointment, might offset the positive influence of ALMP spending. This possible selection mechanism, as well as potential stigma effects, are worth further investigation. This paper is another analysis contributing to our understanding of public policy impact on life satisfaction. With the increasing availability and quality of subjective well-being indicators (e.g. OECD Better Live Initiative), the body of research in this area is steadily growing. The development of studies in this field seems natural since one of the main goals of public policy is to influence the well-being of citizens. The validity of subjective well-being as a policy evaluation criterion has been recognized by Romina Boarini and co-authors who empirically proved that it is policy amenable (Boarini et al., 2013). To date, the impact of selected policies has been analysed from this perspective in the areas of health (Boarini et al., 2013;Calvo et al., 2015), employment (Vossemer et al., 2017) and labour market (Vossemer et al., 2017;Wulfgramm, 2014). The analysis conducted in this paper enriches the above body of work by recognizing the impact of a new group of instruments, from the sphere of education policy in particular. Moreover, it verifies the impact of the already analysed policies on the well-being of young people. They constitute a specific group particularly vulnerable to job insecurity, precarious working conditions and unemployment. More specifically, our analysis contributes to the rich literature investigating labour market outcomes of SWT systems. Studies in this field analyse the impact of various SWT features on such outcomes as education-job match, youth unemployment rate, length of job search, employment stability and occupational status of young adults. We enrich this branch of literature studying how a broad range of SWT characteristics influence the strength of the employment -well-being relationship which could be considered an indirect proxy for job quality. As mentioned above, this study can be perceived as a particular evaluation of public policies, where their impact on life satisfaction is the main assessment criterion. What can we learn from this evaluation? Vocational orientation, autonomy of schools and developed LMP increase life satisfaction of young employees. The contemporary evidence shows that the well-being of workers is positively correlated with their productivity, retention, and mental and physical health (Clark, 2018, pp. 258-259). Some of these benefits take the form of positive externalities. Their existence is the traditional economic argument supporting state interventions. Moreover, according to the economic theory, the stronger the employment -well-being relationship is, the deeper the utility gap associated with the job loss becomes. This, in turn, increases the motivation of the unemployed to intensify their job search effort. This line of reasoning has been confirmed empirically (Gielen & van Ours, 2014;Mavridis, 2015). However, it cannot be extrapolated without limits. A very weak employment -well-being relationship might be in fact associated with voluntary unemployment (see, e.g. 
Blanchflower, 2001) while an extensive well-being gap can also have potentially adverse effects (Deter, 2021). Low life satisfaction of the unemployed might lead to discouragement, lower levels of skill acquisition or poorer performance in job interviews (Anderson, 2009, p. 348). That is why the evaluation of all relevant policies should address possible contrasting well-being effects in various groups. To make things even more complex, we should keep in mind that the macro-level factors studied in this paper are not independent but correlate as a consequence of institutional complementarity, forming a limited set of transition regimes. The analysis presented in this paper has its limitations. The estimation of the employment -wellbeing relationship with the use of cross-sectional data is potentially prone to reversed causality or omitted variable bias as discussed in Sect. 3.4. The macro-level indicators used in the analysis tend to be relatively constant over time. This is typical for variables characterizing institutional arrangements since radical reforms are rarely implemented. Therefore, the estimated effects were identified through international comparisons rather than changes in indicators' values over time. This also increases the risk of biasing the results by omitted variables, this time-at the macro level. In the conducted analysis, however, efforts were made to reduce this risk by using a step-wise approach and thorough examination of the effects of different combinations of variables at the macro level. Moreover, the estimation of the pooled model emphasising within-country changes (over time) in effects estimation confirmed the robustness of results, in particular with respect to ALMP and VOCD. It should be also highlighted that due to the data constraints macro-level variables were characterized at the national level, whereas there is evidence that SWT systems might be shaped at the regional level (Scandurra et al., 2021a). Finally, this paper, similarly to the majority of studies in this field, is mainly focused on supply-side determinants of the labour market. The better understanding of transition systems requires studying also the impact of demand-side factors reflecting the employers' perspective (Scandurra et al., 2021b, p. 853). This topic paves the way for further scientific exploration.
Hermite-Hadamard type inequalities for the generalized k-fractional integral operators We firstly give a modification of the known Hermite-Hadamard type inequalities for the generalized k-fractional integral operators of a function with respect to another function. We secondly establish several Hermite-Hadamard type inequalities for the generalized k-fractional integral operators of a function with respect to another function. The results presented here, being very general, are pointed out to be specialized to yield some known results. Relevant connections of the various results presented here with those involving relatively simple fractional integral operators are also indicated. Introduction and preliminaries A function f : I → R is said to be convex if the following inequality holds: where I is an interval in the real line R. Here and in the following, let C, R, R + , and N be the sets of complex numbers, real numbers, positive real numbers, and positive integers, and let N  := N ∪ {} and R +  := R + ∪ {}. One of the best-known inequalities for convex functions is the following Hermite-Hadamard inequality: If f : I ⊆ R → R (I is an interval) is a convex function and a, b ∈ I with a < b, then The Hermite-Hadamard inequality in () has attracted many mathematicians' attention who have presented a variety of generalizations, extensions, and variants, which are called Hermite-Hadamard type inequalities (see, e.g., [-] and the references cited therein). Recently, several Hermite-Hadamard type inequalities associated with fractional integrals have been investigated. Here, we aim to establish several generalized Hermite-Hadamard type integral inequalities for the generalized k-fractional integral operators with respect to another function. The results presented here, being very general, are also pointed out to be specialized to yield some known results. Relevant connections of the various results presented here with those involving relatively simple fractional integral operators are also indicated. To do this, we recall some definitions and known results. Let [a, b] (-∞ < a < b < ∞) be a finite interval on the real axis R. The Riemann-Liouville fractional integrals J α a+ f and J α b-f of order α ∈ C ( (α) > ) with a ≥  and b >  are defined, respectively, by Here (α) is the familiar Gamma function (see, e.g., [], Section .). For more details and properties of the fractional integral operators () and (), we refer the reader, for example, to [-] and the references therein. be a finite or infinite interval on the real axis R. We denote by L p (a, b) ( ≤ p ≤ ∞) the set of those Lebesgue complex-valued measurable functions f on for which f p < ∞, where In particular, L  (a, b) := L(a, b). Raina [] introduced a class of functions defined formally by where the coefficients σ (m) ∈ R + (m ∈ N  ) form a bounded sequence. With the help of (), Raina [] and Agarwal et al. [] defined, respectively, the following left-sided and right-sided fractional integral operators: and where λ, ρ ∈ R + , w ∈ R, and ϕ(t) is a function such that the integrals on the right sides exist. Recently, certain new and interesting inequalities involving these fractional operators have appeared in the literature (see, e.g., [-]). It is easy to verify that J σ ρ,λ,a+;w ϕ(x) and J σ ρ,λ,b-;w ϕ(x) are bounded integral operators on In fact, and Here, many useful fractional integral operators can be obtained by specializing the function F σ ρ,λ (x). 
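The display formulas defining these objects appear to have been lost when this text was extracted. For orientation, the standard forms from the fractional-calculus literature that the surrounding sentences describe are, presumably, the following (a reconstruction, not the paper's own typesetting):

```latex
% Riemann-Liouville fractional integrals of order \alpha, \Re(\alpha) > 0:
\[
  \bigl(J^{\alpha}_{a+}f\bigr)(x)=\frac{1}{\Gamma(\alpha)}\int_{a}^{x}(x-t)^{\alpha-1}f(t)\,dt
  \quad (x>a),
  \qquad
  \bigl(J^{\alpha}_{b-}f\bigr)(x)=\frac{1}{\Gamma(\alpha)}\int_{x}^{b}(t-x)^{\alpha-1}f(t)\,dt
  \quad (x<b).
\]
% Raina's function and the associated left- and right-sided operators:
\[
  \mathcal{F}^{\sigma}_{\rho,\lambda}(x)=\sum_{m=0}^{\infty}\frac{\sigma(m)}{\Gamma(\rho m+\lambda)}\,x^{m},
  \qquad \rho,\lambda>0,
\]
\[
  \bigl(\mathcal{J}^{\sigma}_{\rho,\lambda,a+;w}\varphi\bigr)(x)
   =\int_{a}^{x}(x-t)^{\lambda-1}\,
     \mathcal{F}^{\sigma}_{\rho,\lambda}\!\bigl[w(x-t)^{\rho}\bigr]\,\varphi(t)\,dt,
  \qquad
  \bigl(\mathcal{J}^{\sigma}_{\rho,\lambda,b-;w}\varphi\bigr)(x)
   =\int_{x}^{b}(t-x)^{\lambda-1}\,
     \mathcal{F}^{\sigma}_{\rho,\lambda}\!\bigl[w(t-x)^{\rho}\bigr]\,\varphi(t)\,dt.
\]
```

With λ = α, σ(0) = 1 and w = 0, the kernel reduces to (x − t)^{α−1}/Γ(α), which recovers the Riemann-Liouville case discussed next.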
For instance, the classical Riemann-Liouville fractional integrals J α a+ and J α bof order α follow easily by setting λ = α, σ () = , and w =  in () and (). Budak et al. [] established a new identity involving the fractional integral operators () and () asserted by the following lemma. We recall the following generalized fractional integral operators (see, e.g., be an increasing and positive function having a continuous derivative g on (a, b). The left-and right-sided generalized fractional integrals of f with respect to the function g on [a, b] of order α are defined, respectively, by and provided that the integrals exist. The integrals ()) and () are usually called fractional integrals of a function f by a function g of the order α. Choosing g(x) = x in () and We recall some properties for the k-gamma function: Using the k-gamma function, Tunç et al. [] introduced a class of functions defined by is a bounded sequence as given in (). Tunç et al. [] used the function () to define the left-sided and right-sided generalized k-fractional integral operators with respect to another function as follows: Let k, ρ, λ ∈ R + and w ∈ R. Also, let g : [a, b] → R be an increasing and positive function having a continuous derivative g on (a, b). Then the left-and right-sided generalized k-fractional integrals of f with respect to the function g on [a, b] are defined, respectively, by in the integral operator () gives the generalized fractional integral operator of f with respect to the function g, the generalized k-fractional integral operator of f on [a, b], the generalized Hadamard kfractional integral operator of f , and the generalized (k, s)-fractional integral operator of f on [a, b], respectively, as follows: The special cases of () and () when k =  and g(t) = t reduce to yield the generalized fractional integral operators () and (), respectively (see [, ]). Further, setting k = , g(t) = t, λ = α, σ () = , and w =  in () and () gives, respectively, the Riemann-Liouville fractional integrals () and (). The Hermite-Hadamard type inequalities in [] have been generalized by Tunç et al. [] who used the generalized k-fractional integral operators () and (), which is recalled in the following theorem. where F(x) is defined as in (). Hermite-Hadamard type inequalities for fractional integral operators We begin by recalling some notations given in []. in such a way that I α a+,g f (x) and I α b-,g f (x) are well defined. We define the following functions: and Also, the following notations will be used throughout this paper: Taking s =  in () and (), respectively, gives The Hermite-Hadamard type inequalities for the generalized k-fractional integrals of a function with respect to another function in Theorem  can be modified as in the following theorem. Theorem  Let k, ρ, λ ∈ R + , w ∈ R +  , and σ (m) ∈ R + (m ∈ N  ) be a bounded sequence. Also, let g : It is easy to see that Multiplying both sides of () by and integrating the resulting inequality on [, ] with respect to s, with the aid of (), (), (), (), (), and (), we obtain Similarly, multiplying both sides of () by and integrating the resulting inequality on [, ] with respect to s, with the aid of (), (), (), (), (), and (), we get From () and (), we have which proves the first inequality in (). 
To prove the second inequality in (), using the convexity of f on [a, b], we obtain By adding these inequalities, we get Multiplying both sides of () by and integrating the resulting inequality on [, ] with respect to s, similar to the proof of the first inequality, we have Similarly, multiplying both sides of () by and integrating the resulting inequality on [, ] with respect to s, we obtain Adding () and (), we have which proves the second inequality in (). Hence this completes the proof. Setting k =  in Theorem , we get a little simpler inequalities asserted by the following corollary. [a, b] → R be an increasing and positive function on [a, b] having a continuous derivative g (x) on (a, b). If f is a convex function on [a, b], then the following Hermite-Hadamard type inequalities for the generalized fractional integrals of f with respect to the function g on [a, b] in () and () with where F(x) is defined as in (). Further, choosing λ = α, σ () =  and w =  in Corollary , we get simpler inequalities in the following corollary, which are a modification of the Hermite-Hadamard inequalities given in []. (a, b). If f is a convex function on [a, b], then the following Hermite-Hadamard type inequalities for the generalized fractional integrals of f with respect to the function g on [a, b] in () and () hold: Corollary  Let α ∈ R + and g : [a, b] → R be an increasing and positive function on [a, b] having a continuous derivative g (x) on where F(x) is defined as in (). It is remarked in passing that choosing g(t) = t in Corollary  yields the same result as in [], Corollary . Main results We begin by presenting an integral formula involving the functions () and (), which is asserted by the following lemma. Proof Using () and changing the variable Integrating () by parts, we have Similarly, using () and changing the variable and integrating the resulting identity by parts, we have Using () to add () and (), we obtain and applying ()-() to (), we obtain the desired identity (). Setting k =  in Lemma , we obtain an identity asserted by the following corollary. where the notations are given as above and Theorem  Let k, ρ, λ ∈ R + , w ∈ R +  , and σ (m) ∈ R + (m ∈ N  ) be a bounded sequence. Also, let g : [a, b] → R be an increasing and positive function on [a, b] having a continuous derivative g (x) on (a, b). Further, let f : [a, b] → R be a differentiable mapping on (a, b) (a < b) such that |f | q is convex for q ≥ . Then where the notations are given above: and Proof Using convexity of |f | q and the power-mean inequality in Lemma , we have where We, therefore, have This completes the proof.
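As a quick numerical illustration of inequalities of this type in their simplest setting (the Riemann-Liouville case obtained above by, presumably, k = 1, g(t) = t, λ = α, σ(0) = 1 and w = 0; the exact values appear to have dropped out of the text), one can check the classical fractional Hermite-Hadamard inequality for a convex function by direct quadrature. This sketch only verifies the well-known special case, not the general theorems of this paper, and the helper names are purely illustrative.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def rl_left(f, a, b, alpha):
    """Riemann-Liouville J^alpha_{a+} f evaluated at b (integrable endpoint singularity)."""
    val, _ = quad(lambda t: (b - t) ** (alpha - 1) * f(t), a, b)
    return val / gamma(alpha)

def rl_right(f, a, b, alpha):
    """Riemann-Liouville J^alpha_{b-} f evaluated at a."""
    val, _ = quad(lambda t: (t - a) ** (alpha - 1) * f(t), a, b)
    return val / gamma(alpha)

f = lambda x: x ** 2              # a convex function
a, b, alpha = 0.0, 2.0, 0.5

middle = gamma(alpha + 1) / (2 * (b - a) ** alpha) * (
    rl_left(f, a, b, alpha) + rl_right(f, a, b, alpha))

# Classical fractional Hermite-Hadamard inequality for convex f:
# f((a+b)/2) <= Gamma(alpha+1)/(2(b-a)^alpha) [J_{a+}f(b) + J_{b-}f(a)] <= (f(a)+f(b))/2
print(f((a + b) / 2) <= middle <= (f(a) + f(b)) / 2)   # expected: True
```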
Art as an Image of the Shah Art, Rhetoric, and Power in Shah Tahmasp’s Letter to Sultan Selim II In 1566, after Sultan Suleiman’s death, Shah Tahmasp (r. 1524-1576) sent condolences and con-gratulations to Sultan Selim II (r. 1566-1574) along with several gifts, including a magnificent Quran and an exquisite illustrated Shahnama copy, noteworthy in Iranian art history. However, the letter accompanying these gifts has often been overlooked, perceived as containing mere courtesies. This letter marked a significant exchange between Safavid kings and Ottoman sultans, with participation from secretaries across Iran. Its authors aimed to portray an idealized king and their notable characteristics, demonstrating that the actions of these rulers (Sultan Suleiman, Sultan Selim, and particularly Shah Tahmasp) aligned with this ideal. Art-related activities were among these characteristics. The authors detailed the Safavid king’s palace, garden, and the artistic gifts to highlight their connection with the king’s ideal image. This article explores the letter as a literary and artistic medium, delving into its intricate rhetoric as a tool for representing royal authority. Additionally, it addresses how the authors’ descriptions of artworks as integral to the king’s image conveyed political meaning, illustrating how art reflected royal power in public and political spheres. The Historical Background of the Letter After the death of Sultan Suleiman the Magnificent in 1566 and the succession of Sultan Selim II (r.1566-1574), Shah Tahmasp (r.1524-1576) sent a letter to the Ottoman court condoling the death of Sultan Suleiman and congratulating the succession of Sultan Selim.During this period, the Safavid and Ottoman empires enjoyed a relatively stable peace after the signature of the Amasya Peace Treaty in 1555, which ended years of bloody conflict between the two empires.Shah Tahmasp's letter was written in response to a message from Sultan Selim.It was a typical diplomatic missive regarding the relations between the courts and expressed adherence to the provisions of the peace treaty and interest in its continuation after the death of Sultan Suleiman. Nevertheless, Shah Tahmasp's letter was unique.According to Safavid historians such as Qadi Ahmad Qumi and Rumlu, the Shah summoned scribes and secretaries from all over Iran to write the letter over a period of eight months (Qumi 477; Rumlu 567).It is the most extended letter written in the history of Safavid-Ottoman relations with its length reaching seventy cubits (about eighty meters) (Qumi 478).It was written in a magnificent style and sent to the Ottoman court with the king's high envoys and numerous precious gifts (Qumi 478). The caravan, consisting of seven hundred men and nineteen thousand beasts, was greeted gloriously upon its arrival in Edirne early in 1568, two years after the death of Sultan Suleiman.Ottoman historians and ambassadors from other countries who were at the Ottoman court recorded this event and the associated celebrations .Several images of the ceremony have been depicted and recorded in Ottoman historical manuscripts such as Selim Khan's Shahnamei by Lokman (Topkapi Palace Library,MS 3595,fols. 53v,54r).The letter was politically successful, maintaining good relations between the two empires, and ensured peace which lasted until the death of Sultan Selim II. 
Amongst the gifts sent along with the letter, two splendid works drew the attention of art historians, namely Mushaf ʿAli 1 and Shah Tahmasp's Shahnama, arguably the most glorious illustrated Shahnama in Iran.However, little attention has been paid to the letter and its content. 2 From an art-historical the perspective, one of the notable aspects of this letter is the description of the gifts that accompanied it, including Shah Tahmasp's Shahnama and some other art productions related to Shah Tahmasp's court workshop.These kinds of descriptions are not very common in historical texts, and art historians have yet to consider this material.Colin Mitchell's research, published primarily in his book The Practice of Politics in Safavid Iran: Power, Religion and Rhetoric (2009), is a rare example of studies on the rhetorical features of Shah Tahmasp's letter and its political significance and meaning.Mitchell also deals with the letter's descriptions of artworks and focuses on the relationship between rhetoric and politics by showing how rhetorical and literary strategies construct and legitimize the image of royal power.However, his study focuses solely on the rhetorical description of these artworks in relation to royal politics and rhetoric and does not take into account the independent results about the works themselves and their relationship with politics. Mitchell concentrates on the political importance of rhetoric, showing how, in this letter, many imaginative metaphors and illustrations serve to legitimize the basis of royal power (128-137), as if Shah Tahmasp, in the beautiful and literary expressions, reminds the letter's readers and his rivals, of the foundations for the legitimacy of his monarchy. On the basis of Mitchell's research, this article centres on the letter's descriptions of artworks in order to understand the political intentions of their creation.In what follows, I examine these descriptions and bring to the fore the political meaning of the king's image representations.Furthermore, I suggest that Shah Tahmasp's letter, as a piece of rhetoric, may also be considered a work of art with a political dimension. The Letter and its Content In appearance, the letter contains long, tedious, and highly exaggerated praises about Sultan Suleiman, Sultan Selim, and Shah Tahmasp himself, which are recited repeatedly and can be found throughout the text.It mainly announces the continuation of good relations with the Ottoman court and the establishment of the peace of Amasya.Among the Safavid sources, the whole letter is recorded only in the Khulasat al-Tawarikh of Qadi Ahmad Munshi Qumi, one of its authors (Qumi 478). 3 It is interesting that in a copy of Qumi's book dated some hundred years later, the scribe did not include the text of this letter and wrote: This letter was contained in the original manuscript and did not add anything to the listeners' ears and intelligence except discomfort and pain.A letter that is seventy cubits long and is approximately fifty thousand verses and not read in a meeting does not bring pleasure to the listeners, and the readers do not benefit from it.Each phrase is repeated a thousand times.Hearing a non-religious man's condolences is nothing but boredom.It was not written for that reason.(Qumi 478) 4 This may be why modern historians and scholars have paid little attention to the letter's content. 
However, resituating the letter in the context of the political relations of the early modern era and the various tools used to shape political relations between governments, brings to the fore its relevance.One may conclude that the main objective of the kings' prolonged and repeated descriptions and praises is to depict the "image" of the king.The authors, with exaggerated credits to three sultans and kings, suggest that they are the supreme and perfect example of the idea of the kingdom.It does not matter if these descriptions do not correspond to reality; the writers' conception of the image they create of an ideal kingdom is what is essential.These constructions are relevant for the understanding of the era's historical events.It is clear that this constructed image also had great political significance in its time as the expression of the foundations of royal legitimacy and of a powerful kingdom.In this article, my concern is not that much the image of the king than the artistic aspects of the image.When the authors describe their ideal image of the 3. Abdulhosein Navaie, who collected all the letters related to the reign of Shah Tahmasp, does not mention the letter cited by Qumi.Instead, another relatively shorter text is quoted after Faridun Beg .Did Qadi Ahmad mention the main letter?The answer is hard to say, but other historical sources highlighted the letter's significant length (see, for example, Rumlu 567).Nonetheless, what we are looking at here is not about the authenticity of the letter quoted by Qadi Ahmad but rather his descriptions of artworks and the significant role he considered for them. 4. All translations are my own unless stated otherwise. kingdom, they are also referring to the components related to the realm of art.Or in other words, they valorise art forms as an aspect of the idea of the ideal kingdom.Behind the artworks created under the sponsorship of the court, one discovers the hidden political meanings of which this letter is evidence.The artistic elements mentioned in the letter are the king's rhetoric, his palace with its paintings, decorations and gardens and Shah Tahmasp's Shahnama. Royal Rhetoric The crucial feature of Shah Tahmasp's letter is its rhetoric; 5 indeed, the main purpose of sending it was to present and offer royal rhetoric.This is why such a wide range of authors came together to write it, and that so much time was spent writing it.Thus, this letter was not a long missive in vain.Rumlu, in a short phrase, describes the letter as "marked by its rhetoric (balaghat)" (567). Rhetoric was associated with activities such as poetry, painting, and music and was not excluded from art as in modern Western art history. 6The most basic rhetoric text in the ancient world Calligraphy is the central theme of the second chapter, which is very similar to the calligraphic treatises of the Safavid period.One may note that Qadi Ahmad Qumi, a master of rhetoric as seen in the Shah Tahmasp letter, is himself the author of one of the rarest artistic treaties in the Safavid period, Gulistan-i Hunar, where he recites the detailed descriptions of masters of painting and calligraphers.There was a close connection between the rhetoricians, calligraphers, and painters in the Safavid royal court.Therefore, rhetoric can be considered as an artistic medium, and sending such letters can be understood as sending a very delicate, precise, and eye-catching work of art. But why then would such a work have been sent to the Ottoman court? 5. 
I mean the classic definition of rhetoric as given in the Oxford dictionary: "The art of effective or persuasive speaking or writing, especially the exploitation of figures of speech and other compositional techniques".(Oxford English Dictionary, 2 nd ed., 1989) 6.Even in the 18 th century, Kant in his classification of "fine arts" included rhetoric and placed it next to poetry (151).But Hegel excluded it in his list of fine arts in the 19 th century (82-90).Subsequently, rhetoric was gradually no longer considered much in the histories of art. In the letter, rhetoric is mentioned as one of the essential components of the image of the king. This letter begins with exaggerated descriptions of the characteristics of the young King, Sultan Selim, and long phrases are used to depict his rhetoric.Such descriptions and praises have no basis in reality, especially concerning the young king, but it was expected that when praising the king, part of this praise should be praising of his rhetoric: Blessed is the wise man [Sultan Selim] who, at the time of his invigorating speech, blossoms the buds of meanings in the garden of speech and makes the garden of speech green and watered it by the rain of the cloud of eloquence and the clear water of rhetoric.From his unique expressions, the breezes of attention and honour and the scents of love and affection blow on the garden of hearts of sincere believers, and the breezes of divine approval and the fruits of infinite conquests reach the souls of the faithful lovers.(Qumi 491) This perfected rhetoric is then attributed to Sultan Selim's letter, which is praised as: A letter with the smell of amber, whose charming face and musk line represent the face of Hoor ʿEin (Houries).Chinese artists envy this letter because of its perfected ornament and beauty.And the soul in the description of it sings that: your literacy drew on the pages of the days the Amber Lines / The case of Ferdowsi's zeal and jealousy of Chinese art.(Qumi 493) The interesting point in this description is the comparison of the art of rhetoric with the art of Ferdowsi as a poet, the skill of Chinese painters and the art of calligraphy: rhetoric should be seen as an artistic medium. One may also note the key phrase in the middle of the description connecting rhetoric and kingship: "The word of the kings is the king of the word (kalam al-muluk, muluk al-kalam)" (492).This phrase, a very common proverb in Iranian and Muslim cultures, shows precisely that one of the necessary characteristics of the king is his rhetoric, and that royal speech should be considered as the king of rhetoric.The reason for this particular emphasis on rhetoric was its place in politics and ethics in the pre-modern world.As stated in Aristotle's Rhetoric and continued throughout the Islamic philosophical tradition such as Ibn Sina's Rhetoric, the most critical qualities of rhetoric are "deliberative, forensic and epideictic" (Aristotle 1358 b; Ibn Sina 55) i.e., its use in legal, judicial, and ceremonial affairs.Rhetoric is a tool for governing society, and it derives its power from its influence due to its use of imagination and aesthetic devices.The king must use the power of the word if he wants to govern the society properly, subdue the people, and put the enemies and rivals in their place. 
From this last point, it is clear that rhetoric is not merely a literary-aesthetic matter.Royal speech and parole gain power and influence because they effectively cover the political content intended by the king in the guise of eloquence and rhetoric.The Shah's rhetoric is an aesthetic formulation of the monarchy's ideology.It is this special position of rhetoric that should be acknowledged in the letter of Shah Tahmasp.Here rhetoric has the vital task of showing "the actual image of the king" as "the ideal image of the king", and this idealization is achieved through imaginary devices that form the "rhetoric of the text", thus consolidating the king's power.Let us now turn to other artistic components in the letter that describe the king's image.While the letter does not pay much attention to the characteristics of the palace's structure, three aspects are highlighted: decorations and wall paintings of the palace, the Saʿadatabad Gardens, and the palace's Square.Looking first at a description of the palace's decoration, we see the description of the paintings on the walls of the palace: God Almighty.What a wonderful building!What a refreshing palace; Its doors and walls, with its paintings and decorations, are more beautiful than the Chinese painting; The master painters painted it, and its paintings are rare in the whole world.They have skilfully unveiled a banquet on the door and wall everywhere in the palace.On its wall flowers are made by plaster and from the clay flowers bloomed.(Qumi 511) ʿAbdi Beyk Shirazi, the famous Safavid poet in service of Shah Tahmasp's court, gives precise descriptions of the wall paintings in his Jannat-i ʿAdn (Gardens of Paradise), which give us a better idea of the palace's paintings.According to ʿAbdi Beyk, they were mainly lyrical scenes that served as part of the royal pleasure-seeking atmosphere and provided a colourful and attractive environment.As we shall see, this function of court paintings finds a parallel in the garden's function. The description of the palace's garden serves as another significant part of Shah Tahmasp's letter presenting several details: What can I say about the garden of Saadat?Saadat turns fortune towards me.It is adorned like the garden of Heaven; in it are fruits of every kind you may wish.This odiferous garden is like paradise; its water is from the streams of Heaven.(Qumi 519; Mitchell's translation) Further in the letter, the Saʿadatabad garden is compared with the gardens of Paradise: "Saʿadatabad which is equal to the rose-garden of Iram and the garden of Paradise and similar to heavenly gardens in the hereafter."(Qumi 520) Shah Tahmasp paid particular attention to the garden in Saʿadatabad, and mainly to the construction of the palace.A reason for the importance bestowed upon this garden by the king is that he ordered ʿAbdi Beyk to describe it in a complete and literary way.Jannat-i ʿAdn is full of detailed descriptions of Saʿadatabad, and particularly its gardens (see Losensky 1-29). 
Interestingly, the letter not only describes the garden, its vastness and the variety of its trees and plants, but also the royal pleasures occurring in these places: In those paradisiacal fields, where the sun and moon meet, flasks of silver and goblets of gold are filled with liquor mixed with cloves and cinnamon in commemoration of: "And they will be given [in Heaven] a cup of wine mixed with zanjabil."[76; Al-Ensan (the Man): 17].The moon-faced cup-bearers held; gilded porcelain decanters.The decanter was happy with its fortune; because the hands of the rosy-cheeked ones were on its neck.And the goblet's mouth has stayed open out of happiness; because it has kissed the lips of the coquettish ones.from every direction, the youthful ones who are like the servants of heaven-who have girded themselves with the belt of submission-carry porcelain dishes full of fruits [in accordance with 56: 32-33], "and fruit in abundance whose season is not limited, nor its supply forbidden."(Qumi 521; Mitchell's translation) As it is clear from this passage, the pattern evokes the Islamic texts' descriptions of Paradise, where all kinds of foods and drinks and all sorts of sexual and non-sexual pleasures that are usually forbidden and prohibited in Islamic law are found in the garden.As Mitchell highlights: "Openly hailed as a second paradise, Saʿadatabad is lauded for much more than its seraphic setting, and we find its denizens cast in a distinctly eschatological light. […] The soteriological implications of Tahmasep imperial garden indeed border on self-indulgence."(Mitchell 132) All these details found within the text about the Saʿadatabad garden leave no doubt that the authors of the letter have made its description an essential part of the image of the King and present the splendor of his palace in direct competition with the palaces of Ottoman kings.What is the importance of these gardens for the image of the King?The answer can be found in Jannat-i ʿAdn: "Shah, who is the shadow of God (zil Allah) in all things; His garden is also an example of Paradise."(ʿAbdi Beyk 157) In this verse, it is stated that the king is the shadow of God on earth, and since Paradise, with its strange descriptions in the Quran and Islamic texts as "the garden of God", the king, like God, has a garden similar to him.The attribution of Islamic gardens to paradise has been frequently mentioned in contemporary research on Islamic art, and is typically accompanied by a mystical and spiritual meaning.Nevertheless, these Safavid gardens have the opposite meaning, signifying worldly power and earthly pleasures.In Islamic culture, "earthly paradise" has a reprehensible meaning and is attributed to people like Shaddad, who were enemies of God and were annihilated by him. 8However, the Safavid kings "rightly" own these heavenly gardens with all their pleasures because they are the shadow of God on earth.This concept of the king, which originates in the Persian idea of the Just Ruler, is different from the dominant Islamic image of the Caliphate (Babaei 11).The concept of kingship was developed in Iran before the Safavids but peaked under their rule.Thus, Shah's garden as an earthly paradise is a particular part of the image of the Persian king, which serves as a demonstration of his celestial power. 
Considering Isfahan's royal gardens, Babaie specifies "the architectural accommodation of feasting [as having] represented a markedly idiosyncratic practice of absolute rule in the early modern age."(Babaie 1) Shah Tahmasp's letter confirms this claim.This political content of architecture was also effective: the vivid descriptions of travellers and foreign ambassadors of the royal gardens and the pleasures they saw in it offer us insight into how they were influenced and impressed by the Shah's image .These descriptions are very similar to the descriptions of Shah Tahmasp's letter of Saʿadatabad garden and show the gardens of Safavid kings as an extraordinary and dreamy place and a sign of the Shah's glory, wealth, and power.The third part of the description of Saʿadatabad is related to its square, which we will discuss in the next section, in relation to the royal gifts which accompanied the letter itself. The Shahnama and Other Gifts As mentioned above, precious gifts were sent to the court of Sultan Selim II along with the letter. The most remarkable of these gifts was Shah Tahmasp's Shahnama though magnificent pieces of jewellery were also offered.According to Ottoman historians and foreign ambassadors who attended the gift-giving ceremony, the gifts looked very dazzling (Arcak 66).Naturally, sending such gifts was typical between royal courts as a part of political diplomacy.However, this does not necessarily justify Shah Tahmasp's offering of gifts of such quality and rarity.As a masterpiece of Persian art, one would imagine that the royal Shahnama would have been kept in the Safavid royal treasury.We know that Shah Tahmasp had a taste in arts and even had some training in painting (Rumlu 488).The most significant masters of painting, calligraphers and illuminators were in charge of this masterpiece.The king undoubtedly recognized its high artistic, historic, and mercantile value.There are varying theories as to why the Shah offered the royal Shahnama: Qazwini proposes that it was the king's repentance (231), as a main motivation for the gift.Others, such as Robert Hillenbrand, suggest that a "change in priorities of Shah" may have been caused by the good relationship between the two courts, while Arcak argues that the "Safavid Shah intended to proclaim his superiority as patron of the arts."(71).Mitchell also suggests that "the presenta-8.A personage associated with the legendary town of Iram, to whom is attributed its foundation (Webb). tion of this unsurpassed Shahnama reinforced clearly the profile of the Safavid ruler as a cultural patron par excellence to the young ascending sultan" (129).It is therefore interesting to take a closer look at what the letter says about the royal Shahnama as a gift. 
In general, the letter's authors use a narrative technique to describe the royal gifts.Their descriptions come right after "the arrival of the good news" Sultan Selim's succession to the Safavid royal court, where joy extended beyond the palace and into the "four sides of Saadat Square and the new bazaar that was built" (Qumi, 514).It is in this bazaar that the shopkeepers and the artisans displayed their excellent goods; the letter continues by describing what was offered in these shops, which corresponded to the gifts that were sent with the letter: Each group of artisans decorated their shops separately.The jewellers hung their jewels beautifully, and every kind of jewel was found in large numbers, including rubies and diamonds (yaqut, laʿal, dorr).Very expensive swords and crowns, each worth as much as a country's tax, were decorated with various jewels.(Qumi 515) In this indirect method of narration, beautiful and praiseworthy rhetoric is used again.On one hand, the letter enumerates the gifts, describes them in detail and expresses their value, while on the other, it does not mention that they are gifts, seemingly to hide the boastful character of the description.Other points can be deduced from this type of expression: unlike the previous two cases, these gifts are not defined as properties of the king or the royal court but as objects that are in "Saadat Square" and in its bazaar's all over the country. This description attests to the comfort and enjoyment of the palace and garden.It shows that outside the palace, under the rule of the Shah, all craftsmen and artisans (har senf mardum-i saniʿ u sharif) in Iran were so prosperous that such unique gifts were found in every market.There is a reference to Shah Tahmasp's position as a supporter and promoter of arts and crafts: "To shorten the speech: the supreme justice of Shah Adel (Just King) watered the field of hope of friends" (Qumi 519). These depictions of objects are related to their beauty, value, and price.Their beauty has been compared with natural and celestial objects: "Pleiades (Suraya) is ashamed of the jewellery necklaces, and the sea is ashamed of the beautiful diamonds.The beautiful golden patterns of the swords are more beautiful than the moon and the sun.Beautiful sweethearts (Butan, i.e., the gifts) flirt in stores and are at war with each other, but not a real war.Everyone has adorned themselves with gold, and the heavens are jealous of them" (Qumi 515).However, the most detailed descriptions are dedicated to the Shahnama: When the atelier of bookbinding was prepared/a great rise arose from the city.The atelier is like a cypress in the garden/it is a new rose from a rose garden. From that atelier-that good-natured cypress/the rose garden of paradise is ashamed what an atelier! 
which was the envy of the abode of faeries/From the image [of the atelier], reason was stupefied This youth [i.e., book] sitting in the atelier/who is [such] an image that reason is perplexed by it The face of this youth [i.e., book] is so unique/that Bihzad went into a trance by its image when the dust of the down [on his lip] turns black [i.e., when his script is written]/no one will care anymore about the calligraphy of Yaqüt in every ornament and beauty, in every way and manner/piled up a hundred sections [of the book] From the poems of the well-known Firdausi/who had done justice to the word in the age.A Shah Namah was proffered/ and his atelier was beautified by this gift it was gilded and illuminated most gloriously/it was bound with a hundred ornaments.Its script was written by the master all over/its writing is illuminated like the light of the eye.From the work of the pupils who have trained with Zarin-qalam/each page had a design sketched on it one painting was done by Bihzad/But he departed and left behind regret (Qumi 516; Mitchell's translation). This excerpt refers to how the Shahnama was prepared, and in particular, to Behzad's key role.The latter is described as the one who "painted for kings" (az bahr-i shahan nemudi raqam) i.e., he is a "painter of kings" (Qumi 516). Most of these descriptions are related to the beauty of this work, which surpasses all other works of art before it.Therefore, the significant status of this work was fully confirmed and acknowledged.Although there is no specific reference to the Shah in the various metaphors and similes that describe this work, one should recall that the title is significant Shahnama (Book of Kings) and inherently illustrates imperial power.The Safavid kings wanted to show that the Shahnama was one of the sources of their legitimacy and the letter's authors distinguished this book as the highest among all of the other books and gifts.In a letter that is supposed to depict the power of the Shah and its elements, the description of Shahnama also finds a suitable place: the "Book of the Shah."Furthermore, throughout this letter, the content of Shahnama forms one of the primary sources of imagery that have been used to describe the royal greatness and authority or the origin of the Shah's legitimacy: the critical figures of Shahnama (Kei khusraw, Jamshid, Fereydun, Darab and Rustam) are used frequently in order to describe Suleiman and Selim.9 From the point of view of the letter's authors, who naturally reflected the court's view, this Shahnama had great value and importance.The craftsmanship of the Shahnama showed the skill of the royal workshop, and the book's high value and content formed the basis of the legitimacy of the Safavid kings.The detailed description of the Shahnama's value and its illustration in Shah Tahmasp's letter, which depict the image of the king, all attest to the political dimension of the Shahnama's production.Thus, one of the primary purposes of sending the royal Shahnama was to remind the young sultan of the ancient foundations of the legitimacy of the Safavid kings and its continuation. 9. Mitchell considers the prominence of the Shahnama discourse significant in this work and sees it as an alternative to the Shiite discourse, which is practically absent in this letter (134). 
Conclusion In current research on Islamic art, little value has been given to historical texts and documents by considering them as works of art.In this article, I have examined one of these texts, a letter, the study of which reveals the multiple dimensions of Persian art during the Safavid period.As I have shown, the main goal of Shah Tahmasp's letter to Sultan Selim II was to depict the idea of the Shah as understood in the Safavid court.In this depiction, the Shah has a combination of attributes that date back to the image of the Shah in ancient Iran.The king is the shadow of God on earth, and his vast power and politics are manifested in the actions and works that emanate from him. The letter depicts various aspects of royal power and implicitly states how each of these aspects reinforces and expands the idea of the Shah.Some of these works and actions are related to the realm of royal art and architecture.The content and form of these artworks are described as royal power.Examining this letter reveals the complex interrelations between power, thought and art behind the court's constructions, for which little textual evidence is available.Furthermore, one should consider rhetoric along with painting, calligraphy, and architecture and understand this letter as a result of the art of rhetoric as that pursues the same political goals.In the royal court, patrons of art viewed these works as manifestations of their power, and saw, in their creation and exchange, forms of empowerment and the expansion of the basis of their legitimacy. is Aristotle's Rhetoric; which clearly correlates to Aristotle's treatise on poetics, and mutual references are made in the two books.The same relationship appears in Islamic tradition and in the texts of Muslim philosophers such as Al-Farabi and Ibn Sina (Ibn Sina).Outside the Islamic philosophical tradition, rhetoric and poetry were always considered jointly.If poetry was accepted as an art and placed next to the painting and other mediums, so should rhetoric.Thus, rhetoric is not only close to poetry but also is related to calligraphy.It suffices to mention the 15 th -century Dastur al-Katib fi taiʿn al-maratib (The Guide for Writers to Understand Orders, Shams Monshi 1390), a critical book and manual on the art of the literal and rhetorical writing (Insha) for royal secretaries. The King's Palace and GardenShah Tahmasp's letter to Sultan Selim II begins with many praises of the young Sultan, and continues by recalling his father, Sultan Suleiman, giving a lengthy description of his last war, in the middle of which the elder Sultan passed away.By narrating the reaction of Shah Tahmasp and the Safavid court to the news of the Sultan's death, the authors bring the narration and attention to Shah Tahmasp in an interesting way.The beginning of the third part of the letter is dedicated to Tahmasp, his court, and the description of the gifts that were sent, as a narrative strategy to draw the image of Shah Tahmasp.What is particularly remarkable in this part, is the number of descriptions devoted to Saʿadatabad, the palace and garden of Shah Tahmasp in Qazwin.Saʿadatabad (literary the place of happiness) was a small new town founded in Qazwin by the king.Reviewing the letter's descriptions and other Safavid sources reveals the crucial importance of Saʿadatabad for Shah Tahmasp.The city and its extensive urban plan in Qazwin were undoubtedly a source of inspiration for Shah Abbas I. 
and his new capital Isfahan. 7 Saʿadatabad's descriptions commence with the feasts given in Qazwin when the news of Sultan Selim's succession reached the Safavid capital: "It spread a celebration of happiness on the porch that was founded by happiness and looked like Iram Palace, which had not been seen there since the construction of the heavens." (Qumi 510) The letter's authors equate the royal palace and its gardens with Paradise, and the descriptions of the happiness of Saʿadatabad's inhabitants and of the celebrations that took place there continue over several pages. The description of the king's palace, his garden and the prosperous life within them is part of the king's image and reflects and symbolizes his power, property, peace and the security that his government exudes. The letter's readers are thus obliged to consider these as part of the purpose of constructing Saʿadatabad itself, since the palace and its garden provided such a position for Shah Tahmasp.
Towards a Class-centred Approach to EFL Teaching in the Palestinian Context The teaching article attempts to highlight the significance of introducing a class-centred approach (henceforth CCA) to L2 teaching in the Palestinian context. Additionally, it aims to pinpoint that experienced teachers can make their teaching strategies more motivating and more communicative, through intertwining their learners' pedagogical and social demands. It is centered on the EFL teachers' everyday behaviours in a language classroom. It tries to precisely give an explanation and a definition of the concept of (CCA) and its implication in classroom language learning In addition, the article investigates the theoretical framework that underlie the CCA to teaching. In order to provide an overview of the present teaching preferences in L2 classroom conducted by EFL teachers at home, a questionnaire has been distributed to a sample population of EFL teachers from a Palestinian university. Meanwhile, the article tries to justify the need and the appropriateness of CCA to language teaching, with special focus on the Palestinian context. Alongside discussing and analyzing the questionnaire results, the article also makes use of major findings reached by many studies in this respect. Ultimately, it concludes discussion by confirming that language teachers' success in meeting and intertwining the learners' socio-pedagogic needs help EFL teachers cultivate and create a non-threatening classroom environment in which learners interact readily in the target language. Introduction English as a foreign language (EFL) has long been the centre of attention of educationalists, linguists, EFL teachers and learners.This can be noticed in the large number of studies geared towards research in L2, EFL approaches and difficulties encountered by learners (Hamdan 1994).Like too many EFL learners, Palestinian students face serious problems in English learning at almost every level all through their schooling.(IIEP, 1997).Low proficiency in English among Palestinian learners is evident in their inability to communicate in English following many years of learning.(For example, ELC-AAUJ Placement Exam results [2005][2006][2007][2008][2009] In his classroom-centred approach, Allwright (1986) who argues that learning a language is 'hardly' an easy task, reviews some of teachers' classroom habits which influence EFL learning experience.Senior (2002;2006a;2006b) focuses on the importance of identifying a framework for understanding the basis of the teachers' everyday behaviour in classroom, on which she believes that it underlies the theoretical assumptions language teachers base their everyday classroom practices and decisions on. Language teachers agree that no two of their classes are identical-even when the two are given the same course.In some classes teachers approach teaching pleasantly, while in others, things are quite the opposite.Hadfield (1997) agrees that language teachers' main concern is related to the atmosphere of the classroom and components of the class group.This article is going to address the concept of class-centered approach, its teaching and learning implications, its realization and materialization in our classrooms at home situation, and its appropriateness in mono-cultural class settings as observed in the Palestinian context. 
On the one hand, it is the concern of this article to review the language teachers' willingness to create and maintain a learning community within the classroom.On the other hand, it highlights EFL teachers' teaching preferences inside language classroom at home , and aims to pinpoint that experienced teachers can make their teaching strategies more motivating, constructive and communicative.The presentation stresses that our EFL teachers need to work on a real compromise between the learners' pedagogical and social demands.Once language teachers succeed in intertwining these needs, they can foster and cultivate a class-centred approach (CCA) in our teaching career, and, eventually, they will be able to create a non-threatening classroom environment in which learners interact readily in the target language. One may argue that the socio-pedagogic consideration is likely to work well in multi-national classes where learners come from different cultural background; still, learners from the same cultural setting -as the case in the Palestinian context -have great opportunities to improve their language if teachers approach teaching under a framework that successfully balances between social and pedagogic priorities. It is true that this trend in L2 is not new; nevertheless, it is an essential assumption that our teachers can exploit effectively and efficiently in their ongoing efforts they exert to help learners overcome difficulties in L2 learning.Rivers (1992) argue that successful teachers are those who understand their learners' different needs. The article overviews a group of Palestinian EFL teachers' everyday behaviour preferences in their classrooms.For this purpose, a questionnaire has been distributed to EFL teachers at the Arab American University-Jenin (AAUJ) as a sample of the target population (see Appendix 1).The results show the present practices in classroom and the wanted ones as priorities expressed by the teachers. Finally, the article concludes the discussion with suggesting certain aspects of classroom language teaching-among too many, and ways through which language teachers teach and manage their classes at the same time. Objectives The article attempts to highlight the significance of introducing a class-centered approach to L2 teaching in the Palestinian context.However, the study is not necessarily confined to certain EFL target group in terms of their qualifications, classes they teach or experience.As the study is centered on the everyday behaviour in a language classroom, it precisely attempts to:  delineate the concept of (CCA) and investigate its theoretical and practical implications  review EFL teaching preferences and practices in L2 classroom.  examine the appropriateness of the approach at home situation.  look into EFL teachers' need to identify their learners' socio-pedagogical demands. In their attempt to create a non-threatening classroom teaching environment, EFL teachers try hectically to experiment, and eventually, employ methods that 'best' fulfill the objectives of EFL leaning at home situation.With the introduction of a class-centred approach into their classes, EFL teachers are likely to 'shift' from their traditional role as knowledgeable resource for learners into other roles that not less significant than this, and even go beyond it.They can help in creating a learning community-classroom learners-in which the members wok together in a cohesive way that each finds a role in this learning community. 
The Concept of Class-centred Approach: Theoretical Framework Allwright (1986) argues that language learning is 'hardly' an easy task.In his classroom-centred research, Allwright (1983;1986) "tries to understand the processes that happen in classroom and why and how they take place that way." (1983: 191). The critical issue here is not the teaching methods that language teachers employ in class, neither it is the classroom management in terms of discipline or physical setting.It is an issue that goes beyond that, a situation in which we try to give responses to questions that concern both language teachers and learners: what behaviours and practices do teachers perform in class?Why do teachers find some classes easier to teach than others?Why do teachers believe that no two classes are identical while they have the same teacher with the same course syllabus?What makes language learning more difficult?What experience, other than pedagogical, makes learning more effective?How can experienced teachers deal with a non-unified individuals as a unifed unit or a learning community that share more than they differ? In her answer to some of these queries, Senior (1997;2002;2006 a;2006 b;2008) claims that the introduction of a (CCA) into our classes may respond partly to the issue of effective learning.This approach does not simply mean creating a non-threatening class eniveronment; rather it calls for intertwinig learners' learning wants with their socail demands.More precisely, Senior (2002) believes "….that teachers are sensitive to the social needs of their class groups, and that their pedagogically and socially-oriented behaviours are closely intertwined."(2002: 399) There is no doubt that some EFL teachers are class-centred ones consciously or subconsciously.They pay attention to their learners' needs, wants and demands that go too far beyond the learning and language tasks in their classes.These teachers are the ones who know their stuff, who can also develop a relationship with their learners individually and collectively.These teachers are best described by Finch, 2002 as "agents of social change" in the classroom.How we can put this role in a context of humanistic goal is the critical question.Rivers (1992) argues that, "Language teachers must study the language learners in their classes-their ages, their background, their aspirations, their interest, their goals in language learning, their aptitude for language acquisition in a formal setting."(Rivers, 1997: 376) . 
Is the (CCA) a proven way to effective learning and efficient teaching?How can classroom-centred research provide a theoretical framework for language teachers?To what extent can intertwining the learners' social needs with their learning experience be of any help in the teaching and learning process?These questions presume that the assumption that efficient teaching and effective learning can result from the introduction of a (CCA) to our classes.Classroom research has shown that pedagogical experience is only one part of the 'complicated' learning process.In summing up her major findings of classroom observations she conducted for different EFL teachers' performance in classes, Senior (2002) believes that there is a correlation between quality of class groups & quality of teaching/ learning; she concludes that: Teachers have demonstrated through their everyday classroom behaviour that language teaching is a highly complex business that not only involves teaching effectively, but also attending to the social well-being of their class groups.(2002: 402) It is true that gathering a group of learners with a teacher in a classroom is going to be complex and full of experiences of the members.(Wright: 2006;Ashour: 2008) Our awareness of this complexity of the individualslearners -may require us to look deeply into these individuals' needs: be social, pedagogical, cognitive or psychological, once we agreed to take teaching as our career. The ultimate goal of teaching EFL in our context as expressed in the syllabus outlines (see ELC-AAUJ Advanced English syllabus 2009 ) is to enable learners to learn, or, eventually, 'master' the language four skills and sub-skills and to enable them to become skilled and trained on dealing with the language components.With this goal ahead, EFL teachers are exerting endless efforts to fulfill this aim through employing different teaching approaches and other cognitive-functional means that would enhance the achievement of this goal.(Tomasello: 1992). One major effort teachers may consider is paying close attention to their learners' social needs alongside the pedagogical experience, as the latter is existing in any teaching agenda after all, why not including the former if it motivates and creates a free-stress, friendly and non-threatening context for learning in class.Also, teachers who are class-centred do focus on their learners, how they feel and on how effectively they learn; an assumption that best forms a realization of the learning-centred or learner-centred teaching method , and ultimately, the communicative approach.(Littlewood, 1981). 
It is very significant to clarify what connotation social needs of an EFL learner refers to.In the first place, we have to agree that learners are members of a group in a classroom known as learning community.Therefore, it is EFL teachers' concern, as Hadfield (1992) claims, to think of the atmosphere in the classroom and the chemistry of the group than problems of how to teach the language.This assumption calls for dealing with the class as a whole-group unit which requires the participation of each member to the uniqueness, activation techniques and the 'success' of the group.In her feedback on this paper, Senior assumes that "Developing a sense of unity within the class as a whole is the overall goal of the class-centred approach" (Senior's email commenting on this paper, on June 2 nd , 2011).Tian et al ( 2004) understand that the cohesiveness of the learning group in class means furnishing for the idea of accepting the other, open-mindedness, safe L2 practice, less discipline burden on teachers, individual's self-esteem and success of both the individual learner and the group.It is clear then that these values can be best achieved once challenging and convenient language tasks are given to the target learners of the group. For example, in teaching paragraph writing, a teacher can brainstorm learners first to agree on a topic for writing.This can be carried out by consensus and voting for a topic.Then groups are formed with a spokesperson and a group reporter for each group.To ensure that learning is taking place, the teacher can sit with each group, discuss, listen and share ideas.The teacher's mindful discussion with the groups is one way of establishing a kind of relationship with the class or the learning community there.Each group presents their product on a board in class, where the different groups share and compare their writings.Evidently, the group work is a main feature of the communicative teaching of English. Therefore, what makes a 'good' teacher for the learners is dependent on the teacher-learner interpersonal relations that are geared towards fulfilling the pedagogical wants.Sowden (2007) believes that: "Success as a teacher does not depend on the approach or method that you follow so much as on your integrity as a person and the relationships that you are able to develop in the classroom.The ability to build and maintain human relationships in this way is central to effective teaching".(2007: 308) Peterson (2005) agrees that a relaxing classroom leaning environment takes place when a learner feels that he/she belongs to a group with a caring teacher, and when feeling accepted by other learners. 
Finch (2002) and Senior (2006 b) agree that CCA to EFL teaching approach has certain principles that can be viewed as general guidelines for classroom behaviours.They add that teachers are required to enjoy many characteristics that make both learning and teaching effective.Accordingly, teachers are asked to value their learners equally, to ensure there is a variety in activities which are supposed to be effective both socially and pedagogically, to encourage self-confidence without focusing on competence or performance, and reflect a student-centered view of language learning.With these principles in mind, we can furnish for a motivating learning setting that is considered a cornerstone in EFL learning.There is no doubt that some EFL learners do better than others because they are better motivated.Gardner (1991) and Littlewood (1986) argue that such learners will find it difficult to learn a foreign language in a classroom if they have neither instrumental nor integrative motivation. The way EFL teachers can link their learners' social needs with their learning experiences is an easy task by itself.It is true that experienced teachers are class-centred, as Senior (2006a) mentioned above; still approaching the concept and employing it in class is a critical story. Teachers' Preferences in Language Classroom As this article intends to highlight EFL teachers' attitude and teaching preferences inside language classroom, pinpoint the significance of intertwining their learners' pedagogical and social demands and to urge our EFL teachers to introduce and practice (CCA) in their teaching career, the study target population will be all EFL teachers in the Palestinian universities of the West Bank.The article took a sample population of EFL teachers and teacher assistants at the Arab American University-Jenin (AAUJ).It used a questionnaire-based tool to get first-hand information from EFL teachers on their language classroom behavior preferences and practices.The questionnaire is comprised of three sections that include background information, 13 items and one ranking question.(see Appendix 1). The questionnaire has been distributed to EFL teachers and teacher assistants who teach English courses and language lab classes at the English Language Center (ELC) at the (AAUJ) in the academic year 2010/2011.The second part of the questionnaire with its 13 items supposedly cover specific aspects of the topic in focus.In their answer to item 6 of the questionnaire (see Table 1), more than half of the teachers (53 %) believes that meeting their learners' learning demands is a priority.With a percentage of (27 %) who has no opinion on the same item, one can draw a conclusion that most teachers see pedagogical needs as a main concern.From experience, it is believed that our EFL teachers understand that the ultimate goal of any EFL syllabus is, roughly, to enable learners to acquire language skills and sub-skills, and, therefore, the 'contract' between teachers and learners is leaning/teaching proceedings.Another related consideration of preferring pedagogy is the pressure of time span limit and administrative requirements to abide by a given syllabus outline.Even though these considerations may be justified, again learners need a teacher who, as Senior (2006) puts it, is not only " an expert in their field as a language teacher; but also they want somebody who can actually develop a relationship with them both individually and with the class as a whole." (2006: 400). 
Another expression of EFL teachers' focus on pedagogy is seen in the questionnaire item 11 in which (73 %) feels that some of their classes are pedagogically frustrating.Moreover, about 79 % of the teachers ranked their classes pedagogical setting as the most important, and the remaining percentage ranked it second. A major surprising response is the teachers' uncertainty about the effect of meeting the learner's social needs in helping them overcome teaching obstacles as expressed in item 13, where 53 % has no opinion.This also explains the teachers' opinion on the difficulty to create a homelike learning setting in class which positively reflects on the learners' approaching the class activities in a relaxing atmosphere.More than one third of the teachers agree that it is difficult for them to create a friendly environment in class (item 10), while another third expresses no opinion in this respect.This difficulty may be associated with teaching experience as we find that the majority of teachers who find it easy to set up a friendly class setting is among those with more than 10 years of teaching experience.Also, these findings explain the teachers' ranking of social setting where 60 % of the respondents placed it in the third or fourth place.(see Table 2). In their answer to the importance of humour in class (item 1), the majority of the teachers-80 %-believes that employing humour in their classes is a vital aspect in their teaching techniques.It looks that teachers do not consider the sense of humour as part of the learners' social needs, rather they may think of it as an occasional incident for motivating purposes, or as an on-the-spot energizer. When more than 86 % of the teachers (item 5) thinks that their learners' cultural backgrounds are non-identical, they agree that they deal with dissimilar individuals-learners-who are culturally different.This leads us to the conclusion that the concept of cultural background is a vague and ambiguous notion for the teachers who could have found as an interchangeable term with learners' prior knowledge or their socioeconomic status.However, this issue is not the concern of this paper.What concerns us here is that most, if not all, our learners share the same cultural background in terms of ethnic group, traditions, norms and socially-oriented behaviour.They are definitely different in their leaning styles, classroom behaviour, and other individual aspects that influence their learning. 
Another surprising preference of the teachers' practices in classroom is that 40 % of teachers and another 13 % with no opinion see their classes as individual learners, and not as one unit (item7).This implies that the consideration of the needs of the learning community still ranked second after the individual learner's needs.Apparently, this implication may look promising in terms of approaching learner-centred teaching method which ultimately focuses on the learner (Littlewood, 1983).However, the teachers' previous preference is likely to be viewed as a misleading, unless it is explicitly understood that the individuals are members with different learning abilities who form a cohesive leaning community where each contributes, shares and works in beehive-like environment.(Senior, 2006 a) The percentage of 73 % is significantly high when teachers expressed their ability to identify their learners' learning needs as seen in (item 4).Again, the teachers must be talking about pedagogical wants, and might be fully dependent on their tuition, observations and teaching experience.If this is the case, the question of identifying the learners' learning needs is subjectively-oriented process that lacks scientific research and objective identification and analysis of these needs. Class-centered in the Palestinian context: Does it work? In one of my Advanced classes in the summer session in 2010, one of the best students expressed a significant point of view about the English course he was attending.He said, "When I do language tasks in this class, I must say that I feel so comfortable as if I'm talking to my father, brothers and sisters at the dining table."Even though this may not be the case with most EFL learners at home situation, this learner with his sincere expression has triggered and ignited an idea that geared me to try to look for an acceptable and precise educational interpretation for a better teaching classroom practices that may go beyond the methods of teaching I have learnt, practiced and tried in my classes. This assumption leads us to recall what Senior (1997) said about teachers' position in the cohesive group, that they "are both an integral part of their class groups, and in a sense set apart-just as a parent who bonds with a child is both a blood relation and an authority figure".(1997: 4) In other occasions, other learners would voice other negative views about their roles in a class where they may have mixed feelings of tense, anxiety and pressure.In a podcast interview on the concept of class-centred, Senior (2006 b) stresses that experienced teachers always link pedagogical and social classroom behaviours in a way that both influence and are influenced by the atmosphere of the class.She also states that a major principle in (CCA) requires teachers to "develop rapport with individuals and with the class as a whole."(The podcast interview 2006).Cehan (2002) believes that teachers try to gear learners towards interaction through establishing a creative discourse by providing continuous classroom social roles tasks that get all learners involved in activities no matter how small the role may be. 
Palestinian teachers, in particular EFL ones, may argue that this trend we are calling for-socio-pedagogic consideration-works well in classes with cultural diversity or multicultural background which is likely to be different from the Palestinian context.Nevertheless, learners with similar cultural setting, as in our case, have great opportunities to approach language and learn its skills once a framework that successfully embraces their social and pedagogic priorities.Rivers (1992) argues that successful teachers are those who understand their learners' different needs.Colibaba (2009) criticizes the teacher who "does not make pedagogical choices which provide cohesion to the class and thereby stimulate the perception of a positive learning environment," because she thinks that the learners in such a class will never reach a satisfying level of communication in the foreign language.(2009: 184) As Palestinian EFL teachers and their learners are monoculture, and nearly all our learners themselves share the same cultural background, one can assume that establishing friendly ties between the learners and their teachers in classroom is attainable.While it is apparently easier for learners to set up good social relationship within their classmates through language pair-tasks or group ones, teachers are the best people to set as examples for their learners, to show respect to the members of their learning groups, to be good listeners to them, to stand at the same distance from each learner, and basically to treat them in a humanistic way.As this behaviour may cultivate a mutual respect between learners and their teacher, and, eventually develop a more motivating atmosphere, a positive pedagogical experience is going to be fostered. Conclusion As the article first introduces the concept of (CCA) to EFL teaching, it should be stated that teachers, particularly the experienced, employ it in classroom in a way or another.However, in order to meet learners' social and pedagogical needs, teachers are urged to objectively do classroom-based research that can give answers to their day-to-day practices in class.The research should target classroom teaching and learning management in an attempt to give explanations for the most effective and efficient practices -other than teaching approaches -that teachers may practice in their classes.For example, teachers can identify and then analyze their learners' learning needs through using a questionnaire that gives first-hand information about their learners' pedagogical background, their past learning experience, their preferred ways of approaching a course.In their analysis of learners' learning needs, teachers, on the one hand, are expected to plan, outline and organize their courses; on the other hand, they get clear ideas that gear them towards appropriate methods of approaching their teaching process.It is through the identification and analysis of the learners' needs, teachers will have to consider the learner's socio-pedagogical demands in their teaching plans.The social demands discussed earlier are best realized in classroom everyday behaviour through teachers' employment of a humanistic teaching context where they cultivate values that foster learner's self-esteem, self-confidence, respect, participatory trend in the learning community (class), sense of belonging, democracy, and equality-oriented practices.However, the present EFL teachers' preferences and classroom behaviours analyzed in the paper indicate that theoretical assumptions that underlie 
the approach in focus, i.e. CCA to teaching, still need to be spotlighted among EFL teachers at home situation.

Further Studies

Although the class-centred approach is not a new trend in EFL teaching, much more research is needed in this respect. Both qualitative and quantitative research will provide a more objective framework that can form criteria for teachers to benefit from when approaching learners' socio-pedagogical demands. How to apply the approach practically in class needs to be established in the form of guidelines for EFL teachers. These principles can be drawn from longitudinal research that includes class observations, interviews of both teachers and learners, and a review of theories on teaching as a humanistic process.

The Arab American University. English Language Centre Records of English Placement Test, 2005-2010.
The International Institute for Educational Planning (IIEP)-UNESCO-IIEP forum on repetition, 1997.

Table 1. Percentage of respondents for each item.
Table 2. (Questionnaire section 3) Percentage of ranked items in terms of importance.

Appendix 1. Questionnaire (respondents tick the item that applies to them and rank the listed concepts in order of importance when teaching: no. 1 is the most important, 2 is less important, etc.).
Appendix 2. Questionnaire results: 8 teachers.
Appendix 3. Questionnaire results: 7 teacher assistants.
Estimating the Cheeger constant using machine learning

In this paper, we use machine learning to show that the Cheeger constant of a connected regular graph has a predominantly linear dependence on the largest two eigenvalues of the graph spectrum. We also show that a deep neural network trained on graphs of smaller sizes can be used as an effective estimator of the Cheeger constant of larger graphs.

Introduction

Let G = (V, E) be a finite, simple, connected and undirected k-regular graph with |G| = n. It is a well known fact from basic algebraic graph theory [1,3] that the eigenvalues λ_i(G), 0 ≤ i ≤ n − 1, of the adjacency matrix A(G) of G are real and can be ordered as

k = λ_0(G) ≥ λ_1(G) ≥ λ_2(G) ≥ ... ≥ λ_{n−1}(G) ≥ −k.

For each F ⊂ V, let ∂F := {{u, v} ∈ E(G) : u ∈ F, v ∈ V \ F}. Then the number

h(G) := min { |∂F| / |F| : F ⊂ V, 0 < |F| ≤ n/2 }

is called the Cheeger constant (or the isoperimetric constant, or the edge expansion constant) of the graph G. The Cheeger constant is a measure of the connectivity of the graph G. Families of regular graphs with Cheeger constants bounded below by a positive constant, also known as expander families, have been widely studied (see [4,7,8,10] and the references therein) due to their applications to communication networks.

The computation of h(G) for an arbitrary finite graph is a well-known NP-hard problem [2,6,9]. However, for a k-regular graph G of size n, we use machine learning to answer the following natural questions. (a) Is the dependence of h(G) on λ_0(G) = k and λ_1(G) stronger than what the known bounds indicate? (b) Is this dependence predominantly linear or non-linear? (c) Is there a strong dependence of h(G) on λ_i(G), for 2 ≤ i ≤ n − 1? (d) Can these dependencies be used to estimate h(G) for large n with greater efficiency?

We begin by providing data which shows that, in general, these known bounds for h(G) deviate significantly from its actual value. By considering random regular graphs of sizes 12 through 30, we apply machine learning via deep neural networks and linear regression to make the following statistical observations: (i) h(G) has a predominantly linear dependence on λ_0(G) and λ_1(G). Moreover, as |G| increases, this dependence appears to approach the linear function (1/2)λ_0(G) − (1/3)λ_1(G). This linearity is more pronounced when the spectral gap is large. (ii) Its dependence on λ_i(G), for 2 ≤ i ≤ n − 1, is insignificant. (iii) We demonstrate that a deep neural network trained on graphs of smaller sizes can be used as an effective estimator for Cheeger constants of larger graphs, where computation times using classical algorithms are large.

The paper is organized as follows. In Section 2, we analyze whether some well known bounds can be used as effective estimators for h(G). In Section 3, we determine whether the dependence of h(G) on λ_0(G) and λ_1(G) is predominantly linear. In Section 4, we use machine learning to examine whether h(G) has a nonlinear dependence on λ_0(G) and λ_1(G), and also study its relation to λ_i(G), for 2 ≤ i ≤ n − 1. Finally, in Section 5, we explore whether deep neural networks trained on graphs of smaller sizes can be used as viable estimators for Cheeger constants of larger graphs.
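As a concrete illustration of these definitions (this is our own sketch, not code from the paper), the following Python snippet computes the adjacency spectrum of a small random regular graph and its Cheeger constant by brute force, checking every subset F with 0 < |F| ≤ n/2. It assumes the networkx and numpy packages are available; the exhaustive search is exactly the step that becomes infeasible for large n.

```python
# Minimal sketch: adjacency spectrum and brute-force Cheeger constant of a small
# k-regular graph. The exhaustive subset search is only practical for small n.
from itertools import combinations

import networkx as nx
import numpy as np


def adjacency_spectrum_sorted(G):
    """Return the adjacency eigenvalues ordered as lambda_0 >= lambda_1 >= ..."""
    eigvals = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return np.sort(eigvals)[::-1]


def cheeger_constant(G):
    """Brute-force h(G) = min over F with 0 < |F| <= n/2 of |boundary(F)| / |F|."""
    nodes = list(G.nodes())
    n = len(nodes)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for F in combinations(nodes, size):
            F = set(F)
            # an edge lies in the boundary iff exactly one endpoint is in F
            boundary = sum(1 for u, v in G.edges() if (u in F) != (v in F))
            best = min(best, boundary / size)
    return best


if __name__ == "__main__":
    G = nx.random_regular_graph(d=4, n=12, seed=0)   # one random 4-regular graph on 12 vertices
    spec = adjacency_spectrum_sorted(G)
    lam0, lam1 = spec[0], spec[1]                    # lambda_0 = k, lambda_1 < k for connected G
    h = cheeger_constant(G)
    print(f"lambda_0 = {lam0:.3f}, lambda_1 = {lam1:.3f}, h(G) = {h:.3f}")
```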
Numerical analysis of known bounds

We consider a dataset of random regular graphs of sizes 12 through 30 for our analysis. This dataset was generated by using a Python package that implements the algorithm described in [11]. The number of graphs considered for n = 12 was limited by the total number of available graphs, while for n > 20, the limitation came from the long computation time for h(G). In all other cases, we have considered at least 20,000 random graphs of varying regularity. The number of graphs considered for analysis for each n is shown in the second column of Table 1.

The Cheeger constant is related to the spectral gap k − λ_1(G) of a k-regular graph G by the following inequality (see [5, Proposition 1.84]):

(k − λ_1(G))/2 ≤ h(G) ≤ √(2k(k − λ_1(G))).  (2.1)

Mohar [9] proved two further upper bounds on h(G), denoted (2.2) and (2.3), the first of which applies when n is even; for G = K_1, K_2, or K_3 (where K_i denotes the complete graph on i vertices), he computed h(G) directly. For each graph G, we compute the lower bound on h(G) as given by Eqn. (2.1), and an upper bound, which is the lowest of the upper bounds appearing in (2.1)-(2.3). For each of these estimators, we calculate its fractional deviation ∆h from the true value of h(G) as

∆h := |h_est. − h(G)| / h(G),

where h_est. refers to the estimator of h(G). For the analysis in this section, h_est. corresponds to either the upper bound or the lower bound. The mean values of ∆h_est. (which we denote by ∆h_lower and ∆h_upper respectively) for each n are shown in Table 1 below.

Table 1. Graph data considered in the analysis of this paper and the average deviation in bounds: The second column shows the number of graphs considered in the analysis in this paper for each n. For n ≤ 20, at least 20,000 graphs were considered for each n, with the exception of n = 12, where the total number of available graphs is less than 20,000. For n > 21, we tried to accumulate at least about 1,000 graphs, with the exceptions of n = 29 and n = 30.

We note that, on average, the lower bound deviates from the true value of h(G) by about 20%, while the upper bound deviates by about 60%. This deviation reduces marginally for large values of n. The table indicates that the bounds considered are not efficient estimators for h(G). In the following section, we consider linear regression to construct a better estimator for h(G).

Linear regression analysis and prediction

In this section, we want to determine whether the relationship between h(G) and λ_0(G) and λ_1(G) is predominantly linear. To begin with, we analyze whether h(G) can be estimated reasonably well by a linear function of the largest m eigenvalues, for 1 ≤ m ≤ 4. For each m, we calculate the mean deviation ∆h, where we use the fitted linear regression function as the estimator. The results for this analysis are presented in Fig. 1 below for various values of n. A log scale is used on the y-axis to stretch the scale. There is no considerable improvement in the Cheeger estimate from linear regression beyond λ_0(G) and λ_1(G).

It is evident from the graph that adding the third and fourth eigenvalues to the analysis does not significantly reduce ∆h. This shows that a linear function of just the two largest eigenvalues estimates h(G) fairly accurately. Interestingly, the average deviation ∆h reduces gradually with increasing n, coming down to about 2% for n ≈ 30. This observation confirms that the relationship between the two largest eigenvalues and h(G) is mostly linear. The regression coefficient of λ_0(G) appears to converge to 1/2 as n increases, while the coefficient of λ_1(G) appears to converge to −1/3. The coefficients a and b of the model aλ_0(G) + bλ_1(G) + c are plotted in Fig. 2 below for each n, along with lines corresponding to 1/2 and −1/3 for reference. This suggests a universality in the linear relationship, which is almost independent of n. This observation motivates us to test the linear model on λ_0(G) and λ_1(G) for the prediction of h(G) for larger n, where its computation is challenging.
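The regression step just described can be sketched as follows. This is an illustration rather than the authors' pipeline: it reuses the adjacency_spectrum_sorted and cheeger_constant helpers from the previous sketch, assumes scikit-learn is installed, and the degrees and sample sizes are arbitrary demonstration choices.

```python
# Sketch of the linear-regression experiment: fit h(G) ~ a*lambda_0 + b*lambda_1 + c
# on random regular graphs of one size n, then report the fitted coefficients and
# the mean fractional deviation of the fit. Reuses adjacency_spectrum_sorted()
# and cheeger_constant() from the previous sketch.
import networkx as nx
import numpy as np
from sklearn.linear_model import LinearRegression


def eigen_cheeger_dataset(n, degrees, graphs_per_degree, seed=0):
    """Return (X, y): X holds (lambda_0, lambda_1) per graph, y holds h(G)."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for k in degrees:
        for _ in range(graphs_per_degree):
            G = nx.random_regular_graph(k, n, seed=int(rng.integers(1 << 30)))
            spec = adjacency_spectrum_sorted(G)
            X.append(spec[:2])
            y.append(cheeger_constant(G))
    return np.array(X), np.array(y)


X, y = eigen_cheeger_dataset(n=12, degrees=[3, 4, 5, 6], graphs_per_degree=50)
reg = LinearRegression().fit(X, y)
mean_dev = np.mean(np.abs(reg.predict(X) - y) / y)   # mean fractional deviation
print("a (coefficient of lambda_0):", reg.coef_[0])
print("b (coefficient of lambda_1):", reg.coef_[1])
print(f"mean fractional deviation: {100 * mean_dev:.1f}%")
```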
We train the linear regression model on the available data for n = 12, 13, 16, 17 and then use it to predict h(G) for other n. Using the trained linear model as the estimator, we show the mean deviation ∆h in Fig. 3 below.

Figure 3. Predicting with linear regression: Linear models trained on h(G) data for n = 12, 13, 16, 17 are used to predict Cheeger constants for graphs of other n. The average fractional deviation of the model from the true value of h(G) is shown for each n. Linear models trained on even (resp. odd) values of n work better for the prediction of h(G) for even (resp. odd) n. The left panel shows prediction for even n, while the right panel shows prediction for odd n.

We make the following observations. (1) In general, for large n, linear regression with λ_0(G) and λ_1(G) appears to be a reasonable estimator for h(G). (2) The prediction is slightly more accurate when regression on odd n (resp. even n) is used to predict h(G) for larger values of odd n (resp. even n). (3) The average deviation ∆h is typically 4-5% for odd-odd and even-even predictions over the entire range of n considered. (4) It also appears that the n = 16 and n = 17 linear models are slightly better than the n = 12 and n = 13 models for even-sized and odd-sized graphs respectively. This indicates that, for training a predictive model, we should opt for the largest possible even and odd n for which Cheeger constant data is available.

Estimation of Cheeger constant using machine learning

In this section, we study the data on h(G) using machine learning methods with deep neural networks, mainly to answer the following two questions. (1) Does h(G) have a significant non-linear dependence on λ_0(G) and λ_1(G)? (2) Does h(G) have any significant dependence on the other eigenvalues? We expect that machine learning techniques will be able to identify nonlinear dependencies that were not visible through linear regression.

We randomly take 40% of our dataset for 12 ≤ n ≤ 30 and train the deep neural network shown in Fig. 4 below (see footnote 1) using the ADAM optimizer. The remaining 60% of the dataset is used for validation. The trained neural net essentially provides an approximate non-linear map between the input eigenvalues and the expected Cheeger constant. The validation ensures that there is no memorization done by the neural net and that it is truly capturing features of the data. Fig. 5 below shows training and validation histograms of ∆h for n = 12, for both the case of training done with the largest two eigenvalues and that done with the largest four.

1. We have observed that other similar choices of neural net produce results similar to those presented in this section, as is the case with any machine learning problem. Several results in this paper can also be produced using a shallower network. Our choice of neural network works for all the results presented here.

We make the following observations: (1) λ_0(G) and λ_1(G) have a very strong correlation with h(G). Furthermore, there appears to be a small non-linear dependence on λ_0(G) and λ_1(G), which accounts for about a 2.5% improvement over the linear regression. The average deviation ∆h is about 2.5% in both the training and validation data sets for the deep neural net (DNN) model, while it was about 5% for the linear model. (2) We do not observe any significant improvement in the estimation of h(G) when considering the largest four eigenvalues over λ_0(G) and λ_1(G). In both cases ∆h ≈ 2.5%, with small fluctuations in each, and the validation data shows similar results. The mean and standard deviation of ∆h for these cases are plotted in Fig. 6 below for both training and validation, reaffirming the observations made above. The dependence of h(G) on λ_0(G) and λ_1(G) is close to linear when the spectral gap is large, while it exhibits non-linear dependence when the spectral gap is small.
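A minimal version of this training setup is sketched below. The exact architecture of Fig. 4 is not reproduced in the text, so the layer sizes here are assumptions; the 40/60 train/validation split, the Adam optimizer and the mean fractional deviation are as described above. The arrays X and y stand for the eigenvalue pairs and Cheeger constants of the dataset (for example, as built in the earlier sketch).

```python
# Sketch of the DNN regression described in this section (layer sizes assumed,
# not the architecture of Fig. 4). Maps (lambda_0, lambda_1) -> h(G), trained
# with Adam on a random 40% of the data and validated on the remaining 60%.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# X: top-two adjacency eigenvalues per graph, y: Cheeger constants
X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.4, random_state=0)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                       # predicted h(G)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(X_train, y_train, epochs=500, batch_size=64,
          validation_data=(X_val, y_val), verbose=0)

for name, Xs, ys in [("train", X_train, y_train), ("validation", X_val, y_val)]:
    pred = model.predict(Xs, verbose=0).ravel()
    dev = np.mean(np.abs(pred - ys) / ys)           # mean fractional deviation
    print(f"{name}: mean fractional deviation = {100 * dev:.1f}%")
```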
6 below for both training and validation, reaffirming the observations made above. large, while it exhibits non-linear dependence when the spectral gap is small. Predicting h(G) using Machine Learning The most interesting application of this work is to predict Cheeger constant for large regular graphs, where it is computationally inefficient to calculate Cheeger constant but computationally efficient to calculate the spectrum. To achieve this, we train a neural net for small graphs where it is possible to calculate Cheeger constant in reasonable computation time. We then use this trained net to predict Cheeger constant for the large graph. We moderately train the deep neural network shown in the previous section for 50 epochs 2 on λ 0 (G) and λ 1 (G) of the spectrum and Cheeger constant data for graphs of sizes 12 and 16 for even-sized graphs and sizes 13 and 17 for odd-sized graphs. Again for training here we have taken only 40% of the available data. Each training results in a new model, so we train the network for each n a few times then take the trained model that yields the least validation error on the same n. We use the trained nets to predict h(G) for graphs of other sizes which we compare to its true value and obtain ∆h. The average deviation ∆h with respect to n is shown in Fig. 7 below, where we also show prediction done by linear regression method of Sec. 3 for contrast. Here are our observations 2 The training was stopped after 50 epochs as compared to about 500 epochs (optimization stopping automatically when loss stops improving) done in the previous section. This ensures that the network learns the significance of top two eigenvalues and not the information about the n. Maximal training to about 500 epochs optimizes the network to estimate Cheeger constant for a given n, but is bad for predicting Cheeger constant of other n. (1) We note that n = 16 works better than n = 12 for predicting Cheeger constants for higher even n, and similarly n = 17 works better than n = 13 for predicting Cheeger constant for higher odd n. (2) Although the plots are not shown here, but we have verified that to predict for even n training on even n works better than training on odd n, and vice versa. This is consistent with observations of Sec. 3. (3) We also note that deep neural net based model provides better prediction compared to linear regression model with a consistent improvement as n increases. Particularly, the models trained on n = 16 and n = 17 data predict Cheeger constants for the graphs of sizes 29 and 30 respectively, to within 3% accuracy on an average. (4) While we observe low average ∆h the standard deviation in ∆h is also low at about 4% throughout the range of the n, thus guaranteeing reliability on predictions. This is shown in Fig. 8 below. Mean deviation stays between 2% and 4% for all higher n while standard deviation is about 4%. Conclusion In this paper, we have studied the relevance of the spectrum of a graph G in estimating h(G). We find that h(G) is strongly dependent on λ 0 (G) and λ 1 (G), and this correlation is largely linear with a small non-linear component, as confirmed by the machine learning analysis. We have also demonstrated that by using a deep neural network that has been moderately trained about the relationship between h(G) and λ 0 (G), λ 1 (G), we can effectively estimate the Cheeger constant of a larger graph with high accuracy, statistically. 
We believe that, used optimally, this approach could provide a powerful and efficient tool for studying the connectivity of large regular graphs.
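As a concrete illustration of the workflow summarized above (moderately training a small network on λ_0(G) and λ_1(G) for one graph size and then evaluating the fractional deviation), the following PyTorch sketch shows one plausible setup. The layer sizes, learning rate, and the synthetic placeholder data are assumptions on our part; the actual architecture of Fig. 4 and the graph data are not reproduced here. Only the roughly 50-epoch budget, the ADAM optimizer, and the two-eigenvalue input follow the text.

```python
import torch
import torch.nn as nn

class CheegerNet(nn.Module):
    """Small fully connected net mapping (lambda_0, lambda_1) -> h(G).
    The exact architecture of Fig. 4 is not reproduced; this is an assumed stand-in."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(model, x, y, epochs=50, lr=1e-3):
    """Moderate training with the ADAM optimizer (about 50 epochs, as in the text)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

def mean_fractional_deviation(model, x, y):
    with torch.no_grad():
        return torch.mean(torch.abs(model(x) - y) / y).item()

# Placeholder tensors; in practice x holds (lambda_0, lambda_1) for graphs of one
# small size (say n = 16), y the corresponding Cheeger constants, and the trained
# net is then evaluated on the data for a larger n.
lam0 = torch.rand(500) * 5.0 + 3.0             # largest eigenvalue
lam1 = lam0 - (torch.rand(500) * 2.5 + 0.5)    # second-largest eigenvalue
x = torch.stack([lam0, lam1], dim=1)
y = 0.5 * lam0 - lam1 / 3.0 + 0.05 * torch.rand(500)

model = train(CheegerNet(), x, y)
print(mean_fractional_deviation(model, x, y))
```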
2020-05-13T01:00:51.269Z
2020-05-12T00:00:00.000
{ "year": 2020, "sha1": "ecc61adfb67edcc235138a3036893baaf82d947f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ecc61adfb67edcc235138a3036893baaf82d947f", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
240068725
pes2o/s2orc
v3-fos-license
Study on the gas migration laws of non-pillar mining with gob-side entry retaining in high gas outburst coal seam Mining fractures are the channel of gas migration, and it is important to figure out the gas migration laws for gas control. Combined with the actual geological conditions of Shaqu No. 2 coal mine, a numerical model of no-pillar mining with automatically retained entry was built to simulate the deformation and fracture of overburden rock and the evolution laws of cracks above the goaf during mining. Subsequently, CFD software is used to theoretically stimulate the gas concentration distribution laws of non-pillar mining with the automatically retained entry in the stope under non-drainage and high-level borehole drainage conditions. The laws of gas concentration in the stope were compared and analyzed under the condition of the Y-type ventilation of non-pillar mining and U-type ventilation of coal pillar mining. The results show that Y-type ventilation of non-pillar mining reduces the gas concentration in the upper corner of the working face and the goaf, and effectively solves the gas accumulation problem in the upper corner of the working face in the mode of U-type ventilation of coal pillar mining. On this basis, the high-level borehole drainage technology is adopted to effectively reduce the gas concentration in the goaf. The research has certain guiding significance for the gas management of non-pillar mining with gob-side entry automatically retained. Introduction At the beginning of the 21st century, with the continuous reduction of coal resources, the problems of low recovery rate and a serious waste of resources caused by coal pillar mining have become increasingly prominent. Leaving coal pillars between the working faces will not only cause a great waste of resources, but also easily cause geological disasters such as rock bursts, open fires, and gas accumulation [1][2]. Gas disasters are a common geological disaster, and gas explosion not only causes a large number of casualties, but also destroys shaft 2 facilities, interrupts production, and easily causes secondary disasters such as coal dust explosions and mine fires [3][4]. To improve the recovery rate of coal resources and solve the dynamic disasters of coal mines, Chinese scholars proposed the technology of non-pillar mining with automatically retained entry [5][6][7][8]. Before the coal seam of the working face is mined, the directional blasting technology is used to pre-crack and cut the roof at the edge of the mining roadway. After the working face is mined, under the action of the mine pressure, the roof will automatically form a roadside along the pre-cracked and cut seam, thereby retaining the original roadway. Realize pillarless mining [9][10][11]. Because coal pillar mining has always been the main method in coal mine production in China, the research on the law of gas migration in working faces and goaves is also based on this mining system. 
Li Ying-ming conducted a numerical simulation and comparative study on the gas distribution law in the goaf under the conditions of drainage with and without buried pipes in U-type ventilation working face [12]; Wu Xiao-min used FLUENT software to compare the U-shaped and U-shaped Numerical simulation of the flow field in the goaf under U+L type ventilation mode was carried out, and the gas concentration distribution in the goaf and the distribution of spontaneous combustion danger zone in the goaf under the two ventilation modes were compared and analyzed [13]; Gao Gui-xiang analyzed and studied the U+L type ventilation method is the method of controlling the gas accumulation in the upper corner, which solves the problem of easy accumulation of gas in the upper corner [14]; Zhang Dong analyzed and studied the composition, characteristics, key technologies and treatment effects of the "double U" ventilation method, and practice shows that the "double U" type ventilation method has strong gas processing capacity, which can eliminate the problem of gas accumulation in the upper corner and ensure safe production of the working face [15]; Zhu Jian-fang comparatively analyzed the advantages and disadvantages of the "U+L" and "W" ventilation methods, showing that the "W" ventilation method has wider applicability [16]. After recent years of testing and promotion, a large number of research results have been achieved in the technology of non-pillar mining with automatically retained entry, but the main research is concentrated on key technical parameter design, overlying rock movement and underground pressure law, roadway stability control [17][18][19], and there is little research on the law of gas migration in the application of this technology. The clarification of the gas migration law in the goaf is of great significance to the upper corner gas control and fire prevention in the goaf [20][21]. Therefore, this paper uses the CFD software FLUENT, based on the "Y" type ventilation of self-made roadway without coal pillar, and combined with the mining situation of No.3402 working face in Shaqu No.2 coal mine, comprehensively studies the gas migration law in goaf during the mining process. "Y" type ventilation system for non-pillar mining with automatically retained entry The test working face is 3402 working face of Huajin Coking Coal Shaqu No. 2 Mine, and the coal seam is 3# coal. The thickness of the coal seam is 0.82~0.95m, the average thickness is 0.87m, and the average inclination angle of the coal seam is 4°. The working face buried depth is 380~450m, the strike length of the working face is 839.4m, and the dip length is 200m, and the designed roadway length is 600m. The test roadway is 3402 belt roadway, the roadway section is rectangular, the roadway clear height is 2.5m, and the clear width is 4m. The immediate roof of the coal seam is fine-grained sandstone, the main roof is 3 medium-grained sandstone, and the immediate bottom is medium-grained sandstone. The column diagram of drilling holes near the working face is shown in Figure 1. The principle of the non-pillar mining with automatically retained entry technology is to use constant resistance and large deformation anchor cables to strengthen and support the roof of the reserved roadway after the reserved roadway working face system is formed. 
Before the mining of the working face, the bilateral cumulative tensile blasting technology is used to pre-split blast the roof of the roadway along the direction of the gob along the goaf side to form a slit structure surface, cutting off the stress transmission path of the roof of the goaf and the roadway roof. After the working face is mined, the roof of the goaf gradually collapses along the cutting face under the influence of self-weight and mine pressure. When the working face is ventilated, the left roadway is used as the return airway, and the air return system is composed of the air inlet of the double crossheading of the working face, forming a Y-shaped ventilation system with "two inlets and one return" at the working face [22]. The Y-type ventilation system of 3402 working face in Shaqu No.2 coal mine is shown in Fig.2. Establishment of the numerical model Based on considering the actual engineering conditions and simplified calculations, aiming at the production geological conditions of the belt roadway in the 3402 working face of Shaqu No. 2 Mine, the FLAC3D numerical simulation software was used to establish a numerical model. The Mohr-Coulomb model was selected for this model. The size of the model is: length×width×height=300m×300m×60m. The size of simulated roadway excavation is: 4m×2.5m, and the size of working face excavation is: 200m×200m×2.5m. The roadway is buried at a depth of 400m and is driven along the roof of the coal seam. The left and right boundaries of the model limit the x-direction displacement, the front and back boundaries limit the y-direction displacement, and apply the horizontal compressive stress varying with the depth; the lower boundary limits the z-direction displacement; the upper boundary applies the uniform self-weight stress. The specific rock mechanical parameters of the model are shown in Table 1. Analysis of the failure of the overlying strata in goaf To study the deformation and failure of the overlying strata and the evolution of cracks at different distances of the working face, this simulation is divided into 5 excavations, each excavation 40m. In the middle of the goaf, slices are made along the direction of the coal seam, and the deformation and failure state of the overlying rock strata affected by mining is shown in Figure 3. When the excavation size is 40m, due to the short excavation distance, the overlying rock layer is not damaged in a large area, and only the rock layer about 10m above the roof undergoes plastic failure under the influence of mining. With the continuous advancement of the working face, the roof of the goaf above the working face will further collapse and deform, 5 and the deformation and destruction will increase, especially when the excavation size is 200m, the overlying rock formation is most severely damaged, the longitudinal extension of plastic zone is "saddle" shape and tends to be stable, and the depth of plastic failure of roof is about 27m, among them, the rock formation within 5m above the coal seam has tensile and shear failure, which can be considered as a caving zone; the overlying rock within 5m~27m above the coal seam enters the tensile failure zone, which can be considered as a fracture zone [23] . Seepage control equation in goaf Generally, the goaf is regarded as a porous medium composed of the coal-rock mixture, and the gas and air mixture in the entire calculation domain is regarded as an incompressible ideal gas. 
The flow field in the goaf can be treated with the porous-medium model, and the calculation follows the conservation equations of mass, momentum, and energy [24].

(1) Mass conservation equation of the flow field in the goaf:

∂ρ/∂t + ∇·(ρu) = S_m

where t is the time, s; ρ is the gas density; u is the velocity vector; and S_m is the mass source term.

(2) Momentum conservation equation of the flow field in the goaf:

∂(ρu)/∂t + ∇·(ρuu) = −∇p + ∇·τ + F + S_i

where p is the static pressure; τ is the viscous stress tensor; F is the gravity and external volume force, N; and S_i is the momentum source term contributed by the porous-medium resistance, N.

(3) Energy conservation equation of the flow field in the goaf:

∂(ρE)/∂t + ∇·[u(ρE + p)] = ∇·(k∇T) + S_T

where E is the total energy; T is the temperature; k is the heat-transfer coefficient of the fluid; c_p is the specific heat capacity; and S_T is the viscous dissipation term.

Porosity and permeability equation of goaf

The porosity of the caving zone in the goaf is distributed in a "shovel shape"; that is, the porosity in the shallow part and along the sides of the two roadways is large, while that in the middle and the interior is small. The spatial distribution of the porosity follows an empirical relation taken from the literature, and the permeability of the broken rock mass is then computed from the porosity, where D_m is the average particle size of the porous-media framework, m, and n is the porosity. According to the field test data and CFD simulation experience, the porosity and permeability distributions in the different areas of the caving zone and fracture zone in the porous-media model of the goaf are determined through repeated simulation tests.

Model establishment and boundary conditions

To study the law of gas concentration migration in the working face and the goaf under the technical conditions of non-pillar mining with the automatically retained entry, and to guide gas control in the goaf, the model was established based on the simulation results of the "upper two zones" and the geological conditions of the 3402 working face in Shaqu No. 2 coal mine, and FLUENT software was used for the numerical simulation. The basic parameters of the working face and the solution boundary conditions are shown in Table 2. The porosity, oxygen consumption rate, viscous resistance coefficient, and inertial resistance coefficient in the goaf are set by compiling a UDF, which brings the model closer to the actual conditions of the goaf.

Distribution law of gas in the stope

When the goaf of the 3402 working face is not drained by boreholes, the gas concentration distribution is as shown in Figure 5. In the Y-type ventilation mode, the gas concentration in the upper corner of the working face is small. Extending toward the deep part of the goaf, the gas concentration gradually increases, so that a large amount of high-concentration gas collects in the deep part of the goaf. This is because the rock layers in the deep part of the goaf are gradually compacted, the porosity there is much lower than in the shallow part, and air cannot penetrate, forming a "gas warehouse". When the roof collapses, the large amount of gas accumulated in the goaf will quickly gush out from the fracture zone and other positions toward the working face under the impact of the incoming pressure. Under the action of the airflow from the two roadways, this gas is swept toward the retained entry, avoiding the accumulation of gas at the working face and the upper corner that would otherwise cause over-limit concentrations.

Flow field distribution law in the goaf along the strike

Slice contour plots at X = 40 m, X = 70 m, and X = 100 m are taken to analyze the distribution law of gas concentration along X (that is, along the direction of the working face), as shown in Figure 6.
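The porosity and permeability assignment described above is implemented in the study through a compiled UDF; the exact functional forms are not reproduced in the excerpt, so the following Python sketch only illustrates the general idea of a "shovel-shaped" porosity field (high near the working face and the two roadways, low in the compacted interior) converted to permeability with a Blake-Kozeny-type relation. The decay constants, the mean particle size, and the function names are illustrative assumptions, not parameters from the 3402 working face model.

```python
import numpy as np

def goaf_porosity(x, y, width=200.0, n_min=0.08, n_max=0.35, decay=0.03):
    """Illustrative 'shovel-shaped' porosity field for the goaf.

    x: distance behind the working face (m); y: distance from the intake-side
    roadway (m). Porosity is highest near the face and the two roadway sides
    and decays toward the compacted middle and deep region.
    """
    d_face = x                              # distance from the working face
    d_side = np.minimum(y, width - y)       # distance from the nearer roadway
    relax = np.exp(-decay * d_face) + np.exp(-decay * d_side)
    relax = np.clip(relax, 0.0, 1.0)
    return n_min + (n_max - n_min) * relax

def blake_kozeny_permeability(n, d_m=0.05):
    """Permeability (m^2) from porosity n via a Blake-Kozeny-type relation,
    with d_m the mean particle size of the broken rock (assumed value, m)."""
    return d_m ** 2 * n ** 3 / (150.0 * (1.0 - n) ** 2)

# Sample the fields on a coarse grid, e.g. to tabulate values for a UDF.
xs, ys = np.meshgrid(np.linspace(0.0, 260.0, 27),
                     np.linspace(0.0, 200.0, 21), indexing="ij")
n_field = goaf_porosity(xs, ys)
k_field = blake_kozeny_permeability(n_field)
print(n_field.min(), n_field.max(), k_field.min(), k_field.max())
```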
It can be seen from Figure 6 that at the same height, the greater the proportion of the "red" area toward the depths of the goaf, the greater the gas concentration toward the depths of the goaf. Also, because the gas density is 0.554 times the air density [27], the gas in the goaf is accumulated in the direction of the roof under the action of buoyancy, resulting in a higher gas concentration along the coal roof. Distribution law of flow field in inclined goaf Draw the reference line at X=60m and X=90m in the cloud chart to generate the concentration curve of gas with the trend, as shown in Figure 7. The trend of the curve shows that along the working face, from the side of the air inlet lane to the side of the air return lane, the gas concentration gradually increases, and the gas concentration growth trend gradually becomes gentle. The gas concentration near the inlet lane is smaller. The law of gas distribution in the stope of high-position borehole drainage of non-pillar mining with automatically retained entry Based on the numerical simulation modeling of the undrained Y-shaped ventilation gas, keeping the basic physical parameters unchanged, and combining with the actual situation on the site, it is determined that the simulated high-level drilling holes in the goaf are a group of 5 groups, and the end of the drilling arranged in the fracture zone, the horizontal distance controlled by the end of each group of boreholes is 75m, 125m, 175m, 210m, 260m from the cut-hole, and the borehole diameter is 0.153m. Among them, the drainage flow of each group of boreholes is set according to the actual situation of the 3402 working face, which is achieved by setting a flow outlet in FLUENT. The drainage flow of each group of boreholes is set in fluent, and the pumping simulation results are shown in Figure 8. It can be seen from Figure 8 that when high-level drilling is used to drain gas in the goaf, the drainage effect is better. The gas concentration on the side of the goaf near the non-pillar mining with automatically retained entry is significantly reduced, and the area of low-concentration gas is increased compared with the time when the gas is not extracted so that the gas concentration in the goaf area with air leakage into the non-pillar mining with automatically retained entry is also significantly reduced, and the gas concentration of the non-pillar mining with automatically retained entry and the return airway is reduced. The drainage of the high-level borehole also intercepts part of the gas flow from the goaf to the working face, which makes the gas concentration near the working face gradually decrease. Besides, from the overall situation of the three-dimensional gas concentration distribution map of the goaf, the high concentration gas area inside the goaf is also significantly reduced. Therefore, the drainage method has a better effect on gas drainage control in the goaf and effectively solves the problems of excessive gas in the return airway and high concentration of gas in the deep part of the goaf. Figure 9 shows the distribution of gas concentration in the goaf under different mining and ventilation methods. 
It can be seen that, in the traditional coal-pillar mining face with a U-type ventilation system, most of the gas in the goaf gushes to the upper corner of the working face and into the return airway along with the air leakage, resulting in gas accumulation in the upper corner of the working face and gas concentrations in the return airway exceeding the limit, which poses great safety risks to production at the working face. Under the Y-type ventilation method of non-pillar mining with the automatically retained entry, however, only a small part of the gas in the goaf flows to the upper corner of the working face, so the gas concentration in the upper corner is extremely low; this fundamentally solves the problems of gas accumulation in the upper corner and gas over-limit in the return airway that occur under the U-type ventilation mode. Compared with Y-type ventilation, the gas concentration growth curve under U-type ventilation is "steeper", and this is most obvious on the return-airway side: the gas concentration near the upper corner of the working face reaches more than 40%. Under Y-type ventilation, in contrast, the gas concentration is still below 20% even 40 m from the upper corner of the working face, which again shows that Y-type ventilation can effectively solve the problem of high gas concentration at the upper corner of the working face.

Discussions

Figure 10. Variation of gas concentration trend in the goaf under different mining and ventilation methods: (a) U-type ventilation in coal pillar mining; (b) Y-type ventilation in non-pillar mining with automatically retained entry.

5. Conclusions

1) FLAC3D numerical software was used to simulate the deformation, failure, and fracture evolution of the overlying strata during mining of non-pillar mining with the automatically retained entry at working face 3402 of Shaqu No. 2 mine. The caving zone height of the overlying strata was determined to be 5 m and the fracture zone height to be 22 m, providing a foundation for the study of the gas migration law.

2) FLUENT software was used to model the Y-type ventilation working face of non-pillar mining with the automatically retained entry, and the distribution of gas concentration at the working face and in the goaf was obtained through simulation: the gas concentration in the upper corner of the working face is small and increases gradually toward the deep part of the goaf. In the vertical direction, the gas concentration is highest near the roof, owing to the influx of pressure-relief gas from the overlying coal seam and the buoyancy-driven accumulation of gas toward the roof of the goaf.

3) Gas drainage by high-level boreholes under the condition of Y-type ventilation in non-pillar mining with the automatically retained entry was simulated. The results show that the combination of the Y-type ventilation system with high-level borehole drainage of pressure-relief gas is of great significance for reducing the gas volume fraction in the goaf and ensuring safe production.
Compared with the traditional U-type ventilation used in coal-pillar mining, when the Y-type ventilation of non-pillar mining with the automatically retained entry is adopted, most of the gas in the goaf is carried by the air leaking from the working face into the goaf and is discharged through the retained entry, and only a very small part of the gas flows to the upper corner of the working face. The gas concentration in the upper corner therefore remains extremely low, which fundamentally solves the problems of gas accumulation in the upper corner of the working face and excessive gas in the return airway that arise under the U-type ventilation mode.
2021-10-28T20:09:25.296Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "607645426f5f3df91aefac739b8d2d934e2cefb6", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/861/5/052058", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "607645426f5f3df91aefac739b8d2d934e2cefb6", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics" ] }
261115426
pes2o/s2orc
v3-fos-license
The epidemiological profile of Aids in Brazil in the period 2010-2020 In Brazil, the AIDS epidemic still represents a relevant public health problem. The aim of this study is to analyze the epidemiological profile of individuals affected by AIDS in Brazil from 2010 to 2020. An ecological, descriptive, retrospective study was carried out quantitatively. Data were obtained from consulting the DATASUS databases. A total of 437,327 cases were observed, with a downward trend in their incidence during the analyzed period. Results demonstrated that AIDS, over the years, has been modifying its epidemiological profile, due to the processes of heterosexualization, juvenilization, pauperization, interiorization, and feminization of the pandemic. In the analyzed period, there was a prevalence profile of young adults, male, heterosexual, aged between 30 and 34 years old, brown, with complete secondary education, and living in the southeast. There has been an important change in the prevalence of the disease in recent years and the evaluation of the current risk behavior shows that heterosexuals have the highest number of diagnoses. It remarks the importance of emphasizing the fact that everyone is susceptible to contracting HIV infection if there is no prevention. INTRODUÇÃO Acquired immunodeficiency syndrome (AIDS) is caused by the human immunodeficiency virus (HIV) and targets the TCD4 lymphocytes of the immune system (NEVES et al., 2015). HIV destroys the body's natural defense mechanisms, allowing several diseases to settle in, establishing AIDS (MOURA & FARIA, 2017). In Brazil, the AIDS epidemic still represents a relevant public health problem that heterogeneously affects different groups of the population and different areas of the country, according to some sociodemographic characteristics (MOURA & FARIA, 2017). The first cases of AIDS were reported in male homosexuals in the United States in 1981. Subsequently, the syndrome was described in hemophiliacs, blood-transfused patients, drug users, children born to infected mothers, and sexual partners of infected individuals. The first indication that AIDS was caused by a retrovirus occurred in 1983 when a virus with reverse transcriptase activity was isolated from the lymph node (FOCCACCIA & VERONESI, 2015). More than three decades after the identification of the first case of AIDS, the number of people infected by HIV around the world exceeded 35 million in 2012, and approximately 36 million people died during the epidemic. There has been an increase in the number of adult and young women affected by AIDS, representing 50% of individuals living with the virus (FOCCACCIA & VERONESI, 2015). The epidemic has three distinct phases in Brazil. The first phase was described only by those infected with HIV, particularly homosexual men with a high level of education, and this period was marked by the definition of "risk groups". The second phase was marked by the concept of "risk behavior" due to the large number of contaminations by injection drug use, affecting a significant heterosexual portion, which, consequently, determined the third and current phase, which is represented by an increase in cases between females, individuals with low education, and spreading of AIDS to the countryside, generating the concept of vulnerability (MOURA & FARIA, 2017). 
The spread of HIV infection reveals an epidemic of multiple dimensions as a result of the high inequalities in Brazilian society, which has undergone significant changes in its epidemiological profile over the years. In the beginning, the epidemic was limited to a few cosmopolitan circles of the national metropolises, being predominantly male, affecting mainly men with homosexual sexual practices and hemophiliac individuals. Currently, it is characterized by processes of heterosexualization, feminization, interiorization, and pauperization (BRITO, CASTILHO & SZWARCWALD, 2000). The most efficient route of HIV transmission is through blood (FOCCACCIA & VERONESI, 2015) and may occur through blood transfusion and its products (GOLDMAN & SCHAFER, 2014). In Brazil, where official legislation made it mandatory to screen blood with anti-HIV tests, a decline in the rates of HIV transmission through blood was noted from 1988 onwards (FOCCACCIA & VERONESI, 2015). The acquisition of HIV in children occurs mainly through vertical transmission. The most effective intervention is the use of antiretrovirals during pregnancy, delivery, and postpartum (FOCCACCIA & VERONESI, 2015). The relationship between drug users and HIV cannot be seen exclusively as a consequence of injecting drug use, as most users are young and sexually active. They acquire and transmit HIV by sharing the same injection equipment (direct transmission) and having unprotected sex (indirect transmission) (FOCCACCIA & VERONESI, 2015). HIV has been found in almost all body fluids and tissues, but its transmission occurs mainly through exposure to blood, semen, vaginal secretions, and breast milk (GOLDMAN & SCHAFER, 2014). Laboratory testing strategies are intended to improve the quality of diagnosis of recent HIV infection, which must be safe and performed promptly. Tests for detecting infection are usually performed on three occasions: serological screening of donated blood, ensuring the safety of blood, blood products, and organs for transplantation; epidemiological surveillance analysis; and diagnosis of HIV infection (BRASIL, 2013). Antiretroviral therapy was introduced in the late 1980s, and combination antiretroviral therapy in the late 1990s, which revolutionized the treatment of HIV infection. From a rapidly lethal pathology with imprecise treatments, HIV infection has become a chronic disease in recent decades. Treatment options for HIV management are gradually more accessible and based on evidence, creating a suitable environment for greater integration of primary care and multidisciplinary teams in the management of this pathology (BRASIL, 2006). Early diagnosis and treatment are essential due to the great lethality potential of the pathology. It can lead to complications resulting from opportunistic diseases such as pneumonia, tuberculosis, Kaposi sarcoma, and lymphomas when not treated or treated improperly (SANTOS et al, 2020;DAGNAW et al., 2023). AIDS does not yet have a cure or vaccine and hence prevention and control must be based on specific actions to reduce the risk, primarily aimed at vulnerable populations, in addition to measures that facilitate access to early diagnosis and adequate treatment for those infected. The implementation of policies to transform structural determinants is essential, especially the reduction of stigma and discrimination against the groups most affected by the virus (FOCACCIA & VERONESI, 2015). AIDS is still a major public health problem in Brazil. 
In recent years, there have been important scientific advances regarding this disease. The AIDS pandemic is characterized by a dynamic behavior and its epidemiological profile has been changing over time. Evaluating these changes is an important tool for setting new goals for preventing and combating the pandemic. Thus, the aim of this study is to analyze the epidemiological profile of individuals affected by AIDS in Brazil from 2010 to 2020. MATERIAL AND METHODS An ecological, descriptive, retrospective study with a quantitative approach was carried out. The data were obtained by consulting the SINAN (Notifiable Diseases Surveillance System), SIM (Mortality Information Systems), and SISCEL (Control System for Laboratory Tests of the National Network for Counting CD4+/CD8 Lymphocytes and Viral Load), which are made available at http://www.datasus.gov.br by the Department of Informatics of the Unified Health System (DATASUS). The data were collected from June 1st, 2022, to June 10th, 2022. The data analysis period was from June 11th, 2022, to June 16, 2022. Cases diagnosed and reported as AIDS in Brazil from January 1, 2010, to December 31, 2020, available on DATASUS, were included in the study. Data available up to December 2020 were analyzed to avoid notification delay errors, as it was the last year with available complete data. The Federative Republic of Brazil currently has a population estimated at 214,732,242 inhabitants10. Brazil is divided into 26 states, in addition to the Federal District, comprising a total of 5,570 municipalities11. The incidence of AIDS per 100,000 inhabitants/year was calculated based on the absolute population living in the state according to IBGE. The variable educational attainment was divided into illiterate, incomplete 1st to 4th grade, complete 4th grade, incomplete 5th to 8th grade, complete primary education, incomplete secondary education, complete secondary education, incomplete tertiary education, complete tertiary education, and not applicable. Ethnicity was divided into white, black, brown, yellow, indigenous, and ignored. The exposure category consisted of homosexual, bisexual, heterosexual, injection drug users, vertical transmission, and ignored. The data obtained from DATASUS were analyzed and arranged in tables using the software Microsoft Office Excel and Microsoft Word. The study did not require approval by the Research Ethics Committee as it only involved the collection of information originating from a publicly accessible database (DATASUS). RESULTS A total of 437,327 cases of AIDS were reported in Brazil in the period of 10 years. In this period, the year 2013 was responsible for the highest number of cases. Table 1 shows a decrease in the number of diagnosed cases of AIDS over the years. Moreover, males had the highest incidence (66.65%), almost double compared to females (33.33%). Regarding race, brown (27%) showed the highest significance among the results, followed by white (26.7%). The lowest rates were found among yellow (0.31%) and indigenous people (0.20%). Race identification was ignored in 39.2% of reported cases. A predominance of sexual transmission was observed, with the most frequent infection found in heterosexuals (35.1%), followed by homosexuals (13.4%) and bisexuals (3.5%) ( Table 2). The Southeast region had the highest incidence of AIDS diagnoses (25.4%), followed by the South region (14.8%). The lowest rate was found in the Midwest region (4.8%), as shown in Table 3. 
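The regional incidence figures above follow from relating case counts to IBGE population denominators. As a worked illustration of the incidence calculation described in the methods (cases per 100,000 inhabitants/year), the snippet below uses hypothetical state-level counts; the numbers and field names are placeholders, not DATASUS or IBGE values.

```python
def incidence_per_100k(cases, population):
    """AIDS incidence per 100,000 inhabitants per year."""
    return 100_000 * cases / population

# Hypothetical state-level counts for a single year (placeholder values only).
records = [
    {"state": "SP", "cases": 5200, "population": 44_000_000},
    {"state": "RS", "cases": 2100, "population": 11_300_000},
]
for r in records:
    rate = incidence_per_100k(r["cases"], r["population"])
    print(f'{r["state"]}: {rate:.1f} cases per 100,000 inhabitants')
```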
The age group with the highest incidence of AIDS in the analyzed period was 30 to 34 years old (Table 4), with 16.03% of cases. It was followed by the age group 35 to 39 years old (14.95%) and 25 to 29 years old (14.63%). The elderly (60 years or older) had a percentage of 5.41%. The age with the lowest incidence was 5 to 9 years (0.26%). Source: Prepared by the authors based on data from the Ministry of Health/SVS -Notifiable Diseases Surveillance System -SINAN, 2022. The epidemiological analysis of the variable educational attainment in individuals with AIDS showed a higher incidence of cases in the population with complete secondary education (18.29%), (Table 5) followed by individuals with incomplete 5th to 8th grade (15.36%). The population with the lowest incidence was illiterate (2.07%). Individuals with complete higher education had an incidence of 7.67%. DISCUSSION The highest prevalence among males considering the diagnosed HIV cases has been observed in several national studies (BRASIL, 2006;PEREIRA et al., 2018;TRINDADE et al., 2019). The fourth decade of the AIDS epidemic in Brazil has shown that men are the group most affected by the infection, with a decrease in the detection rate among females14. In general, men's lifestyle and behavior relative to health would be greatly influenced by culture and society, leading them to present a higher risk of acquiring health problems compared to women (SANTOS et al., 2019). Studies have shown that white individuals more frequently seek health services to perform the diagnosis and treatment of sexually transmitted infections (STIs), while black people have greater difficulty in attending and accessing health services (FIGUEIREDO et al., 2020). The risk of acquiring HIV through blood transfusions was significantly reduced due to the screening in blood banks and inactivation processes during the preparation of concentrated blood products (GUIMARÃES & CASTILHO, 1993). Injecting drug users have significant importance in the HIV epidemic, as they are considered a "bridge" for the spread of infection among other populations. Also, they are seen as a morose group relative to the perspective of behavioral change (HACKER & BASTOS, 2003). A well-known practice in several countries is the non-use of condoms in all sexual relations, which significantly contributes to the increase in the incidence of AIDS (CASTRO el al., 2020). These rates demonstrate the heterosexualization of the AIDS epidemic today. Heterosexual men have a high prevalence of AIDS because they are not the focus of prevention policies or actions, not being considered regarding risk behaviors (GUIMARÃES & CASTILHO, 1993). Homosexuals were the group most likely to become infected at the beginning of the AIDS epidemic, but this reality has changed over the years. Currently, the infection occurs in different groups of exposure. Research considers that heterosexual transmission is more prevalent, which leads to higher rates in females. This process is called "feminization" and "heterosexualization" (AGUIAR et al., 2021). The AIDS epidemic in Brazil is divided into three phases. The first phase, which began in the 1980s, had a higher incidence in groups of homosexuals, bisexuals, and recipients of blood products. The second phase began in the 1990s and was more prevalent in injecting drug users and heterosexuals. The third phase showed an increase in contagion among heterosexuals, which led to an increase in female contamination (SOARES et al., 2014). 
The decrease in cases of vertical transmission occurred due to screening policies, anti-HIV testing, and the treatment of pregnant women infected with the virus (SOARES et al., 2014). Southeast was the region with the highest rate of AIDS cases (25.4%), followed by the South region (14.8%). These data corroborate the research results, in which the contagion rate is much higher in the Southeast region compared to other regions of Brazil (GODOY et al., 2008). It is one of the most developed regions of Brazil, with greater access to health services, in addition to having intense tourism, with a higher probability of contracting an imported infection through the incorrect use or even non-use of condoms, which can explain the high rate of AIDS cases. The population living in this region is composed of many young people of working age, with a high level of contact with the external environment (AGUIAR et al., 2021). Results showed that the Northeast region represents 14.2% of the cases. The region has one of the worst indicators for HIV/AIDS. In addition, it has been showing changes in its epidemiological profile due to the processes of impoverishment, juvenilization, aging, spreading to the countryside, heterosexualization, and feminization. It corroborates the breaking of paradigms related to infection: it is not just certain groups that are vulnerable to the virus, demystifying the relationship between HIV and homosexuality, promiscuity, risky behavior, and other stigmas (JUNIOR et al., 2019). AIDS cases spread across Brazil from the Rio-São Paulo axis, evidencing the infection's process of interiorization. HIV virus has spread from large urban centers to small and mediumsized cities in the countryside. Changes in the profile of AIDS in Brazil are mainly due to the process of spreading to the countryside, the growth of heterosexual transmission, and the increase in the number of cases among injecting drug users. The current epidemic is not limited to large cities (BRITO et al., 2000). Regarding age groups, the population aged 30 to 34 years presented the highest incidence, followed by the group aged 35 to 39 years. It reveals that the distribution of AIDS cases is concentrated mainly among the community of reproductive age, with the main form of transmission being sexual intercourse without a condom (TOMAZELLI, CZERESNIA & BARCELOS, 2003). Moreover, there is an increase in the percentage of AIDS diagnoses among the elderly. A discreet "aging" process of the epidemic has been taking place. The disease is diagnosed in the elderly after a long period of investigation and by excluding other pathologies since the signs and symptoms of AIDS are often confused with those of other diseases in this age group. Also, there are other factors such as the prejudice of professionals in requesting the HIV test, the fact that this population sometimes considers itself immune to the virus, and the lack of dialogue on the part of health professionals about the sexual life of the elderly (GODOY et al., 2008). These data corroborate the increase in sexual activity among the elderly, resistance to using condoms, and the availability of technology that improves and prolongs sexual performance (GODOY et al., 2008). Adolescents end up leaving prevention in the background, as they consider AIDS a distant threat, as its manifestations are not noticed immediately after unprotected sexual intercourse. The adoption of a preventive behavior would be more likely if the manifestations were earlier (JUNIOR et al., 2022). 
The results of this study show that most cases of AIDS occurred in people with complete secondary education (24.17%), followed by people with incomplete 5th to 8th grade. This indicates that the highest incidences of AIDS are concentrated in populations with low levels of education. Educational attainment can be seen as a marker reflecting the socioeconomic conditions of individuals. In one study, most patients with AIDS had completed the 1st and 2nd grades, i.e., primary and secondary education in the former Brazilian system (TOMAZELLI, CZERESNIA & BARCELOS, 2003). The AIDS epidemic has been increasingly reaching populations with socioeconomic disadvantages. Education is an important factor in the quality of knowledge about HIV/AIDS (IRFFI, SOARES & de SOUZA, 2010). A higher level of education stimulates a greater demand for knowledge about the virus and, consequently, facilitates the understanding of the risks of contagion (FERREIRA et al., 2014). Addressing sexuality in the school environment is necessary for a quality education system that forms responsible citizens for participation in society. The absence of such education, together with ignorance, fear, or unfounded answers, sustains the increase in the rates of sexually transmitted infections (FERREIRA et al., 2014). This lower level of education among patients with AIDS contrasts with data from the beginning of the pandemic: in a study carried out in Brazil, the epidemic began in social strata with higher education and progressively spread to social strata with lower education over time.

CONCLUSION

The epidemiological analysis of AIDS cases in Brazil from 2010 to 2020 shows a prevalence profile of young male adults aged between 30 and 34 years, with brown skin color and complete secondary education, who contracted the virus through heterosexual intercourse. The Southeast region has the highest prevalence of cases in Brazil. There was a decrease in the overall number of diagnoses in Brazil during the analyzed period, which demonstrates the effectiveness of infection prevention strategies. Incentives for and maintenance of public AIDS prevention policies are extremely important to sustain the reduction in the number of new cases, as well as their early detection and the prompt initiation of treatment. Homosexuals had a higher prevalence of cases at the beginning of the AIDS epidemic in the 1980s, being considered "risk groups". This term contributed to the idea that the disease affected only certain niches and was later replaced by "risk behavior". There has been an important change in the profile of the disease in recent years, and the evaluation of current risk behavior shows that heterosexuals now account for the highest number of diagnoses. This underscores the fact that everyone is susceptible to contracting HIV infection if there is no prevention.
2023-08-25T15:22:52.835Z
2023-08-21T00:00:00.000
{ "year": 2023, "sha1": "253e80bcfa412630de39d1a89d2b46a545ed6eda", "oa_license": "CCBYNCSA", "oa_url": "https://clium.org/index.php/edicoes/article/download/1889/1256", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "98ed89750fa2d3d93ab7f30969fd9937db830590", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
7642437
pes2o/s2orc
v3-fos-license
The stoichiometry and kinetics of the inducible cysteine desulfhydrase from Salmonella typhimurium.

Abstract
Studies using highly purified cysteine desulfhydrase from Salmonella typhimurium reveal that only a small fraction of the cysteine utilized by the enzyme appears as pyruvate. The isolation of 2-methyl-2,4-thiazolidinedicarboxylic acid from reaction mixtures offers an explanation for this unusual stoichiometry. The relative amounts of pyruvate and thiazolidine produced during a reaction depend upon the cysteine concentration, the pH, and the presence of a protein termed Fraction B, which prevents the formation of the thiazolidine. We propose that 2-aminoacrylate may be an intermediate in the formation of 2-methyl-2,4-thiazolidinedicarboxylic acid. Substrate velocity curves for cysteine desulfhydrase reveal positive cooperativity with an n value of 1.9 and a Km for L-cysteine of 0.17 to 0.21 mM. The product, sulfide, inhibits the reaction with a Ki of 0.010 mM. Sulfide inhibition is of the linear competitive type at high cysteine concentrations, but it becomes nonlinear and more pronounced at low cysteine concentrations.

The unstable intermediate, 2-aminoacrylate, is assumed to decompose rapidly to ammonia and pyruvate, even without enzymatic intervention (2). Since the overall reaction, L-cysteine + H2O → pyruvate + NH3 + H2S, predicts equimolar yields of pyruvate, ammonia, and sulfide, it is puzzling that several studies of the stoichiometry of the reaction have found lower yields of pyruvate than could be accounted for on the basis of sulfide production or cysteine depletion (8-10). Previous investigators have suggested that this unusual stoichiometry might be due to peculiarities of the pyruvate assay used (8) or to the presence of transaminases in the crude extracts in which the enzyme has been studied (11). We have recently reported the purification of an inducible cysteine desulfhydrase from Salmonella typhimurium to a state of near homogeneity (12). Using highly purified enzyme, we find that the yield of pyruvate is still only a fraction of that expected on the basis of sulfide or ammonia production. This report details the isolation and characterization of 2-methyl-2,4-thiazolidinedicarboxylic acid as a product of the cysteine desulfhydrase reaction. The results of studies on certain kinetic properties of the purified enzyme are also presented.

FIG. 1. Dependence of the apparent ε650 on cysteine concentration (mM) in the sulfide assay. Sulfide was assayed as described under "Experimental Procedure" using solutions containing 0.05 mM sodium sulfide and varying concentrations of L-cysteine in 0.1 M Tris-HCl, pH 8.6.

It is convenient to measure sulfide production when purifying the enzyme and for kinetic studies. Reactions are carried out at 23° in capped test tubes (10 × 75 mm) containing 2.0 ml of a given concentration of L-cysteine in 0.1 M Tris-HCl, pH 8.6. The reaction is started by the addition of a small volume of enzyme diluted in 0.1 M Tris-HCl, pH 7.6, containing 0.5 mg per ml of bovine serum albumin, and terminated by the addition of 0.2 ml of 0.02 M N,N'-dimethyl-p-phenylenediamine sulfate in 7.2 N HCl followed immediately by 0.2 ml of 0.03 M FeCl3 in 1.2 N HCl (15). The tube is then recapped, vigorously shaken for a few seconds, and, after storage in the dark for 15 to 20 min, the absorbance at 650 nm is determined in a spectrophotometer.
The apparent ε650 for sulfide is dependent upon the cysteine concentration, and for kinetic studies in which cysteine concentrations were varied, the curve shown in Fig. 1 was used to calculate the amounts of sulfide formed. At the 2.0 mM cysteine concentration used for enzyme purification and other routine assays, the apparent ε650 for sulfide is 1.56 × 10^4 M^-1 cm^-1. Because the enzyme is inhibited by the product sulfide, a plot of sulfide production over a given period of time versus enzyme concentration is not linear. We have previously shown (12) that the initial velocity of the reaction, Vi, can be calculated from the expression

Vi = Qt/t + Km Qt^2 / [2 Ki t (Km + A)],   t > 0    (1)

where Qt is the sulfide concentration at time t, A is the initial cysteine concentration (which is assumed not to vary significantly during the course of the reaction), Km is the Michaelis constant for cysteine, Ki is an inhibition constant for sulfide, and Qt = 0 at t = 0. At 2.0 mM L-cysteine, Km/[2Ki(Km + A)] is equal to 5 mM^-1. One unit of enzyme is defined as that amount which gives a Vi of 1 μmole of sulfide per min under these standard conditions.

For the determination of pyruvate, 1.0 ml of reaction mixture is incubated in an uncapped test tube (13 × 100 mm), and the reaction is terminated by the addition of 0.5 ml of 1.0 N H2SO4. After 5 min, 0.5 ml of 3 mM 2,4-dinitrophenylhydrazine in 1.5 N HCl is added, followed 15 min later by 0.5 ml of 7.1 M KOH. After an additional 10 min, the absorbance of the 2,4-dinitrophenylhydrazone at 540 nm is measured. Early in the course of this work it was found that one of the products of the cysteine desulfhydrase reaction is a derivative of pyruvate, which gives no appreciable color reaction with the 2,4-dinitrophenylhydrazine reagent unless pretreated with an acidic solution of mercuric ion. We refer to the pyruvate detectable in the absence of mercuric ion as free pyruvate, while the pyruvate which is measured after treatment with mercuric ion is referred to as total pyruvate. Total pyruvate is the sum of free pyruvate and the pyruvate present as the derivative. To measure the total pyruvate produced in a reaction, 0.01 M HgSO4 in 1.0 N H2SO4 is substituted for the 1.0 N H2SO4 used in the free pyruvate assay. This results in the formation of a precipitate after the addition of the KOH reagent, which must be removed by centrifugation before determining the absorbance at 540 nm. Using sodium pyruvate solutions standardized by the lactate dehydrogenase method (16), we find that this modified 2,4-dinitrophenylhydrazine assay gives an ε540 of 4.4 × 10^3 M^-1 cm^-1 without HgSO4 (free pyruvate) and 4.6 × 10^3 M^-1 cm^-1 with HgSO4 (total pyruvate). The rate of pyruvate production can also be determined in a continuous spectrophotometric assay, utilizing NADH and a large excess of lactate dehydrogenase. For this purpose the basic reaction mixture is supplemented with 0.2 mM NADH and 5 units per ml of lactate dehydrogenase, and the loss of absorbance at 340 nm is followed with time. Initial reaction velocities are measured in a recording spectrophotometer, and are linearly proportional to the amount of cysteine desulfhydrase added. This procedure measures only the rate of free pyruvate production, since lactate dehydrogenase does not react with the pyruvate derivative.
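A small numerical illustration of Eqn. (1) may help. The sketch below applies the product-inhibition correction to a measured sulfide concentration and reproduces the roughly 5 mM^-1 correction factor quoted for 2.0 mM L-cysteine; the representative Km and Ki values are taken from the kinetic constants reported in this paper (Km ≈ 0.2 mM, Ki = 0.010 mM), while the example Qt and t values are arbitrary placeholders.

```python
def initial_velocity(q_t, t, a=2.0, k_m=0.2, k_i=0.010):
    """Initial velocity Vi from the sulfide concentration q_t measured at time t,
    using the product-inhibition correction of Eqn. (1).

    Concentrations in mM, t in minutes; k_m and k_i are representative values
    based on the kinetic constants reported in the text.
    """
    correction = k_m * q_t ** 2 / (2.0 * k_i * (k_m + a))
    return (q_t + correction) / t

# Correction factor Km / (2*Ki*(Km + A)) at A = 2.0 mM L-cysteine:
factor = 0.2 / (2.0 * 0.010 * (0.2 + 2.0))
print(round(factor, 1))   # ~4.5 mM^-1, of the order of the ~5 mM^-1 quoted in the text

# Example: 0.04 mM sulfide accumulated after 5 min of reaction.
print(initial_velocity(q_t=0.04, t=5.0))   # corrected Vi, in mM sulfide per min
```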
Stoichiometry-In experiments designed to determine the stoichiometry of the cysteine desulfhydrase reaction, 5 ml of a solution containing 2.0 mM L-cysteine in 0.1 M Tris-HCl, pH 8.4, in a Thunberg tube (150 × 18 mm) were deaerated by bubbling nitrogen for 5 min through an aperture specially fitted to the bottom of the tube. Approximately 0.3 unit of purified cysteine desulfhydrase in a small volume of solution was then added to start the reaction, which was carried out for 15 min at 23° while nitrogen was continuously bubbled through the incubation mixture. Hydrogen sulfide was collected by directing the gas outlet stream through 70 ml of a solution containing 0.5 g of zinc acetate and 0.75 g of sodium acetate in a 100-ml volumetric flask. The reaction was terminated by the addition of 1.0 ml of 1.0 N H2SO4, and the remaining hydrogen sulfide was collected for an additional 10 min. Control experiments, using standard solutions of sodium sulfide, showed that 95 to 100% of the added sulfide could be collected in this manner. To the solution of zinc acetate were then added in rapid succession 10 ml of 0.02 M N,N'-dimethyl-p-phenylenediamine sulfate in 7.2 N HCl and 10 ml of 0.03 M ferric chloride in 1.2 N HCl. The flask was quickly stoppered, shaken vigorously for 1 min, and the total volume was adjusted to 100 ml with water. After storage for 15 min in the dark, the absorbance at 650 nm was determined. The ε650 for sulfide under these conditions was found to be 2.67 × 10^4 M^-1 cm^-1. The acidified reaction mixture was then diluted with water to a volume of 25 ml and assayed for cysteine with 5,5'-dithiobis(2-nitrobenzoic acid) (17), for ammonia by the glutamate dehydrogenase method of Su et al. (18), and for free and total pyruvate. No detectable sulfide remained in the reaction mixture. ORD¹ and ultraviolet spectra were obtained using automatic recording spectrophotometers, Cary models 60 and 15, respectively. Protein determinations were performed by the biuret method (19), and autoradiography was done as previously described (20). Reaction mixtures to be analyzed for alanine were first oxidized with performic acid (21), lyophilized, and then adsorbed to a column (5 cm × 1 cm²) of Dowex 50W-H+ (X8, 200 to 400 mesh) at pH 2. After washing the column with water to remove cysteic acid, alanine and other amino acids were eluted with 1 N NH4OH, concentrated by lyophilization, and analyzed on a Beckman model 121 amino acid analyzer. Recoveries subsequent to performic acid oxidation and prior to amino acid analysis were estimated by adding a small amount of [1-14C]glycine to each sample.

RESULTS

Initial attempts to quantify the products of the cysteine desulfhydrase reaction revealed an unusual stoichiometry, which varied with the stage of enzyme purification (see below). The data in Table I show that, using purified enzyme, the molar yields of sulfide and ammonia are equal, while the amount of free pyruvate detected is less than 10% of that expected from the accumulation of the former two products. Furthermore, the disappearance of cysteine from the reaction mixture is greater than can be accounted for by the yield of any one of these three products. As measured after preincubation of the reaction mixture with acidic mercuric ion, the yield of total pyruvate nearly equals that of ammonia or sulfide.
In addition, that portion of the total pyruvate which is not detectable as free pyruvate is approximately equal to the amount of cystcinc not accounted for by the sum of the total pyruvate formed and the cysteine remaining at the end of the reaction. We hare accounted for this unusual stoichiometry by identifying a mercuric ion-labile conjugate of cystcine and pyruvate, provisionally designated Compound CP, as a product of the cysteine desulfhydrase reaction. Preparation of Compound CP-The enzyme used was a fraction which had been purified through the first ammonium sulfate step (12) and then desalted at room temperature by gel filtration through a Sephadex G-50 column, equilibrated with 0.1 M Tris-HCl, pH 8.4. The specific activity of this preparation was 1 unit per mg, representing a 4-fold purification from the crude extract. Three hundred units (12 ml) of cysteine desulfhydrase were added to 380 ml of a solution containing 40 mmoles of L-cysteine (free base) adjusted to pH 8.4 with 5 N NaOH. During the entire course of the reaction, the mixture was stirred vigorously at room temperature in an open beaker, and the pH was kept at 8.0 to 8.4 by the addition of NaOH. An additional 40 mmoles of dry L-cysteine were added after 3 hours of incubation, at which time the total pyruvate concentration was 0.034 M. Four hours later, total pyruvate was 0.054 RI and another 175 units of enzyme were added. After an additional 16 hours of incubation, the total pyruvate concentration had reached 0.079 M (32 mmoles), and Dhe free pyruvate concentration was 0.006 M. The solution was adjusted to pH 7.6 with glacial acetic acid and was filtered through Whatman No. 1 paper. After the addition of 4 volumes of cold absolute ethanol, the filtrate was chilled to -20" and refiltered. Following concentration to a volume of 100 ml in a rotary evaporator at a temperature not exceeding 1 The abbreviation used is: ORD, optical rotatory dispersion. I Stoichiometry of cysteine desulfhydrase reaction Reactions and assays were carried out as described under "Experimental Procedures," using 5 ml of reaction mixture containing 2.0 mM rJ-cysteine at pH 8.4 as substrate. 6189 Highly purified cysteine desulfhydrase was added at a concentration of approximately 0.06 unit per ml to start the reaction, which was terminated 15 min later by the addition of acid. 45", the solution was titrated to $1 7.6 and filtered. Four volumes of absolute ethanol were added, and the solution was again filtered and reconcentrated to a volume of 40 ml. The pH of the concentrate was adjusted to 7.6, and then it was filtered first through Whatman No. 1 paper and then through a Millipore (0.45 ~1) membrane filter. The addition of 19 volumes of icecold absolute ethanol to this solution resulted in the formation of a white, gel-like precipitate. After storage at -20" overnight, the precipitate was collected by filtration, washed with cold absolute ethanol, and dried in vacua. The yield was 5.8 g. The dried material was dissolved in water at room tcmperature (200 mg per ml), and, after the addition of 4 volumes of cold absolute ethanol, the turbid, yellow solution was clarified by passage through a Millipore membrane filter. Absolute ethanol was added to a final concentration of 95%,, and after 2 hours at -20" the resultant precipit,ate was collected by filtration and dried in uacuo. The yield was 4.5 g. A second reprecipitation gave Compound CP in 3.1 g yield. 
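The stoichiometric argument drawn from Table I, namely that each mole of Compound CP ties up one mole of cysteine and one mole of pyruvate, can be checked with simple bookkeeping. The sketch below is illustrative only: the quantities are hypothetical placeholders in µmol, not the published values, and the function name is our own.

```python
def check_balance(cys_initial, cys_remaining, free_pyr, total_pyr, tol=0.05):
    """Bookkeeping check for the stoichiometry proposed in the text.

    If Compound CP (2-methyl-2,4-thiazolidinedicarboxylic acid) is formed from
    one cysteine and one pyruvate, then
        compound_cp       = total_pyr - free_pyr
        cysteine consumed ~ total_pyr + compound_cp
    All quantities must be in the same molar units (e.g. umol).
    """
    compound_cp = total_pyr - free_pyr
    consumed = cys_initial - cys_remaining
    predicted = total_pyr + compound_cp
    within_tol = abs(consumed - predicted) <= tol * consumed
    return compound_cp, consumed, predicted, within_tol

# Hypothetical example (not the published data): 10 umol cysteine at the start,
# 1.0 umol remaining, 0.8 umol free pyruvate, 5.0 umol total pyruvate.
print(check_balance(10.0, 1.0, 0.8, 5.0))
```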
Characterization of Compound CP-To 40 ml of a solution containing 500 mg (1.96 mmoles of total pyruvate) of Compound CP were added 20 ml of 0.5 M HgCl2 in 1.0 N HCl. A white precipitate formed which, after adjustment of the solution to pH 3.0 with concentrated NH4OH, was collected by filtration, washed with two 5-ml portions of water, and set aside for further analysis. Excess mercuric ion was removed from the filtrate and washes by passage through a column (20 cm x 2.5 cm²) of Dowex 50W-H+ X8, following which the column was eluted with water. The eluate was assayed by both the lactate dehydrogenase and 2,4-dinitrophenylhydrazine methods and was found to have a total of 2.3 and 2.0 mmoles of pyruvate, respectively, in a volume of 96 ml. The 2,4-dinitrophenylhydrazone derivative was prepared from this solution (22), and after recrystallization from 95% ethanol-ethyl acetate a yield of 197 mg (0.73 mmole) was obtained. This material was characterized as the derivative of pyruvate by its melting point (222-223°; authentic, 221-222°; mixed, 221-223°) and by its mobility in three thin layer chromatography solvent systems. The precipitate from the initial acidic HgCl2 step was dissolved in 80 ml of 1.0 N HCl and treated as previously described for the isolation of cystine (20). After two reprecipitations, a yield of 145 mg (equivalent to 1.21 mmoles of half-cystine) was obtained. The product was identified as L-cystine by its mobility in three thin layer chromatography systems, an [α]D of -205° (0.1% in 1 N HCl; authentic L-cystine, -203°), and its ability to serve, after reduction with dithiothreitol, as a substrate for purified cysteine desulfhydrase. The compound 2-methyl-2,4-thiazolidinedicarboxylic acid is a conjugate of pyruvate and cysteine, which by analogy with other thiazolidines might be predicted to decompose in the presence of mercuric ion, giving as products free pyruvate and the mercuric mercaptide of cysteine (23). The free acid of this thiazolidine derivative was chemically synthesized from pyruvic acid and L-cysteine by the method of Schubert (24), and recrystallized from hot water. A 0.8 M solution was titrated to pH 9 with concentrated NaOH, and the disodium salt was precipitated by the addition of 19 volumes of cold ethanol. After two reprecipitations the dried product was compared with Compound CP. Both compounds were found to contain negligible amounts of free pyruvate, thiol, sulfide, and ammonium ion, while giving 1 mole of total pyruvate per 245 to 255 g of material. The results of elemental analyses were as follows: ... two moles of sodium were found per mole of total pyruvate, indicating that both compounds are disodium salts. The two compounds cannot be distinguished from each other by thin layer chromatography in three solvent systems, and have identical infrared spectra. A comparison of the ORD spectra from 225 to 400 nm for the disodium salts of both products shows them to be virtually identical, with a single Cotton effect noted at 253 nm (Fig. 2). Ultraviolet spectra of both compounds are also similar, with a shallow shoulder at 250 to 255 nm and an ε250 of 150 M^-1 cm^-1. The free acid of Compound CP was prepared by adding 0.7 ml of 11.6 N HCl to 4 ml of a solution containing 800 mg of the disodium salt, followed by crystallization in the cold. The resultant crystals were collected by filtration and recrystallized from hot water, giving a yield of 78 mg.
The melting point of the free acid of Compound CP is 163-164°; authentic 2-methyl-2,4-thiazolidinedicarboxylic acid, 163-164°; mixture, 163-164°. We conclude from these data that Compound CP is the disodium salt of 2-methyl-2,4-thiazolidinedicarboxylic acid. Furthermore, although such an analysis cannot completely rule out the possibility of stereochemical differences between the two compounds, the ORD and ultraviolet spectra, together with the identical melting points of the free acids, constitute excellent evidence for the stereochemical identity of Compound CP with the chemically synthesized material. Since both syntheses started with L-cysteine as a reactant, and the chemical synthesis utilizes pyruvate, we feel that both are probably equal mixtures of diastereomers at C-2, with the configuration of the α carbon atom of L-cysteine at C-4.

Fraction B-Using 2.0 mM L-cysteine at pH 8.6, the portion of enzymatically produced total pyruvate appearing as free pyruvate varies with the stage of enzyme purification, from a total pyruvate to free pyruvate ratio of 2, using a crude extract, to a ratio of approximately 6, using highly purified enzyme. Thus, if the progress of enzyme purification is followed using the usual types of pyruvate assays rather than the total pyruvate or sulfide assays, an apparent large loss of activity occurs after the first ammonium sulfate precipitation step (12). The greater relative yields of free pyruvate noted with crude preparations of cysteine desulfhydrase can be attributed to the presence of a factor which we have designated Fraction B. Preparations of this substance can be obtained which have no appreciable cysteine desulfhydrase activity, but which, when added to reaction mixtures containing pure cysteine desulfhydrase, increase the yields of free pyruvate without affecting the rates of total pyruvate or sulfide production (Table I). Fraction B does not convert purified Compound CP to pyruvate or sulfide, even in the presence of purified cysteine desulfhydrase, but the addition of this factor to a cysteine desulfhydrase reaction mixture, in which Compound CP has already accumulated, results in a decrease in the total pyruvate to free pyruvate ratio of products formed after such addition. Thus the action of Fraction B seems to be to prevent the formation of Compound CP during the cysteine desulfhydrase reaction rather than to degrade it. In our attempts to devise a quantitative assay for Fraction B we have found that a linear relationship exists between the total pyruvate to Compound CP ratio (where Compound CP is assumed to be the difference between total pyruvate and free pyruvate) and the amount of Fraction B added to a cysteine desulfhydrase reaction mixture (Fig. 3). Thus our standard assay consists of adding Fraction B to 1.0 ml of a standard incubation mixture containing 0.05 unit per ml of purified cysteine desulfhydrase and measuring the total pyruvate to Compound CP ratio after a 5-min incubation. A control in which Fraction B is omitted is also run, and the difference in the total pyruvate to Compound CP ratio is determined. One unit of activity is defined as that amount of Fraction B which causes an increase in the total pyruvate to Compound CP ratio of 1.0 under these standard conditions. The assay is useful between the limits of 0.3 to 5 units of Fraction B activity per ml of reaction mixture. Fraction B was purified from frozen cells of
S. typhimurium, LT2, grown in minimal salts-glucose media containing either 1.0 mM L-cystine or 0.5 mM L-djenkolate as the sole sulfur source (20). We find that levels of Fraction B activity are independent of the sulfur source used for growth, and for that reason djenkolate-grown cells were used in the preparation described here to eliminate the possibility of contamination of Fraction B with cysteine desulfhydrase. Following centrifugation at 40,000 x g for 30 min, the supernatant was removed and treated with 0.4 volume of 10% streptomycin sulfate, pH 7.0. After 10 min of stirring at room temperature, the precipitate was removed by centrifugation, and ammonium sulfate, 210 mg per ml, was slowly added to the supernatant with stirring. Following centrifugation, the supernatant from this step was heated to 90° in a boiling water bath and, after 1 min at that temperature, cooled in an ice bath. Coagulated protein was removed by centrifugation, and Fraction B activity was precipitated by the addition of an additional 280 mg per ml of ammonium sulfate to the supernatant. This precipitate was dissolved in 0.1 M Tris-HCl, pH 7.6, 0.5 M NaCl, and dialyzed at 4° against the same buffer. This procedure results in a 25- to 30-fold purification with a 45 to 50% yield (Table II). The ammonium sulfate and heat steps remove all cysteine desulfhydrase activity, whether the cells are grown on djenkolate or cystine as a sole sulfur source. Fraction B activity is resistant to treatment with RNase and DNase but is rapidly inactivated by treatment with small amounts of trypsin, which, after subsequent dilution, have no effect on the cysteine desulfhydrase assay itself. The purified material is relatively stable when stored frozen, losing approximately 10% of its activity per month at -20°.

Other Factors Influencing Synthesis of Compound CP-The cysteine desulfhydrase-mediated synthesis of Compound CP is markedly dependent upon pH and cysteine concentration. Using 2.0 mM L-cysteine and purified cysteine desulfhydrase, the total pyruvate to free pyruvate ratio increases from a value of 2 at pH 7.2 to a value of about 6 at pH 8.6 (Fig. 4A). At a constant pH of 8.6, the total pyruvate to free pyruvate ratio is directly, but not linearly, proportional to L-cysteine concentration, and extrapolates to a value of 1 at zero cysteine concentration (Fig. 4B). Other investigators have previously postulated (8, 10) and demonstrated (25) the nonenzymatic formation in aqueous solutions of adducts between cysteine and certain carbonyl compounds. Therefore, studies were performed to evaluate the extent to which the nonenzymatic formation of Compound CP occurs. Fig. 5 shows the results of experiments in which L-cysteine and sodium pyruvate at several different concentrations were incubated in 0.1 M Tris-HCl, pH 8.6, at 23° for varying periods of time. Using 2.0 mM L-cysteine and 0.2 mM pyruvate, no appreciable loss of free pyruvate could be detected even after 90 min of incubation. At higher concentrations of both substrates, however, significant losses of free pyruvate were noted with time, while total pyruvate concentrations remained constant. Under these conditions the half-life of free pyruvate is 60 min at 20 mM L-cysteine, 2.0 mM pyruvate, and 13 min at 100 mM L-cysteine, 10 mM pyruvate. Fraction B has no effect on the rate of nonenzymatic formation of mercuric ion-labile pyruvate.
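The Fraction B unit defined in the assay described earlier lends itself to a simple calculation. The sketch below is a minimal illustration of how activity might be computed from paired assay readings; the numerical readings are hypothetical, not data from the paper, and only the unit definition (an increase of 1.0 in the total pyruvate to Compound CP ratio relative to a control) comes from the text.

# Minimal sketch of the Fraction B unit calculation; readings are hypothetical.
def ratio_total_to_cp(total_pyruvate_mM: float, free_pyruvate_mM: float) -> float:
    """Compound CP is taken as total pyruvate minus free pyruvate."""
    compound_cp = total_pyruvate_mM - free_pyruvate_mM
    return total_pyruvate_mM / compound_cp

def fraction_b_units(sample_ratio: float, control_ratio: float) -> float:
    """One unit raises the total pyruvate / Compound CP ratio by 1.0."""
    return sample_ratio - control_ratio

control = ratio_total_to_cp(total_pyruvate_mM=0.30, free_pyruvate_mM=0.05)  # hypothetical control
sample = ratio_total_to_cp(total_pyruvate_mM=0.30, free_pyruvate_mM=0.17)   # hypothetical + Fraction B
print(f"Fraction B activity ~ {fraction_b_units(sample, control):.1f} units per assay")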
Although no appreciable nonenzymatic loss of free pyruvate occurs at the cysteine and pyruvate concentrations present in our routine cysteine desulfhydrase assay, all of our analytical data have been obtained on Compound CP which was prepared using 0.1 M L-cysteine as substrate. Therefore it is likely that at least a portion of our enzymatically produced material was formed by a non-enzyme-dependent reaction between L-cysteine and pyruvate. To establish the identity of Compound CP with the mercuric ion-labile pyruvate made in the presence of cysteine desulfhydrase at low cysteine concentrations, reaction mixtures containing 2.0 mM [35S]cysteine as substrate were analyzed for radiolabeled Compound CP. Small portions of these reaction mixtures were spotted on Whatman No. 3MM paper, and, after electrophoresis in 0.025 M sodium citrate, pH 5.8, for 1 hour at 20 volts per cm, the positions of ninhydrin-positive carrier compounds were compared with the locations of radiolabel as detected by autoradiography. The areas on the paper corresponding to cysteine and Compound CP were then cut out and counted in a scintillation counter to obtain more quantitative results. When 0.4 mM pyruvate (approximately the amount produced in the enzymatic reaction) was included in a reaction mixture lacking cysteine desulfhydrase, only 1.7% of the total radiolabel was incorporated into material with the electrophoretic mobility of Compound CP. Fraction B had no effect on this nonenzymatic reaction. In contrast, after incubation with cysteine desulfhydrase, 12.2% of the total radiolabel migrated with Compound CP, and the addition of Fraction B at a concentration of 20 units per ml decreased this incorporation to a level of 2.2% of the total radiolabel. Autoradiography revealed a radioactive spot which exactly superimposed over the faintly ninhydrin-positive area corresponding to added carrier Compound CP. These data substantiate the notion that under our usual assay conditions the formation of 2-methyl-2,4-thiazolidinedicarboxylic acid is dependent upon the cysteine desulfhydrase reaction.

Substrate Specificity and pH Optimum-Among potential substrates thus far tested, purified cysteine desulfhydrase is quite specific for L-cysteine. Incubation of the enzyme with L-cys... The other compounds tested for substrate activity inhibit the enzyme by less than 15% at concentrations of 0.5 to 2.0 mM. The pH optimum of cysteine desulfhydrase in 0.1 M Tris-HCl is 8.6, with a rather sharp decline in activity at pH levels below 8.3. The activity at pH 7.0 is less than 5% of that observed at 8.6.

(Legend fragment, Fig. 4: "...total pyruvate concentration of approximately 0.2 mM after 5 min of incubation. B, varying concentrations of L-cysteine were incubated for 5 min with cysteine desulfhydrase in 0.1 M Tris-HCl, pH 8.6." Abscissa, cysteine, 0 to 20 mM.)

Kinetic Studies-Kinetic studies of cysteine desulfhydrase have been complicated by the potent inhibition of the enzyme by its product, sulfide. One approach to this problem has been to measure rates of pyruvate production in uncapped reaction tubes, which allow diffusion of hydrogen sulfide from the reaction mixtures (9). Due to the quantitatively uncertain extent of sulfide diffusion under such conditions and the lesser sensitivity of the pyruvate assay, we have chosen to measure rates of sulfide production and to analyze our results after correcting for sulfide inhibition.
Substrate-velocity studies were carried out by measuring the accumulation of sulfide as a function of time at different concentrations of L-cysteine. Initial velocities of reaction, Vi, were then estimated graphically using the ε650 for sulfide appropriate for each L-cysteine concentration (see Fig. 1). As shown in Fig. 6, a plot of Vi versus L-cysteine concentration gives a sigmoid-shaped curve with a half-maximum Vi at 0.21 mM L-cysteine. A plot of 1/Vi versus 1/S reveals that at L-cysteine concentrations greater than 0.3 mM a straight line is obtained, and that at points corresponding to lower substrate concentrations the slope of the line increases (Fig. 6, inset). The apparent Km for L-cysteine calculated from the linear portion of the double reciprocal plot corresponding to higher cysteine concentrations is 0.17 mM. (Legend to Fig. 6: The Vi at each cysteine concentration was determined as the initial rate of sulfide production as described under "Results." The inset shows the plot of 1/Vi versus 1/S.) Treating the data according to the method of Hill (26), a plot of ln [Vi/(Vm - Vi)] versus ln S gives a straight line with a slope of 1.9 (Fig. 7). Thus the dependence of the reaction rate on L-cysteine concentration shows positive cooperativity, with an n value of almost 2, at substrate concentrations less than 0.3 mM. The rate of sulfide production is unaffected by sodium pyruvate, Compound CP, or ammonium sulfate when added either singly or in various combinations at concentrations of 0.025 mM or 0.5 mM. Preincubation of cysteine desulfhydrase with 0.02 mM sodium sulfide, however, leads to a partial inhibition of activity which is unrelated to the time of preincubation for at least 10 min. Removal of the sulfide by dilution results in a loss of inhibition to the level expected from the lower concentration of sulfide. Since the inhibition appears to be very rapid and reversible, we have endeavored to describe it in terms based on the assumptions of steady state, rapid equilibrium kinetics. Due to the difficulties involved in estimating initial velocities by the sulfide method in solutions to which exogenous sulfide has already been added, we have carried out our inhibition studies by measuring the time-dependent accumulation of endogenously formed sulfide in the absence of added sulfide. Fig. 8 shows product (sulfide) versus time curves at five different concentrations of L-cysteine using a constant amount of enzyme. The shapes of these curves indicate that the percentage of inhibition at a given sulfide concentration is markedly dependent on the cysteine concentration, and is greater at lower substrate concentrations. Fraction B at a concentration of 10 units per ml has no effect on the shape of such curves. We find that under certain conditions sulfide inhibition appears to be of the linear competitive type (27); the corresponding integrated rate expression holds only when Q = 0 at t = 0. Thus, when the sulfide concentration is measured as a function of time under conditions where the change in cysteine concentration is small (less than 10% in our experiments) and the sulfide concentration is zero at time zero, a plot of Q versus t/Q should give a straight line with a y intercept equal to -2K. Straight lines were obtained at sulfide concentrations as high as 0.09 mM (Fig. 9). Using lower substrate concentrations, a straight line can be drawn through points corresponding to low sulfide concentrations, but at higher sulfide concentrations and longer incubation times the points describe lines which become concave downward.
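The Hill treatment described above can be illustrated with a short numerical sketch. The (S, Vi) pairs below are made-up illustration data, not the paper's measurements, and Vmax is assumed known; numpy is assumed to be available.

# Minimal sketch of a Hill analysis: fit ln[Vi/(Vmax - Vi)] versus ln S.
import numpy as np

V_MAX = 1.0                                    # assumed maximal initial velocity (arbitrary units)
S = np.array([0.05, 0.1, 0.15, 0.2, 0.3])      # mM L-cysteine (hypothetical)
Vi = np.array([0.05, 0.18, 0.33, 0.48, 0.67])  # initial velocities (hypothetical)

y = np.log(Vi / (V_MAX - Vi))
x = np.log(S)
n_hill, intercept = np.polyfit(x, y, 1)        # slope estimates the Hill coefficient n
s_half = np.exp(-intercept / n_hill)           # substrate concentration giving half-maximal Vi
print(f"Hill coefficient n ~ {n_hill:.2f}; half-maximal Vi at ~ {s_half:.2f} mM")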
Since preincubation studies show no time-dependent effect of sulfide on the enzyme activity, we conclude that, under the combined conditions of low cysteine concentration (less than 1 mM) and high sulfide concentration (greater than 0.05 mM), the inhibition of cysteine desulfhydrase by sulfide is nonlinear. If one assumes that the linear portions of the curves obtained by plotting Q versus t/Q reflect conditions where sulfide inhibition is of the linear competitive ...

Our data indicate that during the cysteine desulfhydrase reaction, a portion of the total desulfurated cysteine, as measured by the production of sulfide, reacts with additional cysteine to give 2-methyl-2,4-thiazolidinedicarboxylic acid. Thus for every mole of thiazolidine formed, 2 moles of cysteine are consumed, releasing 1 mole of sulfide, 1 mole of ammonia, and no free pyruvate. This scheme fits well with the stoichiometric data presented in Table I and ... free pyruvate confirm these earlier observations and favor the notion that the product of this reaction is the thiazolidine. Several investigators have suggested that the immediate products of the cysteine desulfhydrase reaction are sulfide and 2-aminoacrylate, and that the latter compound, being unstable in aqueous solution, is spontaneously and rapidly hydrolyzed to ammonia and pyruvate (2, 3). Thus, while the nonenzymatic formation of mercuric ion-labile pyruvate from free pyruvate and cysteine is low to undetectable under the conditions of our assay, it is possible that a reaction between 2-aminoacrylate and cysteine might occur readily. The immediate product would be a thiohemiketamine, which could then cyclize to the thiazolidine, liberating ammonia in the process (Fig. 11). Our data showing that the total pyruvate to free pyruvate ratio extrapolates to a value of 1.0 at zero cysteine concentration (Fig. 4B) are consistent with the prediction of this model that the amount of Compound CP formed should be directly proportional to the cysteine concentration. In an experiment designed to demonstrate the existence of 2-aminoacrylate as an intermediate in the cysteine desulfhydrase reaction, sodium borohydride was added to reaction mixtures on the assumption that any 2-aminoacrylate present would be reduced to alanine. The data presented in Table III show that when a complete reaction mixture was treated with borohydride, the amount of alanine recovered was considerably more than that found in control mixtures lacking enzyme or cysteine. The lesser yield of alanine noted with the higher concentration of borohydride is probably explained by the fact that while cysteine desulfhydrase retains approximately 50% of its activity in the presence of 0.2 mM borohydride, the enzyme is rapidly and completely inactivated by 10 mM borohydride. Therefore the ala...

Although the in vitro formation of 2-methyl-2,4-thiazolidinedicarboxylic acid under the conditions of our standard assay is pertinent to an understanding of the enzymology of cysteine desulfhydrase, the in vivo significance of this reaction is problematic. Under the conditions of pH and cysteine concentration which one would expect to find in vivo, it is unlikely that very much thiazolidine would be made, particularly in the presence of Fraction B. The mechanism by which Fraction B prevents thiazolidine formation is completely unknown. Presumably it is an enzyme which reacts catalytically with an intermediate in the reaction to give free pyruvate.
If 2-aminoacrylate is in fact a precursor of the thiazolidine, Fraction B might facilitate its hydrolysis to pyruvate and ammonia, either directly or by catalyzing an enamine tautomerization to give that tautomer which is less reactive with cysteine or more readily hydrolyzed. Alternatively, Fraction B might hydrolyze the thiohemiketamine before cyclization takes place or perhaps even interact with cysteine desulfhydrase itself in such a way as to enable it to release pyruvate and ammonia directly. The exact role of Fraction B in cellular metabolism is also unclear, since it is found at the same concentrations in cells either grown on cysteine or starved for sulfur by growth on djenkolate. Perhaps Fraction B is an enzyme of general usefulness to the cell rather than being limited to a single function related to cysteine catabolism. The results of our kinetic studies are essentially in agreement with those of Collins (9), who, using a partially purified preparation of enzyme and a different assay, found cooperative kinetics for cysteine with a Km of 0.22 mM and an n value of 1.9. Collins also studied the inhibition of the enzyme by sulfide and found evidence for mixed inhibition kinetics with a Ki (K4 in our terminology) for sulfide of 0.007 mM. It may be of some significance that sulfide inhibition shows a greater deviation from linearity at lower cysteine concentrations where the dependence of the reaction rate upon cysteine concentration is positively cooperative. The exaggeration of product inhibition at these low cysteine concentrations may be related to the ability of sulfide to interfere with cooperativity either by competing with cysteine for an allosteric site or by otherwise preventing the enhancement of enzyme activity related to an allosteric event.
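As a closing bookkeeping illustration of the stoichiometric scheme argued for above (two cysteines consumed per thiazolidine, with one sulfide and one ammonia released and no free pyruvate from that route), the sketch below tallies the expected product balance. Only the per-route stoichiometry comes from the text; the 70/30 split between the thiazolidine route and direct pyruvate release is a hypothetical illustration.

# Bookkeeping sketch of the proposed scheme; the route split is hypothetical.
def product_balance(sulfide_umol: float, fraction_via_thiazolidine: float) -> dict:
    """Each desulfuration yields 1 sulfide + 1 ammonia; a fraction of the nascent
    three-carbon product is trapped by a second cysteine as Compound CP."""
    cp = sulfide_umol * fraction_via_thiazolidine      # umol thiazolidine formed
    free_pyruvate = sulfide_umol - cp                  # remainder appears as free pyruvate
    cysteine_consumed = sulfide_umol + cp              # one extra cysteine per Compound CP
    return {"sulfide": sulfide_umol, "ammonia": sulfide_umol,
            "Compound CP": cp, "free pyruvate": free_pyruvate,
            "cysteine consumed": cysteine_consumed}

print(product_balance(sulfide_umol=1.0, fraction_via_thiazolidine=0.7))
# Cysteine consumption (1.7 umol) exceeds the sulfide yield (1.0 umol), reproducing
# the qualitative pattern of Table I in which cysteine disappearance outruns each product.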
2018-04-03T03:40:45.229Z
1973-09-10T00:00:00.000
{ "year": 1973, "sha1": "0aa94179c73d48a753f25b3d340f3c9c417003db", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(19)43526-6", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "90355c4fa1b5f370a4e7a57d629d1239077c75e2", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
231768762
pes2o/s2orc
v3-fos-license
Acute lymphoblastic leukemia masquerading as acute myelofibrosis: a report of two cases and literature review jection every 1–2 weeks was effective and safe for treating the Korean boy with severe hemophilia A and a high titer inhibitor. Emicizumab dramatically reduced bleeding episodes, the use of BPA, and overall medical burden. The patient had a history of complications, including severe melena, chemoport site hematoma, and large hematomas at other body sites during conventional treatment; conversely, this patient had no bleeding episodes requiring BPAs during emicizumab prophylaxis. Additionally, an invasive procedure such as chemoport removal operation was possible for the patient during emicizumab prophylaxis, without any adverse events. Emicizumab treatment resulted in improved quality of life and convenience for the patient and his parents. Acute lymphoblastic leukemia masquerading as acute myelofibrosis: a report of two cases and literature review TO THE EDITOR: Unlike primary myelofibrosis (PMF), acute myelofibrosis (AMF) is a distinct clinicopathological entity characterized by the sudden onset of pancytopenia, extensive bone marrow (BM) fibrosis, megakaryocytic hyperplasia with or without dysplasia, leukoerythroblastic blood picture, and absence of hepatosplenomegaly (HSM) and no tear drop cells [1][2][3][4][5][6][7][8]. AMF is an uncommon presentation of acute myeloid leukemia (AML; particularly AML-M7), acute panmyelosis with myelofibrosis, and occasionally myeloproliferative neoplasm [especially chronic myeloid leukemia (CML)] [9,10]. BM fibrosis has been reported in acute lymphoblastic leukemia (ALL) at diagnosis (B cell-ALL>T cell-ALL); although BM fibrosis has been shown to correlate with a low minimal residual disease (MRD)-negative rate at the end-of-induction (EOI) [11], AMF in association with ALL is extremely rare. AMF may be either concurrent with ALL or precede its onset. We present two cases of ALL that were preceded by AMF and a literature review. Case 1 A 50-year-old man presented to our hospital in June 2018 with a two-month history of progressive fatigue. Past medical history was significant for diabetes mellitus, which was well controlled with antidiabetic medications. Examination revealed marked pallor and no HSM or lymphadenopathy. Complete hemogram revealed pancytopenia (hemoglobin, 6 g/dL; white cell counts, 3.25×10 9 /L; and platelets, 78×10 9 /L) without any atypical cells. BM aspiration (BMA) performed at an outside hospital in May 2018 was a dry tap. BM biopsy revealed extensive reticulin fibrosis [World Health Organization (WHO) grade 2] without any blasts (Fig. 1). He was managed symptomatically with blood transfusions. In July 2018, peripheral blood (PB) examination showed a left shift with 13% blasts. He did not have eosinophilia or basophilia. Moreover, BMA performed in July 2018 was a dry tap. BM biopsy revealed dense reticulin fibrosis (WHO grade 2) admixed with blasts. Findings of PB flow cytometry (FCM) were consistent with the diagnosis of precursor B-cell ALL (Pre-B ALL; Fig. 2A). BM cytogenetics could not be performed due to dry tap. Reverse transcription polymerase chain reaction (RT-PCR) using PB was positive for the BCR-ABL transcript (p190), whereas JAK-2, CALR, and MPL mutations were negative. He was treated with the European Working Group for Adult ALL (EWALL) protocol for Philadelphia-positive ALL (Ph + -ALL) along with imatinib. 
EOI BMA and biopsy (day 33) were in morphological remission, with significant resolution of BM fibrosis (WHO grade 0-1). Complete molecular remission was achieved at 3 months. However, the disease relapsed after 1 year, and he died 2 months after relapse. Case 2 A 14-year-old boy presented to our hospital in May 2019 with complaints of high-grade fever and epistaxis of 1-month duration. Examination revealed marked pallor and no HSM or lymphadenopathy. Complete hemogram revealed pancytopenia (hemoglobin, 5 g/dL; white cell counts, 0.8×10^9/L; and platelets, 20×10^9/L). In March 2019, BMA performed at an outside hospital was a dry tap. BM biopsy revealed extensive reticulin fibrosis (WHO grade 3) without any immature cells or granuloma. In addition, in May 2019, repeat BMA performed at our hospital was a dry tap. BM biopsy revealed reticulin fibrosis (WHO grade 2) with occasional immature cells [CD34+ and CD117+ on immunohistochemistry (IHC)]. However, the immature cells could not be characterized further by IHC. The patient was closely followed up and supported with transfusions when indicated. Fever work-up was inconclusive. The vitamin D and parathyroid hormone levels were within normal limits, and antinuclear antibodies were absent. In June 2019, PB findings were unremarkable except for occasional blasts, FCM of PB was inconclusive, and BMA was a dry tap. BM biopsy revealed dense fibrosis (WHO grade 2), dysplastic megakaryocytes, and occasional blasts. The patient was managed symptomatically with transfusion support. In August 2019, BM was successfully aspirated from the sternum. FCM of BMA revealed early T-precursor ALL (ETP-ALL, Fig. 2B). BM cytogenetic analysis revealed a normal male karyotype. He was treated with the Berlin-Frankfurt-Münster (BFM)-95 protocol for ALL. EOI BMA and biopsy on day 33 revealed morphological remission and significant resolution of BM fibrosis (WHO grade 0-1). He was continued on chemotherapy and is currently in the maintenance phase. We reported two cases of ALL (pediatric ETP-ALL and adult Ph+-ALL). In these cases, AMF preceded the diagnosis of ALL by 5 and 3 months, respectively. No cases of ETP-ALL and Ph+-ALL presenting as AMF have been reported to date. Megakaryocyte dysplasia was present in the case-2 patient. Most probably, cytokines released from lymphoblasts (and from megakaryocytes in the case-2 patient) resulted in AMF in our patients [1,8]. This is supported by the fact that BM fibrosis was significantly reduced after ALL treatment. Secondary MF is commonly observed in CML at diagnosis [16]. The absence of HSM, eosinophilia, basophilia, and dwarf megakaryocytes in the BM, together with a relatively short history, argued against the possibility of CML with lymphoid blast crisis in the case-1 patient; hence, the diagnosis of Ph+-ALL was favored. The diagnosis of ALL was delayed because the BM was inaspirable due to AMF and PB blasts were absent at presentation. Initial BM biopsies revealed only MF without any blasts. Subsequently, PB blasts were detected in both patients (case-2: 4 mo and case-1: 3 mo), which raised the suspicion of an underlying acute leukemia.

Table 1. Clinical and pathological review of seven cases of acute myelofibrosis with acute lymphoblastic leukemia reported in the literature.

The current report highlights that AMF can either occur concurrently with or precede the diagnosis of acute leukemia. In the latter case, after excluding the secondary causes of MF and PMF, the suspicion of acute leukemia masquerading as AMF should be maintained.
The diagnosis and subtyping of acute leukemia are frequently delayed, either due to technical challenges in obtaining a BM aspirate, the absence of blasts in the initial BM biopsies, or a low percentage of PB blasts during subsequent follow-ups, mandating close follow-up of the patients. Because dense BM fibrosis can mask the blasts, IHC analysis should be performed for immature cells in cases of AMF. Repeat BM examinations, particularly from different sites, may aid in obtaining sufficient sample for FCM analysis. ALL treatment usually results in AMF resolution. The limited number of patients precludes evaluation of the effect of AMF on the EOI MRD and prognosis. Thus, we reported a rare presentation of ETP-ALL and Ph+-ALL and their diagnostic challenges.
2021-02-03T06:17:16.838Z
2021-01-28T00:00:00.000
{ "year": 2021, "sha1": "bbe2823b1e65c1f3f434dda9866349bdb17f061e", "oa_license": "CCBYNC", "oa_url": "https://www.bloodresearch.or.kr/journal/download_pdf.php?doi=10.5045/br.2021.2020160", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bfb73c1245fc9c9a25efe521b122a68668b3bbb9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14312689
pes2o/s2orc
v3-fos-license
Severe acute respiratory infections caused by 2009 pandemic influenza A (H1N1) among American Indians—southwestern United States, May 1–July 21, 2009 Background During April–July 2009, U.S. hospitalization rates for 2009 pandemic influenza A (H1N1) virus (H1N1pdm09) infection were estimated at 4·5/100 000 persons. We describe rates and risk factors for H1N1pdm09 infection among American Indians (AIs) in four isolated southwestern U.S. communities served by the Indian Health Service (IHS). Methods We reviewed clinical and demographic information from medical records of AIs hospitalized during May 1–July 21, 2009 with severe acute respiratory infection (SARI). Hospitalization rates were determined using denominator data provided by IHS. H1N1pdm09 infection was confirmed with polymerase chain reaction, rapid tests, or convalescent serology. Risk factors for more severe (SARI) versus milder [influenza‐like illness (ILI)] illness were determined by comparing confirmed SARI patients with outpatients with ILI. Results Among 168 SARI‐hospitalized patients, 52% had confirmed H1N1pdm09 infection and 93% had >1 high‐risk condition for influenza complications. The H1N1pdm09 SARI hospitalization rate was 131/100 000 persons [95% confidence interval (CI), 102–160] and was highest among ages 0–4 years (353/100 000; 95% CI, 215–492). Among children, asthma (adjusted odds ratio [aOR] 3·2; 95% CI, 1·2–8·4) and age <2 years (aOR 3·8; 95% CI, 1·4–10·0) were associated with H1N1pdm09 SARI‐associated hospitalization, compared with outpatient ILI. Among adults, diabetes (aOR 3·1; 95% CI, 1·5–6·4) was associated with hospitalization after controlling for obesity. Conclusions H1N1pdm09 hospitalization rates among this isolated AI population were higher than reported for other U.S. populations. Almost all case patients had high‐risk health conditions. Prevention strategies for future pandemics should prioritize AIs, particularly in isolated rural areas. Introduction Historically, hospitalization and mortality rates from respiratory illness and influenza complications have been higher among American Indians (AIs) and Alaska Natives (ANs) than among the U.S. population. [1][2][3] During influenza pandemics and epidemics, AI/AN communities have had disproportionately higher morbidity and mortality, compared with other races. The reasons for these disparities are likely multifactorial and may include socioeconomic status, household crowding, and higher prevalence of chronic diseases. 2 In April 2009, a novel influenza A virus was detected, 4 now known as the 2009 pandemic influenza A (H1N1) virus (H1N1pdm09). On June 11, 2009, the World Health Organization confirmed the first influenza pandemic since 1968. 5 During April-June 2009, Indian Health Service (IHS) providers reported increasing severe respiratory illness at multiple facilities, particularly in the southwest, which were the earliest affected regions. 4 On June 10, 2009, a cluster of severe respiratory illness that included AIs was reported to a southwestern U.S. county health department. The IHS, the Centers for Disease Control and Prevention (CDC), and state public health officials were notified and began investigation of respiratory illness hospitalizations among AIs served by four IHS and tribal healthcare facilities (collectively referred to as IHS). These areas are geographically remote, with substantial poverty; they receive free health care through IHS facilities, which are the sole providers in these locations. 
We sought to determine the rate of H1N1pdm09-related hospitalizations, describe clinical characteristics and outcomes of patients hospitalized with H1N1pdm09-associated respiratory illness, and describe risk factors for hospitalization with confirmed H1N1pdm09 severe acute respiratory infection (SARI).

Case finding and medical record review Severe acute respiratory infection case finding was initiated by reviewing all hospital admissions of ≥24 hours in the IHS Health Information System for AIs who resided in the four catchment areas and were hospitalized during May 1–July 21, 2009. Hospital admissions with a respiratory chief complaint or provider diagnosis of respiratory distress were reviewed to determine whether they met SARI criteria. The four IHS service units, which represent distinct geographic regions and Indian tribes, had an overall user population of 70 018, which comprises 22% of the state's AI/AN population (315 727). We defined SARI among children aged <5 years as a hospitalization with physician-documented findings suspicious for pneumonia (fever or cough and age-appropriate tachypnea) and among persons aged ≥5 years as a hospitalization with measured temperature ≥38°C, difficulty breathing, and either cough or sore throat. [6][7][8] Medical records were reviewed at hospitals where a person with a SARI was treated, including IHS facilities and non-IHS tertiary care centers to which severely ill patients were transferred, and were abstracted using an adaptation of CDC-developed forms. 9 We collected demographic and clinical information. Clinical information included the presence of any lung disease (including asthma, chronic obstructive pulmonary disease, and interstitial lung disease), diabetes, immunosuppressive medical conditions, cardiovascular disease, chronic renal disease, neurocognitive disorder, neuromuscular disorder, and pregnancy. The body mass index (BMI, weight in kilograms/meters²) was calculated among persons, excluding pregnant women and children aged <2 years. Obesity was defined among adults aged >19 years and children aged 2–19 years by standard definitions. 10 Obesity in adults aged >19 years was classified as Class I (BMI 30–34·9), Class II (BMI 35–39·9), and Class III (BMI ≥40). 10

Laboratory diagnostic testing Per routine care, nasopharyngeal (NP) specimens were collected and tested for H1N1pdm09. Specimens were tested at the state public health laboratory using the established CDC protocol for real-time reverse transcriptase-polymerase chain reaction (rRT-PCR). 11 Convalescent serum specimens were collected 15–90 days after symptom onset from 122 persons who met the SARI case definition with uncertain H1N1pdm09 status. Specimens were stored at −70°C until tested for the presence of H1N1pdm09-specific antibodies using both a microneutralization assay (MN) and a turkey red blood cell hemagglutination-inhibition (HI) assay (HI and A/California/07/2009-like virus). 12 SARI cases were considered as laboratory confirmed for H1N1pdm09 if ≥1 of the following were positive: rRT-PCR for H1N1pdm09, viral culture for H1N1pdm09, BinaxNOW® Rapid Influenza A and B diagnostic tests (Inverness Medical, Ballybrit, Galway, Ireland) for influenza A, or a single convalescent serology sample with an MN titer ≥40 and an HI titer ≥20. This combination of pH1N1-specific antibody titers was demonstrated to provide 90% sensitivity and 96% specificity for detection of H1N1pdm09 infection among U.S. persons aged <60 years and 92% specificity among those aged 60–79 years. 12
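As a small illustration of the adult obesity classification used above, the sketch below assigns the BMI classes named in the text (cutoffs as cited from reference 10); the example height and weight are invented.

# Minimal sketch of the adult obesity classification described in the text.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def adult_obesity_class(bmi_value: float) -> str:
    if bmi_value >= 40:
        return "Class III obesity"
    if bmi_value >= 35:
        return "Class II obesity"
    if bmi_value >= 30:
        return "Class I obesity"
    return "not obese"

example = bmi(weight_kg=95.0, height_m=1.65)   # hypothetical patient
print(f"BMI = {example:.1f} -> {adult_obesity_class(example)}")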
Because of inadequate specificity of the MN and HI criteria for persons aged ≥80 years, those with elevated titers in MN and HI were considered to have an indeterminate serologic test result.

Data collection and analysis We estimated H1N1pdm09-associated hospitalization rates using denominator data provided by IHS, which defines its denominator user population for a given facility as AI persons with ≥1 inpatient or outpatient visit during the previous 3 years. Case patients identified during the study period who did not meet the criteria for potential inclusion in the denominator were not included in the numerator and were therefore excluded. These facilities represented the only source of health care through IHS in the areas covered. Incidence rates were age adjusted to the 2000 U.S. standard population. 13 Differences in clinical characteristics and outcomes were assessed between confirmed and unconfirmed SARI cases by the chi-squared test or by Fisher's exact test for nominal variables and by the Wilcoxon rank sum test for ordinal variables and were considered statistically significant at P < 0.05. To assess risk factors for hospitalization, we conducted a case-case analysis comparing patients hospitalized with laboratory-confirmed H1N1pdm09-related SARI with illness onset between May 1 and July 21, 2009, with outpatients with influenza-like illness (ILI) from the same four populations over the same time. Outpatients with ILI were identified using an established algorithm that defines ILI as a patient visit in which the patient had a temperature ≥37·8°C and 1 of 24 ILI-related International Classification of Disease Revision-9 (ICD-9) codes or a physician diagnosis of an influenza-specific ICD-9 code (Appendix S1). This ILI definition has been demonstrated to have a sensitivity of 96·4% and a specificity of 97·8% for detecting chart-confirmed ILI [JW Keck, JT Redd, JE Cheek, et al., manuscript in preparation]. The presence of asthma, diabetes mellitus, obesity, and pregnancy in both case patients and the background population was determined from ICD-9 codes in the electronic health records; BMI was calculated using height and weight measurements. Risk factors for severe illness (SARI) versus milder illness (ILI), including diabetes, asthma, and obesity, were assessed by calculating crude and adjusted odds ratios, the latter using logistic regression. Risk factors were measured in two different models for adults aged >18 years: (i) comparing any SARI with any ILI, and (ii) comparing H1N1pdm09-confirmed SARI with ILI. These risk factors were chosen because of their high frequency among the H1N1pdm09-related SARI group, the availability of comparison risk factor data in the ILI group, and their high population prevalence (Appendix S2). Multivariable logistic regression was used to simultaneously evaluate the effect of all possible risk factors either identified in univariate analyses or clinically suspected to have an association with severe illness. We assessed for interactions between obesity and diabetes and between age-group and asthma, diabetes, and obesity. Age was categorized as <2, 2–4, 5–18, 19–24, 25–49, 50–64, and ≥65 years to match age-groups reported elsewhere and to account for known age-groups at high risk. 14 Because of insufficient observations for analysis, we did not assess as risk factors pregnancy among females or diabetes and obesity among children. All analyses were performed with SAS® version 9·2 (SAS Corporation, Cary, NC, USA).
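The hospitalization rate calculation described above can be illustrated with a minimal sketch: a crude rate per 100 000 with a normal-approximation 95% confidence interval. The counts used are the overall figures quoted in the text (88 confirmed cases; user population of 70 018); this is a crude illustration only and will not reproduce the published age-adjusted rate.

# Crude hospitalization rate per 100,000 with an approximate 95% CI (illustrative only).
import math

cases = 88
population = 70_018

rate = cases / population * 100_000
se = math.sqrt(cases) / population * 100_000   # Poisson approximation for the SE of the count
lower, upper = rate - 1.96 * se, rate + 1.96 * se
print(f"crude rate ~ {rate:.0f} per 100,000 (95% CI {lower:.0f}-{upper:.0f})")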
Ethics review This investigation was part of emergency public health response to the pandemic and underwent human subjects review by IHS, regional tribal authorities, and CDC. It was deemed not to be research in accordance with Federal Regulations 46Á101c and 46Á102d and CDC's Guidelines for Defining Public Health Research and Public Health Non-Research. Adults provided verbal informed consent for serologic specimen collection, and parents and children together gave verbal consent for children aged <18 years. Epidemiology, diagnosis, and treatment One hundred sixty-eight persons with a SARI were identified at four IHS facilities during May 1-July 21, 2009. Of these, 88 (52%) cases were confirmed H1N1pdm09-related SARI ( Figure 1). The epidemic curve of SARI and H1N1pdm09 SARI cases indicates sustained transmission during the investigation period ( Figure 1). Confirmed H1N1pdm09 SARI was most frequent among ages 0-4 years (28%), ages 25-49 years (33%), and females (65%) ( Table 1). The age distribution was significantly different in confirmed versus unconfirmed cases (P < 0Á001). The majority (83%) of confirmed cases received antibiotic therapy, whereas approximately one-third received antiviral therapy. Confirmed cases were more likely to receive antiviral therapy than unconfirmed cases. Appendix S3 summarizes diagnostic testing. Of 168 persons with SARI, 3% did not receive any diagnostic testing. Fifty-six percent were diagnosed by convalescent serology alone. Fifty-eight (46%) of 125 SARI cases who received convalescent serologic testing were positive for H1N1pdm09. Confirmed Not Confirmed Clinical characteristics American Indians with confirmed H1N1pdm09 hospitalization in this investigation presented with similar signs and symptoms as other U.S. H1N1pdm09 hospitalizations (Appendix S4). 9 The majority (93%) of persons with confirmed H1N1pdm09 infection had ! 1 condition conferring high risk of influenza complications (e.g., age <5 years, obesity, or other medical condition) ( Table 2). [14][15][16] The most common medical conditions among those with H1N1pdm09 infections were obesity (71%) and lung disease (29%) for all SARI patients and diabetes (31%) for adult SARI patients. By comparison, in the overall catchment area, background prevalence of obesity was 38% and of asthma 27%; the prevalence of diabetes mellitus was 14% overall (Appendix S2), but increased with age among ages ! 25 years. Intensive care unit (ICU) admission was reported for 26% of patients with H1N1pdm09-related SARI. A smaller proportion of ICU (3/23, 13%) versus non-ICU (16/65, 25%) patients with H1N1pdm09 SARI received antivirals within 2 days of symptom onset, but differences were not significant. Three of 88 (3%) H1N1pdm09-related SARI patients died during hospitalization (two received antivirals, 4 days and 33 days after symptom onset); an additional two patients died with SARI but were not tested for H1N1pdm09. All deceased patients developed acute respiratory failure, 4 of 5 developed sepsis, 4 of 5 required mechanical ventilation, and 4 of 5 developed acute respiratory distress syndrome. Sixty-eight percent of all persons with H1N1pdm09 SARI had abnormal chest radiographs ( Table 2). Admission to ICU (33% among adults versus 13% among children, P = .037) and acute respiratory distress syndrome (14% among adults versus 0% among children, P = 0.046) were significantly more common among adults than among children. 
Risk factors for H1N1pdm09-related severe acute respiratory infection Among children, multivariable analysis adjusted for sex and age-group showed that asthma [adjusted odds ratio (aOR) 3Á2; 95% CI, 1Á2-8Á4] and age <2 years (aOR 3Á8; 95% CI, 1Á4-10Á0) were risk factors for H1N1pdm09-related SARI (similar associations were found comparing all SARI to ILI cases). Among adults, multivariable analysis adjusting for age-group, sex, asthma, and obesity showed that diabetes (aOR 3Á1; 95% CI, 1Á5-6Á4) was a risk factor for H1N1pdm09-related SARI (Table 3). In comparison, the effect estimate for diabetes was lower in a model of all SARI versus ILI (aOR 1Á8; 95% CI 1Á0-3Á2). Obesity slightly mitigated the association between DM and severe illness (H1N1pdm09 SARI), although diabetes was associated with at least a twofold increase in odds of severe illness (H1N1pdm09 SARI) among both obese and non-obese persons. Among adults, asthma and obesity were not statistically significantly associated with higher odds of being hospitalized with H1N1pdm09-related SARI. Discussion Our investigation estimates the influenza-related hospitalization burden among an AI/AN population residing in the southwestern United States during the initial 3 months of the 2009 H1N1pdm09 pandemic. We report hospitalization rates substantially higher than previously reported. Because of our comprehensive case ascertainment, our rates likely 19 In contrast, we report on rates >300 per 100 000 persons aged <5 years and overall rates of 131 per 100 000 persons who accessed any care at these IHS facilities. A higher influenza-related hospitalization toll has been noted among minority and indigenous populations nationally and internationally. In national enhanced surveillance, H1N1pdm09-associated hospitalizations were most common among AI/ANs, but were also more common among Hispanics and non-Hispanic Blacks than among non-Hispanic Whites (NHWs). 20 An Alaska study reported a 2-4 times higher hospitalization rate among Alaska Native (AN) and Asian/Pacific Island persons, compared with NHWs. 21 New Mexico surveillance data suggested a 2Á6-fold greater likelihood of hospitalization among AIs, compared with NHWs. 22 During our investigation, use of convalescent serology to retrospectively confirm H1N1pdm09-related SARI led to an approximate doubling of confirmed cases than reported by routine diagnostics. The rates in this report without serologic confirmation were still substantially higher than reported elsewhere. Internationally, indigenous populations in Canada, New Zealand, and Australia experienced elevated incidence of hospitalizations relative to the general population, prompting public health focus on vulnerable indigenous populations. [23][24][25][26] The reasons for higher H1N1pdm09-associated hospitalizations among AIs are likely multifactorial. Chronic medical conditions are more prevalent among AI/ANs than NHWs and may contribute to higher rates. [27][28][29] Additionally, the incidence of infection may be higher among AI/ANs. Environmental reasons for higher rates of respiratory illness (e.g., household crowding, indoor air pollution, and less access to hand hygiene) have been reported 30-32 but never specifically studied in relation to influenza. 
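To illustrate the kind of adjusted odds ratio reported above, the sketch below fits a multivariable logistic regression and exponentiates the coefficients; the data frame here is a tiny invented example (not the study data), the variable names are placeholders, and pandas, numpy, and statsmodels are assumed to be available.

# Illustrative multivariable logistic regression for adjusted odds ratios (placeholder data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "obesity": rng.integers(0, 2, n),
    "age_50plus": rng.integers(0, 2, n),
})
# Simulated outcome: hospitalization made more likely with diabetes, for illustration only.
logit = -1.0 + 1.1 * df["diabetes"] + 0.3 * df["obesity"] + 0.4 * df["age_50plus"]
df["hospitalized"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["diabetes", "obesity", "age_50plus"]])
fit = sm.Logit(df["hospitalized"], X).fit(disp=False)
odds_ratios = np.exp(fit.params)                 # adjusted ORs
conf_int = np.exp(fit.conf_int())                # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("aOR"),
                 conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))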
Despite the plausibility of these explanations, surveillance data from IIAS revealed ILI rates similar to rates from other ILI surveillance [JW Keck, JT Redd, JE Cheek, et al., manuscript in preparation], suggesting that higher hospitalization rates may not be a reflection of more persons being infected but rather of more severe outcomes among the infected. This interpretation should be made with caution because careseeking habits and threshold for hospitalization in IHS facilities may be different from those in other populations. Finally, and importantly, antiviral treatment rates (35%) were substantially lower than reported nationally (75%), 9 which may have contributed to more severe disease. Efforts are needed to monitor and improve antiviral use among AI/ AN populations. Among children aged <5 years, hospitalization rates were particularly high, in comparison with national data. 9,17 Other studies have also indicated disparities between AI/AN and overall U.S. infectious disease hospitalization rates in this age-group, especially from lower respiratory tract infection in infants. 33,34 It is possible that the threshold for hospitalization of AIs aged <5 years in these isolated, rural areas may have been lower than for similarly aged children elsewhere. Among adults, diabetes was an independent risk factor for hospitalization with H1N1pdm09-related SARI, even after accounting for obesity, compared with outpatients with ILI. The effect of diabetes was less pronounced when we compared all SARI inpatients (confirmed and unconfirmed) with all ILI outpatients, which may suggest that the risk from diabetes was different in persons infected by H1Npdm09 than in persons infected by other respiratory pathogens. Among children, asthma and age <2 years were independent risk factors for hospitalization, compared with outpatients with ILI. These findings are consistent with recognized risk factors for adverse influenza outcomes reported among the U.S. population. 14 Other investigations have reported an association between morbid obesity and severe H1N1pdm09 illness. 15,16 In our analysis, obesity was not a statistically significant independent risk factor, despite increasingly elevated odds ratios with increasing obesity class. Potential biases may limit our conclusions. First, our case ascertainment was more exhaustive than state and national reportable data, making comparisons difficult. Nevertheless, we found higher AI hospitalization rates compared with U.S. rates reported elsewhere 9 even without the serologic testing, suggesting that disparities are difficult to explain by differential case ascertainment alone. Second, without a laboratory-confirmed outpatient comparison group with H1N1pdm09, risk factors for hospitalization should be interpreted with caution, although the proportion of ILI with confirmed infection did increase as the pandemic evolved. 35 Third, analysis of the role of obesity is limited by many persons with missing height or weight (19% of H1N1pdm09 hospitalizations and 30% of non-hospitalized ILI). Fourth, the user population denominator may undercount young persons who might not access care as often as older persons and might not account for persons who lived previously in these areas but moved; however, the majority of AI children aged <5 years receive regular primary care at IHS facilities, and a "visit" is counted even if for a service such as immunization alone. 
Fifth, the sensitivity and specificity of convalescent serology in the diagnosis of H1N1pdm09 has not specifically been validated among AI/AN populations. Finally, the experience among these southwestern AIs might not represent the AI population nationally. Our findings suggest that the burden of pandemic influenza hospitalization among AI/ANs particularly in rural areas was substantially higher than rates in most other groups in the United States. Given the worldwide burden of H1N1pdm09-related illness among indigenous populations, 23-26 the high prevalence of risk factors for severe outcomes of respiratory illness among AI/AN populations, and the morbidity and mortality findings among AI/ANs during the U.S. pandemic response, 21,22,36 AI/ANs have been designated by the Advisory Committee on Immunization Practices as a high-risk population for severe influenza illness. 37,38 The high morbidity and mortality among AI/ANs during the H1N1 pandemic accentuate the importance of protecting such vulnerable populations through aggressive vaccination efforts and prioritizing them for treatment, prevention, and health education. Financial support This study was financially supported by the Centers for Disease Control and Prevention, Indian Health Service. Table 3. Diabetes, obesity, and asthma as risk factors for (a) Hospitalization with any severe acute respiratory infection (SARI) and (b) Hospitalization with H1N1pdm09-related SARI, as compared to non-hospitalized influenza-like illness (ILI) among American Indian adults aged >18 years, May 1-July 21, 2009, southwestern United States Severe pandemic influenza among American Indians Published 2013. This article is U.S. Government work and in the public domain in the USA.
2016-05-16T12:44:36.332Z
2013-05-30T00:00:00.000
{ "year": 2013, "sha1": "f368831ebb95ca3a31899a198eda4f16f769f104", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc4634245?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f368831ebb95ca3a31899a198eda4f16f769f104", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270308447
pes2o/s2orc
v3-fos-license
Karyotype analysis of the tiger barb (Puntigrus tetrazona Bleeker, 1855) of North Sumatra The tiger barb, Puntigrus tetrazona, is an ornamental fish native to Indonesia with a notably striking appearance. The research was conducted to perform a karyotype analysis of the tiger barb chromosomes, including their number, sizes, and shapes. The study employed a descriptive approach, applying 0.007% colchicine for 12 hours. Chromosome preparation was carried out using the solid tissue technique, and staining was performed with 5% Giemsa stain. Data analysis was performed using IdeoKar 1.3 software, and the karyotype of the chromosomes was constructed using Adobe Photoshop CS3. The results of the study showed that the tiger barb had a chromosome number of 2n = 50, with chromosome sizes ranging from 1.16 µm to 5.35 µm. The largest value of r was 0.93, and the largest chromosome had a relative length percentage of 5.57. The chromosome shape percentage was 2.54. The highest centromere index obtained was 0.47. Furthermore, the study found a TF% value of 40.48%, an AsK% value of 59.51%, an XCI value of 0.38, and an HCL value of 90.08. This research provides an understanding of the karyotype characterization of the tiger barb and an insight for further research in fish genetics.

Introduction A karyotype is the set of chromosomes within the cells, in this case of the tiger barb, and contains identifiable genetic information. In the interest of ornamental fish aquaculture, it gives an insight into the karyotype of the tiger barb and can be used to support selective breeding by identifying individuals with better breeding potential. The tiger barb is characterized by its polychromatic coloration and distinctive form. As such, the tiger barb is highly valued according to its varying types, colors, sizes, and shapes. Previous studies have analyzed the karyotype of the tiger barb, such as that conducted by [1], which reported that the tiger barb has 50 chromosomes consisting of 17 pairs of metacentric, 3 pairs of sub-metacentric, and 5 pairs of acrocentric chromosomes. Another study, conducted by [2], also reported 50 chromosomes, but the shapes and patterns of the chromosomes differed from the previous report: 3 pairs of metacentric, 14 pairs of sub-metacentric, and 8 pairs of acrocentric chromosomes. As such, this research was performed to revise and reconcile the current understanding of the karyotype structure of the tiger barb and the characterization of its chromosomes. The karyotype analysis of the tiger barb (Puntigrus tetrazona) was conducted using the IdeoKar 1.3 software. The analysis was performed to obtain more accurately calibrated data on the number and shapes of the chromosomes, the lengths and ratios of the chromosome arms, and other parameters that have rarely been studied previously. The IdeoKar 1.3 software was used so that the cytogenetic measurements could be performed more accurately than in previous studies.

Research Method Samples were collected from various ornamental fish distributors in Medan. The karyotype analysis was conducted on a sample of 20 tiger barbs, which were treated with 0.007% colchicine for 12 hours; chromosome preparation used the solid tissue technique, and staining was done with 5% Giemsa. The samples were examined under an AmScope microscope and then analyzed using IdeoKar 1.3 and Adobe Photoshop CS3 software.
The sample fish were treated with a 0.007 % colchicine solution for 12 hours, during which they were kept alive in a small aquarium fitted with an aerator. Afterwards, incisions were made to extract the gills, which were soaked in a hypotonic potassium chloride (KCl) solution for 80 minutes, with the solution replaced every 40 minutes, before being fixed in Carnoy's solution. The chromosomes were prepared using a modified solid tissue technique and then stained with a 5 % Giemsa solution. Observations were made using a microscope at magnifications of 100x, 400x, and 1000x. Observations at 1000x magnification yielded the best results and were analysed further to determine the number and shapes of the chromosomes. The chromosome number was estimated from a frequency distribution, and the diploid count with the highest frequency was used; the chromosomes were then sorted and paired according to their shapes and sizes. Further karyotype analysis, such as measurement, counting and shape determination of the chromosomes, was conducted using the IdeoKar 1.3 software, and the karyotype was reconstructed by pairing homologous chromosomes in Adobe Photoshop CS3.

Results and Discussion

The karyotype analysis of the tiger barb (Puntigrus tetrazona) chromosomes using the IdeoKar 1.3 software yielded data on the dispersion and structure of the chromosomes, as well as a visual ideogram, as shown in Figure 1. Figure 1 provides the data needed to interpret the number of diploid chromosomes in the tiger barb. The tiger barb was shown to have 25 pairs of chromosomes, totalling 50 chromosomes (2n = 50). This finding indicates that each somatic cell of the tiger barb contains 25 pairs of chromosomes. In every chromosome pair there is a maternal and a paternal chromosome, which play a crucial role in inheritance and genetic stability. Bukhsh [3] stated that fish in the family Cyprinidae normally have the same chromosome number, 2n = 50. Roesma et al. [4] further reported that five other freshwater fish species, Rasbora lateristiata, Puntigrus tetrazona, P. binotatus, P. javanicus, and Mystacoleucus padangensis, also exhibit the same chromosome number, indicating similarity between these species; the same can be said of the tiger barb (Puntigrus tetrazona). Previous research by Ohno et al. [1] showed that the tiger barb has 2n = 50 chromosomes, a count also reported in [2]. While a similar chromosome number indicates a degree of similarity between species, other variables should also be accounted for, such as more detailed data on the structure of the chromosomes, as shown in Table 1. Table 1 indicates that the largest chromosome measured 5.35 µm and the smallest 1.16 µm. The longest chromosome arm was found on the first chromosome, while the shortest was on the 25th. The 25th chromosome had the lowest values, namely 0 for the variables AR, r-value, F, and CI; as such, it is classified as a telocentric chromosome.
Differences in chromosome size are most likely attributable to differences that occur at the time of mitotic division [5]. This is also supported by research [6] showing that changes in the length and volume of chromosomes occur during mitosis. The centromere index was used to classify the chromosomes by shape, with a centromere index of <10 % for telocentric chromosomes, 10-25 % for sub-metacentric chromosomes, 25-75 % for metacentric chromosomes, and 75-90 % for sub-telocentric chromosomes. Lucas et al. [7] introduced a method for calculating the centromere index by dividing the length of the longest chromosome arm by the total length of the chromosome arms; this ratio ranges from 0.5 for metacentric chromosomes up to 1.0 for telocentric chromosomes. This research in particular found that metacentric chromosomes were the most frequent shape in the distribution.

Table 2 shows the karyotype asymmetry indices obtained using IdeoKar 1.3. The TF value of 40.48 % indicates that 40.48 % of all the chromosomes in the karyotype have the same, or at least a similar, shape. The Ask value of 59.51 % reveals that the majority of the chromosomes in the karyotype are metacentric, while an XCI of 0.38 indicates that the centromere position is skewed towards the centre. An HCL value of 90.08 indicates that haploid chromosome length varies between species, and even between individuals within the same species. As shown in Figure 2, the karyotype of the tiger barb (Puntigrus tetrazona) has 50 chromosomes consisting of 19 pairs of metacentric, 3 pairs of sub-metacentric, 2 pairs of sub-telocentric, and one pair of telocentric chromosomes. This finding contradicts previous research on the chromosomes of the tiger barb, such as [1], in which the tiger barb was reported to have 50 chromosomes consisting of 17 pairs of metacentric, 3 pairs of sub-terminal, and 5 pairs of acrocentric chromosomes. A similar contradiction is found when juxtaposing the findings of [2], in which the tiger barb was reported to have 50 chromosomes consisting of 3 pairs of metacentric, 14 pairs of sub-metacentric, and 8 pairs of acrocentric chromosomes. It is important to note that those two studies measured only two variables, the shape and number of the chromosomes, whereas this research considered a broader set of parameters. To represent the sequence and relative position of the chromosomes in a cell, an ideogram of the tiger barb (Puntigrus tetrazona) karyotype was constructed based on the observed numbers and structures of the chromosomes.

Figure 2. Structure and ideogram of the chromosome karyotype of the tiger barb.
Table 1. Data analysis of the tiger barb chromosomes.
Table 2. Data analysis of the karyotype asymmetry index.
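The arm-length, arm-ratio and centromere-index calculations described above are straightforward to reproduce. The following is a minimal Python sketch, not the IdeoKar implementation, that classifies chromosomes from short-arm/long-arm lengths using the centromere-index class boundaries quoted in the text and computes TF% in the usual way (summed short-arm length over summed total length); the arm lengths used here are hypothetical and are not measured values from this study.

```python
# Illustrative sketch (not IdeoKar): classify chromosomes from arm lengths.
# Centromere index (CI) = short arm / total length, in percent, using the
# class boundaries quoted in the text above.

def classify(ci_percent):
    """Return a shape class for a centromere index given in percent."""
    if ci_percent < 10:
        return "telocentric"
    elif ci_percent < 25:
        return "sub-metacentric"
    elif ci_percent <= 75:
        return "metacentric"
    else:
        return "sub-telocentric"

# Hypothetical (short arm, long arm) lengths in micrometres -- illustration only.
arms = [(2.50, 2.85), (1.20, 3.10), (0.00, 1.16)]

for short, long in arms:
    length = short + long
    ci = 100.0 * short / length
    ar = long / short if short > 0 else float("inf")   # arm ratio (AR)
    print(f"length={length:.2f} um  CI={ci:.1f}%  AR={ar:.2f}  -> {classify(ci)}")

# Total form percentage (TF%): summed short-arm length over summed total length.
tf_percent = 100.0 * sum(s for s, _ in arms) / sum(s + l for s, l in arms)
print(f"TF% = {tf_percent:.2f}")
```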
Contractive maps in Mustafa-Sims metric spaces The fixed point result in Mustafa-Sims metrical structures obtained by Karapinar and Agarwal [Fixed Point Th. Appl., 2013, 2013:154] is deductible from a corresponding one stated in terms of anticipative contractions over the associated (standard) metric space. Introduction Let X be a nonempty set; and d : X × X → R + := [0, ∞[ be a metric over it; the couple (X, d) is called a metric space. Call the subset Y of X, almost singleton (in short: asingleton) provided [y 1 , y 2 ∈ Y implies y 1 = y 2 ]; and singleton, if, in addition, Y is nonempty; note that, in this case, Y = {y}, for some y ∈ X. Further, let T ∈ F (X) be a selfmap of X. 1a) We say that T is a Picard operator (modulo d) if, for each x ∈ X, the iterative sequence (T n x; n ≥ 0) is d-convergent 1b) We say that T is a strong Picard operator (modulo d) if, for each x ∈ X, (T n x; n ≥ 0) is d-convergent, and lim n (T n x) belongs to Fix(T ) 1c) We say that T is a globally strong Picard operator (modulo d), if it is a strong Picard operator (modulo d) and (in addition), Fix(T ) is an asingleton (or, equivalently: singleton). The sufficient (regularity) conditions for such properties are being founded on orbital concepts (in short: o-concepts). Namely, call the sequence (z n ; n ≥ 0) in X, orbital (modulo T ), when it is a subsequence of (T n x; n ≥ 0), for some x ∈ X. Here, diam(U ) = sup{d(x, y); x, y ∈ U } is the diameter of the subset U ⊆ X; and T (x; n) := {T i x; 0 ≤ i ≤ n}, x ∈ X, n ≥ 0; referred to as: the orbital n-segment generated by x. This result extends the ones in Banach [4], Kannan [16], and Zamfirescu [37]; see also Hardy and Rogers [14]. Since all quoted statements have a multitude of applications to the operator equations theory, Theorem 1 was the subject of many extensions. The most natural one is to pass from the "linear" type contraction above to (implicit) "functional "contractive conditions like (a03) F (d(T x, T y), d(x, y), d(x, T x), d(y, T y), d(x, T y), d(y, T x)) ≤ 0, for all x, y ∈ X; where F : R 6 + → R is a function. For a basic extension of this type, we refer to Daneš [7]; further choices of F may be found in Rhoades [29] and the references therein. Note that, all such conditions are non-anticipative; i.e., the right member of (a03) does not contain terms like d(T i u, T j v), u, v ∈ {x, y}, where i + j ≥ 3; so, the question arises of to what extent it is possible to have anticipative contractions (in the above sense). A positive answer to this was recently obtained, in the "linear" case of Theorem 1, by Dung [10]. It is our aim in the present exposition to give a further extension of this last result, in the functional context we just quoted. As an argument for its usefulness, a fixed point theorem in Mustafa-Sims metric spaces due to Karapinar and Agarwal [17] is derived. This, among others, shows that a reduction of their statement to standard metrical ones is possible, along the lines described by Jleli and Samet [15]; in contradiction with authors' claim. Further aspects will be delineated elsewhere. Functional anticipative contractions Let (X, d) be a metric space; and T be a selfmap of X. In the following, we are interested to solve the problem of the introductory part with the aid of (implicit) contractive conditions like where Φ : R 10 + → R + is a certain function. As precise, these conditions are anticipative counterparts of the (non-anticipative) condition (a03). To describe them, some conventions are needed. 
Given ϕ ∈ F (R + ), we say that T is anticipative (d; ϕ)-contractive, provided (b02) (∀x, y ∈ X): d(T x, T y) ≤ ϕ(B(x, y)); where B(x, y) = diam[T (x; 2) ∪ T (y; 1)]. The functions ϕ to be considered here are to be described as follows. Call ϕ ∈ F (R + ), increasing, provided [t 1 ≤ t 2 implies ϕ(t 1 ) ≤ ϕ(t 2 )]; denote the class of all these with F (in)(R + ). The basic properties for such functions to be used in the sequel are as follows: i) Given ϕ ∈ F (in)(R + ), we say that it is regressive, in case ϕ(t) < t, for all t > 0; hence, ϕ(0) = 0. Note that this property holds in case of ϕ being super regressive: ϕ(s + 0) < s, for all s > 0; hence, ϕ(0) = 0. Here, as usually, ϕ(s + 0) = lim t→s+ ϕ(t) is the right limit of ϕ at s > 0. iii) For the last one, we need a convention. Let ϕ ∈ F (in)(R + ) be regressive. Denote ψ(t) = t − ϕ(t), t ∈ R + ; it is an element of F (R + ); referred to as the complement of ϕ. By definition, the coercive property for this last function means: (b04) lim t→∞ (ψ(t)) = ∞: i.e.: ∀α > 0, ∃β > α: [t > β =⇒ ψ(t) > α]. By definition, it will be referred to as: ϕ is complement coercive; note that, passing to the negation operator, this property may be written as: We are now in position to state our basic result of this section. Proof. We firstly check the asingleton property of Fix(T ). Let z 1 , z 1 ∈ Fix(T ); and suppose by contradiction that z 1 = z 2 ; hence, d(z 1 , z 2 ) > 0. Clearly, so that, by the contractive condition (and ϕ=regressive) contradiction. Hence, necessarily z 1 = z 2 ; and the asingleton property follows. It remains now to establish the strong Picard property for T . Fix some x 0 ∈ X; and put (x n = T n x 0 , n ≥ 0); clearly, this is an orbital sequence. If x n = x n+1 for some n ≥ 0, we are done; so, without loss, one may assume that (b06) x n = x n+1 (hence, ρ n := d(x n , x n+1 ) > 0), ∀n. Remember that, for each x ∈ X and each n ≥ 0, T (x; n) = {T i x; 0 ≤ i ≤ n} stands for the orbital n-segment generated by x. Put also T (x; ∞) = {T i x; i ≥ 0} = ∪{T (x; n); n ≥ 0}; and call it: the orbital set generated by x. Note that, by the introduced notations, we have, for each k ≥ 0, Moreover, by the working hypothesis above, There are several steps to be passed. I) We start with the following useful evaluation Lemma 1. Under the introduced notations, Proof. (Lemma 1) The case of i = j is clear; so, without loss, one may assume i < j; hence, i + 1 ≤ j. By definition, wherefrom, combining with the contractive condition, This ends the argument. II) The following consequence of this fact is to be noted. III) The following d-Cauchy property of our iterative sequence is now available. After n steps, one thus gets and conclusion follows. There are two alternatives to be discussed. Case IV-1. Suppose that T is o-continuous. Then, (y n := T x n = x n+1 ; n ≥ 0), d-converges to T z. On the other hand, (y n ; n ≥ 0) is a subsequence of (x n ; n ≥ 0); so that, y n d −→ z. As d is sufficient, this yields z = T z. Case IV-2. Suppose that ϕ is super regressive. To get the desired fact, we use a reductio ad absurdum argument. Namely, assume that z = T z; i.e., b := d(z, T z) > 0. From the contractive property, we have where (cf. the previous notations), Note that, by the continuity of the map (x, y) → d(x, y), the sequence (λ n := d(x n+1 , T z); n ≥ 0) fulfills λ n → b as n → ∞. On the other hand, by the very definition above, the sequence (µ n := B(x n , z); n ≥ 0) fulfills There are two sub-cases to discuss. Sub-case IV-2-1. 
Suppose that (b07) for each h ≥ 0, there exists k > h, such that µ k = b. As a consequence, there exists a sequence of ranks (i(n); n ≥ 0) with i(n) → ∞ as n → ∞, such that µ i(n) = b, ∀n. Passing to limit as n → ∞, over this subsequence, in the contractive property (2.6), yields b ≤ ϕ(b); contradiction. Sub-case IV-2-2. Assume that the opposite alternative is true: there exists a certain rank h ≥ 0, such that (b08) n ≥ h =⇒ µ n > b; hence µ n → b+ as n → ∞. Passing to limit in the same contractive property (2.6), gives b ≤ ϕ(b + 0) < b; again a contradiction. Summing up, the working hypothesis about z ∈ X cannot be accepted; so, we necessarily have z = T z. The proof is thereby complete. Then, T is globally strong Picard (modulo d). (C) For the applications to be considered, the following particular case of this theorem will be useful. Denote, for x, y ∈ X, Further, given some γ ≥ 0, we say that T is (d, P, Q; γ)-contractive, provided Note that, by the convention above, this contractive condition is anticipative. The following fixed point result is available. Proof. By the very conventions above, one has So, by the accepted contractive conditions, it follows that Hence, the preceding result applies, with α = 2γ. This ends the argument. As a consequence, Theorem 4 is indeed reducible to the developments above. However, for simplicity reasons, it would be useful having a separate proof of it. Proof. (Theorem 4) [Alternate] First, we establish the asingleton property of Fix(T ). Let r, s be two points in Fix(T ). By definition, P (r, s) = 2d(r, s), Q(r, s) = d(r, s); so that, from the contractive condition, This, along with 0 ≤ 2γ < 1, yields d(r, s) = 0; whence, r = s. It remains now to establish the strong Picard (modulo d) property of T . To this end, we start from (2.7) By the contractive condition, we therefore get where 0 ≤ β := γ/(1 − γ) < 1. Fix some x 0 ∈ X; and put (x n = T n x 0 ; n ≥ 0). By the above evaluation, This tells us that (x n ; n ≥ 0) is a d-Cauchy sequence. As (X, d) is complete, there must be some (uniquely determined) r ∈ X such that x n d −→ r. We claim that r = T r; and this completes the argument. By the contractive condition, d(x n+1 , T r) ≤ γ max{P (x n , r), Q(x n , r)}, ∀n. (2.9) But, from the very definitions above, one has, for all n ≥ 0, This yields lim n P (x n , r) = d(r, T r), lim n Q(x n , r) = 2d(r, T r); whence, passing to limit in the relation (2.9), one gets d(r, T r) ≤ 2γd(r, T r). As 0 ≤ 2γ < 1, this yields d(r, T r) = 0; so that, r = T r. The proof is complete. Note that, further extensions of the obtained facts are possible, in the quasiordered setting; we do not give details. Further aspects may be found in Yeh [36]; see also Popa [28]. Dhage metrics As already precise in the introductory part, there are many generalizations of the Banach's fixed point theorem. Here, we shall be interested in the structural way of extension, consisting of the "dimensional" parameters attached to the ambient metric being increased. For example, this is the case when the initial metric d(., .) is to be substituted by a generalized metric Λ : X ×X ×X → R + which fulfills -at this level -the conditions imposed to the standard case. An early construction of this type was proposed in 1963 by Gaehler [12]; the resulting map B : X × X × X → R + was referred to as a 2-metric on X. 
Short after, this structure was intensively used in many fixed point theorems, under the model in Namdeo et al [26], Negoescu [27] and others; see also Ashraf [2, Ch 3], for a consistent references list. However, it must be noted that this 2-metric is not a true generalization of an ordinary metric; for -as shown in Ha et al [13] -the associated real function B(., ., .) is not Bcontinuous in its arguments. This, among others, led Dhage [8] to construct -via different geometric reasons -a new such object. Define a sequential D-convergence ( D −→) on (X, D) according to: x n D −→ x iff D(x m , x n , x) → 0 as m, n → ∞; i.e., (c04) ∀ε > 0, ∃i(ε): m, n ≥ i(ε) ⇒ D(x m , x n , x) ≤ ε. Note that this concept obeys the general rules in Kasahara [18]. By definition, x n D −→ x will be referred to as: x is the D-limit of (x n ). The set of all these will be denoted D-lim n (x n ); if it is nonempty, then (x n ) is called D-convergent; the class of all D-convergent sequences will be denoted Conv(X, D). Further, let the D-Cauchy structure on (X, D) be introduced as: call the sequence (x n ) in X, D-Cauchy, provided D(x m , x n , x p ) → 0 as m, n, p → ∞; i.e.: (c05) ∀ε > 0, ∃j(ε): m, n, p ≥ j(ε) ⇒ D(x m , x n , x p ) ≤ ε. The class of all these will be indicated as Cauchy(X, D); it fulfills the general requirements in Turinici [34]. By definition, the pair (Conv(X, D), Cauchy(X, D)) will be called the conv-Cauchy structure attached to (X, D). Note that, by the properties of D, each D-convergent sequence is D-Cauchy too; referred to as: (X, D) is regular. The converse is not in general true; when it holds, we say that (X, D) is complete. (B) According to Dhage's topological results in the area, this new metric corrects the "bad" properties of a 2-metric. As a consequence, his construction was interesting enough so as to be used in the deduction of many fixed point results; see, for instance, Dhage [9] and the references therein. The setting of all these is to be described as below. Let (X, D) be a D-metric space; and T ∈ F (X) be a selfmap of X. The determination of the points in Fix(T ) is to be performed under the lines of Section 1, adapted to our context: 3a) We say that T is a Picard operator (modulo D) if, for each x ∈ X, the iterative sequence (T n x; n ≥ 0) is D-convergent 3b) We say that T is a strong Picard operator (modulo D) if, for each x ∈ X, (T n x; n ≥ 0) is D-convergent, and D − lim n (T n x) belongs to Fix(T ) 3c) We say that T is a globally strong Picard operator (modulo D), if it is a strong Picard operator (modulo D) and (in addition), Fix(T ) is an asingleton (or, equivalently: singleton). Sufficient conditions guaranteeing these properties are of D-metrical type. The simplest one is the following. Call T , (D; α)-contractive (for some α ≥ 0) if (c06) D(T x, T y, T z) ≤ αD(x, y, z), ∀x, y, z ∈ X. The following fixed point statement in Dhage [8] is the cornerstone of all further developments in the area. In the last part of his reasoning, Dhage tacitly used the D-continuity of the application (x, y, z) → D(x, y, z), expressed as D(x, y, z). But, as proved in Naidu, Rao and Rao [24], the described property is not in general valid. (This must be related with the developments in Mustafa and Sims [22], according to which an appropriate construction of a topological and/or uniform structure over (X, D) is not in general possible; we do not give details). 
A conv-Cauchy motivation of this negative conclusion comes from the fact that the convergence structure Conv(X, D) attached to our D-metric space is "too large"; i.e.: for many sequences (x n ) in X, D − lim n (x n ) is the whole of X. Returning to the above discussion, note that -technically speaking -it would be possible that the conclusion in Dhage's fixed point theorem be retainable, with a different proof. However, as results from an illuminating example provided by Naidu, Rao and Rao [25], this last hope fails as well; so that, ultimately, the above stated fixed point result is not true. Hence, summing up, a fixed point theory in D-metric spaces is not available, under the admitted conditions upon the underlying structure. Mustafa-Sims metrics The drawbacks of Dhage metrical structures we just exposed, determined Mustafa and Sims [23] to look for a different perspective upon this matter. Some basic aspects of it will be described further. The following consequences of these axioms are valid. ii) The first half of (4.2) follows at once from (4.1) by taking z = y; and the second part is obtainable by replacing (x, y) with (y, x). Proposition 2. Under the above conventions, j) The mappings b(., .) and c(., .) are triangular and reflexive sufficient; hence, these are almost metrics on X jj) The mappings d(., .) and e(., .) are triangular, reflexive sufficient and symmetric; hence, these are (standard) metrics on X jjj) In addition, the following relations are valid Proof. j) It will suffice establishing the assertions concerning the map b(., .). The reflexive sufficient property is a direct consequence of (d01) and (d02). On the other hand, the triangular property is a direct consequence of (d04). In fact, by this condition, we have (taking y = z) G(x, y, y) ≤ G(x, u, u) + G(u, y, y); and, from this we are done. jj) Evident, by the involved definition. jjj) The first and second part are immediate, by Proposition 1. The third part is evident. Hence the conclusion. Remark 2. A formal verification of j) is to be found in Jleli and Samet [15]. On the other hand, jj) (modulo e) was explicitly asserted in Mustafa and Sims [23]. This determines us to conclude that j) is also clarified by the quoted authors. (C) Having these precise, we may now pass to the conv-Cauchy structure of a MS-metric space (X, G). Define a sequential G-convergence ( (d10) ∀ε > 0, ∃i(ε): m, n ≥ i(ε) ⇒ G(x m , x n , x) ≤ ε. As before, this concept obeys the general rules in Kasahara [18]. By definition, x n G −→ x will be referred to as: x is the G-limit of (x n ). The set of all these will be denoted G-lim n (x n ); if it is nonempty, then (x n ) is called G-convergent; the class of all G-convergent sequences will be denoted Conv(X, G). Call the convergence ( G −→), separated when G-lim n (x n ) is an asingleton, for each sequence (x n ) of X. Further, let the G-Cauchy structure on (X, G) be introduced as: call (x n ), G-Cauchy, provided G(x m , x n , x p ) → 0 as m, n, p → ∞; i.e.: The class of all these will be indicated as Cauchy(X, G); it fulfills the general requirements in Turinici [34]. By definition, the pair (Conv(X, G), Cauchy(X, G)) will be called the conv-Cauchy structure attached to (X, G). Call (X, G), regular when each G-convergent sequence is G-Cauchy too; and complete, if the converse holds: each G-Cauchy sequence is G-convergent. In parallel to this, we may introduce a conv-Cauchy structure attached to any g ∈ {b, c, d, e}. This, essentially, consists in the following. 
Define a sequential gconvergence ( g −→) on (X, g) according to: x n g −→ x iff g(x n , x) → 0. This will be referred to as: x is the g-limit of (x n ). The set of all these will be denoted g-lim n (x n ); if it is nonempty, then (x n ) is called g-convergent; the class of all g-convergent sequences will be denoted Conv(X, g). Call the convergence ( g −→), separated when g-lim n (x n ) is an asingleton, for each sequence (x n ) of X. Further, let the g-Cauchy structure on (X, g) be introduced as: call the sequence (x n ) in X, g-Cauchy, provided g(x m , x n ) → 0 as m, n → ∞; the class of all these will be indicated as Cauchy(X, g). By definition, (Conv(X, g), Cauchy(X, g)) will be called the conv-Cauchy structure attached to (X, g). Call (X, g), regular, when each g-convergent sequence is g-Cauchy; and complete, when the converse holds: each g-Cauchy sequence is g-convergent. jj) The assertion is clear for (X, G), by Proposition 1; as well as for (X, g) (where g ∈ {d, e}), by its metric properties. The remaining situations (g ∈ {b, c}) follow from Proposition 3 and j) above. jjj) Evident, by the previously obtained facts. Proof. By the MS-triangular property of G, G(u, v, w) ≤ G(v, y, y) + G(y, u, w), G(u, w, y) ≤ G(u, x, x) + G(x, y, w), G(w, x, y) ≤ G(w, z, z) + G(z, x, y); so that (by the adopted notations) In a similar way, one gets (by replacing (x, y, z) with (u, v, w)) These, by the symmetry of d(., .), give the written conclusion. As a direct consequence of this, we have (taking Proposition 3 into account) Proposition 6. The map G(., ., .) is sequentially G-continuous in its variables. This property allows us to get a partial answer to a useful global question. Call the MS-metric G(., ., .), symmetric if G(x, y, y) = G(x, x, y), ∀x, y ∈ X. Note that, under the conventions above, this may be expressed as: b = c; wherefrom: d = b = c, e = 2b = 2c. The class of symmetric MS-metrics is nonempty. For example, given the metric g(., .) on X, its associated MS-metric G(x, y, z) = max{g(x, y), g(y, z), g(z, x)}, x, y, z ∈ X is symmetric, as it can be directly seen. On the other hand, the class of all nonsymmetric MS-metrics is also nonempty; see Mustafa and Sims [23] for an appropriate example. Hence, the question of a certain MS-metric on X being or not symmetric is not trivial. An appropriate answer to this may be given as follows. Call the MS-metric space (X, G), perfect provided for each x ∈ X there exists a sequence (x n ) in X \ {x} with x n G −→ x. Proposition 7. Suppose that (X, G) is perfect. Then, G(., ., .) is symmetric. Proof. Let x, y ∈ X be arbitrary fixed. Further, let (y n ) be a sequence in X \ {y} with y n G −→ y. From the MS-property of G(., ., .), G(x, x, y) ≤ G(x, y, y n ), for all n. According to the authors, this fixed point statement is an illustration of the following assertion: there are many fixed point results over MS-metric structures, to which the reduction techniques in Jleli and Samet (see above) are not applicable. It is our aim in the following to show that, actually, the above stated fixed point theorem cannot be viewed as such an exception [i.e.: as an illustration of this (hypothetical for the moment) alternative]. Precisely, we shall establish that Theorem 6 is reducible to the anticipative fixed point result over standard metric spaces given in a preceding place. 
This will follow from the proposed By the strong triangle inequality G(x, T x, T 2 x) ≤ d(x, T x) + d(T x, T 2 x), G(x, T x, y) ≤ d(x, T x) + d(T x, y), G(y, T 2 x, T y) ≤ d(T 2 x, T y) + d(y, T y), G(T x, T 2 x, T y) ≤ d(T x, T 2 x) + d(T 2 x, T y); Summing up, we therefore have d(T x, T y) ≤ γ max{P (x, y), Q(x, y)}, ∀x, y ∈ X. (5.7) In other words, T is (d, P, Q; γ)-contractive (according to a preceding convention). But then, the metrical fixed point result (involving anticipative contractions) we just evoked gives us the conclusion in terms of d. The remaining conclusion (in terms of G) is a direct consequence of it, by the properties of the Mustafa-Sims convergence we already sketched. Note, finally, that this reduction process comprises as well another fixed point result over Mustafa-Sims metric spaces given by Karapinar and Agarwal [17]; we do not give details. Further aspects may be found in Samet et al [32].
The influence of terracettes on the surface hydrology of steep-sloping and subalpine environments: some preliminary findings

Alpine and mountain slopes represent important pathways that link high-altitude grazing areas to meadows and rangelands at lower elevations. Given the often acute gradients associated with such environments, they potentially represent highly efficient runoff conveyance routes that facilitate the downslope movement of runoff and associated material during erosion events. Many such slopes host series of small steps, or "terracettes". The juxtaposition of terracettes against the natural downslope flow path of non-complex slopes leads us to hypothesise that they may influence typical hillslope processes by intercepting or capturing surface runoff. Here we report preliminary results and some tentative conclusions from ongoing work to explore this possibility. Google Earth was used initially to identify a well-developed, ca. 400 m² terracette system situated on a west-facing slope with gradients ranging from 25 to 40° (46 to 84 %). A digital elevation model (DEM) of the terracettes was constructed using spatial data taken from the relevant section of a topographic map. The DEM was then queried using a flow-accumulation algorithm and the results displayed in a geographic information system. The output data provided "proof of concept" that terracettes can capture surface runoff. The generation of empirical data from a series of rainfall/runoff simulations performed on the same section of terracettes supports this finding. Results from both work components indicate that sections of a terracette system may intercept runoff and could act as preferential flow pathways. By contrast, some sections appeared to act as depositional sites. We cautiously predict that these areas could act as retention zones for the temporary storage of runoff-associated substances. Greater understanding of the exact influence of terracettes on surface hydrology in steep-sloping and subalpine environments could benefit the future management of grazing and rangelands in such areas.
Introduction

Alpine and mountain slopes represent important sources of fodder for grazing livestock. In addition, they also act as convenient routes with the potential to link high-altitude grazing areas to meadows and rangelands located at lower elevations (Michna et al., 2013; Hoffman et al., 2014). Given the often acute gradients encountered within terrain of this nature (Konz et al., 2012), their presence (theoretically) represents a convenient and potentially highly efficient runoff conveyance route with the capacity to facilitate the rapid and largely unimpeded downslope transfer of surface runoff and associated material during erosion events. Despite this realisation, however, many slopes above a certain threshold gradient (Waugh, 1995) host a series of small steps, or "terracettes", that have an appearance analogous to a wide staircase (Rahm, 1962). Whilst much has been written about the origin of terracettes and the processes thought to have led to their formation (e.g. Ødum, 1922; Rahm, 1962; Bell, 1981; Buckhouse and Krueger, 1981; Auzet and Ambroise, 1996; Henck et al., 2010), two lines of thought usually prevail. The first is that the upper layer of soil gradually moves downslope by soil creep (Carson and Kirkby, 1972; Anderson and Cox, 1978; Waugh, 1995; Heimsath et al., 2002; Anderson and Anderson, 2011). Repeated wetting and drying cycles facilitate the downslope movement, which, combined with gravity, results in the almost uniform series of small steps (Vincent and Clarke, 1980; Auzet and Ambroise, 1996; Bielecki and Mueller, 2008; Heimsath et al., 2002). Whilst terracettes may initially form by a combination of hillslope processes and gravity, many people speculate that they are actually accentuated by the trampling action of livestock, which tend to use them as convenient pathways to gain access to different areas of grazing (Buckhouse and Krueger, 1981; Waugh, 1995).

With regard to their characteristics and appearance, terracettes usually consist of individual steps that are arranged in parallel at more or less equidistant intervals from each other but generally dip at a slight angle across-slope. Despite the fact that terracettes are exclusively developed on pasture and are hence always covered with relatively dense vegetation, their distinct profile is usually visually most striking in wintertime when, for instance, snow accumulates at the back, where runners (i.e. the flat steps) and risers (i.e. the near-vertical or sloping backslopes) meet, and contrasts against the dark edge of the runners, as can be seen in Fig. 1. In terms of dimensions, runners can range in width from ca. 0.15 to 0.75 m, whereas risers can range in height from a barely perceptible 0.025 m to 1.2 m (Rahm, 1962). If viewed in cross profile, however, pairs of runners and risers are generally asymmetric, with the length and height of each usually being of unequal dimensions, as can clearly be seen in Fig. 2.
With this background information, we hypothesise that terracettes, juxtaposed against the natural downslope flow direction of often acute slopes, may intercept or impede the natural downslope flow of surface runoff (Kuhn and Zhu, 2008). We do not suggest, however, that individual terracettes form a neatly interconnected system in which runoff is forced to systematically traverse, or "zigzag", its way downslope via a series of conveniently linked flow paths. Instead, we postulate that individual terracettes may sufficiently capture, or at least impede, the flow of runoff to make it descend slowly or in stages and where, between events, it may even be temporarily stored.

Given the lack of literature on the subject, the hypothesis is largely conceptual and is tested using a package of work that firstly involves runoff modelling with GIS, followed by a series of field-based rainfall simulations. Both components sought to establish overall "proof of concept" that terracettes have the ability to influence surface hydrology by intercepting, capturing or impeding surface runoff. The first phase was done using digitized three-dimensional data. These were then queried using a flow-accumulation model and the output displayed in a GIS. The result from this first component of the investigation was then supported by the generation of empirical runoff data sets, which were obtained by conducting a series of simulated rainfall events on bounded sections of a terracette system.

Materials and method

The investigation to determine whether slope terracettes influence surface hydrology in steeply sloping environments was achieved using a two-phase approach. The first phase involved undertaking a desktop study in which "proof of concept" was established. The aim of this component of the investigation was to conceptually demonstrate the ability of terracettes to capture, and thus influence, surface runoff. After firstly identifying and then "ground-truthing" a well-developed terracette system using Google Earth, three-dimensional data were taken from a relevant topographic map and from digital imagery. Aspects from both data sources were then merged to construct a bespoke digital elevation model (DEM) of the section of the terracette system under scrutiny. The DEM was then entered into a GIS environment, queried, and the resultant data displayed using the visual output of the GIS. After obtaining proof of concept, the second, experiment-based phase was implemented. This involved applying simulated rainfall at a known intensity and for a predetermined duration across small (i.e. ∼ 1 m²), bounded erosion plots located on selected sections of terracettes in order to assess the relative time during which runoff could be generated. Information from each rainfall simulation is further supplemented with in situ bulk-density and soil-pore data, volumetric moisture content, and climate data that include precipitation, air temperature and soil matric potential for the two 5-day periods before each set of rainfall simulations was conducted.

The study site

A ca. 400 m² section of a suitably well-pronounced terracette system was provisionally identified within the Jura region of northwest Switzerland (47° 25′ 00.58″ N; 7° 18′ 30.06″ E) using Google Earth. The chosen site on which both phases of the investigation focused is situated approximately 10 km northwest of the town of Delémont (Fig. 3) and was "ground-truthed" in order to confirm that it was sufficient for the needs of the investigation and logistically accessible.
3) and was "groundtruthed" in order to confirm that it was sufficient for the needs of the investigation and logistically accessible.The geology of upland areas within the Jura mountain region is composed of Jurassic limestone.Elevations range from 500 to 1300 m a.s.l. and the elevation of the chosen study site lies approximately 710 m a.s.l.A general description of the climate of this region is given in Greenwood and Kuhn (2014); in brief, annual precipitation is around 1100 mm with maximum monthly rainfall at ca. 91 mm, usually falling between the May to September period.Autumn and winter months can receive as much as 80 % of the monthly maximum precipitation (ca.73 mm), but much of this frequently falls as snow.Sub-zero temperatures usually commence from around December onwards and can persist until early March.This means that snowfall frequently accumulates, particularly in shady valley bottoms and on east-and north-facing slopes, where it can persist and remain until mid-spring.Runoff events in the Jura region commonly occur throughout the year (Prasuhn, 1991(Prasuhn, , 2011;;Ogermann et al., 2003;Ledermann et al., 2008), resulting in soil erosion.Although incidences of erosion are usually most prevalent on cultivated hillslopes, such events are commonly initiated by unconcen-trated run-on originating from upland pasture (Ogermann et al., 2003). Despite terracettes reportedly being most prevalent on east-to north-facing slopes (Buckhouse and Krueger, 1981), the terracette system within the selected study area is relatively well-developed and located on a west-facing slope, with gradients ranging from ca. 25 to 40 • (46 to 84 %).The acute gradients and general nature of the terrain preclude any form of intensive agriculture from taking place, but the site is extensively grazed by cattle from around April to September, with stock densities equivalent to 1 livestock unit (1 unit is one cow) per hectare (G. Broquet, personal communication, 2014).Vegetation on and around the study site consists predominantly of a relatively dense cover of fine-leaved grasses.These are interspersed with a wide variety of higher-order alpine and subalpine flowering plant species, which collectively provide rich, high-quality grazing that is inherent to many infertile calcareous soils that develop on limestone-based hillslopes in the Jura and other regions (Rodwell, 1998).The depth of soil profile at this particular location was estimated at around 15-20 cm and, in keeping with limestone-based calcareous terrain, pH was estimated at around 8-8.5.The west-facing location at this particular site means that soil is probably subject to aspect-related physical influences that include regular desiccation, particularly the surface during warmer months (Rorison, 1990).Bare, partially vegetated or recently cultivated soils in this region are also prone to surface crusting, even after relatively light yet prolonged rainfall. Key physical characteristics of the parent soil were determined, both at the study site and at a relatively flat meadow area located downslope of the study area.Particle size diameter was determined for both sites using a Malvern Mastersizer.Median (D 50 ) measured 14.6 and 17.2 µm for slope and meadow sites respectively.Based on the generally similar 17, 78 and 5 % proportions of sand, silt and clay respectively, both soils are classified as a silty loam using the textural classification system adopted by Hodgson (1974, in White, 2000). 
Creating a digital elevation model

The aim of this component of the investigation was to construct a digital elevation model (DEM) of a suitable area based on topographic information obtained from a 1 : 25 000 map (Landeskarte 1086; Delémont), focusing on a section of Canton Jura in northwest Switzerland. For the construction process, the relevant section of the map was firstly scanned and then imported into a geographic information system (ArcMap 10.0, Environmental Systems Research Institute). From this, three shape files were created. The first delineated the exact outline of the chosen study area and the second was used to depict contour lines (i.e. isolines) showing areas of similar elevation within the corresponding study area. Both shape files were then digitized from the topographic map, and the attributed elevation values were saved as a separate, yet corresponding, third shape file. Once these stages were completed and all data were successfully digitized, the Topo-to-Raster tool in ArcMap was used to interpolate those individual attributes in order to create a hydrologically correct raster surface DEM based on contour lines, elevation spot heights and depth contours.

Integrating terracette paths onto the digital elevation model

Because small-scale features such as terracette paths cannot be represented on two-dimensional topographic maps, it was necessary to integrate these features into the DEM. This was achieved by identifying terracettes from a combination of aerial and more recent digital photographs of the study sites. Terracettes were then subjected to on-screen digitization in ArcMap by converting them firstly to a shape file and then to a raster file. For simplicity and ease of data manipulation, arbitrary height and width values equivalent to 0.5 m were attributed to the runners and risers. By merging the raster file depicting the terracettes with the raster surface DEM using the "Raster Calculator" tool in ArcMap, the terracettes, now with nominal height and width dimensions of 0.5 × 0.5 m, became the prominent objects on the DEM surface (Fig. 4).
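The merge step performed with the Raster Calculator can be illustrated with a few lines of array arithmetic. The sketch below is not the ArcMap workflow itself; it simply shows the underlying idea of raising DEM cells that coincide with the rasterised terracette paths by the nominal 0.5 m used in this study, with a small synthetic grid standing in for the real rasters.

```python
import numpy as np

# Synthetic stand-ins for the real rasters (illustration only):
# a coarse DEM of a planar slope and a 0/1 mask of rasterised terracette paths.
dem = np.linspace(720.0, 700.0, num=100).reshape(10, 10)   # elevations in m
terracette_mask = np.zeros_like(dem)
terracette_mask[3, 2:8] = 1.0      # one digitised terracette path
terracette_mask[6, 1:9] = 1.0      # a second path further downslope

# Equivalent of the Raster Calculator merge: cells on a terracette path are
# raised by the nominal 0.5 m height attributed to the terracettes.
dem_with_terracettes = dem + 0.5 * terracette_mask
```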
Runoff simulation

For the runoff simulation, IDRISI Taiga version 16.05 was used (Clark Laboratories). Firstly, however, it was necessary to convert and export two digital elevation models, one with terracettes and one without, from the ArcGIS raster format to an ASCII raster file format so that they could be read by IDRISI, this being the GIS software package of preference for the operator when manipulating raster data. In IDRISI, both DEMs were then adjusted using the "PIT REMOVAL" tool, which creates a refined "depressionless" DEM in which individual cells corresponding to depressional areas are artificially raised to the lowest elevation value measured at an arbitrary point around the outer periphery of the nearby depression. This represents an important stage from the perspective of the subsequent hydrological analyses, as it reduces the number of "sink" areas where the simulated runoff would tend to accumulate but remain static (Kuhn and Zhu, 2008). Once all of those factors were established, it was necessary to calculate the flow directions. In IDRISI, the "FLOW" tool determines the flow direction of surface runoff by calculating slope gradients between adjacent cells; flow then logically commences in the direction of the slope registering the most acute gradient. Next, runoff accumulation was calculated using the "RUNOFF" tool. This calculates the accumulation of rainfall units for each pixel, based on the assumption that 1 unit of precipitation falls uniformly on each location within the predefined study area. Those stages were undertaken for both DEMs, and the resulting raster files were exported back into ArcMap in order to create maps of the runoff simulation area. Finally, ArcMap was used to create two maps of the runoff accumulation pathways in order to show the respective runoff patterns across both DEMs; a minimal sketch of this flow-routing logic is given below.
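The following is a minimal, D8-style sketch of the flow-routing logic just described, written for a small depressionless DEM array. It is not the IDRISI implementation, and the algorithms behind the FLOW and RUNOFF tools may differ in detail; each cell here simply receives one unit of rainfall and passes its accumulated total to its steepest downslope neighbour.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Accumulate one rainfall unit per cell along steepest-descent (D8) directions.

    Assumes a depressionless DEM (pits already removed); illustration only.
    """
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)          # 1 unit of rain on every cell
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]

    # Process cells from highest to lowest so that every upslope total is final
    # before it is passed on downslope.
    order = np.argsort(dem, axis=None)[::-1]
    for idx in order:
        r, c = divmod(idx, cols)
        receiver, steepest = None, 0.0
        for dr, dc in neighbours:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                dist = (dr * dr + dc * dc) ** 0.5
                slope = (dem[r, c] - dem[rr, cc]) / dist
                if slope > steepest:
                    receiver, steepest = (rr, cc), slope
        if receiver is not None:                  # pass accumulation downslope
            acc[receiver] += acc[r, c]
    return acc

# Example on a small synthetic slope (a stand-in for the exported ASCII raster).
dem = np.linspace(720.0, 700.0, num=25).reshape(5, 5)
print(d8_flow_accumulation(dem))
```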
Rainfall simulations

Eight separate rainfall simulations, each lasting 30 min, were conducted on 10 and 25 April 2014 on small bounded sections of terracettes. Water was pumped at a pressure of around 0.4 bar (±0.1 bar) to a single-spray system (Spraying Systems Fulljet nozzle, type 1/4 HH-14 WSQ), which generated artificial rain with an estimated median drop diameter of around 1.5 mm. The rainfall simulator was supported by a lightweight yet robust aluminium frame, the shape of which can be adjusted to suit localised topographic conditions, and which gave an approximate drop height of 1.2 m from the nozzle outlet to the target area (Fig. 5). The configuration and geometry of each selected area varied slightly and reflected the proportion that was occupied by either a runner (i.e. the step) or a riser (i.e. the near-vertical or sloping component) within each bounded area. The extent of the bounded plots ranged from 0.23 m² (Expt. 8) to 1.10 m² (Expt. 2). Each plot was delineated using aluminium sheets embedded edge-on into the soil. A runoff collection trough was also embedded flush with the soil surface at the downslope end and was used to channel any runoff into plastic containers. Because all terracette runners within the selected areas were located on generally planar surfaces, only the average gradient of each section of riser is reported; these ranged from ca. 10 % (5.7°) (Expt. 8) to ca. 75 % (36.5°) (Expt. 4). Rainfall intensity was measured by simultaneously taking an average of three rainfall measurements within the rain-drop zone. Intensity values ranged from an equivalent of 47 mm h−1 (Expt. 5) to 110 mm h−1 (Expt. 2). Although these intensities are relatively high, if they are expressed as equivalent cumulative rainfall per single event (mm), the obtained values generally fall within the natural range of events reported in the Jura region, albeit within the upper range (Ogermann et al., 2003). In addition, despite the fact that intense rainfall over prolonged periods of time is rare, intensities such as these represented the best chance of generating surface runoff in a relatively short period of time.

Despite selecting the same pump pressure during each rainfall simulation, broader variations in rainfall intensities than originally desired were obtained over the eight simulations. This is attributed to differences in end-of-pipe pressures, presumably brought about by changing height differences between the pump outlet and the rainfall simulator as it was moved further up- or across-slope to different sites. For the purpose of this investigation, we also define runoff as a steady and continuous flow of water leaving the plot outlet. Once runoff was obtained, a soluble coloured tracer (indigo blue) was applied from a pipette onto the upslope edge of the rainfall drop zone in order to provide an indication of the time taken for the tracer to be recovered at the plot outlet. Key data for each of the eight rainfall simulations are listed in Table 1.

Runoff accumulation DEM

After following the methodological steps outlined in Sects. 2.2 to 2.4, a DEM of the terracette system was created and is shown in Fig. 6. The approach clearly identifies runoff accumulation paths and the natural trajectory of the slope juxtaposed over approximately 60 % of the area under scrutiny. Why the method depicts only just over half of the visible terracette system is not clear, but it may be related to poor sensitivity associated with the flow-accumulation algorithm in the IDRISI GIS package. In addition, the method also led to the creation of a number of diagonally orientated flow paths (Fig. 6). These are believed to be spurious features and probably represent a product of the way in which topographic data are converted to raster data (Hopkinson et al., 2009). A possible way in which this problem will be avoided in future is discussed in Sect. 4. Despite the occurrence of these anomalous features and the fact that only partial coverage of the terracette system was obtained, the resultant output is still believed to provide adequate proof of concept of the role of terracettes and was sufficiently encouraging to continue with the second, physical experiment-based phase of the investigation.

Rainfall simulations

Four key measurements relating to runoff characteristics were recorded during each rainfall event, the results of which are listed in Table 1. The time taken for runoff to commence ranged from 136 s (Expt. 4) to 1710 s (Expt. 5). The time taken for the tracer to be recovered in surface runoff after application ranged from just 45 s (Expt. 1) to 1210 s (Expt. 5). The amount of time between terminating rainfall and the cessation of runoff ranged from 11 s (Expt. 5) to 230 s (Expt. 6), and finally, runoff coefficients ranged from 0.1 % (Expt. 5) to 23.5 % (Expt. 3).
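Because the runoff coefficient is referred to throughout the remainder of the paper, a short worked example may help. The sketch below assumes the usual definition, i.e. the collected runoff volume expressed as a percentage of the rainfall volume applied to the plot; the numbers are hypothetical and are not values from Table 1.

```python
# Worked example of a runoff coefficient, using the usual definition:
# collected runoff volume as a percentage of applied rainfall volume.
# All numbers below are hypothetical, not values from Table 1.

intensity_mm_per_h = 60.0      # simulated rainfall intensity
duration_min = 30.0            # each simulation lasted 30 min
plot_area_m2 = 0.8             # bounded plot area
runoff_collected_l = 2.4       # runoff channelled into the plastic containers

rainfall_depth_mm = intensity_mm_per_h * duration_min / 60.0          # 30 mm
rainfall_volume_l = rainfall_depth_mm * plot_area_m2                  # 1 mm over 1 m2 = 1 L
runoff_coefficient = 100.0 * runoff_collected_l / rainfall_volume_l   # in percent

print(f"{runoff_coefficient:.1f} %")   # -> 10.0 %
```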
A relatively weak indirect exponential correlation (r² = 0.338; 33.8 %) was obtained for the relationship between average riser gradient and the time taken for continuous runoff to commence. Despite the fact that a stronger result was not obtained for these two variables, the indirect relationship implies that gradient has the ability to exert some control over runoff generation times. No relationship was found between average riser gradient and tracer recovery time after application. Likely reasons for this could be localised variations in the physical and environmental characteristics of each plot.

Runoff coefficients

Runoff coefficients for the eight plots ranged from just 0.1 % (Expt. 5) to 23.5 % (Expt. 3). The wide variation is believed to reflect variations in the physical characteristics of the soil, but in this instance it principally reflects variations in infiltration rates. The reason why infiltration is considered to represent a major control on runoff generation relates to variations in the bulk density of the surface soil. Five in situ bulk density measurements were taken from the upper soil profile (0-5 cm deep) next to four of the plots before rainfall was applied. After the samples were oven dried, linear regression was undertaken for the two variables, bulk density and runoff coefficient. This returned a very strong direct r² correlation of 0.992 (99.2 %), which is shown in Fig. 7.

Figure 7. A strong direct correlation was obtained for the relationship between soil bulk density (g cm−3) and runoff coefficient at the four plots that were tested.

Consequently, an assumption is made that where bulk densities are high, infiltration rates are presumably low, and vice versa. Higher runoff coefficients therefore imply that certain sections of runners within this particular terracette system have experienced some degree of compaction, and this would appear to represent one of the major factors controlling runoff generation. The reason for this is cautiously attributed to the trampling of certain sections of runners within the terracette system, presumably, in this instance, by livestock. By contrast, where runoff coefficients are low, higher infiltration rates are taken as an indication of relatively uncompacted soil surface conditions. This would suggest that the opportunity for both runoff generation and its downslope conveyance is limited during everything but the most intense rainfall event. Indeed, if an arbitrary 10 % runoff coefficient is taken as a "cut-off" threshold, then 50 % of the plots that were subjected to simulated rainfall (i.e. four) would appear to be capable of generating runoff (i.e. plots 2, 3, 4 and 8). In generating runoff, not only could these sites realistically act as sediment sources, but the high connectivity would render them potentially capable of conveying runoff downslope. In retrospect, however, the higher infiltration rates at the remaining plots (i.e. plots 1, 5, 6 and 7) mean that these areas are largely incapable of conveying runoff, even during the most intense rainfall event, and are hence disconnected.
Logically, such areas could feasibly act as depositional sites where substances, mobilised and transported from elsewhere upslope, enter into temporary storage. Attempts were made to further test the compaction hypothesis by correlating the runoff coefficient values against the times taken for continuous runoff to commence. A considerably weaker r² value of 0.615 (61.5 %) suggests that numerous other factors aside from compaction and reduced infiltration rates appear to exert a considerable control on runoff generation (Ries et al., 2013). Those under consideration could, again, be due to localised variations in vegetation and soil surface characteristics, but in this instance they are thought to relate to antecedent soil moisture; the likely implications of this are discussed in more detail in the following subsection.

Localised variations in soil conditions

Additional in situ bulk density measurements were obtained from a number of terracette paths, away from those areas subjected to simulated rainfall, and compared with measurements taken from a nearby, relatively flat meadow situated at the foot of the slope on which the terracette system is located, which also provides grazing for cattle. The average value of 0.82 g cm−3 for meadow samples (n = 6) contrasts with the 1.00 g cm−3 recorded from the runners (n = 25). The result strongly suggests that the soil surface, certainly of those runners that were measured, has probably experienced considerably more compaction than the soil surface within the meadow environment. This conclusion is supported by in situ soil-pore data taken from those same terracette and meadow locations. Water loss from 100 cm³ samples of in situ soil taken from both sets of locations (n = 12) was recorded over a 2-week period. Water-loss data were recorded firstly solely under the influence of gravity, and then air pressures were gradually increased to the equivalent of 200, 400 and finally 800 mbar. Differences between the two data sets were then assessed statistically using a non-parametric Friedman test, performed at the 95 % confidence level, to determine whether they were significant. The statistical output indicated that water loss from soil samples taken from the surface of runners on the terracette system was significantly greater (P < 0.05) than water loss from soil taken from the nearby meadow. Likely reasons for this are believed to relate to reduced pore space and the fact that the water storage capacity is significantly less than that of soil from the meadow areas. This is tentatively attributed to the effect of compaction, again presumably due to excessive trampling of the soil surface by livestock. Precipitation, air temperature and soil matric potential data were also recorded from a climate station located near to the experimental area prior to undertaking the rainfall simulations.
Taking an arbitrary yet suitably long 5-day period prior to each set of rainfall simulations (i.e. from 5 to 10 April 2014 and from 20 to 25 April 2014), cumulative precipitation measured 14.0 mm over the first of those periods and 6.4 mm over the second. However, average soil matric potential values for those corresponding periods measured −19 and −169 kPa respectively. The first value equates to very wet, almost saturated, conditions and the second to very dry conditions (Shukla, 2014). Although more than double the amount of precipitation fell over the first 5-day period prior to 10 April, the average soil matric potential was approximately 9 times more negative over the second 5-day period prior to 25 April. It is difficult to attribute this large difference to precipitation alone, but contrasting air temperatures, presumably leading to variable evapotranspiration rates, may provide at least some explanation for the drier conditions leading up to, and including, the second experimental day. Average air temperature over the 5 days prior to 10 April, when Simulations 1-4 were performed, was 10.4 °C, but averaged 20.7 °C over the 5 days prior to 25 April, when Simulations 5-8 were performed. The marked difference in soil matric potential over the two periods is thus thought to reflect differences in antecedent soil moisture conditions. This may explain why the average time to continuous runoff was just 441 s for Simulations 1-4 but 1358 s for Simulations 5-8. Despite making no changes to the way in which sites were selected over the 2 days, in general, 75 % of the plots tested on the first day recorded relatively high runoff coefficients (i.e. > 10 %). With an ability to convey surface runoff, they therefore expressed some evidence of connectivity. By contrast, 75 % of the plots tested on the second day recorded relatively low runoff coefficients (i.e. < 10 %). Higher infiltration and an apparent inability to generate and convey surface runoff mean that these areas could act as depositional sites where runoff-associated substances may be temporarily stored between erosion events.

Net changes in in situ volumetric soil moisture content

In situ volumetric soil moisture conditions were determined using a TRIME-FM (version P2) volumetric soil moisture probe (IMKO GmbH). Measurements were taken from pairs of runner and riser components within each plot before simulated rainfall was applied, and then approximately 30 s after rainfall was terminated. Moisture values from each geomorphic component were averaged for each plot, and the difference between post- and pre-rainfall values was taken to indicate the net change in soil moisture content. The results of this procedure, which are listed in Table 2, combined with the runoff coefficient data listed in Table 1, tend to agree with the explanations made in previous subsections, inasmuch as some runner and riser systems appear to generate surface runoff, whereas others do not.
No moisture content data are available for Simulation 1; however, for Simulations 2-4 a net loss in soil moisture content was recorded for all three runners, with values ranging from −0.02 to −4.38 %. Although these values could be within the measurement uncertainty associated with the moisture probe, this overall result is generally interpreted as an indication of the reduced infiltration capacity of runners at the locations where Simulations 1-4 were conducted. This, again, is tentatively attributed to compaction of the soil surface, presumably by trampling livestock. Assuming sufficient natural rainfall fell to initiate such conditions, it is possible that this particular group of runners could shed surface water and hence would be more likely to generate surface runoff, possibly leading to erosion in some instances. For the risers at sites on which Simulations 2-4 were conducted, all recorded a net gain in moisture content, which ranged from 1.61 to 5.30 %. Despite the acute gradients associated with these three risers (39-75 %, Table 1), and the fact that the highest runoff coefficients were also recorded at these sites (Table 1), the net positive change in moisture content implies that steep-sloping risers such as these are still able to absorb some water during an intense and relatively prolonged rainfall event. Although these three sites recorded among the highest runoff coefficients of the eight simulations, the values are still relatively low; this suggests that runoff generation, the contribution of overland flow from these steep-sloping risers and their ability to convey runoff downslope would remain relatively small. In contrast, pairs of runners and risers from Simulations 5 to 8 all recorded net positive changes in soil moisture content. For the runners, increases ranged from 5.33 to 8.78 %. These values are particularly high and considerably higher than the comparable values recorded by each corresponding riser. Considering the explanation given above, it is therefore logical to assume that the soil surface associated with this particular group of runners is probably less compacted and that they are hence less likely to generate and convey surface runoff. This overall finding accords with the magnitude of the runoff coefficients recorded from Simulations 5 to 8, which, as previously noted, at 0.1-11.2 % (Table 1), are considerably lower than the runoff coefficients generated during Simulations 2-4. It is possible, however, that lower compaction levels, similar to those of the areas identified during Simulations 5-8, may exist along certain sections of runners within this particular terracette system. Based on this premise, it is entirely feasible that such areas would be unable to convey runoff downslope and as such could actually act as net depositional environments, thereby providing increased opportunity for runoff-eroded material to enter into temporary storage. Whether this scenario is prevalent and representative of other terracette systems elsewhere remains unknown at present, but issues such as this will need to be determined during future work. Future work Using the results from this investigation as a preliminary foundation on which to further elucidate the influence of terracettes on the surface hydrology of steeply sloping environments, a package of future work and/or refinements to existing methods is planned.
For the GIS component of the investigation, the main goal is to increase the sensitivity of any DEMs that are used in the future. Consequently, this will probably involve using terrestrial laser scanning to create high-resolution bespoke DEMs over relevant spatial scales. For the experimental work a number of key refinements will be implemented to the existing methods. These will likely include:
1. Siting plots so that the proportion of the area occupied by the runner and the riser is similar for all plots;
2. Selecting test areas that express generally similar riser gradients;
3. Delineating each plot so that areas and geometries are similar;
4. Taking multiple in situ bulk density measurements close to all plots prior to rainfall application;
5. Generating rainfall with generally similar intensities;
6. Closer and more intense monitoring of soil hydrological properties and how these vary from site to site.
In addition to the above, it is envisaged that, in future, the riser (i.e. the area where the majority of runoff is thought to be generated) will be temporarily separated from the runner in each plot during the first half of the rainfall event (i.e. for the first 15 min) in order to prevent runoff from entering the downslope, or runner, section of the plot. During this initial phase, the time to runoff generation, the cumulative volume of runoff generated and the runoff coefficient will be measured. Without stopping the rainfall, the boundary separating the two components will then be removed in order to allow connectivity between the two geomorphic components. The same variables as above will then be measured again. Although identifying suitable sections of terracettes that meet these more stringent criteria will inevitably make the site selection process more difficult and time consuming, the resultant data should, firstly, provide an indication of the volume of runoff that each geomorphic component of a terracette system (i.e. the runner and the riser) contributes to an overall surface runoff budget. Secondly, these changes will unify the series of experiments and thus serve to make plot-derived data more comparable.
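To make the planned runoff coefficient measurement explicit, the sketch below assumes the simple depth-ratio definition (runoff depth divided by applied rainfall depth over a bounded plot); the rainfall intensity, duration, plot area and collected runoff volume are hypothetical figures chosen purely for illustration.

# Hypothetical example: runoff coefficient as the ratio of runoff depth to
# rainfall depth over a bounded plot. All values are illustrative, not measured.

rain_intensity_mm_h = 60.0    # assumed simulated rainfall intensity
duration_min = 30.0           # assumed length of the rainfall application
plot_area_m2 = 0.64           # assumed bounded plot area
runoff_volume_l = 2.2         # assumed cumulative runoff collected at the outlet

rainfall_depth_mm = rain_intensity_mm_h * (duration_min / 60.0)
runoff_depth_mm = runoff_volume_l / plot_area_m2   # 1 L over 1 m2 equals 1 mm depth

runoff_coefficient = runoff_depth_mm / rainfall_depth_mm
print(f"Runoff coefficient: {runoff_coefficient:.1%}")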
Conclusions The investigation combined a number of complementary analytical approaches for the purpose of testing the hypothesis that terracettes have the ability to influence surface hydrology in subalpine and steeply sloping terrain. Within the constraints and limitations of the methodological and experimental techniques that were employed, a number of very preliminary conclusions can be drawn from this two-phase investigation. Firstly, the GIS-based approach provided adequate proof of concept that terracettes can indeed have an influence on the surface hydrology of steeply sloping environments. Whether individual terracette pathways act in an interconnected way to form a complete runoff conveyance system that guides, or even facilitates, the downslope movement of surface runoff is unknown. Some data from the experimental phase support this possibility by demonstrating that runoff could be generated from four areas within the terracette system. This suggests that those areas have probably experienced some degree of compaction, which is tentatively attributed to the effect of trampling, presumably by livestock. Importantly, however, in readily generating runoff, those areas may have the ability to convey surface runoff and associated material downslope. By contrast, and despite applying equally high-intensity rainfall, very little runoff was generated from four areas within the same terracette system that was investigated. Whilst this initially suggests that those areas are presumably relatively uncompacted, these findings also imply that such areas could probably not convey surface runoff downslope. As such, they represent a natural hiatus, or disconnection, in the downslope conveyance process that is commonly found in the majority of hillslope environments. Based on this surprising and almost serendipitous finding, we cautiously suggest that uncompacted areas within a terracette system may actually act as depositional sites, with the possibility of retaining, or even temporarily storing, runoff-associated material, at least until the next runoff event. Based on these interesting, yet somewhat contradictory, results, further work is required to determine whether the findings reported here are representative of this and other terracette systems elsewhere. Figure 1. Terracettes are particularly prominent and arguably become most visually striking in wintertime when snow accumulates at the point where the two geomorphic components (i.e. the runner and the riser) meet. Figure 2. A section of a terracette system showing three individual steps has been overlain with dashed arrows in order to highlight runners. The dimensions of those shown in this example are asymmetric, and the lengths of risers are considerably greater than the lengths of runners. Figure 3. A map showing the location of the study site in northwest Switzerland. Figure 4. A DEM of a section of an area supporting a well-developed terracette system was merged with digitized trample paths to create a three-dimensional model of a section of a terracette system. Figure 5. The adjustable aluminium frame supporting the rainfall simulator. Such a configuration allows the simulator to be firmly supported and level, even on challenging terrain such as a well-developed terracette system on a steep slope.
Figure 6. A three-dimensional model shows clearly defined juxtaposed flow-accumulation paths and the natural gradient of the slope. Such a system may have the ability to influence surface runoff. Table 1. Key characteristics associated with the eight rainfall simulations, performed over 2 separate days in April 2014 on bounded sections of a terracette system. Table 2. Mean % volumetric soil moisture content was calculated for each pair of runners and risers within each plot before and after rainfall application in order to quantify the net change in moisture content; net volumetric change (%) in moisture content, before and after the experiments, is reported for Simulations 1-8 (table values not reproduced here).
2018-12-05T08:40:13.657Z
2015-02-23T00:00:00.000
{ "year": 2015, "sha1": "64edbcf9689078ee384e348ad1151a91e80531b0", "oa_license": "CCBY", "oa_url": "https://www.geogr-helv.net/70/63/2015/gh-70-63-2015.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "64edbcf9689078ee384e348ad1151a91e80531b0", "s2fieldsofstudy": [ "Geography" ], "extfieldsofstudy": [ "Geography" ] }
255591179
pes2o/s2orc
v3-fos-license
Clinical effect of high-flow revascularization in microsurgery combined with endoscopic endonasal surgery for skull base tumors with intracranial and extracranial involvement Background The objective of the study is to investigate the surgical methods and clinical effects of high-flow revascularization in microsurgery combined with endoscopic endonasal surgery for skull base tumors with intracranial and extracranial involvement. Methods The relationships between skull base tumors and internal carotid artery (ICA), tumor location and size, and the extent of tumor invasion were assessed. Preoperative CT perfusion (CTP), magnetic resonance (MR) perfusion-weighted imaging (PWI) (MR-PWI), and digital subtraction angiography (DSA) were performed to evaluate collateral circulation and brain tissue perfusion. Then craniotomy through the fronto-orbitozygomatic approach was performed, based on which four cases received extended middle skull base approach+Dolenc approach + Fukushima bypass type I, and six cases received extended middle skull base approach+Fukushima bypass type III. After surgery, DSA, CT angiogram (CTA), and CTP/PWI were performed to evaluate the patency of the reconstructed vessels and cerebral perfusion, and contrast-enhanced MRI to evaluate the degree of tumor resection. All patients were followed up for 6–12 months. Results Among the 10 cases investigated, gross total resection was achieved in 8 cases, subtotal resection in 1 case, and partial resection in 1 case, as confirmed by CT and enhanced MRI. The patency of revascularization vessels was observed using fluorescein angiography during the operation in all patients and via DSA and CTA postoperatively in nine patients. One patient underwent ventilator-assisted ventilation because of respiratory failure and failed to undergo DSA and CTA. Regarding postoperative complications, one patient developed watershed cerebral infarction on the operated side but no sequelae after drug treatment, three patients developed facial numbness, which improved after 3 months, and two patients experienced worsened diplopia. After 6 to 12 months of follow-up on the nine evaluable patients, the Glasgow Outcome Scale (GOS) was 4–5 after surgery. In addition, 6-month follow-up results showed that one patient with clival chondrosarcoma developed recurrence on contrast-enhanced MRI, while no relapse was observed in the other patients. Conclusion For skull base tumors with intracranial and extracranial invasion and involving the ICA, revascularization might improve the total resection rate and reduce the recurrence rate and risk of intraoperative bleeding and postoperative ischemia. Introduction Skull base tumors are deeply located within complex surrounding tissues, and the extracranial part of intracranial and extracranial communicating tumors in the skull base can invade structures such as the sphenoid sinus, maxillary sinus, ethmoid sinus, infratemporal fossa, pterygopalatine fossa, and parapharyngeal space (1), making microscope-assisted surgery for such tumors very challenging (1). Currently, a combination of microscopy and transnasal endoscopy (TNE) has become a hot spot technique in neurosurgery. TNE has the advantage of multiple angles and close observation. Intracranial internal carotid artery (ICA) injury, a serious complication, may easily occur during endoscopic skull base surgery (2,3), and improper management of the injury may further lead to serious neurological complications and even death (4). 
Despite various treatments for repairing the injury of ICA, patients are still prone to postoperative occlusion of ICA and postoperative cerebral ischemia, affecting the degree of tumor resection. As a result, these patients are at risk of increased disability and mortality. In recent years, revascularization techniques have been applied in the surgical treatment of skull base tumors involving ICA to prophylactically reduce the risk of intraoperative bleeding and postoperative cerebral ischemia and provide a safeguard for the smooth progress of tumor resection (5). Intracranial and extracranial communicating tumors may widely invade the skull base structures and involve the ICA and important neural structures, leading to very risky surgery and difficult total resection. Thus, few studies have been conducted on such tumors. This study retrospectively analyzed the clinical data of 10 patients with skull base tumors with intracranial and extracranial invasion and involvement of the ICA who underwent microscopy and TNE. Based on the collected data, we attempted to investigate the surgical methods and clinical efficacy of high-flow revascularization in such patients. Study subjects From May 2016 to September 2019, the data of 10 patients with skull base tumors showing intracranial and extracranial invasion and ICA involvement, who underwent surgery assisted by microscopy and TNE at the Tianjin Huanhu Hospital (Tianjin, China), were collected. All patients received head computed tomography (CT), three-dimensional CT reconstruction of vessels, and enhanced magnetic resonance imaging (MRI) to identify the tumor size, location, and extent of invasion. The patients were well-informed about the implications of the treatments, potential risks, and complications and provided signed informed consent. This study has been approved by the Ethics Committee of Tianjin Huanhu Hospital (No. 2021-058). Preoperative assessment Preoperative CT perfusion (CTP), magnetic resonance perfusion-weighted imaging (MR-PWI), and digital subtraction angiography (DSA) were performed to evaluate collateral circulation compensation and brain tissue perfusion. Specifically, perfusion imaging parameters such as mean transit time, time to peak, cerebral blood volume, and cerebral blood flow were recorded. Collateral compensation was graded as follows: Grade I: slow collaterals to the ischemic site, with some defects; Grade II: rapid collaterals to the ischemic site, with some defects; Grade III: collaterals with complete angiographic blood flow of the ischemic site by the late venous phase; grade IV: complete and rapid collateral blood flow to the ischemic site (6). In addition, the balloon occlusion test of ICA was performed to evaluate the compensation of cerebral blood flow and observe the status of the forearm radial artery, and the Allen test to analyze the compensation of the radial artery. Surgical procedures Indications for cerebral revascularization for skull base tumors included (7) the following: (1) benign tumors encasing major blood vessels and with failure to avoid vascular injury during complete resection; (2) malignant tumors involving important blood vessels and with complete resection as the surgical target; (3) major vessels occlusion with cerebral ischemia symptoms or decreased cerebral blood flow reserve; and (4) occurrence of intraoperative major vessel injury that could not be directly repaired. 
In addition, two different high-flow revascularization procedures were conducted based on the nature of the tumor, its location, the extent of involvement, and its relationship with the C6 segment of the ICA (ICA-C6). Surgical procedures in the Fukushima bypass I group Extended middle skull base approach + Dolenc approach + Fukushima type I bypass + transnasal endoscopic resection of the extracranial tumor was performed for patients meeting the following criteria: the ICA-C6 segment could be easily exposed early during surgery, and the vessels had suitable conditions to serve as anastomotic ends. In the first step, radial artery harvesting and the craniotomy were performed simultaneously. For the collection of the radial artery, preoperative ultrasound was performed to provide accurate localization and avoid electrocoagulation of the radial artery. Then a non-absorbable suture was used to ligate the muscular branch vessels of the radial artery, and the radial artery was removed. Papaverine water irrigation was given after removal, and the pressure dilatation technique was used to dilate the removed radial artery to prevent vasospasm. The collected radial artery was wrapped with gauze swabs soaked in normal saline after verifying no leakage and marking the direction of the blood vessel for future use. For craniotomy, the patients were placed in the supine position with the head turned 45° to the unaffected side, the neck flexed 20°, and the top of the head drooped. This position allowed the temporal lobe to fall away under gravity, thus reducing traction injury. We then performed a skin incision 1 cm below the lower edge of the zygomatic arch, as close as possible to the anterior border of the ear tragus, and went upward along the hairline to reach the midline. For the second step, routine expansion of the middle skull base approach was performed. Separation was performed for the dura mater in the middle skull base, electrocoagulation for the middle meningeal artery at the foramen spinosum, and mobilization or cutting for the greater petrosal nerve. During the process, special care was taken to protect the geniculate ganglion. Third, the bone at Glasscock's triangle and Kawase's triangle was abraded to fully expose the ICA-C6 segment. The ICA-C6 segment blood flow was blocked with an aneurysm clip, then an end-to-side anastomosis between the proximal end of the radial artery and the ICA-C6 segment, and an end-to-side anastomosis between the distal end of the radial artery and the ICA-C2 segment, were performed. After completion of the anastomoses, an aneurysm clip was used to clamp the distal anastomotic end of the ICA-C6 segment and the proximal anastomotic end of the ICA-C2 segment to exclude the ICA between the two ends, and intraoperative indocyanine green fluorescein angiography was performed to confirm that there was no blood flow passage in this excluded segment and that blood flowed in the radial artery graft. After vascular bypass, the intracranial tumor was removed, with special care to protect critical neural structures during tumor resection. If the tumor closely adhered to the excluded ICA segment mentioned above and was challenging to separate, or the malignant tumor invaded the vascular structure, resection of the excluded artery was feasible to achieve complete tumor removal. Careful hemostasis was performed after removing the intracranial tumor and exposing the extracranial tumor.
After microscopic complete resection, the skull base dura mater was repaired using autologous fascia lata with watertight closure to prevent cerebrospinal fluid leakage. The epidural autologous fat was used to fill the bone defect, and attention was paid not to form compression on the ICA-C6 segment and reconstructed vessel. After routine cranial closure, the operating bed was rotated to the position of transnasal endoscopic surgery. The main surgical instrument used was the Karl Storz Endoscopic Sinus Surgery System, a wide angle endoscope with a diameter of 4 mm and length of 18 cm, which provided a field of view at 0°, 30°, and 70°. According to the location of the tumor, the main side of the transnasal endoscopic procedure was selected. The residual tumor was observed closely and exposed with a neuroendoscope at multiple angles, and the optimal angle was determined. Following the surgical concept of Pittsburgh and Kassam, the three-handed technique (three surgeons) or four-handed technique (two surgeons) was used. Through the bilateral nostril approach, a pedicled nasal septum mucosal flap was made to protect the posterior septal artery, following which the lower part of the middle turbinate was removed from the main side without damaging the sphenopalatine artery. In addition, lateral fracture of the inferior turbinate and removal of the ethmoidal bulla and uncinate process were performed, followed by opening the maxillary sinus, the anterior and posterior ethmoid sinuses, and the sphenoid sinus. Part of the bone of the medial and posterolateral walls of the maxillary sinus was removed to access the pterygopalatine fossa and expose the sphenoid sinus cavity, ethmoid sinus, pterygopalatine fossa, and infratemporal fossa. Afterwards, the tumor was removed as much as possible. If a tumor was present in the sphenoid sinus, intratumoral volume reduction was first performed; the cyst wall of the tumor was freed from the normal bone; and the tumor in this plane was removed. If the tumor had invaded the petroclival region, the transpterygoid approach was selected, followed by bone abrading, after which the surgeons observed for residual tumor. If the lesion had invaded the clivus, the clivus bone was abraded to expose the lesion, and the extracranial tumor was removed. In the case of nasopharyngeal tumors, the tumor was removed deep from the pharyngobasilar fascia, laterally to the parapharyngeal space; and the cartilage of the tubal torus, pharyngeal recess, and Eustachian tube were removed, with attention paid to protect the lower cranial nerves. After removing the tumor, endoscopic observation was conducted to confirm whether there was leakage after watertight closure of the skull base fascia was performed microscopically. If there was cerebrospinal fluid leakage, skull base reconstruction was performed. Microscopically, artificial dura mater and fascia lata were used to remodel the cavernous sinus. A watertight suture of fascia lata was performed to repair the ruptured dura mater, and autologous dura mater fat was used to fill the petrous bone abrasion site and skull base. Endoscopically, cerebrospinal fluid leakage was examined through the nose, and "sandwich" skull base reconstruction was given in combination with skull base repair to maximize the prevention of cerebrospinal fluid leakage. 
Surgical procedures in the Fukushima bypass III group Extended middle skull base approach for resection of the intracranial tumor, with Fukushima bypass III and transnasal endoscopic resection of the extracranial tumor, was selected for patients meeting the following criteria: the ICA-C6 segment was not suitable for exposure or was seriously involved by tumor, or the vascular lumen was narrow, causing poor blood supply, with compensation from the contralateral side via the anterior and posterior communicating arteries. The patient's position and the head surgical incision were the same as before. In this group of patients, an additional "S"-shaped incision (about 10 cm in length) was made, centered on the plane of the carotid bifurcation and running along the anterior border of the sternocleidomastoid muscle. The procedures of radial artery collection and craniotomy were performed simultaneously. Radial artery harvesting and preparation were carried out by the same team as in the Fukushima bypass I group. At the same time, the craniotomy team first completed the procedures on the neck. Specifically, the surgeons slightly lifted the mandibular angle of the patient and exposed the anterior triangle of the neck as far as possible to facilitate the exposure of the cervical vessels (sometimes requiring cooperation from the anesthesia team). The operation was performed from the lower to the upper region to fully expose the cervical vessels. The external carotid artery was prepared and served as the origin of the Fukushima III end-to-end anastomosis. Then the incision for craniotomy was made about 5 cm anterior to the ear tragus, and after the subcutaneous layer was freed, a guide tube of approximately 1 cm in diameter was passed close to the ear tragus to reach the neck incision. The tube was inserted at the beginning of the surgery, which was beneficial for subcutaneous dilatation. The posterior part of the zygomatic root was ground slightly flat to prevent compression of the radial artery graft and avoid an effect on the patency of the blood flow. During the tube insertion procedures, careful attention was paid to protecting the neighboring nerves: first, the guide tube was kept as close as possible to the ear tragus, because if it was close to the midpoint of the zygoma, the insertion could damage the nerve branches. Second, the subcutaneous tunnel was short, and the surgeons tried to insert the tube below the tragus and then backward into the neck incision, thus reducing compression. The neck incision was covered with gauze swabs soaked in normal saline for future use. Afterwards, the procedures for routine expansion of the middle skull base approach were the same as in the Fukushima bypass I group, but without exposing the ICA-C6 and ICA-C2 segments. End-to-end anastomosis between the proximal end of the radial artery and the external carotid artery was performed, following which the absence of any leakage at the anastomosis site was confirmed by testing flow intensity in the intracranial radial artery, and the flow pressure and patency of the graft were measured. If flow was poor, possible causes were investigated, such as anastomotic stenosis, radial artery spasm, and subcutaneous tunnel compression. With satisfactory blood flow pressure and patency of the reconstructed vessels, the surgeons anastomosed the distal end of the radial artery to the intracranial vessels, with the intracranial M2 superior or inferior trunk (the end of the M1 segment could be selected if necessary) as the recipient vessel.
The end-to-side anastomosis was mostly selected. Upon completion of the anastomosis, intraoperative indocyanine green fluorescein angiography was performed to confirm the patency of blood flow. Then the proximal ophthalmic artery segment of the ICA was clamped with aneurysm clips, and the cervical segment of the ICA was ligated with non-absorbable sutures. After that, the surgeons confirmed that there was no blood flow in the excluded ICA, and subsequently the intracranial tumor was carefully removed. Following this, the TNE-assisted operation was performed as before. It was recommended to reexamine DSA immediately after the operation to confirm whether the blood flow was adequate. Following satisfactory blood flow and full recovery after anesthesia, the patients were taken to the intensive care unit. If bypass flow was insufficient to fill the affected hemisphere, it was necessary to rule out the causes, namely muscle, bone or skin compression, radial artery spasm, and inadequate pressure at the external carotid artery anastomosis site. When necessary, the indicated location was opened again for repair. It was best to first incise the neck and longitudinally incise the revascularized radial artery to determine which end had problems with flow, following which the incision of the radial artery was sutured after the surgeons confirmed that the proximal and distal pressures of the revascularized radial artery were sufficient and the blood flow was unobstructed. In the case of severe spasm, balloon dilatation could be used. Then the surgery team of the Perioperative management Before surgery, relevant evaluation was completed, and patients received oral aspirin (100 mg once daily) or clopidogrel (for patients with aspirin resistance on thromboelastography; 75 mg once daily) for at least 1 week. After surgery, the patients were given aspirin (100 mg once daily) or clopidogrel (for patients with aspirin resistance on thromboelastography; 75 mg once daily) for at least 1 year. They were monitored for the following: swelling and errhysis in the neck, drainage volume of the intracranial drainage tube and the character of the drainage content, and blood pressure (required range: systolic blood pressure of the upper limb cuff between 120 mmHg and 140 mmHg, or systolic blood pressure of the lower limb between 100 mmHg and 120 mmHg if the upper limb blood pressure was difficult to measure). Patients were reexamined with brain CT on the day after the operation, and the nasal packing material was removed 2-3 days after the operation. On postoperative day 3, brain CT, CT angiogram (CTA), DSA, CTP, and MRI were reexamined according to the patients' condition. On postoperative day 30, the patients underwent outpatient reexamination at the Endoscopic Skull Base Center to remove scabs in the nasal cavity and observe whether there were adhesions, and their endocrine parameters were assessed. At 3 months after surgery, the patients were reexamined in the ophthalmology, otorhinolaryngology, and neurosurgery clinics. At 6 to 12 months after surgery, DSA or CTA was repeated to assess the patency of the anastomotic vessels and the presence or absence of cerebral hypoperfusion. From 3 months after surgery, plain and enhanced MRI were performed regularly to evaluate tumor recurrence. Postoperative follow-up Intraoperative fluorescein angiography and postoperative DSA or CTA were performed to assess the patency of the anastomotic vessels, and CTP was reexamined to evaluate brain tissue perfusion.
No residual tumor tissue was observed under fiberscope and TNE during the operation, and plain and enhanced MRI were performed after the operation to evaluate the degree of tumor resection: gross total resection: no residual tumor; subtotal resection: tumor resection of 80% or more; and partial resection: tumor resection of less than 80%. Postoperative neurological recovery and quality of life were estimated using the Glasgow Outcome Scale (GOS) (8), an objective scale for assessing patients' degree of recovery, graded as follows: Scale 1, death; Scale 2, vegetative state with only minimal response (e.g., periods of spontaneous eye-opening); Scale 3, severe disability, unable to live independently and requiring care in daily life; Scale 4, moderate disability, capable of living independently and working under protection; Scale 5, good recovery, capable of returning to everyday life but with mild deficit. Baseline data of patients This study included ten patients, two males and eight females, with intracranial and extracranial communicating skull base tumors involving the ICA (Table 1). They were aged 31-68 years, with a median age of 49 years. Five of the 10 patients had skull base meningiomas involving the C6 to C2 segments of the ICA, sphenoid ridge, temporal lobe, cavernous sinus, ethmoid sinus, sphenoid sinus, infratemporal fossa, pterygopalatine fossa, petrous bone, and petrous apex. Three of the 10 patients had refractory pituitary adenomas involving the C6 to C2 segments of the ICA, parasellar region, optic chiasma, third ventricle, cavernous sinus, ethmoid sinus, sphenoid sinus, pterygopalatine fossa, and infratemporal fossa. One patient had clival chondroma involving the C6-C3 segment of the ICA, petrous bone, basilar venous plexus, cavernous sinus, posterior clinoid process, and Meckel's cave. One patient had recurrent fibrous dysplasia combined with internal carotid artery aneurysm. After embolization of the bilateral cavernous internal carotid artery aneurysms, the aneurysm cavity in the cavernous sinus segment of the left internal carotid artery remained, and the tumor lesion that protruded into the pterygoid sinus, posterior ethmoid sinus, and the base of the anterior cranial fossa was damaged. The main clinical symptoms were as follows: decreased visual acuity in one case, visual loss in one case, facial hypoesthesia in five cases, diplopia in three cases, inability of eyeball abduction in three cases, epistaxis in two cases, swallowing dysfunction in one case, and headache with dizziness in one case. Regarding collateral circulation, three patients had compensation of the anterior communicating artery, two patients had compensation of the posterior communicating artery, and five patients had no compensation of the anterior and posterior communicating arteries. Four patients underwent Fukushima high-flow bypass type I, and six received Fukushima high-flow bypass type III, according to the relationship between the tumor and the ICA and the status of compensation of collateral circulation. Evaluation of revascularization and tumor resection In this study, intraoperative fluorescein angiography showed good patency of the radial artery graft in 10 patients, and postoperative CTA and DSA also revealed good patency of the reconstructed vessels and no blood flow in the excluded vessels. In addition, CTP showed no cerebral hypoperfusion on the reconstructed side after surgery.
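Purely as an illustration of the resection-degree cut-offs defined above (gross total: no residual tumor; subtotal: 80% or more resected; partial: less than 80%), a small helper is sketched below; the function name and example percentages are hypothetical and carry no clinical weight.

# Illustrative only: encodes the resection-degree definitions used in this study.
def classify_resection(percent_resected: float, residual_tumor: bool) -> str:
    """Return the resection category for a given extent of tumor removal."""
    if not residual_tumor:
        return "gross total resection"   # no residual tumor on imaging
    if percent_resected >= 80.0:
        return "subtotal resection"      # 80% or more of the tumor removed
    return "partial resection"           # less than 80% removed

# Hypothetical cases mirroring the three categories reported (8 / 1 / 1 patients).
print(classify_resection(100.0, residual_tumor=False))
print(classify_resection(85.0, residual_tumor=True))
print(classify_resection(60.0, residual_tumor=True))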
In terms of tumor resection, contrast-enhanced MRI of the brain confirmed total resection in eight cases, subtotal resection in one case, and partial resection in one case. Figures 1 and 2 are typical preoperative and postoperative images of a patient who underwent Fukushima bypass type I and a patient who underwent Fukushima bypass type III, respectively. Complications One patient underwent ventilator-assisted ventilation due to respiratory failure and failed to receive DSA and CTA. Postoperatively, one patient developed watershed cerebral infarction on the operated side, but had no sequelae after antiplatelet and anticoagulation therapy. Three patients presented with facial numbness, which improved after 3 months. Two patients developed diplopia. Follow-up and prognosis Nine patients were followed up for 6 to 12 months. The last follow-up showed that none of the nine patients had new-onset cerebral ischemia or neurological dysfunction, and seven of nine patients underwent DSA reexamination, with six showing patency of the reconstructed vessel and one having mild stenosis of the reconstructed vessel. All nine patients underwent CTP examination, which revealed no cerebral hypoperfusion on the reconstructed vessel side. One patient developed postoperative cerebral infarction, one patient with clival chondrosarcoma developed a recurrence, which was identified on contrast-enhanced MRI 6 months after surgery, and the remaining patients had no recurrence. During the follow-up period, the two cases of diplopia were improved but not cured at 12 months after surgery. Patients with facial numbness had poor recovery. After 6 to 12 months of follow-up, the GOS of nine patients reached scale 4-5, indicating that they needed no assistance in everyday life, with minor neurological and psychological deficits. Discussion Skull base tumors with intracranial and extracranial involvement account for about 10% of all skull base tumors. They are defined as tumors originating from the intracranial or extracranial region but passing through the skull base bone and dura mater to cause intracranial and extracranial communication (9). For patients with intracranial invasion, the tumor widely involves the skull base bone, dura mater, orbital periosteum, and other structures, and surgical resection and radiotherapy are limited by anatomical factors. As a result, treatment in such patients has poor overall efficacy, and they are prone to recurrence attributed to residual tumor tissue (10). Domestic and foreign literature has reported that the perioperative mortality of skull base tumor surgery has decreased significantly to only 0%-4.7%, but there are significant individual differences in the total resection rate (62.9%-100%) and the incidence rate of postoperative complications (20%-50%) (5, 6, 11). Microscope-guided craniotomy and TNE surgery complement each other and are an important combination for treating intracranial and extracranial communicating skull base tumors because together they can provide a good visual field and operating space. However, damage to the ICA may easily occur during TNE-assisted procedures because of the absence of bone around the ICA, abnormal course of the ICA, thin ICA wall, encasement or displacement of the ICA, history of secondary surgery or radiotherapy, and lack of appropriate instruments for transnasal endoscopic skull base surgery.
Thus, acquisition of expertise in endonasal surgery, engagement in dedicated training programs, ongoing and intensive learning (i.e., mastering knowledge of the nasal cavities and the anatomy of the sellar region learned from dissection practice in the laboratory, endoscopic manipulation techniques, understanding of preoperative imaging, etc.), careful patient selection and multidisciplinary teamwork are key to achieving satisfactory surgical outcomes. Patients with a high risk of intraoperative ICA injury, or even requiring sacrifice of the ICA, might benefit from balloon occlusion testing preoperatively (2), which was shown to be the most effective method to assess collateral circulation (12). Although, theoretically, arterial sacrifice is possible in patients obtaining a negative result from the testing, 5%-20% of the patients will clinically experience cerebral ischemia after surgery (13, 14). Lawton et al. recommended vascular graft bypass surgery in all patients who require occlusion of the main intracranial arteries (15). After revascularization, vascular grafts provide adequate blood supply to the brain (16). Relevant studies have shown that patients with skull base tumors treated using combined vascular reconstruction had a higher total resection rate and longer disease-free survival, and their incidence of perioperative cerebral ischemia was also significantly decreased (6, 13, 14, 17). In this study, according to the location and nature of the tumor and the relationship between the tumor and the ICA, a high-flow bypass was selected, and tumor resection was performed after the surgeons confirmed that the reconstructed vessels were unobstructed and the corresponding segment of the ICA was ligated. The follow-up results revealed that gross total resection of the tumor was achieved in eight cases, subtotal resection in one case, and partial resection in one case. Nine patients were followed up for 6 to 12 months, of whom one patient with clival chondrosarcoma showed recurrence on contrast-enhanced MRI at the 6-month follow-up, and all nine patients underwent CTP examination suggesting no cerebral hypoperfusion on the reconstructed vessel side. These results are generally consistent with previous studies. In addition, our postoperative follow-up found that one patient developed watershed cerebral infarction on the operated side but had no sequelae after drug treatment; three patients developed facial numbness, which improved after 3 months; two patients developed diplopia; and one patient developed respiratory failure. The prognosis of the nine patients who were followed up postoperatively was assessed using the GOS, and all of them were found to have a GOS score of 4-5. These results suggest that high-flow revascularization could reduce the incidence of postoperative complications associated with skull base tumor surgery. This study had some limitations. First, the sample size was small and lacked relevant controlled analysis for clear statistical inference. Therefore, large-sample prospective controlled studies are needed to further verify the conclusions of this study. Second, the follow-up time was not long enough, and we did not assess the long-term efficacy and complications.
Third, we investigated patients with skull base tumors showing intracranial and extracranial invasion and ICA involvement, irrespective of the type and stage of the tumor and despite promising outcomes observed in most patients, the treatment approach was quite aggressive, and indications for surgery should be improved. In their study, Kalani et al. performed maximal surgical intervention in 18 patients with tumors involving the ICA at the skull base and found that the patients' survival was dismal and the rate of complication was high despite ICA sacrifice at the skull base with revascularization (18). Thus, considering this was our preliminary study investigating the potential efficacy of highflow revascularization in microsurgery combined with endoscopic endonasal surgery, the indications for such surgery should be further clarified, with careful evaluation of the risk-to-benefit ratio, in future studies. Conclusion For complex intracranial and extracranial communicating tumors widely involving the sellar region, clivus, and petrous apex region of the skull base, a combination of the fronto-orbitozygomatic approach using a microscope and endoscopic transnasal approach may improve the curative rate and could reduce postoperative complications. High-flow revascularization technique and exclusion of the ICA involved by the tumor may decrease the risk of intraoperative bleeding, improve the gross total resection rate, and reduce damage to critical cranial nerves. Collectively, the findings from this study provide additional insight into the gross total resection rate of skull base tumors with intracranial and extracranial involvement, which was shown to reduce the recurrence rate and the risk of intraoperative bleeding and postoperative ischemia. However, it should be noted that the results presented here were from a retrospective observational study and the treatments performed in these settings need further investigation. A larger cohort, prospective settings, welldesigned additional group comparison and longer follow-ups are needed to validate these findings. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. Ethics statement The studies involving human participants were reviewed and approved by Tianjin Huanhu Hospital (No. 2021-058). The patients/participants provided their written informed consent to participate in this study. Author contributions Conception and design of the study was performed by Z-QW. Study search was performed by X-GT. Drafting the article was performed by Z-QW and X-GT. All authors contributed to the article and approved the submitted version.
2023-01-11T17:44:11.920Z
2023-01-06T00:00:00.000
{ "year": 2022, "sha1": "1873b29f31170ce46a25853a5f13c55313630f10", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "1873b29f31170ce46a25853a5f13c55313630f10", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255956037
pes2o/s2orc
v3-fos-license
Clusters, lines and webs—so does my patient have psychosis? reflections on the use of psychiatric conceptual frameworks from a clinical vantage point Mental health professionals working in hospitals or community clinics inevitably face the realisation that we possess imperfect conceptual means to understand mental disorders. In this paper the authors bring together ideas from the fields of Philosophy, Psychiatry, Cognitive Psychology and Linguistics to reflect on the ways we represent phenomena of high practical importance that we often take for granted, but are nevertheless difficult to define in ontological terms. The paper follows through the development of the concept of psychosis over the last two centuries in the interplay of three different conceptual orientations: the categorical, dimensional and network approaches. Each of these represent the available knowledge and dominant thinking styles of the era in which they emerged and take markedly different stances regarding the nature of mental phenomena. Without particular commitment to any ontological positions or models described, the authors invite the reader into a thinking process about the strengths and weaknesses of these models, and how they can be reconciled in multidisciplinary settings to benefit the process of patient care. Introduction Psychiatric diagnosis deals with abstract concepts developed to capture the nature of mental disorders in all their multi-layered and multi-faceted complexity. Consequently, everyday work in mental health presents clinicians with questions that cannot be answered without a certain degree of philosophical enquiry. Although clinicians often tend to brush away a theoretical stance reaching for 'practical' solutions, all too often one gets stuck in the middle of a decision-making process in the absence of satisfying answers to fundamental questions. One such question is "Does my patient suffer from psychosis?", one that begs for answer at certain crucial bifurcations of care pathways, such as when deciding whether to provide care in a service specialized in psychosis (e.g. Early Intervention in Psychosis Teams) and whether to start antipsychotic medication. These questions can raise significant tensions in multidisciplinary settings where professionals with different training backgrounds often differ in their conceptual orientation and (implicit or explicit) views regarding the nature of mental disorders. Recognising the origins, merits and limitations of multiple views in this context therefore becomes a necessary aspect of care-in-action. In this paper we examine the main diagnostic approaches in the light of philosophical, historic, psychological and linguistic considerations, aiming to draw attention to potential biases and assumptions inherent to their specific ways of thinking. Due to its recent release and important changes in its conceptual orientation, we will focus mainly on ICD-11 [1]. The example of psychosis is used as a specific case against the backdrop of more general considerations regarding disorders with complex aetiologies. As scientific methodologies progress, the complexity of the field seems to increase along with the number of contributing factors we know about. To complicate matters further, the development of a diagnostic toolkit always unfolds in a specific historic context with characteristic social, political and financial influences. 
Therefore, it continues to be vulnerable to 'the four idols' of human thought that Francis Bacon warned us about 400 years ago in his Novum Organum ([2], p. 22). He stated that "The human understanding, from its peculiar nature, easily supposes a greater degree of order and equality in things than it really finds; and although many things in nature be sui generis and most irregular, will yet invent parallels and conjugates and relatives, where no such thing is." The first type, the 'idols of the tribe', refers to biases universally shared by the entire human race, as, Bacon asserts, "all the perceptions both of the senses and the mind bear reference to man and not to the universe". The 'idols of the den' refer to the influences of the particular kind of education, readings and the authority of previous scholars we encounter throughout our development. Using the term 'idols of the market' Bacon brings into our awareness the influence of words, the concepts with which we interact and 'trade' ideas. If not 'properly' grounded in sound method and broad, gradual experimentation, words can create "a wonderful obstruction of the mind" ([2], p. 21) and "manifestly force the understanding", thus leading to fallacies. Lastly, the 'idols of the theatre' warn against the blind influence of existing systems of thought, practices, and traditions, "as so many plays brought out and performed, creating fictitious and theatrical worlds". The metaphysical dilemma of mental disorders Views regarding the ontological reality of mental disorder have gone through remarkable changes over the last two centuries, mirroring developments in the Philosophy of Science. Some theories assume that the defining features (the 'essence') of mental disorders are rooted in biology, seeking anatomical localisations, genes, and neural pathways (biological essentialism), while others define mental disorders as disturbances of psychological constructs such as self-experience, attachment, and theory of mind (phenomenological and psychological essentialism). Still others try to completely break away from essentialism and seem to glimpse mental disorder residing solely in the web of interactions (network approach). Reflecting on the ontological 'kinds' of heterogeneous observations, Haslam identifies several 'kinds of kinds' [3], including 'natural kinds' (where biological aetiology is known), 'discrete kinds' (clearly defined syndromes), 'practical kinds' (categories with arbitrary boundaries), 'fuzzy kinds' (with no clear boundaries) and 'non-kinds' (e.g., dysfunction defined along several dimensions). He points to separate disorders that exemplify each of these kinds, arguing for a pluralistic view in diagnosis. This diversity of kinds is clearly noticeable in the diagnostic entities of ICD-11, as these have different time courses, different numbers of symptom domains and different numbers of co-occurring syndromes. Although this diversity could be criticised as a weakness, it seems likely that it merely reflects the complex and heterogeneous nature of the phenomena. Recognising the difficulty of defining essential entities, the fundamental wisdom of diagnostic classifications remains that, as suggested by Schwartz and Wiggins [4], "Psychiatry is a practical science"; it is driven by values such as promoting health and alleviating suffering.
Consistent with this pragmatic approach, our classification systems adopt the position of scientific realism, endorsing belief in the existence of both observable and unobservable entities, provided they are reasonably defined and supported by the scientific enquiry. The elusive concept of psychosis The concept of psychosis initially emerged in the early nineteenth century, at the time when mental illness started to be differentiated from other social deviances [5] and from other medical conditions. Initial definitions followed a virchowian localisationist tradition, assuming some forms of neurological lesion in the background. Cantstatt, perhaps the first to introduce the concept in 1841, used it synonymously with the term 'psychic neurosis' to delineate all those conditions affecting the nervous system that have primarily 'psychic' (psychological) manifestations [6]. However, the lack of histological lesions led some authors to question the 'reality' of some forms of psychoses. For example, Alois Alzheimer made the distinction between 'real psychoses' (such as dementia paralytica and dementia praecox) and 'functional psychoses' (including manic-depressive insanity), emphasising (in contrast with Franz Nissl) that not all psychoses could be linked to cortical pathological findings [7]. In the absence of visualised lesions Kahlbaum and Kraepelin made the leap from the materialist to the conceptual realm, postulating the existence of hypothetical entities (diagnostic constructs) that capture 'the essence' of different conditions [8] through systematic description of patient experience, behaviour and of the longitudinal course of disorders. Categorical approaches Creating mental categories based on similarities is one of the most efficient and ubiquitous tools of human cognition [9]. Categories enable us to abstract, register and transfer knowledge, exponentially increase cognitive economy and our efficiency to deal with large numbers of individual situations. Learning based on categories is present very early in human development and is reinforced throughout our daily, common-sense interactions and by the academic learning process [10]. In Categories [11], one of the core writings of Western Philosophy, Aristotle provides an examination of how things occur as separate entities in thought and how we refer to them in language. His system is firmly grounded in the natural kinds of things (substances) but includes categories such as 'quality' , 'quantity' and 'relations' . In addition to this theoretical construct, he created the first systematic study of the animal world based on observation of anatomical forms and functionality. One of the great triumphs of categorical thinking is the development of medical taxonomies in response to an ardent need for public health reform, as pointed out by Florence Nightingale in the Fourth International Statistical Conference, London (1860). Her proposal, in line with similar efforts in Continental Europe and the USA established the foundations of the International Statistical Classification of Diseases and Related Health Problems [12]. The early psychiatric classifications developed by Kahlbaum and Kraepelin were firmly rooted in the biological essentialist tradition [13,14], partly inspired by Carl Linneus, who published his Systema Naturae in 1735. 
It is not a coincidence that Emil Kraepelin's older brother, Karl Kraepelin, was a biologist who created an exhaustive taxonomy of the order Scorpiones, and it was suggested that Karl might have encouraged his younger brother to develop a classification system for mental illnesses [15]. After Emil Kraepelin's initial differentiation of psychosis into 'dementia praecox', 'manic-depressive insanity' and 'other psychoses', the concept of psychosis became detached from affective disorders and other presentations, being applied only to conditions characterised by some degree of reality distortion. However, these conditions have always shown a great deal of heterogeneity, which was captured in further subcategories such as paranoid, hebephrenic, catatonic and simplex sub-forms. After the term 'schizophrenia' was coined by Eugen Bleuler in 1911 [16], it became an umbrella term synonymous with almost all non-organic psychoses, attracting the vast majority of efforts in terms of research and conceptual definition. More recently, statistical methodologies have been used to identify symptom groups that 'hang together', namely cluster analysis and discriminant factor analysis [17][18][19], with some authors linking symptom clusters with putative biological causation, i.e., cortical thickness [20,21]. In contrast with this predominantly biological orientation stands the phenomenological approach developed by Karl Jaspers. He proclaims that "man is not confined to what is biologically known of him" ([22], p. 559) and advocates for a holistic understanding of a person in their cultural and experiential reality. Jaspers introduces 'ideal types' as a phenomenological essentialist approach "to give structure to the transient manifold" ([22], p. 560). Following Jaspers' definition, Schwartz and Wiggins [4] argue that the identification of ideal types in psychopathology is not only "a matter of simple averaging" but captures essential properties of anomalous experiences via phenomenological analysis. A computationally oriented approach that does not require the assumption of essential 'ideal types' is represented by Eleanor Rosch and Amos Tversky [23,24], among others, in their general theory of categorical thinking. According to them, we think of category membership in probabilistic ways, as correlational prototypes, trying to "reduce the infinite differences among stimuli to behaviourally and cognitively usable proportions". Based on these processes, we judge objects (i.e., clinical presentations) as being more or less 'representative' or 'prototypical' examples of a given category (i.e., disorder), based on the number and validity of the features they carry (diagnostic criteria); an application of this approach to Psychiatry is described by Westen [25]. Critical discussions of the dominant categorical diagnostic systems (ICD-10 and DSM-5 [26]) pointed out that although these systems increased diagnostic reliability, the validity of the categories is not satisfactorily established [27,28]. Categories tend to have 'fuzzy' boundaries and it is often difficult to tell whether a presentation belongs to one or another category; classic illustrations of this point include the complex interface between psychotic and mood disorders, as well as between mood and anxiety disorders, leading to diagnostic constructs such as 'Schizoaffective disorder' and 'Mixed depressive and anxiety disorder'.
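As a loose, purely illustrative rendering of the prototype idea described above, in which category membership is judged probabilistically from the number and weight of the features a presentation carries, a short sketch follows; the feature list, weights and score are invented for illustration and have no diagnostic meaning.

# Hypothetical sketch of prototype-style category matching: a presentation is
# scored by the weighted proportion of prototype features it shares. The
# features, weights and any cut-off are invented for illustration only.

prototype = {                      # weight = assumed cue validity of each feature
    "hallucinations": 0.9,
    "delusions": 0.9,
    "disorganised_speech": 0.7,
    "negative_symptoms": 0.6,
    "duration_over_1_month": 0.5,
}

def typicality(presentation: set, prototype: dict) -> float:
    """Weighted share of prototype features present in a given presentation."""
    total = sum(prototype.values())
    matched = sum(w for feature, w in prototype.items() if feature in presentation)
    return matched / total

case = {"delusions", "negative_symptoms", "duration_over_1_month"}
print(f"Typicality relative to the prototype: {typicality(case, prototype):.2f}")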
Kendell and Jablensky recommend that, in order to identify a valid syndrome, it needs to be separated from other syndromes and from normality by a 'zone of rarity' around the edges and needs to be defined by some natural characteristics beyond 'superficial' descriptive features (i.e., symptoms, course and outcome) [28]. However, such distinctive, defining characteristics are rarely established with any clarity. Following this rationale, the ICD-11 abandoned the traditional subcategories of schizophrenia, finding insufficient evidence for their predictive or treatment validity, and adopted instead a set of dimensional symptom specifiers (positive, negative, depressive, manic, psychomotor, and cognitive symptoms) rated on a four-point severity scale applied across all diagnostic categories in the group 'Schizophrenia or other primary psychotic disorders' [29]. This approach represents a major rupture both from the ideal type and prototype approaches, as different patients diagnosed with schizophrenia may well display entirely different symptom profiles, leaving the overarching conceptual construct difficult to grasp.
Dimensional approaches
Dimensional frameworks identify psychotic disorders on a 'spectrum' with other conditions and with healthy experiences. They require the acceptance that patients do not present uniformly, and that any attempt to shoehorn presentations into discrete entities is precarious. An early protagonist of the dimensional view was Heinrich Neumann (1814-1884), who wrote that "classification is only possible when there are genera, but these do not exist in the absence of 'generation' [aetiology]" [30]. Even Kraepelin's enthusiasm to provide a neatly organised framework for Psychiatry was tempered after four decades of research. In 1922 he warned against too rigid an application of the system, acknowledging that there are numerous cases that share features of different categories [31]. Similarly, Eugen Bleuler, in his seminal monograph Dementia Praecox or the Group of Schizophrenias ([16], p. 13), specified: "it is extremely important to recognise that they [symptoms] exist in varying degrees and shadings in the entire scale from pathological to normal". Partly inspired by Bleuler's ideas, Sándor Radó suggested the existence of the latent variable of the 'schizotype' [32], further elaborated into the concept of 'schizotypy' by Paul E. Meehl [33]. This was assumed to be reflective of a genetically acquired dysfunction of the brain, manifest along a quasi-linear spectrum as 'cognitive slippage'. Interestingly, schizotypy made its way into the newest diagnostic systems as a separate diagnostic entity (i.e., schizotypal disorder), rather than as an entity on a continuum with schizophrenia. More recently, dimensional models have been developed using factor analysis, identifying symptom clusters such as positive, negative and disorganisation symptom groups. The number of clusters varies between models, ranging from three [34] to five [35], or even seven [36]. Other models conceptualise psychosis on a continuum with healthy experiences and with affective disorders, incorporating external factors (such as psychological trauma, socio-demographic influences and substance use) and forming the basis of bio-psycho-social formulations [37,38]. These approaches have been boosted by a recent surge of studies demonstrating an association between childhood psychological trauma and schizophrenia [39][40][41].
The impact of psychological trauma in psychosis has led to important theoretical models [42,43] and even to the proposal of Traumatic Psychosis as a separate diagnostic entity [44]. Dimensional models of psychosis differ from each other in terms of what it is they envisage as having a 'dimensional' character. Some of these models seek to maintain the existing categories of our classification systems, while acknowledging the overlap between diagnoses (DSM-5 and ICD-11). In this sense they maintain a medical essentialist view, adopting a 'soft-realist position' regarding the ontological nature of diagnostic entities (as suggested by Kenneth Kendler [45]). Other approaches completely break away from existing diagnostic classifications, introducing 'observables' on either a phenomenological or a biological basis [46]. The former identifies phenomenological observables based on constitutive features of human experience and behaviour; an example of this would be the EASE model developed by Parnas et al., which addresses cognition, stream of consciousness, self-awareness, bodily experiences, demarcation and existential orientation [47]. The latter proposes the existence of 'domains' of human functioning as understood from the perspective of neuroscience, neurophysiology, genetics and experimental psychology, as in the six-domain model of the RDoC initiative [48]. Regarding dimensional approaches, it is often difficult to see how dysfunctions in different dimensions, as measured separately, end up occurring together to constitute the disorders so often recognisable in clinical practice. Recent theoretical models try to address this dilemma by suggesting ways in which the whole can be more than the summation of its parts. An example of this is the 'extended phenotype' model of psychosis introduced by Jim van Os and Uli Reininghaus in 2016, recognising a general, trans-diagnostic psychosis factor and specific illness dimensions [49]. This model can also account for sub-threshold psychotic experiences that could help to identify individuals with 'at risk mental states' for psychosis [50,51]. A phenomenological answer to the problem of dimensional atomisation has been proposed by Henriksen and Parnas, who suggest that schizophrenia constitutes a certain psychopathological Gestalt of disturbed self-awareness, manifest in multiple experiential domains such as sense of identity, self-demarcation, self-organisation and belonging to the world [52].
Network approaches
The most radical challenge to categorical and dimensional essentialist views has been brought about by the theory of complex systems, identifying mental disorder in 'patterns of interactions'. This approach strongly embraces the idea of the 'whole being more than the sum of its parts' and implies that mental disorders emerge dynamically as networks of complex systems interact with each other at different levels of organisation (biological, psychological, social). Although elements of complex systems theory can be found in numerous stages of philosophical thinking (Aristotle, Immanuel Kant, John Stuart Mill), unified models started to gain ground in the 1970s with the work of Gregory Bateson, Albert-László Barabási, Murray Gell-Mann and Stuart Kauffman, and, in psychopathology, mainly with that of Denny Borsboom and his colleagues.
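A minimal toy simulation can convey this 'patterns of interactions' idea before the more detailed discussion that follows: symptoms are nodes whose activation is pushed up by the nodes they are connected to, so that a transient stressor can tip the system into a self-sustaining state. The node names, weights and update rule are invented for illustration and are not taken from any of the network studies cited below.

```python
# Minimal toy simulation of a symptom network: nodes are symptom
# activations in [0, 1], edges are assumed reinforcing influences.
# A brief external trigger can push the system from a quiet
# ('dispositional') state into a self-sustaining ('disturbed') one.
# All node names and weights are illustrative assumptions.

import numpy as np

nodes = ["insomnia", "anxiety", "paranoid_ideation", "social_withdrawal"]
W = np.array([  # W[i, j]: influence of node j on node i
    [0.0, 0.5, 0.2, 0.2],
    [0.4, 0.0, 0.5, 0.3],
    [0.2, 0.6, 0.0, 0.3],
    [0.1, 0.3, 0.6, 0.0],
])

def step(x, trigger=0.0, decay=0.4):
    """One update: decayed activation plus network input plus trigger."""
    drive = W @ x + trigger
    return np.clip((1 - decay) * x + decay * np.tanh(drive), 0.0, 1.0)

x = np.zeros(4)
for t in range(60):
    x = step(x, trigger=0.8 if 10 <= t < 15 else 0.0)  # brief stressor
    if t in (5, 20, 59):
        print(t, np.round(x, 2))
# If the coupling is strong enough, activation persists after the
# stressor ends, illustrating the feedback-loop picture discussed next.
```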
The network approach can be used within the confines of one particular field (e.g., protein, neuron or symptom networks) or can be extended to include multiple systems (such as the six domains identified in the RDoC approach) and interactions with environmental variables (trauma, substance use, family interactions). The network approach has been applied to psychosis in a relatively small number of studies [53]. Some of these papers explored interconnectedness between symptoms of psychosis [54,55], between psychosis and other pathological experiences (such as depression, anxiety and distress) [56], and between psychotic symptoms and a history of trauma [57] and substance misuse [58]. Borsboom and colleagues propose a dynamic framework of mental disorders [59] in which the system moves from a 'dispositional', stable state into a 'disturbed' state (i.e., illness episode) precipitated by situational factors. The constitutive elements of the system interact with each other in reciprocal ways (feedback loops), forming self-sustaining patterns of interactions. In this model, mental disorders are viewed in a soft-realist way, similarly to the way the reality of patterns is treated by Daniel C. Dennett, who states that "A pattern exists in some data-is real-if there is a description of the data that is more efficient than the bit map [verbatim description]" [60]. Without completely excluding the role of biological contributing factors, Borsboom and colleagues reject the idea of biological explanatory reductionism, regarding mental disorders as intrinsically complex, born out of the very process of interactions [61]. In response to this statement, Bringmann and Eronen [62] argue that the network approach does not inherently contradict the biologically or psychologically oriented latent variable models. They suggest that the strength of network models lies in providing a dynamic approach to psychopathology, offering insightful ways to visualise relationships and even to build mathematical models of them, but that they do not change our knowledge of whether some factors could or could not be considered root causes of mental disorders. Indeed, the observation of superficial interactions between experiences (e.g., lack of sleep leading to altered reality perception and on to delusional ideation) does not rule out in principle the possibility of shared background causative factors. Also, while the network approach can provide relatively plausible explanations for episodic disorders, it struggles to explain long-lasting phenomena such as developmental traits or longstanding negative and cognitive symptoms [59]. Another fundamental challenge for the network approach (as for factor analysis models) is the need to decide how broad the range of phenomena included in the network should be. Studies using the network approach need to take a broad and open-minded perspective, while being aware of the risk of making the analysis untenable by exponentially increasing complexity with an increasing number of components.
Learning from Cognitive Psychology and Linguistics
As we have seen above, regardless of our position on the ontological status of mental disorders, we create highly simplified mental representations of them based on our fundamental assumptions about their nature. This is where researchers' and clinicians' own psychological makeup comes into play, making their judgment vulnerable to Bacon's idols [2]. The first of these,
the 'idol of the tribe', is recognisable in our shared tendency to be drawn to the familiar, to identify patterns and to regard them with a greater or lesser degree of subjectivity, even emotional attachment. Also, the answers we get are always a function of the kind of questions we ask and the methods we use to investigate them (e.g., quantitative, qualitative or mixed design). The 'idols of the den' in the clinical context might refer to the different priorities, predispositions and assumptions inherent to our professions of choice (e.g. Nurse, Psychiatrist, Psychologist, Social Worker etc.). This might influence, for example, whether we are predominantly drawn to more biological, psychological or socially orientated theories of psychosis and opt for corresponding assessment and therapeutic approaches. Perhaps the most subtle and problematic among the idols are the 'idols of the market' [2], which refer to the words we are using and the concepts they signify. Lexicalization (attaching labels to things) appears early in life through spontaneous exploration of the world [9] but is also powerfully reinforced by parents or guardians [10]. Medin and Ortony [63] argue that psychological essentialism is a fundamental mechanism of representing objects of thought, and there is abundant evidence regarding the importance of object representation and constancy as steps of human cognitive development [10,64,65]. This process continues throughout our professional education, playing into the illusion that the concepts we internalise (e.g., diagnostic labels) possess some form of timeless, unchangeable 'essential reality', similar to Plato's Forms or to mathematical axioms. However, as we have seen, the ontological status of mental disorders is not so straightforward. Taking this one step further and thinking about how our concepts integrate into theoretical systems, we can draw some important insights from theories of schemas. The concept of 'schema' was first introduced by Immanuel Kant in 1781 in the Critique of Pure Reason to describe the mediating procedural mental operations, or 'rules', connecting 'pure concepts of understanding' with the corresponding (manifold) concrete 'objects of experience' ([66], pp. 271-277). Much later, 'schema' became a fundamental concept in Cognitive Psychology through the work of Frederic Bartlett [67] (and many others later, including Piaget, Rumelhart, Minsky and Vygotsky), signifying patterns of declarative and procedural knowledge that are activated simultaneously, representing certain aspects of the world. These schemas aid recognition and facilitate mental economy, but also have an organising influence that can lead to selective attention, omission, and the 'fitting' of perception into pre-existing assumptions. Bartlett suggested that cognitive schemas are (primarily, but not exclusively) represented in the form of visual mental images as well as language. He also suggested that a schema can be generalised and transposed from one field to another, enabling the constructive imagination and thinking characteristic of the human intellect. The concept of schema has been taken forward in Cognitive Linguistics by George Lakoff and Mark Johnson [68] in the form of Conceptual Metaphor theory, describing how schemas (or, in their usage, image-schemas) become consolidated in linguistic expressions: conventional metaphoric expressions in everyday language, but also formalised expressions of philosophical and scientific constructs.
In the light of the above considerations, we would suggest that 'clinical schemata' are therefore linked to internal schematic visual imagery as well as to a collectively developed language, with its characteristic metaphoric mappings. In this sense the categorical approach seems to equate mental disorders with so many 'bounded regions', each 'containing' a 'cluster' of symptoms (individual 'objects'). If we have sufficient 'objects' (even if these are different from case to case) then we assume that a certain diagnostic entity is present. This approach also invokes the metaphor of a physical building with a certain 'architecture', 'building blocks' and 'thresholds' in a complex hierarchical diagnostic system with mutually 'exclusive' categories (ICD-11, [1]). Using the dimensional approach, we might tend to employ the visual imagery of straight continuous lines, like a coordinate system, representing disorders on a 'continuity' with usual experience and symptoms as 'dimensions' of a disorder (ICD-11, [1]). It is also interesting how the most recent diagnostic systems are trying to reconcile the difference between these two views by talking about "fluid boundaries between categories" with "[symptom] dimensions that cut across current [diagnostic] boundaries" (DSM-5, Introduction [26]). Similarly, the network approach can lead us to envisage disturbances as 'nodes' of a 'network' and co-occurrences of symptoms as 'edges connecting nodes of a network', representing putative causal interactions. We recognise that there are limitations to applying Conceptual Metaphor theory to clinical thinking, as it is not always easy to find identifiable image schemas for clinical concepts. Also, with the above analysis we do not aim to dismiss the validity of the clinical frameworks discussed, as we recognise metaphoric representation as a "basic functional aspect of the symbolisation process" [47]. The reason why we are sharing these observations is that, if we want to avoid falling victim to the 'idols of the theatre' [2], we need to bear in mind that these modes of thinking are all approximations, each with their characteristic biases. This awareness will enable us to avoid thinking of diagnostic categories as rigid 'boxes', of latent variables as 'smooth uninterrupted lines', and of 'edges between nodes' as simple, necessary and direct causal relationships.
Conclusion
While unable to give a definitive answer to the question of what exactly psychosis is, in this paper we attempted to trace back the development of the concept and to look at three different ways in which it is formulated. While the three approaches discussed in this paper are relatively separate within research settings, they are inseparably intertwined in the clinician's mind during the process of taking a history, proposing diagnoses, and creating formulations. It seems that our visual imaginative faculty plays an important part in this process, as we tend to envisage disorders as clusters, lines or webs and subsequently use a metaphoric symbolisation process to describe them. We linked our discussion to Francis Bacon's epistemological writings and to considerations regarding the ontological status of mental health diagnosis. We brought into the discussion theories from Cognitive Psychology and Linguistics to emphasize the subjective nature of these frameworks and to highlight the importance of clinicians' awareness of their strengths and limitations.
In our view, this will help to avoid the dangers of dogmatic, unilateral thinking and will make clinicians less vulnerable to the 'idols' of their minds. The answer to our initial question about the nature of psychosis will then develop through the individual understanding of each person's difficulties within a multilateral and collaborative process, one that will never sit comfortably with rigid care pathways and inflexible guidelines.
Slope classicality via completed cohomology We give a new proof of the slope classicality theorem in classical and higher Coleman theory for modular curves at arbitrary level using the completed cohomology classes attached to overconvergent modular forms. The latter give an embedding of the quotient of overconvergent modular forms by classical modular forms, which is the obstruction space for classicality in either cohomological degree, into a unitary representation of $\mathrm{GL}_2(\mathbb{Q}_p)$. The $U_p$ operator becomes a double-coset, and unitarity yields the slope vanishing. Introduction and proof of slope classicality Fix a sufficiently small compact open subgroup K p ≤ GL 2 (A (p) f ) and let C p be the completion of an algebraic closure of Q p . Let X 1 (p n )/C p be the smooth compactification of the modular curve parametrizing elliptic curves with a point of exact order p n and level K p structure. Everywhere below we view X 1 (p n ) as an adic space over C p . The closed canonical ordinary locus X 1 (p n ) e is the topological closure of the locus of rank one points parameterizing elliptic curves of ordinary reduction equipped with a point generating the canonical subgroup of level p n . We write X 1 (p n ) w = X 1 (p n )\X 1 (p n ) e for its open complement (the subscripts e and w refer to the trivial and non-trivial elements of the Weyl group for GL 2 ). Writing ω for the modular sheaf, the space H 0 (X 1 (p n ) e , ω k ) is naturally identified with the direct sum of spaces of overconvergent modular forms of weights κ such that κ = z k χ for χ a character of (Z/p n Z) × . From the perspective of the higher Coleman theory of Boxer-Pilloni [1,2], it is natural to also consider the compactly supported cohomology H 1 c (X 1 (p n ) w , ω k ). These groups are related by the exact sequence of compactly supported topological sheaf cohomology arising from the exact sequence of sheaves on the topological space As in [1,2], there is an operator U p on each of these spaces induced by a cohomological correspondence and extending a classical double-coset Hecke operator U p on H • (X 1 (p n ), ω k ) (up to matching choices of the normalization). For any s ∈ R and C p -vector space V equipped with an action of a linear operator U p , we can pass to the part V <s of slope < s, defined to be the span of all generalized eigenspaces of U p for eigenvalues λ with |λ| > p −s . The main result is: In cohomological degree zero, this is a result of Coleman [3,4]. In degree one, this is a result of Boxer-Pilloni [1, 2] 1 (who also reprove Coleman's result). It is also an immediate consequence of the following lemma, which itself is immediate from the results of [5] or [6]. To state it, we denote by X the infinite level (compactified) modular curve of prime-to-p level K p . It admits an action of GL 2 (Q p ) and, by Scholze's primitive comparison (see [6,Corollary 4.4.3]), H 1 (X, O X ) is identified with the C p -completed cohomology of the tower of modular curves of prime-to-p level K p -below we need only that it is a Banach space with a unitary action of GL 2 (Q p ) (it is unitary because the unit ball, i.e. the image of H 1 (X, O + X ), is preserved). In the following N ≤ GL 2 is the group of upper triangular unipotents. Lemma 1. Let U p denote the Hecke operator of [1,2] (the normalization depends on the weight; see §2). For t = 0, the cup products of [5] give an embedding Proof of Theorem 1, assuming Lemma 1. Consider the operator . 
Because the action of GL 2 (Q p ) is unitary, it has operator norm ≤ 1/p |t| , so the slope < |t| part vanishes in H 1 (X, O X ) N (Zp) . If we write then, combining the above with Lemma 1, we find that for t = 1, Q <|t| t = 0. To obtain Theorem 1, we split (1) for k = 1 + t into two short exact sequences: Taking the slope < |t| part yields the isomorphisms -for the first sequence this is immediate since this functor is always left exact, and for the second sequence right exactness follows from compactness of the U p operator on overconvergent forms. It thus remains only to prove Lemma 1. This is essentially immediate from the results of [5] or [6], once the GL 2 (Q p )-actions are matched up. This matching is actually a bit subtle, as there are multiple possible conventions for the Hodge-Tate period map and the equivariant structure on the modular sheaf. Any set of choices gives the same GL 2 (Q p )-action modulo inverse transpose and some determinants, so often the precise choices are irrelevant. Here though we must follow a power of p coming from the action of diag(p, 1), so it is crucial to screw our heads on exactly right on this point. In the next section we fix normalizations then prove Lemma 1. 1 In [1] and the introduction of [2] this result is stated at level Γ 0 (p) using the smaller group of cohomology with support in X 0 (p) ord w . It is immediate from the results of loc. cit. that this smaller space has the same finite slope part, and arbitrary level is treated in [2, Theorem 5.12.2] -we thank George Boxer for answering our questions about these results in the higher level case. Normalizations and proof of Lemma 1 2.1. Choices. We fix the action of GL 2 (Q p ) on X so that, over the non-compactified infinite level curve Y , GL 2 (Q p ) = Aut(Q 2 p ) acts by composition with the trivialization of the Tate module of the universal elliptic curve -i.e., we use the action on the homological normalization of the moduli problem. This differs by an inverse transpose from the cohomological normalization, where the action is on the trivialization of the firstétale cohomology of the universal elliptic curve. We take the Hodge-Tate period map π HT : X → P 1 so that π HT | Y is the classifying map for the Hodge-Tate line inside of the firstétale cohomology of the universal elliptic curve. Thus over Y we have a GL 2 (Q p )-equivariant commuting diagram HT Here e 1 and e 2 are the universal basis for the Tate module V p (E) = T p (E)[1/p], x and y are the standard basis for H 0 (P 1 , O P 1 (1)) so that homogeneous coordinates are [x : y], and E ∨ denotes the dual of the universal elliptic curve. Of course, there is a canonical isomorphism E ∨ ∼ = E inducing ω E ∨ ∼ = ω E , however, this isomorphism does not respect the natural GL 2 (Q p )-equivariant structures! Equivariantly, where here | det | comes from the action of isogenies on H 1 (E, Ω E ). Note that this twist is actually on the entire GL 2 (A f )-action (via (g ℓ ) ℓ → ℓ | det(g ℓ )| −1 ℓ ), so that the distinction between these equivariant structures also determines the normalization of the prime-to-p Hecke operators. Below we will continue as in the introduction to write simply ω for the modular sheaf, with the understanding that we have adopted the equivariant structure described above. Under the natural map to X → X 1 (p n ), X 1 (p n ) e is the image of π −1 HT ([0 : 1]), and the action of GL 2 (Q p ) on x, y = H 0 (P 1 , O P 1 (1)) is by the standard representation a b c d · x = ax + cy and a b c d · y = bx + dy. 2.2. 
The U p operator. The operator U naive p at level X 1 (p n ) of [1,2] is defined using the correspondence C parameterizing degree p-isogenies ψ : (E 1 , P 1 ) → (E 2 , P 2 ) (here we suppress prime-to-p level structure from the notation). Writing the two obvious projections as p 1 , p 2 : C → X 1 (p n ), U naive p is defined on ω k as tr•p 1! •ψ * •p * 2 . Given a geometric point (E, P ) that is not a cusp and a non-zero differential η on E, we can compute (U naive p f )(E, P, η) as follows: first choose a basis (e 1 , e 2 ) of T p (E) such that e 1 reduces to P mod p n . Then, for 0 ≤ i ≤ p − 1, write where e i denotes the image of e i in E[p]. Then ψ * i : ω Ei → ω E is invertible, and We will now realize this same U naive p as a double-coset operator: let B denote the upper triangular Borel in GL 2 . The space of overconvergent modular forms of completed cohomology, one obtains the desired statement by the same arguments. The map is B(Q p )-equivariant if one twists the action on M † k by (the twist comes from the action on x k , of course!). We deduce that p −k U naive We now treat the case k = 1 + t ≤ 0. In this case, [5,Theorem A] show that s → [s/(xy t )] induces an embedding M † k /M cl k ֒→ H 1 (X, O X ). Actually, here one must be slightly more careful invoking the arguments of [5], which are stated with cusp forms, in the case k = 0: of course M cl k = 0 when k < 0, but when k = 0 we have that M cl 0 is the locally constant functions whereas the cusp forms are still trivial. However, it is elementary to see that M cl 0 is in the kernel (as s/(xy −1 ) = s/z extends to a function on the complement of [0 : 1] where 1/z is a local coordinate), and the argument of loc. cit. still establishes an injection on the quotient by M cl 0 . The embedding is where again the twist comes from the action on xy t . We deduce that p −1 U naive , concluding the proof. Remark 1. Lemma 1 can also be deduced from [6, Theorem 1.0.1], and this has the advantage that loc. cit. is stated already with completed cohomology instead of compactly supported completed cohomology and cusp forms. We have used [5] above simply because it was easier for us to check carefully our own normalizations! Remark 2. The vectors used for k ≤ 0 also exist for k ≥ 2, where they induce an injection on all overconvergent modular forms. The same argument then recovers the fact that U p has non-negative slopes when k ≥ 2 (of course, it is much simpler to deduce this from the action on q-expansions!). Representation-theoretically, this vector comes from a highest weight Verma module, which, when k ≥ 2, admits an algebraic quotient; the classical forms are exactly those that factor through the algebraic quotient, and the vector we used above for the k ≥ 2 case is the lower highest weight vector generating the kernel. This perspective is explained in [5]. Remark 3. The reason one is led to use different normalizations depending on k ≤ 0 or k ≥ 2 is mostly explained by the form of the Hodge-Tate sequence Indeed, since we are using the double-coset operator for the p-integral matrix diag(p, 1) acting on V p E, to obtain an operator with non-negative slopes it is natural to use the equivariant structure from ω E ∨ for k = 1 and from LieE = ω −1 E for k = −1, and similarly for larger |k| by taking symmetric powers of T p (E). The equivariant structures differ by an absolute value of the determinant, which mainfests here as different powers of p for the double-coset operator coming from diag(p, 1).
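For reference, the two ingredients of the slope argument that are spelled out in prose above can be restated compactly in standard notation. The symbol \(\tilde{U}\) for the relevant normalized operator is our own shorthand, and the second statement is only as general as the hypotheses named in its lead-in.

```latex
% Slope decomposition, as defined in the introduction: for a C_p-vector
% space V with a linear operator U_p, the slope-<s part is the span of
% the generalized eigenspaces with eigenvalue of absolute value > p^{-s}.
\[
  V^{<s} \;=\; \operatorname{span}\,\bigl\{\, v \in V :
    (U_p - \lambda)^m v = 0 \ \text{for some } m \ge 1
    \text{ and some } \lambda \text{ with } \lvert\lambda\rvert > p^{-s} \,\bigr\}.
\]

% Vanishing via unitarity (the mechanism behind Theorem 1): if an operator
% \tilde{U} on a Banach space W is a scalar c with |c| <= p^{-|t|} times an
% operator of norm at most 1 (for instance one coming from a unitary group
% action), then
\[
  \lVert \tilde{U} \rVert \;\le\; p^{-\lvert t\rvert},
  \qquad\text{so every eigenvalue } \lambda \text{ of } \tilde{U}
  \text{ satisfies } \lvert\lambda\rvert \le p^{-\lvert t\rvert},
  \qquad\text{hence}\quad W^{<\lvert t\rvert} = 0 .
\]
```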
Proteomic and Single-Cell Transcriptomic Dissection of Human Plasmacytoid Dendritic Cell Response to Influenza Virus Plasmacytoid dendritic cells [pDCs] represent a rare innate immune subset uniquely endowed with the capacity to produce substantial amounts of type-I interferons. This function of pDCs is critical for effective antiviral defenses and has been implicated in autoimmunity. While IFN-I and select cytokines have been recognized as pDC secreted products, a comprehensive agnostic profiling of the pDC secretome in response to a physiologic stimulus has not been reported. We applied LC-MS/MS to catalogue the repertoire of proteins secreted by pDCs in the unperturbed condition and in response to challenge with influenza H1N1. We report the identification of a baseline pDC secretome, and the repertoire of virus-induced proteins including most type-I interferons, various cytokines, chemokines and granzyme B. Additionally, using single-cell RNA-seq [scRNA-seq], we perform multidimensional analyses of pDC transcriptional diversity immediately ex vivo and following stimulation. Our data evidence preexisting pDC heterogeneity, with subsequent highly specialized roles within the pDC population upon stimulation ranging from dedicated cytokine super-producers to cells with APC-like traits. Dynamic expression of transcription factors and surface markers characterize subclusters within activated pDCs. Integrating the proteomic and transcriptomic datasets confirms the pDC-subcluster origin of the proteins identified in the secretome. Our findings represent the most comprehensive molecular characterization of primary human pDCs at baseline, and in response to influenza virus, reported to date. INTRODUCTION pDCs were presumptively identified in 1958 as cells with a 'plasma cell-like' morphology observed in human splenic white pulp and lymph nodes (1). Decades later, these cells were isolated from the circulation as the 'natural IFN-producing cells' (2). pDCs are notable for their constitutive high expression of the IFN-inducer IRF7 and the nucleic acid sensors TLR7 and TLR9. Together, with their highly developed endoplasmic reticulum [ER], pDCs are poised to rapidly respond to viral nucleic acid by producing massive amounts of IFN-I. This response is aided by the lack of introns in IFN-I genes, circumventing the steps required for pre-mRNA splicing. Secretion of IFN-I is critical to antiviral defense: signaling downstream of the ubiquitous IFN-a/b receptor results in the induction of antiviral programs, collectively known as the 'interferon signature. ' The aberrant activation of the IFN pathway is a hallmark of several autoimmune diseases, as exemplified by systemic lupus erythematosus (3,4). Since their initial description, pDCs have mostly been studied in the context of IFN-I production. IFN-I comprises of IFN-a (itself a family of 13 genes encoding 12 distinct polypeptides), IFN-b, IFN-k, IFN-w and IFN-ϵ. Traditional approaches have also shown that activated pDCs secrete varying amounts of type-III interferon [IFN-III], TNF-a, IL-6 and CXCL10. Historically, these methods have relied on pDC stimulation with wellcharacterized synthetic TLR agonists and used analyte-specific ELISAs for readout. To our knowledge, a comprehensive profiling of the primary human pDC secretome in response to a physiologically relevant stimulus has not been reported. Moreover, whether pDCs maintain production of secreted proteins in the baseline 'idle' state has not been explored. 
Plasma cells, whose morphology bears striking similarity to pDCs, are likewise a rare immune subset, but are notable for their baseline secretion of massive amounts of immunoglobulin necessary for humoral immunity. To address these gaps in knowledge, we catalogued the pDC protein secretome from primary human pDCs using LC-MS/MS for both unstimulated and influenza H1N1 treated conditions. Our data reveal an abundance of known pDC products in addition to novel pDCsourced protein candidates. Another characteristic feature of pDCs is their 'diversity' following stimulation. It has long been known that upon stimulation, only a minority of pDCs ultimately produce IFN-I, with the remainder assuming alternative or unrecognized roles (5)(6)(7). These studies have been limited by the monoclonal antibodies used in flow cytometry, which cannot capture the breadth of pDC heterogeneity. We therefore endeavored to characterize pDC transcriptional states immediately ex vivo as well as following stimulation with influenza H1N1 using scRNAseq. Our data suggest the presence of preexisting pDC transcriptional variation, and further define clearly specialized cell fates post-stimulation. RESULTS pDC Secretome Profiling by LC-MS/MS A major challenge to proteomic profiling of serum or culture media is the phenomenon of ionization suppression, whereby species dominant in a solution (such as albumin and immunoglobulin in serum) compete for ionization energy with less abundant species thereby impeding their detection (8). To minimize the necessary media protein supplement, we tested a variety of culture conditions assaying for pDC viability and IFN-I output as a measure of media compatibility. pDCs remained viable and produced significant quantities of IFN-a with fetal calf serum (FCS) levels down to 1%. However, we observed that even trace amounts of FCS accounted for the majority of media protein following pDC culture. Therefore, we opted to use serum-free media supplemented with minimal amounts of albumin, insulin and holo-transferrin. pDCs maintained IFN-I production capacity and viability in this media similar to media containing FCS ( Figure S1). Media completely free of exogenous protein was incompatible with pDC interferon production (data not shown). pDCs were enriched from three female donors. In order to maximize pDC recovery, pDC yield was prioritized over pDC purity ( Figure S2). Cells were cultured at high density for 24 hours in the presence or absence of influenza H1N1 (A/PR/8/34). Following stimulation, cells were collected for viability assessment, culture media was cleared of cellular debris and media protein was precipitated for proteomic processing by LC-MS/MS. As has been previously established, pDC viability is enhanced ex vivo with stimulation (9). Roughly 94% of the detected peptides corresponded to the exogenous media additives BSA, human transferrin and porcine trypsin (used in proteomic processing), reinforcing the imperative to minimize non-pDC derived media proteins a priori ( Figure S3A). The remainder of peptides were attributable to the cultured cells, identifying a total of 1,241 protein species, 819 of which were identified with high confidence, represented by 2 or more peptides (Figures S3B, C). A list of identified proteins can be found in the supplementary appendix (Table S1, Data Sheet 2). 
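The comparisons drawn next, between proteins enriched in stimulated versus unstimulated culture media, follow a familiar pattern of a peptide-count filter plus a fold-change ranking. The sketch below illustrates that pattern generically; the column names, toy intensity values and thresholds are placeholders for illustration and do not reproduce the study's actual quantification pipeline.

```python
# Generic sketch (column names, values and thresholds are assumptions,
# not the paper's pipeline): flag proteins enriched in influenza-
# stimulated media relative to unstimulated media, keeping only proteins
# identified by >= 2 peptides, as in the high-confidence set above.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "protein":          ["IFNA2", "TNF", "ALB_BOVIN", "SERPINF1"],
    "n_peptides":       [6, 3, 40, 4],
    "intensity_unstim": [1e5, 2e5, 5e9, 8e6],
    "intensity_flu":    [5e8, 9e6, 5e9, 7e6],
})

confident = df[df["n_peptides"] >= 2].copy()
confident["log2_fc"] = np.log2(
    (confident["intensity_flu"] + 1) / (confident["intensity_unstim"] + 1)
)
induced = confident[confident["log2_fc"] >= 1].sort_values("log2_fc", ascending=False)
print(induced[["protein", "log2_fc"]])  # virus-induced candidates
```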
Detected proteins which did not differ in abundance between stimulated and unstimulated conditions generally include exogenous media additives and human serum proteins not known to be produced by hematopoietic cells (Figure S4A). The latter likely represent human serum proteins carried over through pDC isolation from whole blood. The protein secretome of unstimulated pDCs was enriched for proteins not known to be secreted, including proteins with various roles in the cytoskeleton and cellular metabolism (Figure S4B). These proteins likely represent leakage from dying cells, and this is consistent with the observed relative impairment of pDC viability in culture in the absence of stimulation. Several proteins were present in a manner consistent with baseline secretory function of pDCs, however. These include pigment epithelium-derived factor/SERPINF1, alpha 2-antiplasmin/SERPINF2, and prostaglandin D2 synthase/PTGDS. These proteins are annotated as 'secreted' in the UniProt database and are recognized as enriched for expression in pDCs among circulating leukocytes in the Human Protein Atlas (http://www.proteinatlas.org/) (10). IL-16 is also enriched in the unstimulated condition; however, this cytokine is expressed by numerous cell types and thus may represent a contribution from contaminating non-pDC leukocytes in our cultures. Stimulation with influenza virus is noted to induce the robust upregulation of numerous cytokines, including all 12 members of the IFN-a protein family (Figure 1 and Figures S4C, D). Other detected IFN-I members include IFN-b and IFN-w. IFN-k and IFN-ϵ were not detected. The IFN-III family was represented by IFN-l1 and IFN-l3. The sole IFN-II candidate, IFN-g, was observed at low levels and is likely derived from contaminating cells, since this cytokine is not known to be produced by pDCs (see the scRNA-seq data below for follow-up). Other strongly induced cytokines include TNF-a and IL-6, and the chemokines CXCL9, CXCL10, CCL4 and CCL19. Several proteins encoded by secreted interferon-stimulated genes [ISGs] were detected, including ISG15 and SRGN. Notably, a number of proteins lacking a signal peptide and not annotated as 'secreted' elsewhere were overrepresented in the influenza-stimulated group, including the proteases cathepsin c, cathepsin z and legumain, and the transcription factor IRF8. Finally, granzyme B is released from pDCs in response to influenza virus. Metrics supporting the identification of select proteins of interest are presented in Table 1. These data confirm IFN-a as the chief secreted product of stimulated pDCs. Moreover, they represent a more comprehensive and agnostic analysis of non-interferon secreted products, both upon stimulation and in the unperturbed condition.
pDC Diversification in Response to Influenza Virus Mapped by scRNA-Seq
Our above data are consistent with the well-established robust IFN-a secretion capabilities of pDCs. In our hands, we calculated the 24hr IFN-a production capacity of pDCs at 4.5-7 pg/cell when averaged over a population of pure pDCs (Figure S5). Making this value more astonishing is the reported observation that, upon stimulation, only a minor fraction of pDCs stain positive for IFN-a when assayed by flow cytometry. This observation is limited by the fact that anti-IFN-a mAbs can only detect one to several of the IFN-a subtypes at a time, leaving open the possibility that while some pDCs stain positive for IFN-a, others account for the production of the uncaptured subtypes.
Nevertheless, these reported findings suggest that what is traditionally thought of as a homogenous pool of pDCs may diverge in function following stimulation. pDC heterogeneity pre-and post-stimulation has been described (see Discussion below). These studies were similarly limited, however, by the selection of mAbs used to delineate the pDC subpopulations. Therefore, we undertook a de novo characterization of pDC diversity, at baseline and in response to influenza virus using scRNA-seq. pDCs were isolated from three genotyped female donors to near total purity (Figure 2A). A third of pDCs from each donor was immediately partitioned for baseline pDC transcriptional profiling. The remainder were cultured with influenza virus for 6 and 24hrs; with cells partitioned at the close of each time point. To avoid batch effects, cells from the different donors were pooled upon partitioning by time point (ex vivo, 6hrs/flu, 24hrs/flu), with subsequent library preparation and Illumina sequencing steps shared among donors. Cell yields and sequencing output are reflected in Table S2. Single cells were assigned to their respective donors using Demuxlet (11). Donorspecific signals were controlled using Harmony (12). Clusters were generated using Seurat (13) ( Figure 2B). The data reveal that the cells segregate predominantly by time point ( Figure 2C), with clusters evident within each major condition. Coloring cells by donor reveals that the donor cells are well-interspersed among the clusters ( Figure 2D), negating the possibility that donorspecific characteristics are main drivers of cell diversity. 18 clusters are identified by Seurat. The top differentiating genes among the clusters are listed in the supplementary appendix (Table S3). Of the clusters, clusters 13 and 16 represent minor B and T cell contamination of the pDC cultures, respectively. Cluster 17 represents AXL + pDCs, a recently described dendritic cell with an intermediate phenotype between pDCs and cDCs (14). The remaining clusters represent bona fide pDCs with distinct transcriptional profiles. Clusters 1 and 11, in the 6hr and 24hr influenza virus-treated conditions, respectively, represent the IFN producing cells ( Figure 3A). These clusters mapped with close proximity to each other across the two time points. Notably, these clusters expressed the totality of IFN-alpha genes and other IFN-Is produced by pDCs ( Figure 3B). TCF4, the transcription factor instructing pDC identity, is depleted in these clusters, whereas ID2, the transcription factor opposing pDC identity, is upregulated (see next paragraph). Moreover, the classical pDC markers NRP1, CLEC4C and CD123 are depleted in the IFN producing cells ( Figure 4A). These data confirm that among a population of pDCs treated with the same stimulus, only a minor portion mature into the IFN producing population. These cells appear to adopt a dedicated role, losing canonical transcriptional features of pDCs. Alternative surface markers suggested by the data for the IFN producers include IL18RAP, LILRA5 and LILRB2, among others ( Figure 4B). Another notable phenomenon within the single cell data was the dynamism of expression of the transcription factors TCF4 and ID2. (Figure 5A). TCF4 dictates pDC identity (15), and its continuous expression is required for the maintenance of this identity (16). ID2 is an antagonist of TCF4 which instructs a cDC profile. All pDCs ex vivo were TCF4 + ID2 -, whereas upon stimulation, expression of these transcription factors varied widely by cluster. 
Cluster 5 is particularly prominent for loss of TCF4 and gain of ID2. This change is accompanied by upregulation of the T cell co-stimulatory markers CD80/CD86, the cDC marker FSCN1, and the secondary lymphoid organ T cell zone homing receptor CCR7 (Figure 5B). These observations suggest that cluster 5 is a cDC/APC-like derivative of pDCs whose principal role is in antigen presentation to T cells. Cluster 8 represents a group of cells retaining classical pDC features even following 24hr culture with influenza virus. These markers include CLEC4C and CD123, as well as pDC-specific genes such as SCAMP5, GZMB, TCF4 and IRF7 (Figure S6). Whether this cluster represents a group of cells resistant to stimulation, or a 'return to baseline' phenotype, remains to be determined. Finally, pDC heterogeneity is evident at baseline (Figure 6). On fine-grain clustering, 9 subclusters are identified within the unstimulated group partitioned immediately ex vivo. Importantly, the classical pDC markers NRP1 and CLEC4C are evenly distributed among these clusters, as is the master pDC regulator TCF4. ID2 is absent. PTGDS, TCL1A, TCL1B and IGLC2 clearly distinguish these clusters. Whether this preexisting heterogeneity informs maturation into IFN producers or other phenotypes following stimulation is the subject of future investigations. Taken together, this dataset represents a multidimensional, discovery-led approach revealing pDC transcriptional diversity at baseline and further upon stimulation with influenza virus.
Integration of LC-MS/MS and scRNA-Seq Datasets
The scRNA-seq data also allow for a more complete understanding of the secretome data discussed previously. As mentioned, pDCs in the latter experiments were only ~60% pure, versus near total purity in the scRNA-seq dataset. Many identified cytokines, such as IFN-I and IFN-III, are readily attributable to pDCs. Other identified cytokines and chemokines, such as IL-16, CXCL9, CCL4 and CCL9, may or may not be pDC-derived. As the secretome and scRNA-seq experiments similarly involved pDC stimulation with influenza virus over a course of 24hrs, the scRNA-seq data may be leveraged to determine whether pDCs are in fact contributing the identified proteins in the secretome dataset. As shown in Figure 7, IL-16, CCL4 and CXCL10 are confirmed to be pDC-derived, whereas IFNG is confirmed to come from non-pDCs. Interestingly, IL16 is expressed predominantly among cells of the ex vivo time point, and is lost upon stimulation, confirming its status as a member of the 'baseline' pDC secretome discussed above.
In Vitro Validation of pDC Markers Suggested by scRNA-Seq
The above findings merit validation and further exploration in the appropriate experimental and clinical contexts. For example, the finding that IFN-a+ pDCs gain a novel surface marker profile can be validated by flow cytometry. LILRB2 is noted to be enriched for expression in the IFN-a producers. Gating on the IFN-a+ cells in a population of pDCs stimulated with influenza virus demonstrates significant surface staining for LILRB2 relative to the IFN-a- cells (Figure 8A). Similarly, other clusters can be further explored. Cluster 5, shown to adopt a 'cDC-like' transcriptional profile, is noted to upregulate expression of CD80, CD86, and CCR7. Cells of this cluster can be detected using mAbs against these respective markers after overnight culture with influenza virus, whereas cells cultured in the absence of virus do not stain for any of these markers (Figure 8B).
Future validation in samples from patients with SLE, influenza or COVID-19 will be necessary to follow up on the clinical relevance of these data. DISCUSSION pDC Secretory Activity pDCs were definitively identified as the 'natural-IFN producing cell' in the circulation, and IFN-I has been the principle focus of pDC investigations since. Besides IFN-I, traditional approaches have shown that pDCs also produce substantial amounts of IFN-III, TNF-alpha, IL-6 and CXCL10, typically using single-analyte ELISAs and often in response to synthetic stimuli. An early attempt by Decalf et al. towards characterizing the pDC secretome utilized a 12-plex Luminex assay to show that pDCs also produce CCL2, CCL3, CCL4, CCL5, IL-1RA and IL-8 in response to stimulation with influenza virus or ODN-2216 (17). Another group attempted to characterize DC secretion via 2D-PAGE followed by shotgun proteomics from monocyte-differentiated DCs (18). The resemblance of in vitro differentiated DCs to primary DCs is questionable, however, as even the most advanced techniques produce cells transcriptionally distant from their natural counterparts (Martin Jakobsen, personal communication). To our knowledge, a comprehensive and agnostic study of the pDC secretome has not been previously attempted. Moreover, secreted products of unperturbed pDCs have not been systematically catalogued. To accomplish these goals, we subjected media protein preparations from high density pDC cultures to LC-MS/MS analysis. To increase the odds of detecting pDC-derived proteins, we used the minimal amount of exogenous protein possible. In addition, we did not enrich the pDCs to total purity, as this would inevitably reduce the yield of pDCs. The finding that bovine albumin was the dominant species in the culture media was expected, the extent of which corroborated the need to minimize the presence of media protein additive. Methods exist to deplete albumin from sample preparations; we elected not to use these methods for risk of introducing identification bias (BSA depletion may also deplete non-specifically). proteins (data not shown), serving as proof-of-concept. Our follow-up analysis, presented here, using unstimulated and influenza virus-stimulated pDCs from an additional 3 donors netted 1,241 protein identifications. Our approach to analyzing the list of identified proteins prioritized species annotated as 'secreted' in the Uniprot database. To ascribe pDCs as the source of a particular protein in the unstimulated condition, we referred to the Human Protein Atlas to determine the degree of pDCspecific expression. Secreted proteins identified in the influenza virus-treated conditions were ascribed to pDCs on the basis of historical association or upregulation in the scRNA-seq data, which also used influenza virus as the stimulus. Here, our results suggested the presence of a 'baseline' pDC secretome. Interleukin-16 (IL-16/IL16), pigment epithelium- derived factor (PEDF/SERPINF1), alpha-2-antiplasmin (A2AP/ SERPINF2) and prostaglandin-D2-synthase (PGD2/PTGDS) are identified as top candidates for pDC-specific secreted proteins in unperturbed conditions. The biological significance of PEDF and A2AP secretion by pDCs has not been explored. PGD2 catalyzes the formation of the arachidonic acid metabolite prostaglandin D2, a neuromodulator and potent vasodilator. A single report (19) described pDCs as a hematopoietic source of PGD2 but does not further examine the role of PGD2 from these cells in particular (19). 
Other baseline pDC secreted products were also identified, including several additional members of the SERPIN family, although pDCs are likely not the sole source of these proteins among leukocytes. For example, IL-16, a T cell chemoattractant, was identified in the secretome data as enriched in the unstimulated condition. Our scRNA-seq data points to expression of IL-16 in ex vivo pDCs which is lost upon stimulation. The Protein Atlas suggests that non-pDCs serve as IL-16 producers as well. Together, these proteins represent underexplored baseline pDC secretory activity. In the induced secretome, IFN-I was clearly the dominant category of secreted protein, including all 12 IFN-a subtypes, IFN-b and IFN-w. The IFN-III category was represented via IFN-l2 and l3. TNF-a, IL-6 and CXCL10-known pDC cytokines, were also well represented. Cytokines less commonly associated with pDCs, such as CCL4 and IFN-gamma, were respectively ruled-in and ruled-out as pDC-sourced on the basis of scRNA-seq data. Not all cytokines reported by Decalf et al. were identified, however. CCL3 and CCL5, for example, were absent in the secretome dataset, but highly upregulated in the IFN-producers in the scRNA-seq dataset. This likely indicates that our secretome analysis missed important identifications, reflecting a technical limitation as currently performed. On the other hand, the cytokines CCL2 and IL-1RA were absent in both secretome and scRNA-seq datasets, suggesting non-pDC contamination as the source of these proteins in Decalf et al. Granzymes are conventionally thought of in terms of cytotoxic lymphocyte degranulation and represent a major antiviral tool of adaptive immunity (20). The Protein Atlas illustrates preferential expression of granzymes A, H, K and M in CD8 T cells and NK cells, as expected. Granzyme B, however, is by far most highly expressed in pDCs ( Figure S7). Granzyme B is enriched in the culture media of influenza stimulated pDCs, as observed in the secretome analysis. In the scRNA-seq dataset, GZMB expression is noted at baseline and was lost upon stimulation, counter to the protein level observation. This finding is consistent with granzyme release via fusion of preformed granules with the plasma membrane, a process distinct from IFN secretion, which is induced de novo at the transcript level for transit through the secretory pathway. Overall, the induced pDC secretome confirms the robust reactionary nature of these cells, whose chief product is IFN-a, and argues against baseline hypersecretory function akin to the namesake plasma cells. Other secreted protein products represent a more diverse picture of virus-induced pDC activity. A striking component of the induced secretome consisted of proteins not annotated as secreted yet which are pDC-specific, including IRF8, legumain and various cathepsins. Leakage from dying cells is unlikely to explain the enrichment of these proteins in the media of stimulated cells, as stimulation enhances pDC survival in vitro. We hypothesize that pDC stimulation results not only in secretion through the classical ER-Golgi pathway, but also results in the release of protein-loaded exosomes. These exosomes may serve as an alternative method for transferring antiviral programs to recipient cells. Recently it was demonstrated that pDCs form 'interferogenic synapses' with viral infected cells (21). Communication of an antiviral program via transcription factor and protease loaded exosomes in a paracrine fashion is an exciting avenue for future exploration. 
This work marks the first agnostic description of the primary human pDC secretome, at baseline, and in response to a physiologic stimulus. Our data support multiple mechanisms of regulated protein release from pDCs, including the classical secretory pathway, degranulation and, putatively, exosome formation. Transcriptional Diversity of pDCs pDC heterogeneity pre-and post-stimulation has been described. A CD2 hi Lysozyme + population of pDCs was reported to possess enhanced T cell activation properties (22). A similar report describes CD2 hi CD5 + CD81 + pDCs as potent inducers of T cell proliferation (23). That a minority of pDCs produce IFN-a upon stimulation has long been known. The evidence for this, however, typically involves intracellular antibody staining for IFN-a towards flow cytometric analyses. As there are 12 IFN-a subtypes, no single IFN-a antibody could detect all subtypes simultaneously. As such, it remained possible that a minority of pDCs produced particular subtypes of IFN-a, with other pDCs producing the remaining subtypes. Alculumbre et al. report a detailed analysis of pDC diversification following stimulation with influenza virus (24). Their results show that upon treatment with influenza virus, pDCs diversify along a 2-parameter, CD80/PDL1 axis. CD80 + PD-L1cells adopt APC-like functions with dendritic morphology, whereas CD80 -PD-L1 + cells secrete interferon and retain plasmacytoid morphology. These results were later recapitulated using SARS-CoV-2 as the stimulus (25). Using mass cytometry, it was further shown that human pDCs mount distinct, heterogenous responses dependent on the stimulus (26). To better understand pDC diversity pre-and poststimulation in a highly multidimensional format, we analyzed the transcriptomes of pDCs from 3 donors immediately ex vivo, upon 6hrs of influenza virus stimulation and upon 24hrs of influenza virus stimulation at the single-cell level. To our knowledge, we are the first to demonstrate that a single population of IFN-producers is responsible not only for the totality of IFN-a production, but for the other IFN-I family members, IFN-III, and the majority of induced cytokines as well. This population represents roughly 24% of cells at the 6hr time point and 5.5% of cells at the 24hr time point. These clusters are notable for dropout of classical pDC markers, including BDCA4, BDCA2, CD123 and ILT7. pDCs have been reported to diminish in the circulation of SLE patients with active disease (27,28), patients affected by pandemic influenza (29,30), and in patients with severe COVID-19 (29)(30)(31). This has commonly been interpreted to indicate that pDCs in these patients infiltrate affected tissues, decreasing their presence in the circulation. pDCs have been shown to diminish from the bronchoalveolar fluid of COVID-19 patients (32), however, challenging this theory. Our findings suggest an alternative explanation: pDCs appear to be reduced in these conditions because they are no longer detectable using the standard pDC surface antigens. We suggest alternative surface markers for the detection of IFNproducing, activated pDCs. These markers can then be applied to characterize pDCs in the relevant clinical contexts. Another notable phenomenon within the single cell data was the dynamism of TCF4/ID2 expression. TCF4 dictates pDC identity (15), and its continuous expression is required for the maintenance of this identity (16). ID2 is an antagonist of TCF4 which instructs a cDC profile. 
All pDCs ex vivo were TCF4+ ID2−, whereas upon stimulation, expression of these transcription factors varied widely by cluster. Cluster 5 is particularly prominent for loss of TCF4 and gain of ID2. This change is accompanied by upregulation of the T cell co-stimulatory markers CD80/CD86, the cDC marker FSCN1 and the secondary lymphoid organ T cell zone homing receptor CCR7. These observations suggest that cluster 5 is a cDC/APC-like derivative of pDCs whose principal role is in antigen presentation to T cells. A small population, cluster 8, is observed to maintain the panel of pDC-specifying transcription factors, including TCF4, SPIB, and IRF8, along with pDC-typical genes including IRF7, GZMB, SCAMP5 and the pDC surface markers BDCA2, ILT7 and CD123. Whether the presence of this cluster at 24 hrs represents cells unaffected by stimulation or cells which lost and subsequently recovered pDC identity is not clear. It is apparent, nonetheless, that pDC identity reflects a continuous rebalancing of master regulators. Preexisting heterogeneity is also evident from our data, with 9 baseline subclusters identified. Whether this heterogeneity determines the fate of a particular cell upon influenza virus stimulation remains to be determined. Unfortunately, the genes discriminating these clusters are not expressed at the cell surface, precluding the sorting of live cells towards fate-determination assays. Notably, CD2, CD5 and CD81, markers previously reported to define baseline pDC heterogeneity, did not discriminate these baseline pDC clusters. Rather, these markers were enriched in cluster 17, AXL+ pDCs. This population was recently identified as cells with an intermediate phenotype between pDCs and cDCs (14), and likely explains the enhanced T cell proliferation capacity of the cells reported earlier (22, 23).
A potential limitation of our study in comparing the baseline versus stimulated cells was the exposure of the stimulated cells to culture media. While this may partially explain the clustering distance between the baseline cells and stimulated cells, we believe that the dramatic upregulation of many genes in the stimulated cells can be more specifically explained by the response to virus. For example, we have never observed interferon upregulation in pDCs cultured but not stimulated. Moreover, markers upregulated by stimulation in cluster 5 were specifically observed in virus-exposed pDCs by flow cytometry but not in pDCs cultured in the absence of virus. Therefore, the antiviral response likely represents the main driver of separation between cluster 0 and the remaining pDC clusters. Future experiments with finer time series may be amenable to pseudotime analyses detailing the transcriptional evolution of pDCs from baseline to phenotypic divergence.
In summary, we have applied novel techniques to the study of pDC function and identity. Our results shed new light on pDC secretory activity at baseline and upon stimulation. Altogether, these findings suggest a need to address new aspects of pDC biology, which do not necessarily center on cell activation or IFN-I secretion. Furthermore, the pDC maturation trajectories described here bear clinical relevance in terms of use as biomarkers for disease states, towards pDC-centric immunotherapies and towards TLR7/9-based vaccine adjuvants.
Sample Preparation for Proteomics
Methods are summarized here, with additional details described previously (34).
Secreted proteins from three biological replicates of each treatment were precipitated with chloroform/methanol, reduced, alkylated, and digested with trypsin. Samples were desalted using Nest Group C18 MacroSpin columns. Eluted peptides were lyophilized and redissolved in 3% acetonitrile, 0.1% formic acid.
Liquid Chromatography and Mass Spectrometry
Separations were performed with an Ultimate 3000 RSLCnano (Thermo Scientific) and a 75 µm ID x 50 cm Acclaim PepMap reversed-phase C18 column (2 µm particle size). Flow rate was 300 nL/min with an acetonitrile/formic acid gradient and a column temperature of 40°C, with details as described previously (34). Mass spectra were collected with a Q Exactive HF mass spectrometer (Thermo Scientific) in positive ion mode using data-dependent acquisition (DDA).
Single Cell RNA-Seq
For the scRNA-seq experiments, isolated pDCs were cultured in Advanced RPMI-1640 [Gibco 12633012] supplemented with 5% FCS, with the same concentration of influenza virus as in the secretome experiment where indicated. pDCs were counted, pooled at equal ratios from the 3 donors for each time condition, washed, resuspended in PBS/0.4% BSA at a final concentration of 1,000 cells/µl and loaded onto the 10X Chromium Controller (10X Genomics). GEMs (Gel Bead-in-Emulsion) were generated using the Chromium Next GEM Single Cell 3ʹ GEM, Library & Gel Bead Kit v3.1, following the manufacturer's recommendations. Single-cell libraries were assessed and quantitated using a High Sensitivity DNA chip (Agilent). Libraries were loaded onto a High Output Cartridge v2 (PN # 15057931) at 1.8 pM and run on a NextSeq 500 sequencer (Illumina). The raw data from the sequencer were demultiplexed into reads along with cell and unique molecular identifier (UMI) barcodes, which were then aligned to the GRCh38 human reference and quantified via Cell Ranger v3 from 10x Genomics. For each condition, cells belonging to each donor were determined using their genotypes from the GSA and Demuxlet (11). Further downstream analysis was done using Seurat (13). Doublets and ambiguous cells identified by Demuxlet were removed, along with any cell with < 200 unique genes, any cell expressing > 20% mitochondrial genes, and any gene not identified in at least 3 cells. Clusters were identified using Seurat's SCTransform workflow with 30 PCs, with time point as an effect removed using Harmony (12). Statistical analysis and figure plotting were done using R [https://www.r-project.org/] and the Tidyverse (35). Intracellular staining for IFN-α2 was accomplished using BD Cytofix/Cytoperm in accordance with the manufacturer's protocol, along with a monoclonal anti-IFN antibody (Abcam EPR19074). LILRB2 (564345), BDCA4 (565951) and BDCA2 (566427) antibodies were obtained from BD. Cells were acquired on the BD Fortessa flow cytometer and data were analyzed using FlowJo v10.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board and Committee for Participant Protection at the Feinstein Institutes for Medical Research/Northwell Health. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MG and PG conceived the study. MG, KS, LB, and PG designed the experiments. MG, HK, EW, and JC performed the experiments. MG, AS, EW, JC, LB, KS, and PG analyzed the data. MG, AS, KS, LB, and PG wrote the paper. All authors reviewed and approved the final manuscript.
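As a rough illustration of the single-cell workflow described in the methods above, the quality-control thresholds and clustering steps could be expressed with standard Seurat and Harmony calls along the following lines. This is a minimal sketch rather than the authors' script: the input path, the object name, and the metadata column `time_point` are hypothetical, and donor demultiplexing with Demuxlet is omitted.

```r
library(Seurat)
library(harmony)

# Load Cell Ranger output (directory name is hypothetical)
counts <- Read10X(data.dir = "cellranger_out/filtered_feature_bc_matrix")

# Keep genes detected in >= 3 cells and cells with >= 200 unique genes (thresholds from the text)
pdc <- CreateSeuratObject(counts, min.cells = 3, min.features = 200)

# Remove cells with > 20% mitochondrial reads
pdc[["percent.mt"]] <- PercentageFeatureSet(pdc, pattern = "^MT-")
pdc <- subset(pdc, subset = percent.mt < 20)

# Normalize with SCTransform, reduce to 30 PCs, and remove the time-point effect with Harmony
pdc <- SCTransform(pdc)
pdc <- RunPCA(pdc, npcs = 30)
pdc <- RunHarmony(pdc, group.by.vars = "time_point")

# Cluster and embed on the Harmony-corrected space
pdc <- FindNeighbors(pdc, reduction = "harmony", dims = 1:30)
pdc <- FindClusters(pdc)
pdc <- RunUMAP(pdc, reduction = "harmony", dims = 1:30)
```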
2021-11-06T15:10:23.819Z
2021-11-04T00:00:00.000
{ "year": 2022, "sha1": "815452f40e37bdf0035946a256223555019f238d", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2022.814627/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "56e56574d504a475bc9ab27791d62de4a303280f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
255939381
pes2o/s2orc
v3-fos-license
Genomic prediction of switchgrass winter survivorship across diverse lowland populations
Abstract
In the North-Central United States, lowland ecotype switchgrass can increase yield by up to 50% compared with locally adapted but early flowering cultivars. However, lowland ecotypes are not winter tolerant. The mechanism for winter damage is unknown but previously has been associated with late flowering time. This study investigated heading date (measured for two years) and winter survivorship (measured for three years) in a multi-generation population generated from two winter-hardy lowland individuals and diverse southern lowland populations. Sequencing data (311,776 markers) from 1,306 individuals were used to evaluate genome-wide trait prediction through cross-validation and progeny prediction (n = 52). Genetic variance for heading date and winter survivorship was additive with high narrow-sense heritability (0.64 and 0.71, respectively) and reliability (0.68 and 0.76, respectively). The initial negative correlation between winter survivorship and heading date degraded across generations (F1 r = −0.43, pseudo-F2 r = −0.28, pseudo-F2 progeny r = −0.15). Within-family predictive ability was moderately high for heading date and winter survivorship (0.53 and 0.52, respectively). A multi-trait model did not improve predictive ability for either trait. Progeny predictive ability was 0.71 for winter survivorship and 0.53 for heading date. These results suggest that lowland ecotype populations can obtain sufficient survival rates in the northern United States with two or three cycles of effective selection. Despite accurate genomic prediction, naturally occurring winter mortality successfully isolated winter tolerant genotypes and appears to be an efficient method to develop high-yielding, cold-tolerant switchgrass cultivars.
Introduction
The perennial bunchgrass switchgrass (Panicum virgatum L.) is undergoing breeding for improved agronomic performance as a biomass crop (Sanderson et al. 2006; Bhandari et al. 2015). Commercial adoption of switchgrass is currently limited by insufficient yield performance and lack of robust markets (Dumortier et al. 2017; Brandes et al. 2018). Breeding efforts in switchgrass have improved yield through multiple routes and within multiple switchgrass ecotypes (Casler and Vogel 2014; Casler et al. 2018). One strategy for increasing yield in the north-central United States is through adoption of populations from the southern United States, broadly referred to as lowland ecotypes (Poudel et al. 2019a, 2020). Unimproved southern populations are capable of a 50% increase in biomass yield relative to northern-adapted populations when grown in northern regions (Poudel et al. 2020). This effect is due to late flowering traits which are common in the southern United States (Lowry et al. 2019). Late flowering switchgrass populations have longer periods of vegetative growth which result in greater biomass accumulation (Schwartz et al. 2010; Schwartz and Amasino 2013; Casler 2019). Unfortunately, late flowering populations also suffer from high levels of winter mortality (>90%) in northern environments (Poudel et al. 2019a). Fortunately, heritable variation in winter hardiness exists in many switchgrass gene pools including southern lowlands (Lovell et al. 2021).
Breeding progress has been reported for increased winter survivorship in populations through recurrent natural winter mortality, but molecular breeding methods could accelerate selection for winter survival (Poudel et al. 2020). There are multiple potential pathways that could contribute to winter damage and mortality (Peixoto and Sage 2016; Poudel et al. 2019b). For example, perennial plant species require hardening periods during fall to obtain cold hardiness. A reduced hardening period due to later initiation could result in plant mortality and has been observed in lowland ecotypes (Palmer et al. 2014). Alternatively, loss of winter hardiness (de-acclimatization) during a short winter or spring warming event could result in damage. Peixoto and Sage (2016) observed differential de-acclimatization in response to simulated spring warming followed by refreezing in Miscanthus cultivars. Another possible route is through root mortality, since lowland genotypes produce relatively coarse and long-lived roots compared with northern-adapted ecotypes (Chen et al. 2021). Roots produced by these lowland genotypes could be more prone to damage and slow to recover during harsh northern winters. Lastly, mid-winter minimum temperatures or anoxia due to ice cover could also induce mortality. The presence of multiple potential winter stressors both hinders the creation of robust cold tolerance assays and reduces the efficacy of single-year winter selection events. Likewise, it is likely that there are multiple genetic mechanisms for winter survivorship.
The lowland ecotype contains high diversity and multiple differentiated sub-populations (Evans et al. 2018). After undergoing multiple cycles of selection for winter survivorship, populations originating from a wide geographic region were all able to obtain greater than 50% winter survivorship (Poudel et al. 2020). However, a genetic association study of lowland survivors found few consistent genetic regions under selection across populations (Poudel et al. 2021). If multiple mechanisms for winter survival exist in lowland switchgrass, further research may reveal which germplasm sources or loci are the most advantageous to long-term yield gain. For example, many populations collected from the eastern United States were defined as the coastal ecotype and are capable of winter survival in the north-central United States (Poudel et al. 2019a; Lovell et al. 2021). However, these populations often flower up to a month earlier than the lowland ecotype, a characteristic which limits their use for biomass production (Casler 2019; Poudel et al. 2020). Initial observations suggest genetic linkage between flowering time and winter survival, which could limit the yield of northern-adapted lowland switchgrass populations (Schwartz and Amasino 2013; Poudel et al. 2020). Genetic correlations between flowering time and winter survival could be due to the natural history of the species (i.e., population structure; Lovell et al. 2021) or due to physical linkage of loci influencing each trait within the genome. The latter linkage could be due to physically close loci (linkage disequilibrium), or loci which impact both traits (pleiotropy). Determining if winter survivorship and flowering time are tightly linked can provide valuable information for determining which switchgrass sub-populations are the most promising for future breeding progress.
This study investigated the reliability and genetic determinants of winter survivorship within multiple lowland germplasm sources by constructing a multi-generation pedigree focused on crosses of two individuals with strong winter survival and a diverse group of southern lowland individuals. The dataset was used as training data for genome-wide predictions, which were evaluated using cross-validation and through prediction of progeny individuals. Last, individuals grown from bulked progeny seed from this experiment were evaluated for yield and compared to other switchgrass lines from populations under selection.
Germplasm and experimental design
In 2016, initial crosses were carried out by bagging inflorescences of switchgrass ramets in a greenhouse. The majority of crosses occurred between a diverse group of southern genotypes (n = 57) from lowland populations and two lowland genotypes that showed strong winter survivorship as multiple clonal replicates over 6 winters near Arlington, WI. There were also a limited number of crosses between southern genotypes which had not been evaluated for winter survivorship. The winter tolerant genotypes are referred to as Tolerant 1 and Tolerant 2, and they originated from an unknown population originally collected in North Carolina, South Carolina or northern Florida (Timothy DH, personal communication). Collectively, the individuals used for initial crosses will be referred to as Founders. Crosses resulted in 2,058 individuals unevenly distributed across 29 unique F1 families. The number of individuals per family was the result of variable seed quantity and viability. During the following year, a set of pseudo-F2 families were generated by crossing randomly selected siblings within F1 families. This resulted in 1,039 pseudo-F2 individuals unevenly distributed among 20 full-sib pseudo-F2 families. Some pseudo-F2 families were generated from pairs of siblings within an F1 family, so only 10 F1 families were represented in the pseudo-F2 families. Among controlled greenhouse crosses, the success rate for initial crosses among Founder individuals was 71%, with success defined as resulting in at least one progeny seedling from a parent (mean 36 seedlings per successful cross). Within F1 sibling matings, used to generate pseudo-F2 families, the success rate was 20%, but with a mean of 74 seedlings generated per successful cross parent. All Founder individuals and F1 parents of pseudo-F2 families were maintained in a greenhouse and divided into vegetative replicates by dividing crowns.
In July 2018, a completely randomized spaced plant nursery was planted with 195 genotypes. The spaced plants were genotypes maintained in 12-plant rows with 0.7 m between and 0.7 m within rows. Weeds were controlled between individual genotype crowns using roto-tilling and occasional hand weeding. The nursery contained a minimum of two vegetative replicates per individual. In 2019, vegetative replicates reserved from Founder individuals and F1 parents of pseudo-F2 families were used to replace individuals that were lost to winterkill in the spaced plant nursery during the first winter. In addition, an unreplicated spaced row nursery, stratified by genotype (unique genotypes planted with 0.7 m between rows and 0.3 m within rows), was established from the F1 families and pseudo-F2 families in the spring of 2018. Each row contained 10 unique genotypes.
In the summer of 2018, heavy rain and standing water in sections of the nursery resulted in uneven and poor plant vigor. To account for establishment damage that was unrelated to winter survival, fall vigor ratings were made on a scale of 0 to 5 during September in 2018 and 2019. Fall vigor was then used as a covariate for the subsequent spring vigor scores. A fall vigor score of 5 indicated a healthy switchgrass plant and zero indicated a deceased plant. Winter survivorship scores and heading date were measured for each individual in both nurseries during 2019, 2020, and 2021 (spring vigor only). Spring vigor was recorded using a scale from 0 to 20, with 20 indicating no visible damage and 0 indicating mortality. Heading date was recorded as the date on which panicles were observed on at least 50% of an individual's tillers.
Progeny performance experiment
A small population of progeny derived from the primary experiment were evaluated as part of a trial to measure yield performance in row-plot conditions. Individuals within the rows were used to estimate genomic prediction accuracy. Open-pollinated seed from eight pseudo-F2 individuals which survived three winters in the primary experiment (described above) were planted as a progeny population in a row-plot trial constructed from greenhouse-grown seedlings. The progeny seed was planted alongside half-sib families selected from the Liberty cultivar (9 families), a lowland population (1 family), and upland families (8 families). Each comparison family was the result of multiple selection cycles for late flowering or strong winter survival. Within each family, seedlings were randomly assigned to family rows (30 cm between plants, 90 cm between rows; 15 individuals per row) and rows were assigned using an incomplete block design. Uneven germination and seed quantities resulted in an unbalanced design among the genotypes. Due to strong germination, the bulked progeny seedlings were used as a check family and were assigned to each incomplete block. Therefore, the progeny population was replicated 26 times, while the other families were replicated a mean of 5.6 times. The validation nursery was planted at Arlington, WI in May 2020. All plants and plots were allowed to grow during the establishment year and biomass was removed after killing frost. No fertilizer was applied. Plots were harvested with a flail chopper and plot weights determined by a load cell. Biomass harvest occurred during November 2021 and dry matter adjustment was based on three dry matter samples collected on the same date (∼500 g fresh weight each). Heading date and winter survival were also measured on individual plants during the spring and summer of 2021. Winter survival scores were collected on 366 progeny individuals. Heading date was collected on 175 progeny individuals. Row-plot biomass yield best linear unbiased estimates (BLUEs) were calculated in a mixed model with incomplete blocks as a random effect and genotypes as a fixed effect. Post hoc means comparison was carried out using Dunnett's multiple comparison test, which compared all families to the progeny population.
Primary experiment DNA extraction, sequencing, and bioinformatics
Leaf samples were collected from all individuals after establishment in 2018. In 2019, a subset of the nursery was extracted for sequencing based on observed segregation for winter survival during spring 2019 and sufficient sample sizes within families.
This resulted in genetic data from 18 pseudo-F2 families (n = 1,013), 17 F1 families (n = 618), 18 Founder individuals, and 23 F1 parents. The Founder individuals were deeply sequenced (targeting 40 reads per site), while the F1 and pseudo-F2 individuals were shallow sequenced (∼1-5 reads per site). Sequencing data were generated at the DOE Joint Genome Institute using an Illumina NovaSeq S4 platform. Briefly, plate-based DNA library preparation for Illumina sequencing was performed on the PerkinElmer Sciclone NGS robotic liquid handling system using the Kapa Biosystems library preparation kit (Roche). Next, 200 ng of sample DNA was sheared to 500 bp using a Covaris LE220 focused ultrasonicator. The sheared DNA fragments were size selected by double-SPRI and then the selected fragments were end-repaired, A-tailed, and ligated with Illumina compatible sequencing adaptors from IDT containing a unique molecular index barcode for each sample library. The prepared libraries were then quantified using the KAPA Illumina library quantification kit (Roche) and run on a LightCycler 480 real-time PCR instrument (Roche). The quantified libraries were then multiplexed and the pool of libraries was prepared for sequencing on the Illumina NovaSeq 6000 sequencing platform using NovaSeq XP v1 reagent kits (Illumina) and an S4 flow cell, following a 2 × 150 indexed run recipe. The program BBDuk (version 38.87) was used to remove contaminants, remove adapter sequences and trim reads where quality dropped below 6 (Bushnell et al. 2014). Marker calling was carried out by aligning FASTQ reads using bwa-mem 0.7.17. Any PCR duplicates were marked using Picard tools. Alignment statistics were estimated using Samtools 1.9 and VCFs generated for each sample using Samtools mpileup (v1.9) and VarScan v2.4.3. Multi-sample VCFs were created after filtering for polymorphisms using bcftools-1.9.
To compensate for shallow sequencing within pseudo-F2 individuals, haplotype maps were assembled. This was carried out independently within each large family by creating a subset of bi-allelic markers with both contrasting homozygotes within the ancestral Founder individuals and sufficient read depth in the F1 parents. Using this marker subset, a sliding window (100 sites) counting reads from either grandparent was used to assign ancestral probability within each pseudo-F2. Specifically, for each position parental calls were made with probability <0.1 or >0.9 assigned as homozygote of the given Founder ancestor and probabilities >0.2 and <0.75 assigned as heterozygotes. Run-length equivalents of parentage calls were calculated. Then assigned calls were decoded into haplotype breakpoints and short runs of heterozygosity between two homozygous regions were dropped. Specifically, short runs of <100 sites were dropped with recalculation of run-length equivalents to construct a final haplotype map. Individuals were removed from the haplotype map if they contained greater than 85% heterozygosity (n = 8) or contained haplotypes from only one parent (n = 21). These individuals are most likely the result of pollen contamination.
Progeny evaluation DNA extraction and sequencing
Leaf samples were collected for sequencing from 52 individuals within the progeny population. Since the identification of outliers is the most critical goal of genomic selection, a sampling method was used to increase the incorporation of outlier individuals in the validation set.
Specifically, individuals were sampled using weights from an inverted density distribution of the population's mean Z-scores. The mean Z-scores were calculated from each individual's heading date and winter survivorship scores. This resulted in a subset of the population with trait values slightly oversampled from the tails of the Gaussian distribution. Genotyping by sequencing occurred on an Illumina sequencer (NovaSeq 6000) through the University of Wisconsin Biotechnology Center using PstI-MspI restriction enzyme digestion before ligating fragments to barcoded adaptors prior to polymerase chain reaction amplification. Data analysis of sequencer output used TASSEL (Glaubitz et al. 2014). Briefly, the barcoded sequence read outputs were collapsed into a set of unique sequence tags with counts. Tags were aligned to the reference genome (P. virgatum v5.1), assigning each tag to a position with the best unique alignment. The occupancies of tags for each sample were observed from barcode data. Resulting files were used to call single-nucleotide polymorphism markers (SNPs) at the tag locations on the genome, resulting in 1,072,642 SNPs.
Marker imputation and filtering
Because of the difference in sequencing platforms between the primary experiment and progeny population, the methods described below were run in parallel with and without the progeny samples included. The former SNP set was used for the variance analysis and genomic prediction cross-validation, and the latter was used for progeny prediction. For the primary data set, markers were filtered for the percentage of missing sites (<20%), minor allele frequency (<0.05), and linkage disequilibrium (<0.90 with a marker within the nearest 15 variant sites) within the Founder individuals. The analysis with the progeny used a less stringent minor allele frequency (<0.025) to maximize the number of overlapping sites between the two sequencing runs. These initial filters resulted in 365,996 markers within the primary marker array and 204,682 within the array including progeny. To supplement shallower sequencing within the F1 individuals, missing markers were assigned where the resulting allele state is unambiguous (matching or contrasting homozygotes among the parents). Next, imputation of the remaining sites was carried out using the expectation-maximization algorithm (A.mat function, "rrBLUP" R package; Endelman 2011). This imputed data set was then assigned to pseudo-F2 individuals based on the haplotype map. Sites with fewer than 20% calls within the haplotype map were removed and a second round of imputation was used on the sites missing from the haplotype map. This second round of analysis and filtering resulted in 311,776 markers within the primary experimental population and 99,367 within the progeny data set.
Quantitative genetic analysis
Variance estimation was carried out for heading date and winter survival scores using two single-trait models and a multi-trait model. The following model was used:
y = Xb + Zu_a + Zu_d + Zu_e + Zu_R + e
where y is the vector of phenotypes, b is the vector of fixed effects (year, fall vigor, and the interaction) with X the corresponding design matrix, Z represents the incidence matrix for individual genotypes, and the u_a, u_d, u_e, and u_R vectors represent the additive, dominance, epistatic and residual genetic effects, respectively. The e represents a vector of the residuals. Variance structures are u_a ~ N(0, G σ_a²), u_d ~ N(0, D σ_d²), u_e ~ N(0, E σ_E²), and u_R ~ N(0, I σ_R²), where σ_a², σ_d², σ_E², and σ_R² are the additive, dominance, epistatic, and residual genotypic variances.
The matrices G, D, and E are the realized additive relationship matrix, realized dominance relationship matrix, and realized epistatic relationship matrix. The matrix I is an identity matrix used for isolating residual genotypic effects. Relationship matrices were derived from marker data using the R package sommer (Covarrubias-Pazaran 2016). The model also allows independent error variances between measurement years. For multi-trait analysis, heading date and winter survival score BLUPs were predicted simultaneously assuming unstructured covariance between traits. Reliability was estimated for each trait and model. Reliability was calculated for each individual i as:
Reliability_i = 1 − (Vg_BLUP,i / Vg_i)
where Vg_BLUP,i is the prediction error variance of an individual i (from the diagonal of the C22g matrix), and Vg_i is the G matrix diagonal (Schmidt et al. 2019). This statistic is comparable with heritability but is calculated on a genotype-difference basis (Schmidt et al. 2019). For simplicity, genomic predictions were based on the above model with only major genetic effects (additive and residual) included. Genomic prediction was evaluated by cross-validation among families and by prediction of a generation of progeny individuals. Cross-validation used a two-stage process, where BLUEs were generated from a fixed effect model and the BLUEs were then used in a model which accounted for relatedness between genotypes to predict breeding values for heading date and winter survivorship. The second model included weights based on the inverse of the square root of their standard error from the BLUE model (Cullis et al. 1996; Schulz-Streeck et al. 2013). The BLUPs were generated from the additive predicted breeding value of individuals. The residual genotypic variance was included in the prediction models because the variable predicted a meaningful proportion of variance and resulted in a superior model based on Akaike information criterion score. Three cross-validation methods were used to assess model performance. These included masking half of a single family, masking an entire family, and masking the entire pseudo-F2 generation. Cross-validation predictive ability was calculated as the correlation between BLUPs from a complete model and BLUPs with a subset of field observations masked. Cross-validation masked either 50% or 100% of all pseudo-F2 or F1 families with greater than 40 individuals (n = 9). Progeny predictive ability was evaluated using individuals from the progeny performance experiment (n = 52) for both winter survivorship scores and heading date. For progeny prediction, an additional set of predictions was carried out integrating epistatic variance and dominance variance for winter survivorship score prediction and heading date prediction, respectively. In addition, predictive ability of the progeny set was evaluated by masking all pseudo-F2 individuals to estimate the erosion of predictive ability when training data are not updated.
Summary of field data
Overall, winter survivorship in Arlington, WI, was variable between and within families. Within the Founder individuals, mean winter survivorship ranged from 0 to 20 (Fig. 1). The Tolerant 1 and Tolerant 2 individuals had mean winter survivorship scores of 8.5 and 17.7, respectively. Populations from Texas had overall mean winter survivorship scores of 1.4 (maximum 8.1). Populations from Mexico had overall mean winter survivorship scores of 1.0 (maximum 8.5). One collection site from Mississippi contained three individuals and all had mean winter survivorship scores of 20 (Fig. 1).
Each of these individuals also flowered 15 days earlier than the next earliest Founder individual and likely represent a coastal ecotype population. Outside of this single outlier location, the mean winter survivorship score for the Mississippi populations was 7.3. The three individuals from the Kanlow cultivar had mean winter survivorship scores of 11.3. Three individuals from the Kansas population had mean winter survivorship scores of 6.7. Winter survivorship scores were collected for 3,158 individuals, 1,306 of which were re-sequenced. Since heading date measurements required at least 1 year of survival, only 1,458 individuals were measured for heading date, 609 of which were re-sequenced. During the initial winter (2018-2019), the F1 families sustained 48% mortality and the pseudo-F2 families sustained 66% mortality. During the second winter (2019-2020), mortality among the remaining individuals dropped to 9% for F1 families and 16% for the pseudo-F2 families. During the final year of measurements, the mortality rate was 10% for F1 families and 12% for pseudo-F2 families. The mean winter survivorship scores, indicating spring vigor or degree of survival, in 2019 were 6.9 for F1 families and 2.9 for pseudo-F2 families. In the spring of 2020, mean survivorship scores were 3.7 for F1 families and 1.3 for pseudo-F2 families. In the spring of 2021, mean survivorship scores were 7.1 for F1 families and 7.2 for pseudo-F2 families. Overall, a strong parent-progeny relationship was observed, with a winter survivorship mid-parent regression narrow-sense heritability of 0.71 (Fig. 2). Within the F1 families, mid-parent regression heritability was 0.70, and mid-parent regression heritability was 0.88 within pseudo-F2 families. The mean heading date was 227 ordinal DOY with a standard deviation of 10 days (Table 1). An overall phenotypic correlation of r = −0.32 was observed between winter survivorship and heading date (Supplementary Table 1). Similar to winter survivorship, narrow-sense heritability based on mid-parent regression was 0.64 overall, 0.54 within F1 families, and 0.74 within pseudo-F2 families. The mean progeny winter survival score in 2021 was 12.5, with only one deceased individual observed in spring 2020 (0.2% of observations). The mean heading date was 210 DOY (range = 197 to 222 DOY). The phenotypic correlation between heading date and winter survivorship scores was r = −0.15 within the progeny. Dry biomass yield BLUEs within the progeny evaluation row plots ranged from 4.75 Mg ha−1 to 10.19 Mg ha−1, with the largest value representing the bulked progeny seed population (Fig. 3). The correlation between population heading date BLUEs and yield was positive (Supplementary Table 1), and some comparison families (Fig. 3) had significantly lower yield relative to the progeny population.
Winter survival and heading date variance is additive and moderately to highly reliable
In the single-trait model, genetic variance for winter survivorship was primarily additive, with moderate residual genetic variance, no dominance variance, and a small degree of epistatic variance (Table 1). Mean reliability was 0.76, with means of 0.66, 0.76, and 0.80 within the Founder, F1 and pseudo-F2 populations, respectively. In the single-trait model, genetic variance for heading date was primarily additive, with moderate residual genetic variance, no epistatic variance, and a small amount of dominance variance. Mean reliability was 0.68, with means of 0.50, 0.70, and 0.62 within the Founder, F1 and pseudo-F2 populations, respectively.
In the multi-trait model, variances were similar to those from the single-trait models.
Genomic prediction has high predictive ability within lowlands
Broadly, genomic predictive ability of winter survivorship was high (Table 3). Mean predictive ability through cross-validation within large families was 0.73 when sibling observations were included in the training data (50% masked, Table 3). Mean predictive ability was greater for pseudo-F2 relative to F1 families (0.88 and 0.63, respectively). When whole families were removed from the dataset, mean predictive ability was 0.52, and mean predictive ability was greater for pseudo-F2 relative to F1 families (0.66 and 0.41, respectively). When observations of the entire pseudo-F2 generation were removed from the training data and predicted, the mean predictive ability was 0.79. However, high predictive ability in this iteration was largely due to the accuracy associated with predicting the mean performance of families. The mean predictive ability within families when the entire pseudo-F2 generation was masked was 0.40. Due to the smaller training dataset, the predictive ability of heading date was generally lower than winter survivorship (Table 4). Mean predictive ability within large families was 0.65 when sibling observations were included (Table 4). Mean predictive ability was slightly greater for pseudo-F2 relative to F1 families (0.70 and 0.63, respectively). When whole families were removed from the dataset, mean predictive ability was 0.53. Mean predictive ability was greater for pseudo-F2 relative to F1 families (0.58 and 0.51, respectively). Predictions of the entire pseudo-F2 generation resulted in a mean predictive ability of 0.74. Similar to observations for winter survivorship, the mean predictive ability within families was substantially lower when the entire pseudo-F2 generation was predicted (0.26). Progeny winter survivorship scores had a predictive ability of 0.71. With the pseudo-F2 families removed from the training data, the predictive ability unexpectedly increased to 0.73. A model including epistatic genetic variance resulted in a predictive ability of 0.72. Predictive ability for heading date within the progeny population was 0.53. A model including dominance genetic variance resulted in a predictive ability of 0.56. If the pseudo-F2 families were removed from the training data, the predictive ability increased slightly to 0.55. A post hoc analysis found that the inverted density sampling regime used for progeny population selection could inflate predictive ability by approximately 8% (reanalysis of Tilhou and Casler 2022a; unpublished data). This was not due to improved model performance, but appeared to be the result of predictive ability calculations with an excess number of individuals with trait values in the tails of the initial trait distribution. Strong predictive ability could be an artifact of the pedigree constructed in this study. Specifically, with only two major genetic donors of winter survivorship in the study (Tolerant 1 and Tolerant 2), winter survivorship could be proportional to the percentage of ancestry from a tolerant founder. A post hoc examination found significant correlations between winter survivorship scores of an individual and the genetic distance of that individual to the tolerant founders. The mean genetic distance between the progeny population (n = 52) and the two tolerant founders was negatively correlated with progeny winter survival scores (−0.64).
However, the relationship between genetic distance from tolerant founders and winter survivorship BLUPs was comparable within the pseudo-F2 families which were not derived from a cross involving a tolerant parent (−0.63).
Rapid improvements in winter survivorship can be made among diverse populations
Simply as a survey of biological adaptability, this study highlights how polyploid grasses can rapidly adapt to new environments through strong within-family segregation (Rieseberg et al. 1999). Despite moving three or four hardiness zones north of their initial environment (Fig. 1), many full-sib families produced progeny capable of surviving for three winters (Fig. 2). Intuitively, crosses which used one of the two winter tolerant founders resulted in greater improvements in survival (not presented). These results reinforce the conclusion that many southern populations contain traits which can confer winter survivorship, and this aligns with previous reports of breeding progress within multiple parallel collections (Poudel et al. 2020). Agronomically, this study shows that winter survivorship is highly heritable and promising lowland populations can be rapidly adapted to the north-central United States. Using the breeder's equation (Lush 1943; ΔG = σ_a i r), one can calculate the expected gain from selection using the narrow-sense heritability for winter survivorship (r² = 0.71, from parent-offspring regression in Fig. 2), the estimated additive variance (σ_a² = 12.5; Table 1), and selection of the top 10% of a population (i = 1.75). Theoretically, these variables predict an improvement of 5.2 points in winter survivorship scores per selection cycle (a short numeric check of this calculation appears below). Of course, the exact rate of gain is difficult to extrapolate since winter survivorship scores do not represent a linear biological trait. In practice, survivorship improvement could be further accelerated during early winter mortality events by increasing population size and selection intensity. Growing hundreds or thousands of plants to acquire tolerant individuals is feasible, particularly if individuals can be grown in dense seeded sod, which is a more accurate representation of commercial production conditions (Tilhou et al. 2022). This strategy, combined with gradual movement of material into harsher, northern sites, could maintain strong selection pressure during field evaluations (Poudel et al. 2020). The ability to rapidly adapt southern lowland populations to northern regions opens up many new breeding opportunities, since the lowland ecotype includes the center of switchgrass diversity along the United States Gulf Coast (Zhang et al. 2011; Evans et al. 2018).
Table 3. The cross-validated predictive ability of winter survivorship scores based on masking either 50% or 100% of a family. Predictive ability was the correlation coefficient of the genomically estimated breeding values (GEBVs) and the best linear unbiased predictors (BLUPs) within each family. Bias is the slope between the GEBVs and BLUPs. Families with multiple rows indicate independent sibling mating events.
Partial inbreeding may be a tool to accelerate switchgrass breeding progress
Sibling mating was adopted in this study to attempt to isolate genetic regions that could confer winter survivorship, but these results indicate that sibling crosses could be a useful breeding tool. In the current project, this method increased winter mortality in moderately tolerant populations. More broadly, however, it shows the feasibility of producing weakly-inbred lines of switchgrass.
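As referenced above, the 5.2-point estimate follows directly from the quantities reported in the text; a quick numeric check (a sketch only, with variable names of our choosing) is:

```r
# Values taken from the text: additive variance (Table 1), narrow-sense heritability
# from parent-offspring regression (Fig. 2), and selection of the top 10% of a population
sigma_a <- sqrt(12.5)    # additive standard deviation
r       <- sqrt(0.71)    # prediction accuracy, the square root of h2
i       <- 1.75          # standardized selection intensity for the top 10%

delta_G <- i * r * sigma_a
round(delta_G, 1)        # approximately 5.2 winter survivorship points per selection cycle
```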
Switchgrass self-pollination is rare and unpredictable (Liu and Wu 2012). Therefore, sibling mating could provide an alternative method which could facilitate study of switchgrass heterosis. In the current study, the pseudo-F2 families were visually shorter than their F1 relatives and prone to greater winter mortality (Fig. 3). Similar inbreeding depression has been reported in switchgrass and is a genetic outcome consistent with its outcrossing reproductive habit (Chang et al. 2022; Casler and Lee, personal communication). Since inbreeding depression tends to co-occur with heterosis (Mackay et al. 2021), these results suggest that further progress can be made through yield heterosis in switchgrass (Vogel and Mitchell 2008; Shrestha et al. 2021; Edmé and Mitchell 2021).
Is late flowering and winter tolerant switchgrass possible?
There have been observations of a strong antagonistic relationship between winter survivorship and late flowering (Schwartz and Amasino 2013). This relationship is a challenge because flowering date is positively correlated with biomass yield (Supplementary Table 1). In this study, the negative correlation between winter survivorship and heading date weakened across generations (F1 r = −0.43, pseudo-F2 r = −0.28, progeny r = −0.15). However, it is likely that the degree of genetic linkage is also being reduced. Due to the magnitude of linkage reduction across only three generations, a large proportion of previously observed genetic linkage was due to population structure. Since population structure can be rapidly reduced, this is promising evidence that tradeoffs between winter survivorship and yield will be minor. There is prior evidence that flowering is linked to nutrient remobilization and hardening in switchgrass (Schwartz and Amasino 2013; Sarath et al. 2014). This assumption is, most likely, broadly true for the species in the wild. However, selected switchgrass individuals could prepare for winter in response to photoperiod, rather than flowering per se. If this is possible, then full senescence is not biologically necessary for winter survival and prioritizing further extensions of vegetative growth would be valuable for biomass production. For example, most of this germplasm originated in the southern United States, and clearly produces greater biomass than locally derived populations (Poudel et al. 2020). The southern limit of the switchgrass native range is near the Mexico-Guatemala border. Therefore, it is conceivable that genotypes from still further south of the current collection region could provide long periods of vegetative growth and greater biomass accumulation. Alternatively, it is possible (even likely) that unseen damage or nutrient loss is occurring due to excessively late flowering, and that late flowering genotypes will have poor vigor in commercial sward conditions. In the progeny row-plot evaluation, two families derived from the Liberty cultivar produced comparable biomass yield to the lowlands despite earlier flowering time (Fig. 3; Supplementary Fig. 1). Future research is needed on this topic. From a breeding perspective, additional selection effort for vigor in late flowering genotypes may be necessary even after acceptable survivorship rates are obtained. It is likely that the ability to produce reliable long-term biomass requires genetic improvements beyond what is required for mere survival.
Genomic prediction for winter survivorship was accurate but appears unnecessary
Predictive ability of winter survivorship using genomic selection was relatively strong, and this level of predictive ability would result in reliable genetic progress with field evaluations occurring only as needed to recalibrate after one or two generations of selection (Tables 1 and 2). With this level of precision, recurrent genomic selection may only require a small number of selection cycles to generate a robust population, but the exact rate of gain is difficult to extrapolate since winter survivorship scores are arbitrary visual measurements. Although genomic selection is accurate, a simple pedigreed breeding program could result in comparable progress if sequencing is unavailable or cost prohibitive. Therefore, genomic selection will be most valuable as an additional target of selection if sequencing is already being carried out for a complex trait such as biomass yield (Simeao Resende et al. 2014). Progeny predictive ability was not reduced when the entire pseudo-F2 family generation was omitted, which was a surprising result. Usually, genomic prediction performance is superior when the population being predicted is closely related to the training data. This results in a penalty in predictive ability when multiple cycles of selection and recombination are carried out without updating training data (Neyhart et al. 2017). Therefore, this result suggests that only minor reductions in predictive ability occur when predictions are made across multiple generations. It is possible that the pseudo-F2 families provided poor training data and their omission improved model performance, but this is unlikely. Reliability was comparable between the F1 and pseudo-F2 families for winter survivorship (0.76 vs 0.80, respectively), and only a minor decrease was observed between F1 and pseudo-F2 families for heading date reliability (0.70 vs 0.62, respectively). Alternatively, this strong and persistent predictive ability could be an artifact of the population structure generated in this study. Specifically, winter survivorship was proportional to the overall percentage of ancestry from a tolerant founder. This correlation was strong within the progeny validation, but not as strong as the predictive ability obtained through genomic prediction (−0.62 and 0.71, respectively). This result is not surprising since this experiment utilized the GBLUP model, which relies heavily on genetic relationships for prediction (VanRaden 2008). Overall, this strong predictive ability indicates that individual sequencing may be an overly resource-intensive prediction strategy. Instead, a moderate number of rapid morphological markers may have sufficient predictive ability for winter survivorship selections. This strategy has been referred to as phenomic selection (Rincent et al. 2018). Further research would be needed to evaluate best practices in phenomic trait prediction in switchgrass, but promising results have been reported using near-infrared spectroscopy, which is already used for biomass quality traits (Lane et al. 2020).
Conclusion
This study described the winter survivorship and survival of multiple lowland switchgrass families across three years in the North-Central United States and found that genetic variance for winter survivorship is largely additive, with high narrow-sense heritability (0.71) and reliability (0.76).
Heading date, a potential covariate for winter survivorship, had similarly high reliability (0.68), but a multi-trait model including heading date did not improve winter survival predictive ability. Further, the genetic correlation between heading date and winter survivorship appeared to erode across multiple recombination events. In a single-trait model, genomic predictive ability was generally high, even with large portions of the dataset omitted. Despite these promising results for genomic prediction, phenotypic selection successfully isolated winter tolerant genotypes from multiple crosses of different backgrounds and may continue to be the most efficient selection strategy to develop high-biomass and sustainable switchgrass cultivars. Instead, genomic prediction of winter survival will be applicable in populations already being sequenced for more complex traits, such as biomass yield.
Data availability
Raw reads for this study are available in the Sequence Read Archive database (https://www.ncbi.nlm.nih.gov/sra; associated data included in supplementary files). Field data and progeny marker data in variant call format are available through the Dryad Digital Repository (https://doi.org/10.5061/dryad.2jm63xss7). Field data and pedigree information are included as supplementary files. To aid in replication, R code used for analysis is also attached as supplementary data. Supplemental material is available at G3 online.
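The Data availability statement above notes that the R code used for the analysis is attached as supplementary data. Purely as orientation, a GBLUP prediction along the lines described in the Methods could be sketched with the rrBLUP package (the package named there for the additive relationship matrix); the marker matrix M, phenotype BLUEs y, and validation indices test_ids below are hypothetical placeholders rather than objects from the authors' script.

```r
library(rrBLUP)

# M: marker matrix coded -1/0/1 (individuals x markers); y: named vector of phenotype BLUEs
A <- A.mat(M, impute.method = "EM")      # realized additive relationship matrix

dat <- data.frame(gid = rownames(A), y = y[rownames(A)])
dat$y[dat$gid %in% test_ids] <- NA       # mask the validation individuals

fit  <- kin.blup(data = dat, geno = "gid", pheno = "y", K = A)
gebv <- fit$g                            # genomic estimated breeding values for all genotypes

# Predictive ability: correlation between GEBVs and the masked observations
cor(gebv[test_ids], y[test_ids])
```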
2023-01-18T06:17:24.705Z
2023-01-17T00:00:00.000
{ "year": 2023, "sha1": "e811201c880d70cd91c15fa7ca0d5afc3fb5a5b8", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/g3journal/advance-article-pdf/doi/10.1093/g3journal/jkad014/49043362/jkad014.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "23e2c8e981a80f7e72b04b2b44667cb4c35d3571", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
56258994
pes2o/s2orc
v3-fos-license
Hesperoides, a new "hairy" flea beetle genus from southern Africa (Coleoptera: Chrysomelidae, Galerucinae, Alticini)
Hesperoides afromeridionalis gen. nov. and sp. nov. from the Republic of South Africa (Western and Eastern Cape Provinces and KwaZulu-Natal) is described. Despite some similarities with Hespera Weise, it exhibits major affinities with the genera attributed to the subtribe Aphthonini, especially with Aphthona Chevrolat and Montiaphthona Scherer. Data on distribution are supplied, along with preliminary ecological notes. Photomicrographs of the main morphological characters, including male and female genitalia, and the metafemoral extensor tendon are provided. A key to the six "hairy" flea beetle genera occurring in sub-Saharan Africa and their habitus photos are also given.
In this contribution, an interesting new flea beetle genus from the Republic of South Africa (Western and Eastern Cape Provinces and KwaZulu-Natal), Hesperoides gen. nov., is described. This new genus shows, as its most evident external characteristic, a distinct pubescence on the dorsal integuments, which is not common in Alticini. In fact, in sub-Saharan Africa, only 6 genera out of 84 can be considered "hairy": Epitrix Foudras, Eriotica Harold, Hespera Weise, Hesperoides gen. nov., Homichloda Weise, and Sanckia Duvivier. This characteristic, however, seems to have no phylogenetic meaning in the systematics of Alticini. Specimens were examined, measured and dissected using a Leica M205C binocular microscope. Photomicrographs were taken using a Leica DFC500 camera and the Zerene Stacker software version 1.04. Scanning electron micrographs were taken using a Hitachi TM-1000. Ten males and ten females were measured to determine the mean, standard deviation and range of some morphometric measurements for each sex. The terminology follows D'Alessandro et al. (2016, Fig. 10E) for the median lobe of the aedeagus, Döberl (1986), Furth & Suzuki (1994) and Suzuki (1988) for the spermatheca, and Furth (1982), Furth & Suzuki (1998) and Nadein & Betz (2016) for the metafemoral extensor tendon.
Material
Geographical coordinates of the localities were reported in degrees, minutes and seconds (DMS-WGS84 format); coordinates and geographical information included in square brackets were added by the authors using information from the web site of Google Earth. Chorotypes follow Biondi & D'Alessandro (2006).
Diagnosis. The new genus exhibits some external morphological similarities with Hespera Weise, another "hairy" genus occurring in the Afrotropical region: body elongate and weakly convex (Figs 1-2); pronotal and elytral surface uniformly pubescent (Fig. 7); pronotum without any dimples or antebasal sulci (Fig. 7); elytral margins not or very finely bordered laterally; tarsal claws subappendiculate (Fig. 12). Hespera, however, is a genus that occupies a basal position within Alticini, sharing several symplesiomorphies with Galerucini (Ge et al. 2012), while Hesperoides gen. nov. exhibits characters, such as the compact shape of the median lobe of the aedeagus (Fig. 13), the spermatheca of "alticine type" (Type A of Furth & Suzuki 1994) (Fig. 14c), and the anterior and posterior sclerotization of the vaginal palpi not connected (Fig. 14b), that allow it to be attributed to the more "modern" subtribe Aphthonina (= tribe Aphthonini sensu Konstantinov 1998). Very probably, the genus Penghou Ruan, Konstantinov, Prathapan, Ge & Yang, recently described from China and considered probably related to Hespera (Ruan et al.
2015), should also be included in this flea beetle subtribe. Therefore, despite the different external appearance, Hesperoides gen. nov. should be considered closely related, in sub-Saharan Africa, mainly to Aphthona Chevrolat and Montiaphthona Scherer (Biondi & D'Alessandro 2012).
Etymology. The name of the new flea beetle genus refers to its apparent similarity with the genus Hespera.
Ecological notes. No ecological notes are available for this new species. However, the collecting localities are included mainly within two different vegetation types: Mediterranean Scrub & Grassland (Fynbos and Renosterveld) and Temperate Grassland, Meadow & Shrubland (Moist Highveld Grassland) (Fig. 15) (Sayre et al. 2013).
2018-12-18T04:00:47.406Z
2017-12-22T00:00:00.000
{ "year": 2017, "sha1": "f098081fab0925c8de6e303456b404440827c3d9", "oa_license": "CCBYNC", "oa_url": "http://www.fragmentaentomol.org/index.php/fragmenta/article/download/257/248", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f098081fab0925c8de6e303456b404440827c3d9", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
235585045
pes2o/s2orc
v3-fos-license
The Impact of Employee Engagement on Organization's Productivity on United Methods on Relief Services
Employee engagement has emerged as a widespread organizational concept in recent years. It is the level of strength of the mental and emotional connection employees feel toward their workplace and its values and beliefs. When employees are engaged, they are aware of the business framework and work as a team to improve performance within the job for the benefit of the organization. Employee engagement initiatives have a direct effect on the organization's level of productivity. Every company or organization requires its workers to be engaged in their respective work. Employee engagement is linked to customer satisfaction, which is connected to an organization's financial success. Engagement arises when enough individuals pay attention to performing good work and care about what the company is trying to attain and how it is being accomplished. This helpful mentality and behavior only arise once people are satisfied with the jobs they do and are convinced that the organization, with an effective HR manager, supports them. This paper covers a literature review of several study findings and practices, employing a descriptive research method. It outlines the effect of workers' engagement on the productivity of the organization. It also discusses the factors affecting worker engagement and organizational results.
Introduction
Over the past years, many writers have written on the topic of employee engagement. Employee engagement refers to how staff perceive their work, the leadership of the management of the organization, the rewards and recognition they receive, and the communication style of the organization [1]. It is arguably the most critical metric for organizations in the 21st century. Employee engagement is directly impacted by the organization's value-adding practices and by personnel's perception of the organization. Human Resource experts believe that engagement has more to do with how employees feel about their job experience and how they are treated in the organization. It has more to do with sentiments, which are basically what drive bottom-line accomplishment in an organization. Employee engagement initiatives have a direct effect on the organization's level of production. The idea of engagement has logically progressed from past studies on empowerment, involvement, motivation, trust, and organizational commitment. The key factors in engagement are alignment of employees toward strategy, enabling employees to have the capability to engage themselves, and creating a sense of engagement. The many-sided nature of employee engagement is well captured by the Employee Engagement Association at United Methods on Relief Services. The researchers say that: 'fundamental to the concept of employee engagement is the idea that all employees can contribute to the successful functioning and continuous improvement of organizational processes. All in all, employee engagement is about generating prospects for personnel to link with their managers, colleagues, and the broader organization. It is about creating an environment where employees are motivated to connect with their work and care about doing a good job'.
Materials and Methods
The population of the study consists of employees and managers of the United Methodist Committee on Relief organization. They are in full-time employment.
The rationale behind the selection of this sample is the high exposure of the respondents at the managerial level. Since the staff may be viewed as the drivers of the organization and the country's economy, their perceptions may be deemed very influential and informed due to their strong work experience and educational background. Questionnaire Development A questionnaire was developed from the literature study, and selected employees were asked to indicate the importance of the 10 employee engagement constructs by answering 15 measuring criteria relating to employee engagement in the organization. The questionnaire employed a 5-point Likert scale to capture the respondents' perceptions of employee engagement. Although the 10 constructs depict specific employee engagement components, their synergetic effect provides a coherent picture for data collection when they are interpreted together. Data Collection The data for this study were gathered through a survey, which is the technique of collecting data by asking a set of articulated questions, in a programmed order and in an organized survey form, of individuals drawn so as to be representative of a defined population [2]. A total of 15 questionnaires were administered independently by this study to respondents for completion. To help guarantee a high response rate, respondents could complete the survey forms at the start of the meeting, where the researchers explained the importance of the study and waited for the survey forms to be finalized. A total of 15 questionnaires were completed and 3 questionnaires were incomplete, resulting in a non-response of 3 questionnaires and giving an overall response rate of 86.6%. Data Analysis The Statistical Package for the Social Sciences (SPSS), Version 22, was used to analyze the data statistically. Similar research in the past successfully employed the following statistical procedures and decision criteria [3]. • Exploratory Factor Analysis (EFA): because of its exploratory nature, factor loadings of 0.4 and above were used to confirm the items that measure each of the employee engagement effects [4] (Objective 3). • The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was used to ensure that the sample is adequate. Various authors have proposed that a KMO value of 0.6 should be the minimum acceptable value if exploratory factor analysis is to be considered; such values are regarded as mediocre, values between 0.7 and 0.8 as satisfactory, values between 0.8 and 0.9 as very good, and values above 0.9 as superb (Objective 4). This study followed advice from the literature and set a maximum significance value of 0.005 [4]. Values lower than 0.005 indicated that the data are suitable for multivariate statistical analysis and, therefore, for exploratory factor analysis (Objective 5). • The variance explained by the factor analysis serves as an indicator of the importance of each of the constructs in measuring employee engagement (Objective 6). A variance of 60% and higher indicates a good fit to the data; this study therefore aimed to achieve 60% of variance explained per factor. Satisfactory reliability coefficients exceed 0.70; a secondary, lower consistency coefficient was set at 0.58. As highlighted by authors such as Cortina, when ratio and interval scales are used (such as the Likert scale used in this questionnaire), a lower reliability coefficient is permissible (Objective 7) [5]. 
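The reliability and sampling-adequacy checks listed above (Cronbach's alpha and the KMO measure) can be illustrated with a short, library-free sketch. The data frame, item names, and response values below are hypothetical placeholders rather than the study's data, and the snippet only demonstrates the formulas, not the SPSS procedure the authors actually ran.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kmo(items: pd.DataFrame) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(items.values, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations from the inverse of the correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

# Hypothetical 5-point Likert responses: 12 respondents x 4 items of one construct
rng = np.random.default_rng(0)
base = rng.integers(2, 6, size=(12, 1))
responses = pd.DataFrame(
    np.clip(base + rng.integers(-1, 2, size=(12, 4)), 1, 5),
    columns=["eng_1", "eng_2", "eng_3", "eng_4"],
)
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
print(f"KMO: {kmo(responses):.2f}")
```

Under the thresholds quoted above, a KMO of at least 0.6 and an alpha of at least 0.70 (or 0.58 as the secondary floor for Likert-type scales) would be read as acceptable.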
Statistical Validation Each employee engagement construct was validated by calculating the KMO values, Bartlett's test of sphericity, the variance explained by the specific construct in the factor analysis, and the reliability of that construct. Furthermore, measuring criteria with factor loadings below 0.40 were excluded from the analysis, while criteria with strong double loadings were also excluded because of their dual nature [4,6]. This technique also checks whether the measuring criteria load onto one factor, meaning that the measures represent a single construct. Where several factors are identified, the sub-factors are recognized and labeled as separate sub-factors of the specific employee engagement construct. Results of the Reliability Assessment As stated earlier, the reliability assessment for the entire survey was 0.83. Results of the reliability assessment for the entire survey instrument indicated that the Cronbach's alpha for each section of the questionnaire was between 0.78 and 0.85. Therefore, the four segments of the study are all above the 0.70 required to establish reliability and are at an acceptable level of reliability [7]. Results for the 15 questions specific to the research question are shown in Table 1. Once again, the results are all above the 0.70 threshold, with the engagement section being 0.71, the communication section 0.81, the leadership section 0.71, and the loyalty section 0.80. Descriptive Statistics A total of 10 questions were asked of the respondents regarding various factors that contribute to their level of engagement with the organization. The questions were divided into four sections through the design procedure of the survey tool: Employee Engagement, Loyalty, Communication and Leadership. All but one category of responses was organized on a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, 5 = strongly agree). Demographics were compiled to show means, standard deviations, and frequencies for each. The respondents ranked "Employee Engagement" as the highest section related to their level of engagement (M = 3.87). Interestingly, within this top-ranking section, the number one ranking item was specifically related to having the materials and equipment needed to do the work (M = 4.1, SD = 0.84). The "Communication" section ranked second in the results with a mean of 3.81, followed by "Loyalty" with a mean of 3.80. Finally, "Leadership" was the lowest-ranked section, with a mean of 3.71; its lowest-scoring item was the question "My manager keeps me up to date on meeting my objective or goal" (M = 3.69; SD = 0.84). Examination of these preliminary outcomes makes it apparent that engagement of workers and communication have a substantial influence on employee engagement [8]. Employee engagement is the emotional and intellectual involvement in, and commitment to, the organization by personnel. These personnel actions are quantifiable at the work group level rather than at the individual level. Several scholars, such as Rothbard, believe that engagement is an important factor in business performance and is influenced by how the business is conducted [9]. Critical components include the role of leadership and the culture and ecosystem in the organization created through its policies, processes, communication mechanisms, and overall business practices. 
They believe that engagement prompts an employee to focus on the organizations' success and how he/she can best contribute to that success. They enjoy what they do and feeling valued for doing it. Communication and Employee Engagement It has been asserted that communication is a very significant variable in conveying a larger organizational effectiveness [10]. Similarly, other scholars also agreed with the significance of communication and the effect of management's communication on performance [11]. Intuitively, this only makes sense as one needs to have access to information, knowledge and training to meet expectations and performance objectives. The initial research findings from this study further support these theories with leadership, communication and employee loyalty having at least a 0.70 correlation to engagement. The engagement has been defined as consisting of relationships and process of communication that engage the employees while suggesting many researchers have overlooked the relationship between communication as a dimension of engagement [1]. Furthermore, the findings support previous work on organizational commitment in one's desire to continue within an organization due to psychological affection [12]. Leadership and Employee Engagement Ten leadership qualities essential to employee engagement had been identified, and intent to stay with communicating effectively being ranked third and developing talent and coaching employees as sixth [13]. These results would support Kahn's management style theory and provide feedback and development opportunities being related to one's level of engagement [14]. This was also confirmed with managers' communication being ranked number one in the work activities construct, which was the highestranking component of the overall engagement. One previous study revealed primarily that direct communication between the senior leadership and employees is strongly related to employee commitment and engagement [15]. More recent business trends suggest that employees want their leaders to be open and honest about the company details. They expect their leaders to be mentors and give them feedback. They also need leadership to motivate, encourage and generate a desire for the work, including face to face communication about the company's objectives, growth, and impact [13,16,17]. In the present study, leadership's communication and engagement correlation was 0.78, indicating a strong relationship between the two. The four key areas of effective leadership communication had been identified as purpose clarity, effective interfaces, information sharing, and communication behaviour [18]. In comparing the communication items of the study to these four areas, the factor analysis results further substantiated the importance of effective leadership communication relative to engagement. This includes the need to provide clear direction and information about business goals and objectives. The role of communication by management and senior leadership is also significant regarding one's level of commitment to the organization, with the role of the immediate manager being critical to building overall engagement. 
This research study also supports a strong correlation between leadership and employee engagement, where employees appreciate managers who keep them informed about how well they are meeting their goals/objectives (mean = 3.79), give them useful feedback on their performance (mean = 3.94), and provide clear direction for the future of the organization and their function (mean = 3.72). Discussion This study demonstrates that employee engagement has a major impact on both staff performance and overall organizational performance. This influence is positive and critical in the sense that the organization's productivity rises when employees are happy to deliver results. This answers the research question, "What is the impact of employee engagement on an organization's productivity?" This finding is surprising considering the ubiquitous nature of employee-manager relationships and the concerns about the deteriorating quality of productivity in some companies across the world. At present, personnel engagement, defined as a positive performance attitude that can result in an improved level of activation and identification with the organization's goals, leading in turn to a positive effect on the employee's work determination, is regarded as one of the main questions concerning the work of all organizations, and not in developed economies alone [20]. Employee engagement in this context is about how we create the conditions in which employees offer more of their capabilities and potential. Indeed, most managers had a general understanding of the significant impact that their business practices have on employee engagement. Findings from this study have several practical implications that are relevant to both organizations and employees. They imply that for sustainable business practices to become more common across the business world, which is the biggest polluter nationally and globally, employees must be involved in all affairs of the organization and not be allowed to feel left out. The positive association between the two variables means that the more we have of the independent variable (employee engagement), the more we will have of the dependent variable (loyalty). Figures and Tables Correlation analysis of employee engagement and managers' perception produced a Pearson's correlation coefficient of 0.811 with a significance of 0.000 at the 0.01 level, indicating that there was a positive and significant correlation between the two variables that deserved further investigation (Table 2). A model summary of R values showed that employee engagement is a strong predictor of organizational practices at 65.8%, meaning that if a manager has engaged employees at all levels of the organization's business, motivated practices are 65.8% more likely (Table 3). Conclusions After reviewing the various research and survey findings on employee engagement, it can be concluded that high levels of employee engagement will lead to improved employee commitment and involvement in the job and, consequently, to an inspired workforce that will work to attain the common organizational goals. Attracting a skilled workforce is not enough in an evolving economy like today's; rather, much needs to be done to retain employees and make them dedicated to the organization and its goals. 
Therefore, engagement is a state in which a person is not only intellectually devoted to his work but also has a deep emotional attachment to his job that goes beyond the call of duty and even beyond the company's immediate interest. All organizations should equip their staff with sufficient facilities and autonomy over their work to make it more exciting, providing an environment with work-life balance. Organizations should apply retention strategies as an outcome of Human Resource focus areas such as staff enthusiasm, career development, growth, and compensation. Hence, a friendly working environment increases an employee's engagement level and leads to higher productivity.
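As an illustrative aside to the figures reported in the Figures and Tables section above (Pearson r = 0.811; 65.8% of variance explained), the following sketch shows how such values are computed. The arrays and variable names are hypothetical stand-ins for the study's per-respondent scores, so the printed numbers will not match Tables 2 and 3.

```python
import numpy as np
from scipy import stats

# Hypothetical per-respondent composite scores (1-5 Likert means); illustrative only
rng = np.random.default_rng(1)
engagement = rng.uniform(2.5, 5.0, size=12)
productivity = 0.8 * engagement + rng.normal(0.0, 0.3, size=12)

# Pearson correlation, as reported for Table 2 in the text
r, p_value = stats.pearsonr(engagement, productivity)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")

# Simple linear regression; R^2 corresponds to the model-summary percentage (Table 3)
slope, intercept, r_value, p_reg, stderr = stats.linregress(engagement, productivity)
print(f"R^2 = {r_value**2:.3f} (share of variance in productivity explained by engagement)")
```

In a standard regression model summary, a figure such as 65.8% is the R^2 value, i.e. the share of variance in the outcome explained by the predictor.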
2021-06-22T17:55:13.374Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "1ae5e5749ee3e69dbc626dfee186a6117e0dae6d", "oa_license": null, "oa_url": "https://www.texilajournal.com/adminlogin/download.php?category=article&file=Academic_Research_Vol8_Issue2_Article_2.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5fdacedc1ad9d79ab1011cd11f433cb6d75819b7", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
43511415
pes2o/s2orc
v3-fos-license
Retrospective analysis of quality improvement when using liposome bupivacaine for postoperative pain control Background/objective Liposome bupivacaine, a prolonged-release bupivacaine formulation, recently became available at the Naval Medical Center San Diego (NMCSD); before availability, postsurgical pain for large thoracic/abdominal procedures was primarily managed with opioids with/without continuous thoracic epidural (CTE) anesthesia. This retrospective chart review was part of a clinical quality initiative to determine whether postsurgical outcomes improved after liposome bupivacaine became available. Methods Data from patients who underwent laparotomy, sternotomy, or thoracotomy at NMCSD from May 2013 to May 2014 (after liposome bupivacaine treatment became available) were compared with data from patients who underwent these same procedures from December 2011 to May 2012 (before liposome bupivacaine treatment became available). Collected data included demographics, postoperative pain control methods, opioid consumption, perioperative pain scores, and lengths of intensive care unit and overall hospital stays. Results Data from 182 patients were collected: 88 pre-liposome bupivacaine (laparotomy, n=52; sternotomy, n=26; and thoracotomy, n=10) and 94 post-liposome bupivacaine (laparotomy, n=49; sternotomy, n=31; and thoracotomy, n=14) records. Mean hospital stay was 7.0 vs 5.8 days (P=0.009) in the pre- and post-liposome bupivacaine groups, respectively, and mean highest reported postoperative pain score was 7.1 vs 6.2 (P=0.007), respectively. No other significant between-group differences were observed for the overall population. In the laparotomy subgroup, there was a reduction in the proportion of patients who received CTE anesthesia post-liposome bupivacaine (22% [11/49] vs 35% [18/52] pre-liposome bupivacaine). Conclusion Surgeons and anesthesiologists have changed the way they manage postoperative pain since the time point that liposome bupivacaine was introduced at NMCSD. Our findings suggest that utilization of liposome bupivacaine may be a useful alternative to epidural anesthesia. Introduction Postsurgical pain is a significant concern for patients undergoing inpatient and outpatient procedures at US hospitals. In a recent survey regarding pre-and postsurgical pain experiences of patients (N=300) from randomly selected surgical practices across the US, pain after surgery was the most prominent presurgery concern expressed by patients in the sample; 80% reported having concerns about postsurgical pain, and 46% indicated that these concerns resulted in "high" or "very high" levels of anxiety. 1 Such concerns are well founded, because approximately two-thirds of respondents reported experiencing postsurgical pain of moderate-to-extreme intensity. 1 The inadequacy of postsurgical pain control has been recognized for decades, 1 and numerous government agencies and clinical societies have published recommendations with strategies intended to improve postsurgical analgesia practices. [2][3][4][5] The American Pain Society, in collaboration with the Pain Care Coalition, 6 has also advocated for the creation of a national pain and palliative care research and quality program that would ensure that military personnel, veterans, and Medicare beneficiaries receive appropriate pain management. 7 However, despite these efforts, there appears to have been little or no improvement in patients' reported levels of postsurgical pain control over the past 20 years. 
1 Opioid analgesics are a cornerstone of postsurgical pain management 7,8 because these agents are widely recognized as the most effective option for controlling moderate-tosevere pain. 7,8 However, commonly reported opioid-related adverse events (ORAEs), including constipation, nausea, and vomiting, can be burdensome, 7-10 especially in the setting of abdominal surgery. 11,12 In addition, health care costs have been reported to be higher for patients who experience ORAEs because of increased pharmacy and nursing requirements and increased length of hospital stay. [7][8][9] To minimize the risk of ORAEs while still providing adequate postsurgical pain control, the American Society of Anesthesiologists (ASA) recommends the use of multimodal approaches to pain management that incorporate perioperative infiltration of local anesthetics into surgical incision sites whenever possible. 3 Historically, postsurgical analgesia regimens used at the Naval Medical Center San Diego (NMCSD) for patients undergoing chest or abdominal surgery consisted of opioid analgesia with adjunctive use of a continuous thoracic epidural (CTE) anesthesia in some of the laparotomy cases. In May 2013, liposome bupivacaine became available, on a restricted basis, for use at NMCSD. This prolonged-release liposomal formulation of bupivacaine is indicated for single-dose administration into the surgical site to produce postsurgical analgesia. 13 The safety and efficacy of liposome bupivacaine-based multimodal analgesic regimens compared with bupivacaine HCl and intravenous opioidbased patient-controlled analgesia have been investigated in several surgical models across multiple Phase II, III, and IV studies. [14][15][16] Positive outcomes have also been reported in exploratory prospective and retrospective studies that evaluated transversus abdominis plane (TAP) infiltration of liposome bupivacaine for postsurgical analgesia in patients undergoing abdominoplasty, hysterectomy, prostatectomy, or umbilical hernia repair. [17][18][19][20] On the basis of these findings, we hypothesized that incorporating liposome bupivacaine into multimodal analgesia regimens at NMCSD could result in clinical quality improvement (CQI). This analysis evaluated whether postsurgical outcomes, including pain scores, opioid consumption, length of intensive care unit (ICU) stay, and length of hospital stay, improved after liposome bupivacaine became available at NMCSD for use in patients undergoing laparotomy, sternotomy, or thoracotomy procedures. The objective was to determine whether possible quality improvements associated with liposome bupivacaine justify the additional pharmacy cost of liposome bupivacaine compared with traditional postsurgical analgesia. study design This analysis was based on a retrospective chart review performed for CQI purposes. As such, CQI was implemented as part of practices administered to improve patient care at NMCSD; the analysis was not required to go through a formal institutional review board process or obtain informed consent, as per guidance from the US Department of Health and Human Services. 
21 Data from all patients who underwent laparotomy, sternotomy, or thoracotomy procedures during the 12 months after liposome bupivacaine (bupivacaine liposome injectable suspension, EXPAREL ® ; Pacira Pharmaceuticals, Inc, Parsippany, NJ, USA) 13 became available at NMCSD (May 2013 through May 2014; post-liposome bupivacaine group) were compared with data from patients who underwent these same surgical procedures during the 6 months before the introduction of liposome bupivacaine at NMCSD (December 2011 through May 2012; pre-liposome bupivacaine group). Patients were identified for inclusion using current procedural terminology (CPT ® ) codes for laparotomy, sternotomy, and thoracotomy (Table 1). Pain control methods used in these surgical procedures included CTE anesthesia (laparotomy patients only), TAP block, wound infiltration with liposome bupivacaine, and wound infiltration via elastomeric pump (used prior to formulary adoption of liposome bupivacaine for thoracotomy procedures; patients received a continuous infusion of bupivacaine HCl into their surgical wound for 3 days after surgery). Outcomes Each medical record was reviewed and relevant data were extracted for each patient. Collected demographic and baseline clinical characteristics included age, sex, ASA physical status classification score, and preoperative pain score on an eleven-point numeric rating scale (NRS; 0 = no pain to 10 = worst pain imaginable). Pain scores captured in nursing notes were also recorded at 4-hour intervals during the first 72 hours after surgery. Postsurgical consumption of intravenous and oral opioids (converted to oral morphine equivalents) was recorded for each patient; drugs used included morphine, hydromorphone, fentanyl, meperidine, hydrocodone, and oxycodone. Length of ICU stay and total hospital length of stay (both in days) were recorded for each patient. Data analysis Data for patients in the pre- and post-liposome bupivacaine groups were stratified by surgery type (laparotomy, sternotomy, or thoracotomy). Additional subset analyses were performed for the laparotomy group based on pain control method (CTE anesthesia or no CTE in the pre-liposome bupivacaine group, and CTE anesthesia only or liposome bupivacaine only in the post-liposome bupivacaine group). Epidural use was not an option for sternotomy or thoracotomy procedures. Comparisons between the pre- and post-liposome bupivacaine groups were made for the outcomes of overall mean and highest mean pain scores through 72 hours postsurgery, opioid use (milligrams of oral morphine equivalents), length of ICU stay, and length of hospital stay. Data were summarized using descriptive statistics. The between-group comparisons were conducted using a t-test, with the significance level set at P<0.05. Results Patients A total of 182 patients were included in the analysis: 88 in the pre-liposome bupivacaine group (laparotomy, n=52; sternotomy, n=26; and thoracotomy, n=10) and 94 in the post-liposome bupivacaine group (laparotomy, n=49; sternotomy, n=31; and thoracotomy, n=14). Of the laparotomy patients in the post-liposome bupivacaine group, eleven received CTE anesthesia and 38 did not. Of the laparotomy patients in the pre-liposome bupivacaine group, 18 received CTE anesthesia and 34 did not. 
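The between-group comparison described under Data analysis (opioid use in oral morphine equivalents compared with a t-test at P<0.05) can be sketched as follows. The conversion factors, drug names, and per-patient totals in this snippet are illustrative assumptions for demonstration only; they are not the study's conversion table or data.

```python
import numpy as np
from scipy import stats

# Illustrative oral-morphine-equivalent (OME) conversion factors; the factors and
# doses below are assumptions for demonstration, not the study's conversion table.
OME_FACTORS = {"oral_morphine": 1.0, "oral_oxycodone": 1.5, "oral_hydromorphone": 4.0, "iv_morphine": 3.0}

def to_ome_mg(doses_mg: dict) -> float:
    """Convert a patient's opioid doses (mg by drug/route) to oral morphine equivalents."""
    return sum(OME_FACTORS[drug] * mg for drug, mg in doses_mg.items())

print(to_ome_mg({"iv_morphine": 10, "oral_oxycodone": 20}))  # 10*3.0 + 20*1.5 = 60.0 mg OME

# Hypothetical per-patient OME totals for the two periods
pre_group = np.array([340.0, 310.0, 390.0, 360.0, 295.0, 375.0])
post_group = np.array([240.0, 210.0, 260.0, 225.0, 255.0, 200.0])

# Two-sample t-test with the significance level set at P < 0.05, as in Data analysis
t_stat, p_value = stats.ttest_ind(pre_group, post_group)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant = {p_value < 0.05}")
```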
Patient demographic and baseline clinical characteristics are summarized in Table 2. The groups were relatively well matched at baseline, with the exception of preoperative pain scores, which were significantly lower in the overall post-liposome bupivacaine group, as well as in the laparotomy and sternotomy subgroups. A greater proportion of patients had severe pain (NRS score ≥7) preoperatively during the pre-liposome bupivacaine period (10% [9/88]) compared with patients who underwent surgery during the post-liposome bupivacaine period (2% [2/94]). Results for subgroups stratified by type of surgery Mean pain scores, postsurgical opioid use, and lengths of ICU and hospital stay results are summarized in Table 3. In patients who underwent laparotomy, mean length of hospital stay was significantly shorter in the post-liposome bupivacaine group (5.8 days) compared with the pre-liposome bupivacaine group (7.4 days; P=0.027). In patients who underwent sternotomy, the mean maximum postsurgical pain intensity score was significantly lower in the post-liposome bupivacaine group (5.7) compared with the pre-liposome bupivacaine group (7.2; P=0.039). No other statistically significant between-group differences were observed. However, there was a trend toward reduced postsurgical opioid use in the post-liposome bupivacaine group in the subset of patients who underwent laparotomy (232 vs 345 mg of oral morphine equivalents in the post- and pre-liposome bupivacaine groups, respectively; P=0.059), and a trend toward increased opioid use in the post-liposome bupivacaine group in the subset that underwent sternotomy (254 vs 192 mg of oral morphine equivalents in the post- and pre-liposome bupivacaine groups, respectively; P=0.051). Results for subset analyses of laparotomy patients Results for mean pain scores, postsurgical opioid use, and lengths of ICU and hospital stays for laparotomy patients stratified by pain control method are summarized in Table 4. On average, length of hospital stay was significantly shorter (by ∼1 day; P=0.028) in patients who received CTE anesthesia during the period when liposome bupivacaine was available compared with the time period before liposome bupivacaine became available. An analysis of data from the pre-liposome bupivacaine period showed that patients who received CTE anesthesia had significantly longer mean ICU stays and longer mean hospital stays than patients who did not receive CTE anesthesia (P<0.05 for both comparisons; Table 4). However, those who received CTE anesthesia reported lower mean pain intensity scores (P=0.037). Among patients who underwent laparotomy during the period when liposome bupivacaine was available, those who received liposome bupivacaine had a significantly shorter mean duration of ICU and hospital stay than those who received CTE anesthesia (P<0.05 for both comparisons; Table 4). No statistically significant between-group differences were observed in mean pain scores or amount of mean postsurgical oral opioids consumed in these two patient subsets. Discussion Local anesthetic wound infiltration and TAP block are gaining acceptance as simple and effective techniques to manage postoperative pain following a variety of open and laparoscopic procedures. [22][23][24] Wound infiltration analgesia is typically administered as a single injection at the end of an operation while patients are under general or regional anesthesia, 22 while TAP block is injected into the neurovascular plane of the abdominal musculature. 
25 Multimodal analgesia regimens that include wound infiltration or TAP blocks with local anesthetics are reported to be associated with decreased postoperative pain scores, reduced opioid consumption, fewer ORAEs, earlier patient mobility, shorter hospital stays, and greater patient satisfaction compared with other pain management strategies. 22,23,[26][27][28] Side effects and surgical complications are infrequent, and systemic toxicity is rare with TAP block or wound infiltration of local anesthetics; in contrast, epidural approaches can be associated with unwanted motor blockade, bladder dysfunction, and other potentially serious complications. 24,26,27,[29][30][31][32][33] Wound infiltration and TAP blocks are also simpler than epidural analgesia and do not require special expertise to perform. 23,30 TAP blocks can also be used for patients undergoing major surgery who have contraindications to epidural analgesia (eg, those with clotting disorders or sepsis). 27,28 Based on these findings from the medical literature, we postulated that incorporating liposome bupivacaine into multimodal analgesia regimens for postsurgical pain management at NMCSD could result in CQI at our facility. This retrospective chart review was undertaken to compare postsurgical outcomes before and after liposome bupivacaine became available at NMCSD. Findings from our analysis suggest that overall, the quality of postsurgical analgesia (mean pain intensity scores and amounts of orally administered opioids consumed) was similar during the pre- and post-liposome bupivacaine periods, but the average length of hospital stay was significantly shorter during the post-liposome bupivacaine period. This difference was apparently driven by the between-group difference in the laparotomy surgery subgroups, which represent the largest patient populations in the study. The number of patients included in the sternotomy and thoracotomy surgery treatment groups may have been too small to show statistically significant differences on this parameter. Interestingly, the use of CTE anesthesia decreased after liposome bupivacaine became available at NMCSD. During the pre-liposome bupivacaine period, 35% (18/52) of patients received CTE anesthesia compared with 22% (11/49) of patients during the post-liposome bupivacaine period. This is noteworthy because of the potential safety concerns associated with the use of CTE anesthesia (eg, spinal hematoma, abscess, and permanent neurologic damage). 33 Avoiding the use of CTE anesthesia can be particularly useful in cases wherein anticoagulation, ambulation requirements, hemodynamic concerns, or inpatient epidural management requirements may preclude the use of epidurals. [33][34][35] Some anesthesiologists have indicated that they are performing fewer epidural procedures, in large part due to fear of litigation and lack of evidence supporting clinical benefits compared with other less-invasive pain management strategies. 33 Analgesic techniques that allow for avoidance of continuous infusion modalities and/or are associated with shorter hospital stays may lead to decreased health care costs. While formal cost analyses were not conducted in this study, even a 1-day reduction in hospital stay would be expected to result in significant cost savings. Based on data from a recent survey of clinicians and economic professionals from US hospitals, the average hospital cost per day following inpatient general/colorectal surgery is ∼US$2,000. 
36 Findings from the same survey 36 indicate that the estimated average direct cost per hospital stay for a patient who uses intravenous opioid patient-controlled analgesia is ∼$600, plus an average of ∼4 hours of staff time associated with administration, documentation, and monitoring. The direct cost associated with continuous infusion of local anesthetics via elastomeric pumps is ∼$650 per patient plus ∼3 hours of staff time associated with administration, documentation, and monitoring, while the direct cost of a 266 mg/20 mL vial of liposome bupivacaine is ∼$300. Assuming that a similar level of analgesia is achieved with each modality, use of liposome bupivacaine could lead to meaningful cost savings (∼$300 per patient or $300,000 per 1,000 patients). Furthermore, findings from a series of open-label economic studies support the use of liposome bupivacaine-based multimodal analgesic regimens over intravenous opioid-based regimens for postsurgical analgesia in patients undergoing open colectomy, 37 laparoscopic colectomy, 38 and ileostomy reversal. 39,40 A pooled analysis of data from the 191 patients (liposome bupivacaine-based multimodal analgesia, n=86; intravenous opioid-based analgesia, n=105) across these studies showed that the multimodal analgesia group had significantly less mean postsurgical opioid consumption (38 vs 96 mg morphine equivalents; P<0.0001), shorter median hospital length of stay (2.9 vs 4.3 days; P<0.0001), and lower mean hospitalization costs ($8,271 vs $10,726; P=0.011), compared with intravenous opioid-based analgesia. 16 There are several limitations to the interpretation of results from our analysis. The study was inherently limited by its retrospective observational design, which could not control for possible selection bias (eg, sicker/more complex patients may have been more likely to receive CTE anesthesia than healthier patients). Moreover, the results were derived from patients who were treated at a single institution; our observations may not be generalizable to other institutions or patient populations. Finally, there are several potential factors other than the intervention studied that could have contributed to the observed results (eg, other improvements in surgical or postoperative practices may have occurred between December 2011 and May 2014, which could have influenced the results). It should also be noted that although the characteristics of the patient groups treated during the pre- and post-liposome bupivacaine periods of the study were generally similar, mean preoperative pain intensity scores were significantly higher in the pre-liposome bupivacaine group (2.0) compared with the post-liposome bupivacaine group (0.7; P=0.001). This difference was primarily driven by a higher number of outliers in the post-liposome bupivacaine group. Larger, prospective, controlled studies are needed to confirm the reproducibility of these findings across a heterogeneous range of patient populations and surgical practices. Conclusion This analysis allowed us to observe how our surgeons and anesthesiologists have changed the way they manage postoperative pain after liposome bupivacaine was introduced at NMCSD. Since the time point that liposome bupivacaine became available, there has been a noticeable decrease in the use of CTE anesthesia. 
Given the relative simplicity of administration and the seemingly comparable efficacy for postsurgical analgesia, liposome bupivacaine may be a useful alternative to epidural anesthesia.
2018-04-03T05:36:18.381Z
2016-04-21T00:00:00.000
{ "year": 2016, "sha1": "9954a1f100c0d6a3950aaae6f33f458e934d6dfe", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2147/jpr.s102305", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "17b8a64cb4d8972ba06f4158af044e1cdfccde08", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237449321
pes2o/s2orc
v3-fos-license
Recent Advances of Lattice Boltzmann Method in Microfluidic numerical Simulation The technology of microfluidics is widely adopted in various fields such as biomedicine, microanalysis, and microelectronics. For example, pharmaceutical scientists often use microfluidics as a tool for drug delivery or cell separation. The LBM (Lattice Boltzmann method) is a commonly used numerical simulation in microfluidic researches. LBM is used extensively for simulations containing complicated boundary conditions and multiphase interfaces as it needs relatively low computing power compared to other numerical simulation methods in complex situations. The brilliant capability in parallelism also allows it to have a high multitasking performance, increasing overall efficiency. In this paper, we reviewed several typical applications of LBM in the following three fields: (1) particle regulation; (2) flow control; (3) drug delivery. We concluded defects in current studies and proposed potential improvements to be investigated in the future. Introduction Microfluidics is widely used in cell separation [1]and drug delivery [2]. Experiment, theory analysis, and numerical simulation are the three main means to investigate the microfluidic-related phenomenon and mechanism. The experiment is usually accurate, but they are often expensive. Although the theoretical analysis is cheap, it is usually not accurate enough. Numerical simulations usually achieve a trade-off between cost and accuracy. For numerical simulation, the Lattice Boltzmann method (LBM), a flow field simulation method established and developed in the mid-1980s, has gained wide attention in recent decades. For example, LBM is used to simulate a 2D electrothermal pump [3]. The droplet-on-demand (DOD) is an important part of the microfluidic system. In [4], the authors used LBM numerical simulation to explore a lowcost Polymerase Chain Reaction based design and manufacturing method, and analyzed a flow-focused DOD system. To verify the suitability of the continuity medium hypothesis in the design of the emitter, the LBM was used to study the flow characteristics of the emitter [5]. The lattice Boltzmann method and dynamic ray tracing method are used to study the retention and coalescence behavior of microfluidic droplets under the action of an optical trap [6]. The use of two-dimensional numerical simulations with a reduced particle-based Reynolds number for studying particle migration in a microchannel with equally spaced multiple constrictions was investigated. Two-and three-dimensional colloidal lattice Boltzmann models were used to simulate particle-fluid hydrodynamics [7]. This paper mainly reviews the application of LBM in microfluidic numerical simulation. Sec.2 introduces the basic principles and implementation steps of LBM. LBM applications in microfluidic related particle regulation, flow control, and drug delivery are summarized in Sec. 3. Conclusion and prospects are provided in Sec.4. Background LBM is a numerical method to simulate a flow field based on molecular motion theory and statistical mechanics. LBM is a kind of flow field simulation method established and developed in the mid-1980s. It inherits the main principle of lattice gas automata (Lattice Gas Automaton, LGA) and improves the LGA. Its particle distribution function satisfies the lattice Boltzmann (LB) equation. 
The LB equation can be expressed as

f_i(\mathbf{x} + \mathbf{e}_i \delta t, t + \delta t) - f_i(\mathbf{x}, t) = \Omega_i(\mathbf{x}, t),

where f_i(x,t) and Ω_i(x,t) are the particle distribution function and collision operator in the ith direction at the position x and the time t, δt is the time step, and e_i is the lattice velocity in the ith direction. The most commonly used collision operator is

\Omega_i(\mathbf{x}, t) = -\frac{1}{\tau_f}\left[f_i(\mathbf{x}, t) - f_i^{eq}(\mathbf{x}, t)\right],

where τ_f is the relaxation time, and f_i^{eq} is the equilibrium distribution function, which is defined as

f_i^{eq} = \omega_i \rho \left[1 + \frac{\mathbf{e}_i \cdot \mathbf{u}}{c_s^2} + \frac{(\mathbf{e}_i \cdot \mathbf{u})^2}{2 c_s^4} - \frac{\mathbf{u} \cdot \mathbf{u}}{2 c_s^2}\right],

with ω_i the lattice weights, ρ and u the macroscopic density and velocity, and c_s the lattice sound speed. The LBM simulation includes the following steps: (1) simulate the collision of fluid particles at each lattice point; (2) stream (spread) the post-collision distributions to the neighboring nodes; (3) apply boundary conditions; (4) update the time step; (5) calculate the required forces; (6) update the local density and velocity; (7) let the particles relax toward the equilibrium state; (8) output the LBM simulation results. Application of LBM in Microfluidic Flow In this section, typical applications of LBM in particle regulation, flow control, and drug delivery are introduced. Particle Regulation In [8], the effect of particle compliance on the inertial migration in microfluidic channels is investigated. The authors effectively captured the inertial ordering of particles observed in experiments and present the dimensionless particle migration velocity as a function of the distance from the channel wall. They find that, for all channels, the velocity is positive closer to the wall and negative closer to the midplane; a positive velocity means that the particle moves away from the channel wall, whereas a negative velocity indicates migration away from the midplane of the channel. Therefore, there is a stable equilibrium position away from the channel center, where the particle velocity is zero. The simulations show the following: (1) the equilibrium position of larger particles is closer to the channel midplane; similarly, the particle equilibrium position depends on the shell compliance, with softer particles settling closer to the midplane than hard ones, so particle deformation enhances a lift that keeps the particle away from the wall; (2) the equilibrium positions of rigid and soft particles transported by channel flow do not vary with the channel Reynolds number Rec in the range of 1 to 100; (3) the inertial effects in microfluidic flows can yield a useful means for sorting, focusing, and separating polymeric particles and biological cells by size, compliance, and the quality of encapsulated fluids. In [9], the study aims to understand the effect of deformability on red blood cell (RBC) behavior in deterministic lateral displacement (DLD) devices. To this end, a model for deformable RBCs in DLD devices is developed and benchmarked. In the numerical model, the lattice Boltzmann method (LBM) is used for the fluid phase, the finite element method (FEM) for the membrane dynamics, and the immersed boundary method (IBM) for the bidirectional fluid-membrane coupling. In order to reduce the simulation domain to a single obstacle unit, shifted periodic conditions are used in the flow direction. The following simulations are performed: (1) simulations without particles to validate the Stokes flow assumption, with the bounce-back boundary condition describing the confining walls and obstacles in the simulated DLD geometry; (2) analysis of the streamlines to obtain the separation distance between the streamline and the wall; (3) simulation of the stretching of an RBC in an optical tweezer. 
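As an illustration of the collision-and-streaming cycle summarized in Sec. 2, the following is a minimal sketch of a D2Q9 lattice Boltzmann solver with a single-relaxation-time (BGK) collision operator and periodic boundaries. It is a generic, textbook-style example written for this review, not the solver used in any of the studies cited; the grid size, relaxation time, and initial velocity field are arbitrary illustrative choices.

```python
import numpy as np

# D2Q9 lattice: velocities e_i, weights w_i, BGK relaxation time tau_f
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0          # lattice sound speed squared
tau_f = 0.8              # relaxation time (> 0.5 for stability)
nx, ny, steps = 64, 32, 500

def equilibrium(rho, u):
    """f_i^eq = w_i * rho * [1 + e.u/cs2 + (e.u)^2/(2 cs2^2) - u.u/(2 cs2)]."""
    eu = np.einsum("id,xyd->ixy", e, u)          # e_i . u at every node
    uu = np.einsum("xyd,xyd->xy", u, u)
    return w[:, None, None] * rho * (1 + eu/cs2 + 0.5*(eu/cs2)**2 - 0.5*uu/cs2)

# Initial state: uniform density, small sinusoidal shear velocity
rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
u[:, :, 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)[None, :]
f = equilibrium(rho, u)

for _ in range(steps):
    # (1) collision: relax each population toward the local equilibrium
    f += -(f - equilibrium(rho, u)) / tau_f
    # (2) streaming: shift each population along its lattice velocity (periodic domain)
    for i, (ex, ey) in enumerate(e):
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
    # (3) macroscopic update: density and momentum moments of the distributions
    rho = f.sum(axis=0)
    u = np.einsum("ixy,id->xyd", f, e) / rho[..., None]

print("mean density:", rho.mean(), "max |u|:", np.abs(u).max())
```

With τ_f > 0.5 and a small initial velocity, the shear wave imposed at t = 0 simply decays viscously, which is a common sanity check before adding the boundary conditions and forces of steps (3)-(5) in the list above.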
Three-dimensional, high-resolution immersed-boundary-lattice-Boltzmann-finite-element simulations of single deformable RBCs have been performed in deterministic DLD devices. While keeping other geometric parameters fixed, the authors varied the row shift d of the DLD setup and the RBC capillary number Ca, the ratio between the deforming viscous stresses and the restoring elastic stresses of the RBC membrane. They showed that a deformability-based separation of RBCs in DLD devices is possible. When the volume fraction of red blood cells becomes larger, the separation efficiency decreases, with a risk of increased flow resistance and blockage. This study can contribute to the design of new DLD setups, to deformability-based red blood cell separation, and to the understanding of red blood cell trajectories in such devices. This may ultimately help design cheaper, faster, and more reliable diagnostic equipment to detect malaria and other diseases, for example by separating infected red blood cells from healthy red blood cells. 3.2 Flow Control San-John An et al. [10] performed a numerical study of rotating and oscillating stirrers in a microchannel. They proposed a new time-averaged mixing index formula to obtain critical values of the stirring speed and to maximize the mixing efficiency at low Reynolds number. They used the LBM, as it is convenient for adjusting the boundary conditions of the experiment, for example the Reynolds number and the revolutions per minute of the stirrers, to evaluate the mixing rate for periodic, unsteady flows. The simulation gives the following results: (1) a new time-averaged mixing index with constant periods was defined; (2) the mixing of a stationary stirrer at a lower Reynolds number is larger than that of a stationary stirrer at a large Reynolds number. Zachary et al. [2] used computational simulations to probe the utility of actuated synthetic cilia for local regulation of heat transport in microfluidic channels. They discovered that heat transport could be enhanced by beating synthetic cilia. The TLBM (thermal lattice Boltzmann method) is combined with a lattice spring model to simulate thermal energy transport and fluid dynamics because of its low computing-power requirements. The TLBM model obtains the overall macroscopic result by simulating collisions between fluid particles at the microscale and then combining all the microscopic values to obtain the macroscopic solution. Two separate distribution functions are used in this TLBM model: a density distribution function for the microscopic fluid particles and an energy distribution function for the varying temperature. A few boundary conditions, for example no slip and constant temperature, together with periodic conditions representing the regular arrangement of cilia built up from a single cilium, are also imposed. Zachary et al. [2] calculated both steady and unsteady heat distributions under conditions of conductive and convective heat transport to validate the TLBM simulation. In particular, conductive heat transfer is modeled in rectangular domains bounded by walls with dissimilar temperatures. Convective heat transport was validated with a model in which the inlet temperature was instantly elevated to a higher value at t = 0. The results show that the TLBM method is valid and can reproduce numerical and analytical solutions for the system. The research shows that beating cilia, actuated by an external periodic force, can significantly enhance the heat transfer rate. 
Elastic ciliary filaments are actuated by the periodic motion and create a secondary flow. The most efficient heat transport takes place when the dimensionless sperm number equals 3.5. Since the heat transfer rate depends on the parameters of the cilium actuation, this method can be applied to directly regulate local heat transport at the microscale. Drug Delivery 1. Rolf Verberg et al. [11] developed a new computational method to simulate the release of nanoparticles from a microcapsule. Their aim included two aspects: (1) to model particle-filled capsules moving along a surface in a flowing fluid and the diffusion of the particles away from these carriers; (2) to determine the interactions between the released particles and an underlying substrate. To that end, they took advantage of two of their recently developed computational approaches: one for simulating the behavior of fluid-filled elastic shells, which model the capsules, and another for capturing the dynamic behavior of nanoparticle-filled fluids. The fluid-filled elastic shells are treated with their hybrid method, which integrates the LBM for hydrodynamics and the LSM for the micromechanics of elastic solids. Firstly, the LBM nodes lying at the solid/fluid interface are identified. Then, the velocities at these interface points are obtained from the adjacent LSM nodes, and the distribution functions are propagated from the flowing fluid particles to their adjacent nodes when these nodes lie in the fluid domain; otherwise, the appropriate boundary conditions are applied. Finally, the distribution function at each LBM node is modified to account for the collision step, and the whole cycle is repeated. For the modeling of compliant capsules and surfaces, Verberg and co-workers simulate the relevant fluid-structure interactions of the system using the lattice Boltzmann model (LBM) and the lattice spring model (LSM): they integrate the LBM for fluid dynamics and the LSM for the micromechanics of elastic solids, while a Brownian dynamics model (BDM) is used for the dynamics of the nanoparticles. Their findings provide a guideline for the effective use of microcapsule carriers in the targeted delivery of nanoparticles. The Peclet number, the elasticity of the capsules, and the adhesive interaction between the capsules and the substrate all play an important role in the nanoparticle deposition efficiency. Besides, their simulations revealed that the properties of the carrier capsule affected the number of adsorbed nanoparticles: the more compliant and more adhesive capsules yielded a greater number of particles at the surface. Finally, they contrasted the relative efficiency of delivering the particles via the microcapsules versus simply introducing free particles at the channel's inlet. Future studies will examine how heterogeneities within the surface can be exploited to direct the deposition of the encapsulated particles. In this manner, the particle-filled carriers could potentially be used to fill cracks in a surface and thus be harnessed to repair microscale fissures or damage on the surface of microchannels or microfluidic devices. 2. Rolf Verberg et al. [12] simulated the rolling motion of fluid-driven, particle-filled microcapsules along a heterogeneous adhesive substrate to determine how the release of encapsulated nanoparticles could be used to repair damage on the lower surface. They captured the interactions between the microcapsule's elastic shell and the surrounding fluids by using the LBM for hydrodynamics and the lattice spring model for the shell mechanics. 
The utilization of hydrophobic and hydrophilic species provides just one example of the possible chemistries that could be harnessed to produce the behavior described earlier. To actually model this complex behavior and establish the necessary design rules, the authors integrated the LBM, the LSM, and a Brownian dynamics model. The computational efficiency of these mesoscale models allows the fluid-structure interactions in the system and the different temporal events, for example the motion of the capsule and the convection and diffusion of the particles, to be captured. Their studies found that the following variables have a significant effect on the behavior of the system: the strength of the adhesive interaction between the capsule and the substrate, the rate of diffusion of the particles through the shell, and the Peclet number of the flow. The team set up guidelines for designing particle-filled microcapsules that perform a 'repair and go' function and can be used to repair damage in microchannels and microfluidic devices. Because the underlying physics that controls the behavior of this system does not depend on the dimensionality of the system, qualitatively similar behavior is anticipated in three-dimensional systems. Studies are currently underway to extend the current model to three dimensions; since a three-dimensional version of the integrated LBM/LSM approach has already been employed, the extension of the current model to three dimensions can be carried out in a straightforward manner. 3. X. Jia and R. A. Williams [13], with the help of emerging characterization and simulation techniques, describe in detail hybrid models of particle structure dissolution at the microscopic scale using real particle shapes. The software implementation of the hybrid approach includes modules for particle packing, flow calculation, and dissolution simulation. A common feature shared by all modules is their digital, lattice-based approach. The lattice Boltzmann method (LBM) is used to generate the flow input for convection. Their goal was to develop a computer software design assistant to help with recipe development. The advantage of their software is the straightforwardness with which it incorporates structural information at a microscopic (sub-particle) level, which is becoming increasingly accessible owing to advances in nondestructive measurement techniques such as X-ray micro- and nanotomography. However, for real and complex particle structures, further validation case studies are needed to determine, for example, how much the digitization error affects the predicted dissolution behavior. In the future, a more complex dissolution algorithm needs to be developed to incorporate the physical and chemical mechanisms of dissolution in more detail. Conclusions The LBM is an efficient method for microfluidic-related flow simulation. This paper has reviewed several studies that use the LBM to calculate microfluidic motion or to simulate molecular motion. We found that the LBM has shown extraordinary potential in microfluidic computation. In the particle regulation area, the LBM helped researchers effectively capture the inertial ordering of particles. In the flow-control area, researchers can directly regulate local heat transport at the microscale with the LBM. In the drug delivery area, researchers have summarized guidelines for the effective use of microcapsule carriers in the targeted delivery of nanoparticles. 
There still exist limitations for the LBM in microfluidic flow simulation. For example, in several of the studies reviewed here the LBM was used only for two-dimensional calculations. In future work, three-dimensional LBM models need to be established to obtain more accurate simulation results.
2021-09-09T20:08:09.207Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "d97b2ca4a948a9a939e2e881e5fadb39ede4f285", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/2012/1/012084", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d97b2ca4a948a9a939e2e881e5fadb39ede4f285", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
29314008
pes2o/s2orc
v3-fos-license
Canine Mammary Osteosarcomas Introduction Spontaneous mammary tumours frequently appear in women, dogs, cats and rodents. Their origin is known only in mice, where the mouse mammary tumour virus (MMTV) gives rise to them. Sexual hormones are involved in the development of mammary tumours; in women, oestrogen is important and most studied whereas progesterone is more important in the dog, where spaying at early age prevents mammary tumours [1]. About 50% of canine mammary tumours are benign. Benign mixed tumour, composed of epithelial cells and cartilage and/or bone tissue, is a common species specific type [2]. Malignant canine mammary tumours are dominated by carcinomas, which originate from epithelial cells. Sarcomas and mixed tumours (carcinosarcomas) also appear, and the cellular origin of the tumour types is unknown [2]. The reported incidence of canine mammary sarcomas (i.e. fibrosarcomas and osteosarcomas) ranges between 3.5% [3] and 8.3% [4]. However, canine mammary osteosarcomas represented only 1% of a large series of 10 345 mammary tumours [5]. The prognosis of canine mammary sarcomas is very poor, and about 75% of them cause metastases [4,6]. In addition to the dog, sarcomas appear in the human breast. They are however less frequent, and are often diagnosed as metaplastic carcinomas or matrix producing breast carcinomas since most are positive for epithelial markers and they often behave like carcinomas e.g. by causing metastases to the lymph nodes [7,8]. Studying metastatic canine mammary osteosarcomas can, to a certain extent, lead to a better understanding of the biology of these tumours. This article describes five representative cases of canine mammary osteosarcomas and induced tumours in nude mice. The aims of the study were to (i) describe the metastatic routes and the morphology of the metastases by careful post-mortem examination and (ii) study the in vivo behaviour and morphology of the tumours formed in an experimental model of a cloned canine mammary osteosarcoma cell line. 
Primary tumours and post-mortem examination
The primary tumours were collected during surgical treatment at the former Department of Surgery and Medicine, and the dogs were post-mortem examined at the former Department of Pathology at SLU, Uppsala, Sweden. The autopsies included histological examination of most lymph nodes, i.e. axillary, superficial cervical, sternal and cranial.
Histological examination
The tumours were fixed in 4% or 10% phosphate-buffered formaldehyde and embedded in paraffin. The sections were cut at 4-5 µm and stained with haematoxylin and eosin (HE). Osteosarcomas were classified according to the WHO classification [2]. In accordance with this classification, osteosarcomas that formed both cartilage and bone matrix were named combined osteosarcomas. However, this classification, as well as others [11], does not include grading of the sarcomas. Thus, in the present study the osteosarcomas were classified into two groups: low-grade malignancy (well differentiated and moderately differentiated) and high-grade malignancy (poorly differentiated), and grading was based on cell pleomorphism, mitotic index and matrix formation [12].
Experimental study
The primary mammary osteosarcoma in dog No. 353 gave rise to cell line CMT-U353B, which was cloned [13]. The five clones 1, 2, 3, 6 and 7 were chosen and 5 × 10⁶ cells were subcutaneously inoculated into each of five nude mice. The tumours that formed were fixed and stained as described for the primary tumours.
Results
Dog No. O389 was autopsied without any previous mastectomy. A mammary combined osteosarcoma in the left (L) side caudal glands 4 to 5 was 10 × 5 × 5 cm in size. The central parts were low-grade with a large amount of bone matrix, whereas the periphery was high-grade and more cellular. Another part of the primary tumour contained both cartilage and bone tissue. Tumour invasion, with bone matrix formation even within the intravascular metastasis, was seen (Figures 1A-C). Cartilage and bone also appeared in the multiple lung metastases, which varied from 1 to 4 cm in diameter. Abundant bone matrix was also seen in the lung vessel metastases (Table 1 and Figures 1D-F). In addition, several benign mammary tumours were found in the right (R) side glands 3, 4 and 5, and in L 3.
Dog No. 117 was surgically treated for a malignant mixed mammary tumour 9 cm in diameter located in gland R 3 (Figures 2A-D). Follow-up one year post-surgery, including x-ray of the lungs, showed no signs of recurrence. Two years post-surgery the dog was in good health according to the owner. The dog was euthanized and autopsied 3.5 years post-surgery. At this stage two new mammary tumours were present in R 4 and R 5 and were diagnosed as a spindle cell tumour and a combined osteosarcoma, respectively (Table 1 and Figures 2E and F). The lung metastases differed in morphology; some were composed of a loose matrix with central chondrocyte-like cells (Figures 3A-C), another was formed by cartilage (Figure 3D), bone tissue was found in a lung vessel metastasis (Figure 3E), and a further metastasis consisted of dense connective tissue or osteoid (Figure 3F). The metastases were low-grade and very few mitoses were seen.
Dog No. 143 was surgically treated for a 7 × 6 × 4 cm mammary tumour located in gland L 5 that was diagnosed as a high-grade combined osteosarcoma (Figures 4A and B). One year and 11 months post-surgery the dog was autopsied and metastases in several organs were found (Table 1).
The lung metastases had a lower grade than the primary tumour (Figure 4C), whereas the kidney metastasis demonstrated less bone matrix (Figure 4D). This was in contrast to the metastases in the mediastinal lymph nodes and myocardium, which were high-grade (Figures 4E and F).
Dog No. 144 was surgically treated for a mammary tumour 14 cm in diameter located in gland L 5 that was diagnosed as a combined osteosarcoma (Figure 5A). Only a minor focal area showed presence of chondroid cells (Figure 5B). Three months post-surgery, the dog was autopsied and metastases in several organs were found (Table 1). The lung metastases were low-grade compared to the primary osteosarcoma, with abundant presence of bone matrix (Figures 5C and D). The diaphragm metastases were high-grade and composed of pleomorphic tumour cells that infiltrated the skeletal muscle cells (Figure 5E). A tumour in the spleen formed by bundles of spindle cells was considered as a high-grade metastasis (Figure 5F).
Dog No. 353 had two mammary tumours on the right side. In the first gland (R 1) an 8 × 7 × 7 cm high-grade combined osteosarcoma was found (Figure 6A), and in gland R 4 a simple scirrhous carcinoma, approximately 2 cm in diameter, adjacent to the nipple was found (not shown). The dog was euthanized 6 months postoperatively due to lung metastases. The lung metastases contained either both cartilage and bone (Figures 6B-F) or cartilage alone (Figures 6G and H). The latter was also valid for metastases in the kidneys (Figure 6I). The x-ray of the skeleton was negative, confirming that the osteosarcoma located in the mammary glands neither originated from the skeleton nor had metastasized there. Further, at autopsy the dog had a simple mammary carcinoma of tubulo-papillary type in both gland L 1 and L 2. In addition, the dog suffered from chronic nephritis, nephrolithiasis and urolithiasis.
The tumours formed in nude mice by the cloned cell line CMT-U353B were, in all but one clone, osteosarcomas with no sign of cartilage formation (Figure 7D). The only clone that did not form bone matrix grew as high-grade spindle cell tumours that infiltrated into adjacent peripheral nerves and skeletal muscles (Figures 7E and F).
Discussion
All five of the mammary osteosarcomas studied were of the combined type, i.e. both neoplastic cartilage and bone tissue were present in the primary tumours. In some tumours, cartilage and bone were adjacently located (Figures 1B, 2F and 6A), whereas in others, bone and cartilage were separated by a distance within the tumour (Figures 4A and B and 5A and B). It appears as if there was a transition from cartilage to bone in some tumours, i.e. that the tumour cells had transdifferentiated from chondroblasts to osteoblasts (Figures 2F and 6A). This is in contrast to the results from the experimental studies with inoculated mammary osteosarcoma cells in nude mice (Figures 7A-D) [13] and SCID mice [14], where only bone-forming osteosarcomas were observed. The reason for this is unknown and should be explored further. The morphology of the bone-forming tumours, with a low-grade, bone matrix-rich centre and a high-grade, more cellular periphery, was similar in both mice and dogs (Figures 1A and 7A). Metastases in the vessels, both in the primary tumour and in the lung metastases of cases No. O389 and No. 117, contained large amounts of bone matrix in addition to pleomorphic tumour cells (Figures 1C, 1F and 3C).
The lung metastases from the combined mammary osteosarcomas also formed both cartilage and bone (Figures 1D, 1E, 6C and D). Some metastases were surprisingly low-grade, with an abundant presence of bone matrix and few tumour cells (Figures 5C and D). This widely observed but poorly understood phenomenon has been previously reported [6,15].
In general, sarcomas preferentially spread directly via the blood, whereas carcinomas spread via the regional lymph nodes [6,16,17]. In canine mammary osteosarcomas, both metastatic routes have been reported [6,9,18,19]. The reason for lymphogenic spread of canine mammary osteosarcomas is unknown, but it may be linked to the propensity of the tumour cells to invade lymph vessels. Metastasis to the lymph nodes also appears in humans, and is one reason that these tumours are named metaplastic or matrix-producing carcinomas. However, it is unknown whether these differences in metastatic spread reflect different origins of these tumours, i.e. whether they are derived from the mammary parenchyma or stromal tissue [8]. This needs to be investigated further.
Candidate cells that can become neoplastic in the mammary glands are luminal epithelial cells and basally located myoepithelial cells, both derived from the ectodermal germ layer. Hypothetically, some epithelial cells can have a different origin. Further, the connective tissue of mesodermal origin surrounding the ducts and alveoli, forming the intralobular and interlobular stromal tissue, can also form sarcomas. Interestingly, myoepithelial cells labelled with an anti-CD10 antibody showed the presence of three different CD10-positive cell types in normal canine mammary glands [20]. Thus, there might be more than one cell type located in the basal cell layer as well. Interestingly, it has recently been reported for the first time that tumour-initiating cells in human breast sarcoma cells, established from the sarcomatous part of a breast carcinosarcoma, express CD49d+/high, form spheres and give rise to breast sarcomas in NOD/SCID mice [8]. This finding could imply that human breast sarcoma is a true entity and has stem cell-like properties. The need to study human metaplastic breast carcinomas has also been highlighted recently [21].
To compare primary canine mammary carcinomas, fibrosarcomas and osteosarcomas, we carried out a gene expression study that initially showed that the tumours formed these groups in unsupervised hierarchical clustering [10]. We chose to study malignant monophasic tumours, i.e. tumours that are composed of one type of tumour cell, and used Affymetrix Canine Genome 2.0 arrays with 38 000 genes. When we compared the gene expression pattern of the carcinomas with that of the sarcomas by supervised hierarchical clustering, we found a high frequency of embryonic genes in the sarcomas, among them a clear overrepresentation of genes that participate in the formation of the head, such as craniofacial tissues, teeth and nerve tissue. These interesting results clarify some of our previous findings. We then studied primary tumours [22] and cell lines established from different types of canine mammary tumours, and showed that some of the tumours expressed neurofilaments, as demonstrated by immunohistochemistry [23]. In the latter study, cells from primary mammary fibro- and osteosarcomas formed different types of mesenchymal tumours, such as spindle cell tumours and rhabdomyoid, chondroid and leiomyoma-like tumours, in nude mice.
Our conclusion from that study was that the tumours might originate from pluripotent stem cells. To refine the study we cloned three mammary tumour cell lines: from a carcinoma, a fibrosarcoma/spindle cell tumour and an osteosarcoma. We found a similar plasticity, e.g. clones from the spindle cell tumour formed bone tumours in the mice. Further, we also found neurofilament-positive cells in the primary spindle-cell tumour and osteosarcoma, as well as in one experimental mouse tumour from the osteosarcoma. However, the carcinomas retained their phenotype in the mice, although desmoplasia was seen [13]. Taken together, we have seen no evidence of transition between the canine mammary carcinomas and sarcomas. Rather, the sarcomas appear to be very robust, with the specific characteristics described above.
Epithelial to mesenchymal transition (EMT), which is a normal and reversible process during embryogenesis, is an explanatory model that has been related to a stem cell phenotype in breast cancer [24]. Initiation of the reverse process, i.e. mesenchymal to epithelial transition (MET), can be programmed by Klf4. In breast cancer, MET is far less studied than EMT [25]. We have shown that different bone morphogenetic proteins (BMPs) were expressed in the clones from a canine mammary spindle-cell tumour and the canine mammary osteosarcoma, and particularly that BMP-6 was related to bone formation [26]. Interestingly, involvement of genes in the AKT/PI3K and GLI/Hedgehog signalling pathways has been demonstrated by gene expression arrays of two primary canine mammary osteosarcomas, using two benign canine mammary osteomas as controls [27].
In veterinary medicine, there is a general theory that the myoepithelial cells are responsible for the spindle-cell component, at least in complex canine mammary tumours [28]. Due to the lack of myoepithelial cell-specific markers, their role is difficult to confirm. However, it is very important to distinguish a primary mammary sarcoma from both a carcinosarcoma and a metaplastic carcinoma, as the management is different. Thus, finding the cell of origin in mammary sarcomas is critical to understanding mammary gland tumorigenesis. In previous studies of DNA ploidy, we have shown that the mammary sarcomas are often diploid or near diploid, in contrast to the carcinomas, which are most often hypodiploid or hyperdiploid [4,29]. The DNA indices are retained in the metastases [9]. Whether these findings reflect different pathogeneses of canine mammary sarcomas and carcinomas remains to be shown. Another difference between canine mammary osteosarcomas and carcinomas is the fact that only the studied osteosarcomas have a mutated p53 gene [30]. This is also valid for p53 detected at the protein level, which was only seen in the canine mammary osteosarcomas [31,32].
In conclusion, canine mammary combined osteosarcoma metastases spread haematogenously to the lungs in three dogs and via the regional lymph nodes in two dogs. The metastases differed in morphology, both within and between different metastatic sites. Some of the metastases had an even lower grade than the primary tumour. Further, the metastases were of mesenchymal phenotypes, although not all of them formed bone tissue. Mammary osteosarcomas are poorly understood tumours, and their pathogenesis and histogenesis are still to be ascertained.
2019-03-09T14:16:35.901Z
2014-03-03T00:00:00.000
{ "year": 2014, "sha1": "bef8c465db2efb86c9089ae417c9b414b950f36e", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/canine-mammary-osteosarcomas-2157-7579.1000163.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "121c321c2811e7b5a528b775118ff6cb26ebbf83", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
261026598
pes2o/s2orc
v3-fos-license
Psycho-Cardiological Disease in COVID-19 Era
During the coronavirus disease 2019 (COVID-19) pandemic, panic and public health responses, including self-monitored quarantine and lockdown of cities, have severely impacted mental health and caused depression or anxiety in citizens. Psycho-cardiology indicates that psychological factors play an important role in coronary heart disease (CHD). COVID-19, depression and CHD can co-exist and deleteriously affect each other, leading to worse progression and prognosis. Delays in medical consultation and treatment have become more common than before the pandemic, inducing more cardiovascular (CV) events and sequelae. COVID-19 survivors have been identified to have more psycho-cardiological symptoms compared with non-COVID-19 controls. Undoubtedly, diet alterations and sedentary lifestyles during the pandemic will cause and aggravate psycho-cardiological diseases. Some frequently used cardiovascular drugs were found to be associated with changes in depression. With the advent of the post-pandemic era, although the acute damage of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is gradually declining, the psycho-cardiological diseases related to the novel coronavirus are becoming increasingly prominent. It is therefore important to explore the pathogenesis, clinical manifestations and corresponding preventive measures of this condition.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is coming to its fourth year. At the time of writing, over 615 million people had been infected by the virus and more than 6.5 million people had died (https://coronavirus.jhu.edu/map.html). The extra-pulmonary impact of the disease has drawn increasing attention, and cardiovascular disease is one of the most common complications of hospitalization and death in COVID-19 patients [1].
However, the COVID-19 pandemic has already been proven to impact the cardiovascular (CV) system far beyond direct damage. A series of consequences related to the pandemic, such as panic among citizens and public health responses, including self-monitored quarantine and lockdown of cities, have been found to increase CV risk on a wide scale, impacting both the uninfected and the survivors of COVID-19. The pandemic-related social and economic restrictions have led to economic upheaval, physical inactivity, social isolation, and mental health deterioration. All of these are recognized as CV risk factors and lead to worse outcomes [2].
Psycho-cardiology indicates that psychological factors play an important role in coronary heart disease (CHD). A dangerous link exists among COVID-19, depression and CHD, and these elements can co-exist and deleteriously affect each other. The public health response to COVID-19, aimed at mitigating the incidence and mortality of acute infection, may result in a series of consequences that increase CV risk in a much broader population, including those uninfected with SARS-CoV-2. In view of the potential harm to physical and mental health from COVID-19 itself as well as from its prevention and control, more attention should be paid to psycho-cardiology. Thus, this review aims to make a comprehensive analysis of the interplay between COVID-19 and psycho-cardiology.
COVID-19 and Depression in the Uninfected
The lifetime risk of depression is 15-18%, meaning that nearly one in five people experience depression at some point in their lifetime [3]. The symptoms of depression can be roughly grouped into emotional, neurobiological, and cognitive symptoms [3]. Its spectrum ranges from temporary discomfort to serious clinical syndromes, which can be severe, recurrent and disabling. Depression generally involves symptoms such as depressed mood, loss of interest or pleasure in activities, sleep disturbance, fatigue or impaired concentration [4]. Depression is one of the most common causes of disability in developed countries and is associated with high social and healthcare costs, including direct medical costs and reduced work productivity related to functional impairment [5].
Symptoms of anxiety and depression have risen in the general population, in addition to patients and medical practitioners, during the COVID-19 pandemic [6-9]. A meta-analysis comparing people before and after COVID-19 showed that the prevalence of anxiety rose from 8.9% to 22.6%, while the prevalence of depression rose from 8.7% to 18.3% [10]. Several major factors of COVID-19-related anxiety have been specifically listed: personal health, discrimination, social health and financial distress [11,12]. In addition, people with current or previous COVID-19 symptoms, or who are closely related to the disease, can suffer a greater psychological impact [8].
A longitudinal observational study in England suggested that the highest levels of depression and anxiety occurred in the early stages of lockdown, and improvements continued as lockdown-easing measures were introduced [7]. Being female or young, having lower education or income levels, having pre-existing mental health conditions, and living alone or with children are all risk factors for higher levels of anxiety and depression during lockdown [7]. On the contrary, appropriate information-seeking habits, high levels of knowledge about COVID-19, information adequacy and acceptance of public health control measures were associated with lower anxiety levels [13].
Depression in Women
Self-reported stress, anxiety, depression and more severe overall psychological effects were significantly higher in women. Risk factors are more likely to intensify in women during the pandemic, including chronic environmental stress, pre-existing depression, anxiety and domestic violence [5]. Furthermore, stressors related to reproductive functioning and stages are more specific to women than to men during the pandemic. For example, it is reported that fertility problems, fear of accidental injury and decisional stress during pregnancy, miscarriage, additional pandemic-related worries brought by postpartum status, and intimate partner violence are aggravated to different extents [14]. As a result of school closures or family members becoming unwell, women are more likely to be responsible for additional care and household chores. During the pandemic, women are more likely to be economically disadvantaged because they have lower salaries, fewer savings and more unstable employment than men [15]. It is predicted that the higher prevalence of depression among women than men will lead to greater gender differences, as women tend to be more affected by the social and economic consequences of the pandemic.
Depression in the Young
School suspension and quarantine led to a sharp rise in the prevalence of psychological problems in students [16,17]. A retrospective cohort study in Canada noted that by the end of 2020, the number of adolescents seeking medical care for anxiety or depression-related problems was higher than pre-pandemic levels [18]. During the pandemic, middle school students are particularly prone to anxiety, while college students are prone to depression [16]. Extreme fear is the most important risk factor leading to psychological distress, followed by short sleep time, being in the graduation grade, and living in seriously affected areas; sleep time mediates the influence of exposure factors on mental health [19].
Inappropriate Information Acquisition and Depression
Excessively frequent and constant searching for health information can lead to sleep disturbances and exacerbate mental distress such as anxiety, depression, negative perceptions of health and post-traumatic stress disorder [20]. Moreover, some information related to COVID-19, especially information obtained via social media, was significantly associated with anxiety [20]. It might be difficult for the public to distinguish true information from false. Misinformation is a significant problem in the COVID-19 pandemic. When making important decisions regarding their lives and health, ordinary people with limited medical knowledge usually cannot tell true information from false. Panic purchases of food, vegetables, daily necessities, medical supplies or drugs, and even taking drugs without a prescription, are related to misinformation [21]. New media consumption was also positively correlated with pandemic worries. During the pandemic, misinformation about COVID-19 spread virus-related negative effects through new media, causing unfounded fear and anxiety. Moreover, through new media, many netizens expressed their negative emotions, such as fear, worry, tension, and anxiety. As a result, this has caused negative emotional contagion in online communities [22].
Delays in CV Care
The COVID-19 pandemic has affected all aspects of medical services, and has even impacted people uninfected with SARS-CoV-2, especially those with chronic diseases. Interaction between patients and the healthcare system has been greatly disrupted, especially in areas with severe outbreaks. Emergency treatment and hospitalization were delayed, deferred, or abbreviated in many patients, even for acute CV conditions [23,24]. For instance, in the case of acute coronary syndromes (ACS) such as acute myocardial infarction (AMI), the pandemic has led to delayed presentation to the hospital, which is associated with worse outcomes [25]. Accordingly, a two-fold increase in out-of-hospital cardiac arrests has been reported during the pandemic compared with the years before the COVID-19 outbreak [26].
Indeed, a multicentre international study found that symptom-to-admission times were significantly increased in patients with suspected ACS and COVID-19 who underwent invasive coronary angiography [27]. Moreover, patients with any type of ACS present higher in-hospital mortality than before the pandemic, and cardiogenic shock is also more common [27]. Patients with ST-segment elevation AMI and COVID-19 were at higher risk of the composite end point of in-hospital death, stroke, recurrent myocardial infarction (MI) or repeated revascularization than pre-pandemic [27]. Furthermore, these patients were more likely to develop cardiogenic shock and less likely to receive invasive angiography than pre-COVID control patients [28].
During the COVID-19 pandemic, people are understandably reluctant to be admitted to hospitals. Delays in seeking care may occur when patients fear contact with the virus, or when access to emergency medical services is limited due to reduced staffing or isolation requirements. Reasons for postponement in evaluating and treating patients after arrival in the hospital include, but are not limited to, nucleic acid testing, procedures for using personal protective equipment, and strict environmental disinfection [29].
As delay in medical consultation and treatment becomes more common, the incidence of CV sequelae, including cardiac remodeling, heart failure, and physical disabilities, may build up in survivors of acute CV events. Some researchers have called this an "impending tsunami" [30]. Although most outpatient laboratories have resumed routine tests for cardiovascular disease, limited hospital access has undoubtedly led to deferred CV risk-factor management. The duration of this phenomenon may be longer than we think or hope: even three years after the SARS epidemic in 2003, the number of outpatient and inpatient visits by cardiovascular patients had not returned to pre-epidemic levels [31].
Crosstalk between Depression and CHD
Several psychological factors, such as depression, anxiety and type A personality, have been proven to contribute to the onset as well as affect the progression and prognosis of CHD [32,33]. CHD itself can also increase the risk of depressive symptoms and disorders, which lead not only to direct physical consequences but also to psychosocial changes. Therefore, the association between depression and CHD can be described as a downward spiral in which depression and CHD mutually reinforce each other. As the COVID-19 pandemic has caused great psychological problems, it reminds scientists and doctors to concentrate on the dangerous link between depression and CHD.
Recently, the first and largest study based on two large prospective cohorts of Chinese adults found that depression was associated with a significantly elevated risk of cardiovascular mortality, and the associations were independent of social factors, lifestyle factors, and health status [34]. Individuals with depression lasting more than two years had clearly increased risks of developing CHD compared with those with depression of less than one year [35]. There is no explicit evidence that depressive symptoms below threshold levels are free of CHD risk. The association persisted after adjusting for several known CV risk factors and attempting to eliminate the effect of reverse causality [36]. Mendelian randomization provides evidence that the relationship between a raised probability of depression and an increased risk of CHD is causal and genetic [37,38].
This suggests that several established CV risk factors, such as systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol (HDL-C), body mass index (BMI), diabetes, smoking, alcohol abuse, and inflammatory mediators, cannot fully explain the association between depression and CHD [36]. Thus, depression could be a significant predictor of incident CHD, independently of other CHD risk factors [39]. In patients with established CHD, epidemiological evidence suggests a strong relationship between anxiety or depression and angina, and increased shortness of breath or chest pain symptoms are associated with depression [40].
Previous studies have identified that the effect of depression on subsequent cardiac events may be mediated by abnormalities in the immune response, platelet activation and thrombosis, mitochondrial dysfunction, neuroendocrine pathways affected by altered brain and neuronal function, autonomic nervous dysfunction, lifestyle behavior and cardiometabolic risk factors [40-42]. Some new insights are being found regarding the interaction between depression and CHD. Strong associations among gut microbiota, depression, and CHD have been established; intestinal microbiota may therefore contribute to the comorbidity of depression and CHD. Furthermore, endocrine signaling and miRNAs are reported to contribute to the crosstalk between depression and CHD [41]. In view of the widespread impact of COVID-19 on mental health, these could be potential targets for intervention.
Post-COVID-19 Condition
People infected with COVID-19 may have long-lasting post-infection sequelae. The phenomenon has been given various names, such as long-term COVID-19 and long-range COVID-19. Since September 2020, it has been listed as "post-COVID-19 condition" in the ICD-10 classification and manifests itself in a variety of forms. A final consensus definition is that the post-COVID-19 condition occurs in individuals with a possible or confirmed SARS-CoV-2 infection, usually three months from the onset, with symptoms that last for at least two months and cannot be explained by an alternative diagnosis [43].
Post-COVID-19 Condition and Psychological Symptoms
Although the long-term consequences of COVID-19 remain to be studied, some COVID-19 survivors whose physical and functional symptoms disappear after acute infection still have problems with movement, pain or discomfort, and anxiety or depression compared with non-COVID-19 controls [44]. In a cohort study, 1733 discharged patients were followed for the consequences of COVID-19 for six months [45]; 23% of the patients suffered from anxiety or depression, and 26% suffered from sleep difficulties. In patients with severe illness, the risk of anxiety or depression as a serious psychological complication is higher.
SARS-CoV-2 can affect brain tissue by causing a cytokine storm, which is believed to cause neurological and psychiatric symptoms. The excessive and dysfunctional immune response of people infected with the novel coronavirus leads to the elevation of various inflammatory cytokines. These cytokines are also observed to be elevated in patients with depression and have been proposed as a hypothetical mechanism distinct from social isolation and stressors. Due to the presence of SARS-CoV-2 in the brain, some biological alterations have been found, particularly the activation of microglia and cytokine signaling, which are alterations seen in psychiatric disorders in general [46]. However, the causality between cytokines induced by COVID-19 and depression needs further research.
Post-COVID-19 Condition and Chest Pain
Chest pain was one of the most common symptoms of the post-COVID-19 condition, with an average duration of over 40 days [47]. Another study indicates that after 60 days, 20% of cases still have chest pain [48]. Although the mechanism of chest pain in post-COVID-19 conditions is still unclear, some researchers concluded that prolonged chest pain might be a consequence of coronary microvascular ischaemia, based on the evidence of coronary microvascular dysfunction identified by adenosine stress CV magnetic resonance (CMR) imaging [49]. This could be a risk factor for future CHD. Besides, the presence of depression is associated with increased reporting of shortness of breath and/or chest pain symptoms [40,50].
Psycho-Cardiology in Women
Women have stronger associations between depression and CHD. They are approximately twice as likely as men to suffer from depression, and on average, somatic symptoms of depression in women are also more severe than in men, accompanied by earlier onset [40]. However, women show greater vagal activity and higher vagally mediated heart rate variability, which are negatively associated with the risk and mortality of CHD. Possible pathophysiological mechanisms are ascribed to inflammatory processes, hormonal dysregulation, poorer health behavior and metabolic derangement modified by gender [53]. Women tend to increase their intake of unhealthy foods, decrease their physical activity and have poor sleep quality when they are anxious, leading to further increases in stress [5]. This results in an increased risk for both depression and CHD in women, and implies that gender-specific issues need to be taken into account when it comes to psycho-cardiological issues during the COVID-19 pandemic.
Psycho-Cardiology in Teenagers
Depression in childhood and adolescence is positively correlated with inflammation. It is associated with greater concurrent levels of C-reactive protein (CRP) and interleukin-6 (IL-6), as well as increases in IL-6 in the future [54]. Conversely, elevated levels of inflammatory markers were related to future depression in teenagers. IL-6 has been identified as a potential trigger for the pathophysiology of atherosclerosis [54]. Youth depression caused by the COVID-19 pandemic may therefore be an independent risk factor for premature CHD. Increased inflammation is also associated with more severe depression in the future, which may cause a vicious circle.
Diet Alteration in COVID-19
As a result of COVID-19, dietary and nutritional structures have been altered [55]. It was indicated that the diet adopted during the pandemic had a higher caloric intake and worse nutritional quality than before COVID-19 [56]. During the quarantine, people ate more meat, dairy products, fast food, snacks and alcoholic beverages, but fewer vegetables, fruits and legumes [57]. Remarkably, among all populations affected by COVID-19, the dietary patterns and lifestyles of overweight and obese people are notably impaired. It is frequently reported that these individuals adopt more disruptive eating behaviors, consume food without hunger, and overeat frequently [57,58]. The expected reduction in fresh food consumption during lockdown, accompanied by vitamin and mineral deficiencies, is associated with various CV risk factors and appears to result in higher mortality and incidence of CHD [55,59].
In fact, adopting a Mediterranean dietary pattern, with high consumption of fruits, vegetables, seafood, whole grains, nuts, and legumes, moderate consumption of poultry, eggs, and dairy products, and only occasional consumption of red meat, can reduce the burden of depression and CHD [60,61].
Sedentary Lifestyle in Quarantine
During the COVID-19 pandemic, people who suspended work and stayed at home without exercise had poorer health indicators [62]. COVID-19 pandemic-related lockdown has led to a sedentary lifestyle among citizens of all ages. Physical exercise is one of the universal non-pharmacological interventions used to treat people with psychological disorders. Therefore, physical activity becomes especially requisite for people to maintain physiological and psychological function during quarantine [63]. Regular physical exercise can postpone the age of the first stroke and improve long-term outcomes. This is critical on account of the higher prevalence and severity of COVID-19 in the elderly [64]. Although outdoor exercises are more available and varied, there are still many ways to exercise at home during quarantine, such as yoga, meditation, Tai chi, etc. [65].
Physical inactivity, whether active or passive, and a positive energy balance during quarantine could induce many health consequences, including higher total body and central fat, reduced insulin sensitivity, and an inflammatory status, which are the main risk factors for metabolic syndrome (MetS). An additional risk for older adults is sarcopenia combined with obesity [66].
MetS is a cluster of metabolic abnormalities, including visceral obesity, dyslipidemia, hypertension, hyperuricemia, hyperglycemia and fatty liver. It involves a series of pathophysiological, molecular, biochemical, clinical and metabolic factors, directly increasing the risk of CHD, type 2 diabetes and all-cause mortality. The pathogenesis of MetS is associated with genetic and epigenetic factors, abnormal glucose and lipid metabolism, insulin resistance, oxidative stress, inflammation, abnormal central neurohumoral regulation and endothelial dysfunction [67]. Modern research has confirmed that a sedentary lifestyle can bring about health problems such as MetS and prothrombotic and pro-inflammatory states [66].
Even short periods of physical inactivity give rise to increases in TNF-α, IL-6 and CRP [68]. Adipocytes, macrophages and lymphocytes of obese individuals increase the expression levels of the cytokines TNF-α and IL-6 through endocrine, autocrine and paracrine pathways [67]. TNF-α acts locally on adipocytes, reducing insulin sensitivity through different mechanisms, increasing free fatty acid (FFA) levels by inducing lipolysis, and inhibiting adiponectin release [69]. It also attenuates nitric oxide-mediated vasodilation and participates in the vascular pathology of MetS, atherosclerosis and CHD [67]. IL-6 contributes to insulin resistance, enhances the synthesis of acute phase proteins such as CRP and fibrinogen in the liver, promotes the expression of endothelial cell adhesion molecules, and activates the renin-angiotensin system. CRP is highly correlated with MetS and diabetes, while fibrinogen induces a prothrombotic status [67,69].
Regular physical activity activates several signaling pathways that contribute to maintaining the steady state of the CV system. Physical exercise activates the peroxisome proliferator-activated receptor-γ coactivator 1 alpha (PGC-1α) pathway, which reduces pathological myocardial remodeling, improves hypertension, reduces cardiac apoptosis and collagen accumulation, and beneficially modulates several genes related to mitochondrial biogenesis [70]. Activation of the PGC-1α pathway also helps reduce myocardial and systemic inflammation by inhibiting the infiltration of macrophages, TNF-α, and inducible nitric oxide synthase, including inhibition of chemokines and cytokines in the bloodstream [71].
Physical exercise also affects the angiotensin-converting enzyme 2/angiotensin-(1-7)/MAS (ACE2/Ang-(1-7)/MAS) axis, which is associated with CV pathogenesis. SARS-CoV-2 disables the positive effect of Ang-(1-7) production by binding to ACE2 and entering pulmonary and other cells, inducing an imbalance in the Ang II/Ang-(1-7) ratio and aggravating the inflammatory response [72]. Contrary to pathological states, activating the ACE2/Ang-(1-7)/MAS axis by physical exercise brings about anti-inflammatory and anti-fibrotic effects. Physical exercise can be used as a potential therapy to promote resilience, develop an optimistic mood, and improve quality of life [73].
Exercise-based cardiac rehabilitation (CR) helps to improve the physical capacity and psychological status of psycho-cardiological patients [73]. During the COVID-19 pandemic, special attention was paid to cardiac telerehabilitation as a supplement or substitute for traditional centre-based CR [74]. Telerehabilitation has recently been shown to improve lipid particle profiles, and it may create a combined effect of multiple behavior modifications and risk reduction for psycho-cardiological patients [75].
Psycho-Cardiology in Special Populations
Hypertension and diabetes mellitus are among the most common co-morbidities and causes of death in patients with COVID-19 infection [76], and they are also known risk factors for CHD. Patients with hypertension or diabetes mellitus should therefore receive particular attention.
Hypertensive Patients
In a prospective study of hypertensive patients with COVID-19 infection, non-O blood group patients had significantly higher values of pro-thrombotic indexes (activated pro-thrombin time, D-dimer, Von Willebrand factor and Factor VIII), a higher rate of cardiac injury (13.9% vs. 29.3%) and higher mortality (8.3% vs. 19.6%) than O blood group patients [77]. Hypertension could exacerbate the pro-thrombotic status, over-inflammation and endothelial dysfunction in COVID-19 patients, resulting in an increased risk of worse prognoses such as cardiac injury and death [77].
The use of anti-hypertensive drugs was once controversial. In fact, continuing ACE inhibitors/angiotensin receptor blockers (ARBs) to control hypertension, along with tailored anti-inflammatory and immune therapies, could improve clinical outcomes and prevent worse prognosis in hypertensive patients with COVID-19 [78]. ACE inhibitors and ARBs could mediate COVID-19 protection through anti-inflammatory, anti-fibrotic, and anti-thrombotic effects and improvement of lung function via upregulation of ACE2 activity [78,79].
Diabetes Patients
During COVID-19 infection, diabetes patients show high prevalence, severity of disease and mortality. COVID-19 pneumonia could cause thromboembolic events and reduced lung function, especially in patients with diabetes [80]. These events are manifestations of microvascular endothelial dysfunction and damage, which are also thought to be risk factors for CHD. ACE2 expression (total and glycosylated forms) was upregulated in patients with worse glycemic control, resulting in myocardial injury under COVID-19 infection [81]. In fact, early glycaemic control was indicated to be a suitable therapeutic option to improve prognosis in hospitalized hyperglycaemic COVID-19 patients with or without a previous diabetes diagnosis [82]. Besides, hyperglycaemia at the time of vaccination worsened the immunological response and increased the incidence of SARS-CoV-2 breakthrough infections [83,84], thus influencing the efficacy of vaccination.
CV Drugs and Depression
During the COVID-19 pandemic, particular attention should be paid to medication for patients with psycho-cardiological disease. A study based on data from 5.4 million people in Denmark investigated whether CV drugs were associated with changes in depression. Continued use of angiotensin agents, calcium channel blockers (CCBs), and β-blockers was associated with decreased rates of depression, whereas use of diuretics was not. Reduced risks of depression were found for nine drugs, including two angiotensin agents: enalapril and ramipril; three calcium antagonists: amlodipine, verapamil, and verapamil combinations; and four β-blockers: propranolol, atenolol, bisoprolol, and carvedilol. None of the antihypertensive drugs was found to increase the risk of depression [85]. Previous studies have confirmed that statins help to reduce the risk of depression in people with CHD [86,87]. The anti-depression effect of aspirin is still controversial, and nitrate drugs were not significantly associated with depression [88]. However, a recent meta-analysis suggests that CCBs, diuretics and nitrates are associated with higher risks of depression in patients with CHD and heart failure [89]. This could be a research priority, given that the effects of CV drugs on depression remain controversial.
Conclusions
The public health responses to the COVID-19 pandemic have impacted citizens on a broad scale, especially in the psychological and CV aspects. Women and the young are populations susceptible to psycho-cardiological diseases induced by COVID-19. Patients with hypertension or diabetes mellitus should especially be brought to the forefront during the pandemic. Inappropriate information acquisition, delays in CV care, the post-COVID-19 condition, diet alteration and a sedentary lifestyle in quarantine can all cause or aggravate psycho-cardiological diseases. As we enter the post-pandemic era, the acute effects of SARS-CoV-2 are gradually declining, so more attention should be paid to its chronic consequences, particularly psycho-cardiological disease. A Mediterranean dietary pattern and exercise-based cardiac rehabilitation are essential for preventing and treating psycho-cardiological diseases.
2023-08-20T15:05:03.312Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "c4f1ad0764176133ebcc61acc0d0ad346def948c", "oa_license": "CCBY", "oa_url": "https://www.imrpress.com/journal/RCM/24/8/10.31083/j.rcm2408239/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "37b1ea7ba6952d1f9197f2d0f71df35814a72bcc", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
55566827
pes2o/s2orc
v3-fos-license
Bacterial Filtration Using Carbon Nanotube/Antibiotic Buckypaper Membranes
1 Soft Materials Group, School of Chemistry, University of Wollongong, Wollongong, NSW 2522, Australia
2 Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, NSW 2522, Australia
3 Intelligent Polymer Research Institute, ARC Centre of Excellence for Electromaterials Science, AIIM Facility, University of Wollongong, Wollongong, NSW 2522, Australia
4 Institute of Materials Engineering, Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW 2234, Australia
Introduction
Guarding water supplies against contamination from pathogenic organisms remains one of the most important challenges facing society [1,2]. While filtration methods are effective for removing microbial contaminants, the susceptibility of nanofiltration and reverse osmosis membranes to fouling necessitates the use of additional disinfection processes to ensure that pathogenic organisms do not enter water supplies [3,4]. A further weakness of current nanofiltration membranes is that they lack analyte specificity [4,5]. Consequently, there is an ongoing need to develop new membrane materials [5,6].
Membranes composed of aligned arrays of carbon nanotubes (CNTs) have shown very high permeabilities towards water and gases [7,8], as well as the ability to discriminate between molecules or nanoparticles on the basis of differences in their sizes [7]. They have also proven effective for removing bacteria and virus particles from water samples [9], while several other studies have shown that CNTs have antimicrobial properties [10,11]. This effect was more pronounced with single-walled carbon nanotubes (SWNTs) than multi-walled carbon nanotubes (MWNTs), which was attributed in part to the greater ability of the former to physically penetrate bacterial cell walls [11]. Inspired in part by these results, Brady-Estévez and coworkers studied the antibacterial properties of CNT membranes and composite materials [12,13]. One of their first studies involved buckypapers (BPs) made from SWNT dispersions prepared in dimethylsulfoxide [12]. These materials were prepared without the assistance of a dispersant molecule and proved highly effective at removing E. coli from aqueous solutions. Metabolic and viability assays performed on the surface of the BPs after they had been used for filtration experiments showed that the majority of the bacteria that had been retained were metabolically inactive and had compromised membranes. These buckypapers also proved effective at removing viral particles from solution, highlighting their potential utility for the treatment of contaminated water supplies.
We recently used macrocyclic ligands to assist in the preparation of aqueous dispersions of SWNTs and buckypapers that retained the ligand molecules [14]. Comparison to buckypapers prepared using Triton X-100 (Trix) as the dispersant revealed that incorporation of the macrocycles sometimes resulted in dramatic changes to the physical and morphological properties of the membranes, as well as their permeability towards water. We therefore considered that it might also be possible to use antibiotics with appropriate structural features to form CNT dispersions and buckypapers which retain antibiotic molecules and consequently display enhanced bactericidal activity. Further support for this proposal was provided by studies which showed that CNTs can remove antibiotics from aquatic environments by an adsorption mechanism [15,16]. However, to date the only studies that have used antibiotics to disperse CNTs focussed on the preparation of modified electrodes containing thin films composed of MWNTs and the antibiotic ciprofloxacin (cipro; Figure 1) [17-19].
In this paper we report on the ability of free-standing SWNT and MWNT buckypapers containing ciprofloxacin to filter solutions containing Escherichia coli (E. coli), and on the bactericidal properties of the buckypapers. In addition, the physical and morphological properties of MWNT/cipro and SWNT/cipro buckypapers are compared to each other and to those of the corresponding membranes containing the surfactant Trix, which has no significant antibacterial activity.
Experimental
Reagents
All chemical reagents were used as received from suppliers, without any further purification. SWNTs produced by the HiPco process were obtained from Unidym (Lot no. P2150), while thin MWNTs prepared by a chemical vapour deposition method were obtained from Nanocyl (Lot no. 081010). Ciprofloxacin hydrochloride was purchased from MP Biomedicals LLC.
Preparation of Dispersions
All dispersions were prepared in Milli-Q water (18.2 MΩ cm) using a SWNT or MWNT concentration of 0.1% (w/v), and either Trix or ciprofloxacin present at a concentration of 1.0% (w/v). In order to facilitate formation of the dispersions, a high energy (400 W) sonication horn (probe diameter = 10 mm; Branson 450, Ultrasonics) was used with the following parameters: amplitude = 30%, pulse duration = 0.5 s, pulse delay = 0.5 s. For a typical experiment 15 mg of CNTs were dispersed in 15 mL of dispersant solution. During sonication the reaction vial was placed inside an ice/water bath to minimize increases in temperature. A sonication time of 30 min was employed to prepare all CNT dispersions used for synthesising the buckypapers described in this paper.
Preparation of Buckypapers
In order to produce a small, circular buckypaper (approximate diameter 47 mm), two dispersions, prepared as described previously, were combined and added to a further 50 mL of dispersant solution (either 1.0% (w/v) Trix or cipro) and then subjected to treatment in a conventional ultrasonic bath (Unisonics; 50 Hz, 150 W) for 3 min. The solution was then diluted to a total volume of 250 mL using Milli-Q water and filtered under vacuum through a polytetrafluoroethylene (PTFE) membrane filter (5 µm diameter pore size; Millipore) housed in an Aldrich glass filtration unit, using a Vacuubrand CVC2 pump to apply a vacuum between 30 and 50 mbar.
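As a rough aid to reproducing the recipe just described, the following minimal sketch (not taken from the original paper) converts the stated w/v concentrations into the corresponding masses and estimates the CNT areal loading of a 47 mm membrane. The function names and the assumption that all dispersed CNTs are captured on the filter are illustrative only.

```python
# Minimal sketch (not from the original paper): back-of-envelope amounts for the
# CNT/dispersant recipe described above. The assumption that all dispersed CNTs
# end up in the buckypaper is an illustration, not a measured result.
import math

def dispersion_recipe(volume_mL, cnt_wv_percent=0.1, dispersant_wv_percent=1.0):
    """Masses (mg) of CNT and dispersant for a given dispersion volume.

    A w/v percentage of x% corresponds to x g per 100 mL, i.e. 10*x mg/mL.
    """
    return {
        "cnt_mg": 10.0 * cnt_wv_percent * volume_mL,
        "dispersant_mg": 10.0 * dispersant_wv_percent * volume_mL,
    }

def areal_loading_mg_per_cm2(total_cnt_mg, membrane_diameter_mm=47.0):
    """Approximate CNT areal loading of a circular buckypaper."""
    radius_cm = membrane_diameter_mm / 20.0
    return total_cnt_mg / (math.pi * radius_cm ** 2)

if __name__ == "__main__":
    # Two 15 mL dispersions (15 mg CNT each) combined, as in the text above.
    single = dispersion_recipe(15.0)
    print(single)                                            # {'cnt_mg': 15.0, 'dispersant_mg': 150.0}
    print(areal_loading_mg_per_cm2(2 * single["cnt_mg"]))    # ~1.7 mg/cm^2 for a 30 mg buckypaper
```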
Characterisation Techniques
Absorption spectra (400-1000 nm) of all dispersions were obtained using a Cary 500 UV-vis-NIR spectrophotometer and quartz cuvettes. The dispersions were first diluted with Milli-Q water to ensure that the measured absorbances were within the optimal range of the instrument. The surface morphology of the buckypapers was examined using a JEOL JSM-7500FA FESEM. Prior to analysis, the buckypapers were cut into small strips and mounted on a small, conductive stub using carbon tape. The samples exhibited sufficient electrical conductivity to be imaged without prior sputter coating. Images obtained by scanning electron microscopy (SEM) were analysed using Image Pro Plus software to obtain quantitative information concerning the average diameter of surface pores. The average surface pore diameters of the buckypapers reported in Table 1 were determined using a single buckypaper sample. Energy Dispersive X-ray (EDX) spectroscopic analysis of the surface of the buckypapers was performed concurrently to obtain information about the identity of the elements present.
The contact angles of the buckypapers were measured using the sessile drop method and a Data Physics SCA20 goniometer fitted with a digital camera. The contact angles of 2 µL Milli-Q water droplets on the surfaces of the buckypapers were calculated using the accompanying Data Physics software (SCA20.1). The mean contact angle was calculated using measurements performed on a minimum of five water droplets.
[Footnotes to Table 1: obtained from Brunauer, Emmett and Teller (BET) analysis of isotherms derived from nitrogen adsorption/desorption measurements [20]; c average internal pore diameter; d obtained through application of the Horvath-Kawazoe (HK) and Barrett, Joyner and Halenda (BJH) methods to isotherms derived from nitrogen adsorption/desorption measurements [21,22]; e average nanotube bundle diameter; f insufficient data to enable calculation of this value; g data taken from [14].]
The electrical conductivity of the buckypapers was determined using a standard two-point probe method [23]. Details of the procedure employed to measure sample conductivities were described by us previously [14]. Measurements were performed on three separate strips of membrane for each type of buckypaper, with the average values reported in Table 2. The mechanical properties of the buckypapers were examined using a Shimadzu EZ-S universal testing device and buckypaper samples cut into small rectangular strips measuring 15 mm × 3 mm. The latter were stretched using a 10 N load cell at a strain rate of 1 mm min⁻¹ until sample failure. The tensile strength was determined as the maximum stress measured, while the ductility was the percentage elongation at the breaking point. The Young's modulus and sample toughness were also determined. Values for each of the previously mentioned mechanical properties are reported in Table 2 for each of the buckypapers examined and are the average of results obtained using three different strips of membrane.
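The quantities just described follow from standard definitions: conductivity from a two-point resistance reading and the strip geometry, and tensile strength, ductility, Young's modulus and toughness from the stress-strain curve. The sketch below is not the authors' analysis code; the strip dimensions mirror the geometry above, but the resistance reading and the toy stress-strain curve are invented for illustration.

```python
# Minimal sketch (not the authors' analysis code) of the standard definitions used above.
# The resistance reading and the toy stress-strain curve are invented for illustration.
import numpy as np

def two_point_conductivity(resistance_ohm, length_mm, width_mm, thickness_um):
    """Conductivity (S/cm) of a rectangular strip from a two-point resistance measurement."""
    area_cm2 = (width_mm / 10.0) * (thickness_um / 1.0e4)   # cross-sectional area
    return (length_mm / 10.0) / (resistance_ohm * area_cm2)

def tensile_properties(strain, stress_mpa):
    """Tensile strength, ductility, Young's modulus and toughness from a stress-strain curve.

    strain is dimensionless (0.02 = 2% elongation); stress is in MPa.
    """
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_mpa, dtype=float)
    tensile_strength = float(stress.max())                  # maximum stress reached
    ductility_pct = 100.0 * float(strain[-1])               # elongation at failure
    n = max(2, len(strain) // 5)                            # initial ~20% of the curve
    youngs_modulus = float(np.polyfit(strain[:n], stress[:n], 1)[0])
    # Toughness = area under the stress-strain curve (MPa = MJ/m^3), trapezoidal rule.
    toughness = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
    return tensile_strength, ductility_pct, youngs_modulus, toughness

if __name__ == "__main__":
    # 15 mm x 3 mm strip, 50 um thick, hypothetical 12 ohm two-point reading.
    print(two_point_conductivity(12.0, length_mm=15.0, width_mm=3.0, thickness_um=50.0))
    eps = np.linspace(0.0, 0.02, 50)                        # toy curve up to 2% strain
    sigma = 2000.0 * eps - 30000.0 * eps ** 2               # MPa
    print(tensile_properties(eps, sigma))
```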
Nitrogen adsorption/desorption isotherms were obtained for each type of buckypaper using a surface area analyser (ASAP 2010 or ASAP 2020, Micromeritics) operating at 77 K and a single sample of each type of membrane. Prior to analysis, residual gas trapped within the samples was removed under vacuum at 200 °C. The resulting isotherms were analysed using the Horvath-Kawazoe (HK) and Barrett, Joyner and Halenda (BJH) methods to determine the distributions of small and large pores, respectively [21,22]. In addition, multipoint Brunauer, Emmett and Teller (BET) analysis of the isotherms was used to calculate the specific surface areas of the samples [20].
Bacterial Filtration and Imaging Experiments
Escherichia coli JM109 was selected as the model bacterium throughout the course of this study. A single colony of E. coli was inoculated into 5 mL of Luria-Bertani (LB) broth and grown at 37 °C for 16 h with shaking at 200 rpm. The overnight culture (1 mL) was used to inoculate 20 mL of prewarmed LB broth, which was subsequently incubated at 37 °C with shaking until an OD600 of 0.5 (mid-exponential growth phase) was obtained. For filtration experiments, 1 mL of freshly prepared cells was suspended in 50 mL of sterile saline solution (0.9% (w/v) NaCl), giving a final cell concentration of ca. 10⁷ mL⁻¹. Prior to testing, the buckypaper membranes were sterilised by soaking in 70% ethanol and thoroughly washed with sterile saline to remove any remaining solution, and the glass filter holder and flask to be used for the filtration process were sterilised using an autoclave. Bacterial suspensions were filtered through dry buckypapers using a vacuum of approximately 200-300 mbar at room temperature (21 °C). To determine the extent of removal of E. coli, a dilution series was prepared from the filtrate by plating onto LB agar and incubating overnight at 37 °C. The numbers of colonies present after this period of time were then counted by direct visual inspection and converted to log removal values. Table 3 shows the average log removal values for each buckypaper after performing three separate experiments.
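Converting the colony counts from the dilution series into the log removal values quoted in Table 3 is simple arithmetic; a minimal sketch is given below. It is not the authors' code, and the colony counts, dilution factors, plated volume and detection limit are invented for illustration.

```python
# Minimal sketch (not the authors' code) of converting dilution-series colony counts into
# log removal values (LRV). All numbers below are invented for illustration only.
import math

def cfu_per_mL(colony_count, dilution_factor, plated_volume_mL):
    """Viable cell concentration in the undiluted sample (CFU/mL)."""
    return colony_count * dilution_factor / plated_volume_mL

def log_removal_value(feed_cfu_per_mL, filtrate_cfu_per_mL, detection_limit_cfu_per_mL=1.0):
    """LRV = log10(feed / filtrate); a sterile filtrate is capped at the detection limit."""
    filtrate = max(filtrate_cfu_per_mL, detection_limit_cfu_per_mL)
    return math.log10(feed_cfu_per_mL / filtrate)

if __name__ == "__main__":
    # Hypothetical counts: feed ~1.1e7 CFU/mL (matching the ca. 10^7 mL^-1 suspension above).
    feed = cfu_per_mL(colony_count=107, dilution_factor=1e4, plated_volume_mL=0.1)
    filtrate = cfu_per_mL(colony_count=11, dilution_factor=1.0, plated_volume_mL=0.1)
    print(round(log_removal_value(feed, filtrate), 2))       # ~5.0 log removal
```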
The viability of E. coli on the buckypaper membranes was examined using a live/dead assay performed in accordance with the procedure developed by Brady-Estévez et al. [12]. Immediately after filtration experiments, buckypapers were either stained with propidium iodide (PI) followed by counter-staining with SYTO-16, or stained with PI followed by counter-staining with 4′,6-diamidino-2-phenylindole (DAPI). Staining was performed by adding 50 µL of PI solution in the dark. The stained buckypapers were then allowed to develop for ca. 5 min before being rinsed with Milli-Q water. This process was then repeated for the counter stain, after which the buckypaper was again rinsed and stored in the dark prior to imaging using fluorescence microscopy.
Quantitative analysis of the fluorescence images was performed using the area-based estimation method outlined by Kang et al. [10]. This required each of the images obtained to be effectively split into two separate images showing the individual fluorescence attributable to each dye used to stain the buckypaper surface. These images were then converted to 8-bit greyscale images in which the colour intensity was converted into a 256-increment scale, with 0 corresponding to completely black and 255 to completely white. The software package used enabled the distribution of the brightness of pixels within the individual images to be determined. From the resulting curves, a threshold intensity value between 0 and 255 was chosen for all images, which allowed the subsequent production of a binary black and white image. In the latter, the white pixels were considered as representing the presence of fluorescence at a particular location on the buckypaper surface. Therefore, by determining the ratio of black to white pixels in the image, the percentage of the total buckypaper area that was fluorescing as a result of the presence of either live or dead bacteria could be calculated. By comparing these values for the two dyes used to stain each buckypaper, the percentage of dead bacteria could then be estimated.
In addition to performing image analysis of the buckypaper surfaces, the filtrates obtained after filtering solutions containing E. coli using either a SWNT/cipro or an MWNT/cipro membrane were stained and imaged to determine whether any bacteria had crossed these buckypapers. In order to obtain images, a sample of the filtrate (ca. 1 mL) was centrifuged and the resulting pellet resuspended in sterile saline prior to casting onto a poly-L-lysine-coated microscope slide. The sample was then dried in air and stained with a combination of PI and DAPI as described previously.
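A minimal sketch of the area-based live/dead estimation described above is given below. It is not the authors' (or Kang et al.'s) code: NumPy arrays stand in for the exported 8-bit greyscale channels, and the threshold value and synthetic images are arbitrary illustrations.

```python
# Minimal sketch (not the authors' or Kang et al.'s code) of the area-based live/dead
# estimation described above: threshold each 8-bit greyscale channel into a binary image
# and compare the fluorescing areas. The threshold and synthetic images are arbitrary.
import numpy as np

def fluorescing_fraction(grey_8bit, threshold=60):
    """Fraction of pixels brighter than the threshold in an 8-bit greyscale image."""
    binary = np.asarray(grey_8bit) > threshold        # white pixels = fluorescence present
    return float(binary.mean())

def percent_dead(pi_channel, counterstain_channel, threshold=60):
    """Dead-cell percentage from the PI (dead) channel and the SYTO-16/DAPI (all cells) channel."""
    dead_area = fluorescing_fraction(pi_channel, threshold)
    total_area = fluorescing_fraction(counterstain_channel, threshold)
    return 100.0 * dead_area / total_area if total_area > 0 else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 512 x 512 channels: ~12% bright pixels in the PI image, ~20% in the counter stain.
    pi_img = (rng.random((512, 512)) < 0.12).astype(np.uint8) * 200
    all_img = (rng.random((512, 512)) < 0.20).astype(np.uint8) * 200
    print(round(percent_dead(pi_img, all_img), 1))     # ~60% of the stained area scored as dead
```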
Preparation and Characterisation of Buckypapers Containing Ciprofloxacin. Buckypapers were made by vacuum filtration of dispersions containing a total of either 30 or 90 mg of CNTs and either 1% (w/v) cipro or 1% (w/v) Trix. The thickness of all BPs was similar (approximately 50 µm) regardless of their composition. Figure 3 presents SEM micrographs of the buckypapers, which reveal that their surface morphology varied depending on the dispersant and type of CNT used. Inspection of the micrographs also suggests that the diameter of the surface pores of the MWNT/Trix membrane is larger than that of the other buckypapers. This was further investigated by quantifying the diameter of the pores present on the surface of each of the four membranes using Image Pro Plus software. The results of this analysis are presented in Table 1 and confirm that the surface pores were the largest in the case of the MWNT/Trix membrane (80 ± 20 nm). However, these were only slightly larger than those present on the surface of the MWNT/cipro buckypaper (70 ± 20 nm). Inspection of the data in Table 1 also shows that the surface pores of both MWNT buckypapers are at least 2.5 times larger than those present for either SWNT membrane, suggesting that the choice of CNT is a major factor in determining surface morphology. This view is supported by an examination of the surface pore diameters of other SWNT buckypapers reported in our previously published study into the properties of a range of such materials. This included buckypapers synthesised from dispersions prepared using several low molecular mass dispersants, including a cyclodextrin, a calixarene, a porphyrin, and a phthalocyanine [14]. For each of the latter materials, analysis of the average surface pores evident in SEM images using Image Pro Plus software revealed that they were <50 nm, which is smaller than that of both MWNT buckypapers examined as part of the current study.

Further information about the surface area and average internal pore morphology of the BPs was obtained through analysis of nitrogen adsorption/desorption isotherms. Figure 4 shows the isotherms obtained for the SWNT/cipro, MWNT/cipro, and MWNT/Trix buckypapers. All may be categorized as general type IV isotherms that exhibit hysteresis at higher relative pressures. The isotherm for the SWNT/cipro buckypaper (Figure 4(a)) is very similar to those reported previously for other buckypapers prepared using the same type of SWNTs and low molecular mass dispersants including Trix [14]. In keeping with these previous results, the extent of nitrogen adsorption and desorption is significant at all relative pressures. In contrast, the isotherms obtained for the MWNT/cipro and MWNT/Trix buckypapers (Figures 4(b) and 4(c)) show that nitrogen adsorption and desorption occur predominantly at relative pressures P/P0 > 0.8. This suggests that there are significant differences between the internal morphologies of SWNT and MWNT buckypapers.
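The surface pore statistics in Table 1 come from measurements made with Image Pro Plus on the SEM micrographs; a rough open-source equivalent of that measurement is sketched below. The intensity threshold, the nm-per-pixel calibration, and the synthetic image are placeholders, so the numbers printed are meaningless except as a demonstration of the calculation.

```python
# Hedged sketch of a pore-size measurement from a binarised SEM image:
# label each dark (pore) region and report its equivalent circular
# diameter. Threshold and pixel size are assumptions for illustration.
import numpy as np
from scipy import ndimage

def pore_diameters_nm(sem_img, threshold=80, nm_per_pixel=2.0):
    """Equivalent circular diameters (nm) of dark regions (pores)."""
    pores = sem_img < threshold                       # pores appear dark
    labels, n = ndimage.label(pores)                  # connected components
    areas_px = ndimage.sum(pores, labels, index=range(1, n + 1))
    areas_nm2 = np.asarray(areas_px) * nm_per_pixel ** 2
    return 2.0 * np.sqrt(areas_nm2 / np.pi)           # d = 2*sqrt(A/pi)

rng = np.random.default_rng(1)
fake_sem = rng.integers(0, 256, size=(256, 256))      # stand-in for an SEM image
d = pore_diameters_nm(fake_sem)
print(f"mean surface pore diameter ~ {d.mean():.0f} ± {d.std():.0f} nm")
```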
In order to investigate this hypothesis further, each of the isotherms in Figure 4 was subjected to analysis using the Barrett, Joyner and Halenda (BJH) and Horvath-Kawazoe (HK) methods [21,22]. Analysis via the HK method afforded information on the distribution of small pores (<2 nm) within each of the membranes, while the BJH method allowed estimation of the larger pores. Combining the two sets of results yielded the pore size distribution profiles shown in Figure 5. Each set of curves shows a large peak between 0.5 and 1.5 nm, which can be attributed to the pores between individual nanotubes contained within CNT bundles (interstitial pores). In contrast, significant differences can be seen between the distributions of larger pores that occur between nanotube bundles for the SWNT and MWNT buckypapers. In the case of the SWNT/cipro membrane (Figure 5(a)), a second, well-defined peak is centered at approximately 6 nm, which can be attributed to the interbundle pores. For the two MWNT buckypapers, however, it is clear that the corresponding pores are significantly larger. In the case of the MWNT/Trix buckypaper (Figure 5(c)), the distribution of interbundle pores is centered at approximately 23 nm. While there is no corresponding peak in the pore distribution curve for the MWNT/cipro buckypaper (Figure 5(b)), it is still clearly evident that the maxima in the pore distribution are located at >15 nm. Numerical integration of the sets of curves in Figure 5 was performed in order to calculate the average internal pore diameter of the membranes, as well as the percentage contribution of the interbundle pores to the total free volume. The results of this analysis, along with those obtained via application of the BET method [20] to the original isotherms, are presented in Table 1. Of particular note is that the average diameter of the internal pores of the two SWNT membranes is significantly lower than that for the corresponding MWNT buckypapers, mirroring what was observed with the surface pores. Furthermore, the average internal pore diameters obtained for the two SWNT membranes (4 ± 0.4 and 7 ± 0.8 nm) are generally similar to values reported recently for other buckypapers prepared using the same batch of SWNTs and low molecular mass dispersants [14]. For example, the average internal pore diameter for a SWNT/Trix buckypaper was previously reported to be 4 ± 0.4 nm [14]. However, it must be noted that one SWNT buckypaper in the latter study, prepared using phthalocyanine tetrasulfonic acid as the dispersant, was shown to possess internal pores with an average diameter of 27 ± 3 nm, which is comparable to that of the two MWNT membranes in the current study. In contrast to this, there was generally little difference between the average nanotube bundle diameter, internal pore volume, or surface area of SWNT and MWNT membranes in the current study. The one significant exception to this set of general trends was that the surface area of the SWNT/Trix buckypaper was more than two times larger than that of any of the other materials.
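The averaging step described above (numerical integration of the combined HK/BJH distributions) can be written compactly. The following sketch uses an invented two-peak distribution and assumes, for illustration only, that pores wider than 2 nm are counted as interbundle pores.

```python
# Minimal sketch (assumed form, not the authors' analysis code) of the
# numerical integration step: given a pore size distribution dV/dD
# sampled over diameter D, compute the volume-weighted average pore
# diameter and the fraction of free volume contributed by larger pores.
import numpy as np

D = np.linspace(0.4, 40.0, 400)                         # pore diameter, nm
dV_dD = (np.exp(-((D - 1.0) / 0.3) ** 2)                # toy interstitial peak
         + 0.4 * np.exp(-((D - 6.0) / 2.0) ** 2))       # toy interbundle peak
dx = D[1] - D[0]

total_volume = (dV_dD * dx).sum()
avg_diameter = (D * dV_dD * dx).sum() / total_volume     # volume-weighted mean

interbundle = D > 2.0                                    # assumed cut-off
interbundle_fraction = (dV_dD[interbundle] * dx).sum() / total_volume

print(f"average internal pore diameter ~ {avg_diameter:.1f} nm")
print(f"interbundle pores ~ {100 * interbundle_fraction:.0f} % of free volume")
```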
Evidence that ciprofloxacin had been retained in the buckypapers was obtained by microanalysis. The atomic weight percentages of nitrogen and fluorine in a sample of a SWNT/cipro membrane were 2.8% and 1.2%, respectively, while for a MWNT/cipro buckypaper these values were 2.0% and 1.0%, respectively. Both of these elements are present in ciprofloxacin but not in MWNTs or SWNTs. Further evidence in support of incorporation of antibiotic molecules in the buckypapers was obtained using energy dispersive X-ray (EDX) spectroscopy. For example, the EDX spectrum (Figure 6(a)) of a MWNT/cipro membrane showed a peak with weak intensity at ~0.65 keV, which is indicative of the presence of fluorine and is absent from the corresponding spectrum of an MWNT/Trix buckypaper (Figure 6(b)).

The mechanical properties of the four buckypapers were investigated using a tensile test method, and the results obtained are summarised in Table 2. Each of the mechanical properties determined from the stress-strain curves fell within a relatively narrow range of values. For example, the tensile strength of the materials was found to vary between 6 ± 2 and 20 ± 10 MPa, while the Young's moduli were in the range 0.6 ± 0.3 to 1.7 ± 0.3 GPa. In all cases the mechanical properties of the buckypapers were found to be similar to those reported recently for other buckypapers prepared using the same batch of SWNTs and macrocyclic ligand dispersants [14]. Replacement of Trix by ciprofloxacin in both types of buckypapers generally resulted in a decrease in the mechanical properties of these materials. However, all buckypapers remained intact after being used for the bacterial filtration experiments described later, which typically lasted for approximately 1 h. This suggests that they have sufficient mechanical integrity to allow their use for multiple filtration experiments. Each of the buckypapers containing Trix or cipro was found to be hydrophilic using the sessile drop method, which gave contact angles between 41 ± 5° and 62 ± 7°. Measurement of the electrical conductivity of the buckypapers using a 2-point probe method gave values ranging from 24 ± 16 to 85 ± 2 S cm^-1, with the values for the materials prepared using SWNTs significantly larger than those obtained using MWNTs. The conductivities obtained for the two SWNT buckypapers are comparable to those obtained for other membranes prepared using this class of CNTs and low molecular mass dispersants [14].
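For context, the quantities reported in Table 2 are typically read off a stress-strain curve as sketched below; the curve here is synthetic and the 0.5% strain cut-off for the linear region is an assumption, not a value taken from the study.

```python
# Hedged sketch of extracting tensile properties from a stress-strain
# curve: Young's modulus as the slope of the initial linear region and
# tensile strength as the maximum stress reached. The curve is a toy.
import numpy as np

strain = np.linspace(0, 0.035, 200)                      # dimensionless
stress = 1.0e3 * strain - 10e3 * strain**2               # MPa, illustrative

linear = strain < 0.005                                   # assumed linear region
E_MPa = np.polyfit(strain[linear], stress[linear], 1)[0]  # slope, MPa
tensile_strength = stress.max()                           # MPa

print(f"Young's modulus ~ {E_MPa / 1e3:.2f} GPa")
print(f"tensile strength ~ {tensile_strength:.1f} MPa")
```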
Bacterial Filtration Experiments. The ability of the buckypapers to remove E. coli JM109 was initially investigated by vacuum filtering (at ~200-300 mbar) 50 mL suspensions of the bacterium (in 0.9% (w/v) NaCl) with a final concentration of ca. 10^4 cells mL^-1. Dilution series produced from the filtrates were plated onto Luria-Bertani (LB) agar and incubated overnight at 37 °C. Representative images of E. coli colonies grown from the initial bacterial suspension, and after filtering suspensions across either an MWNT/Trix or SWNT/Trix buckypaper, are shown in Figure 7. The total colony forming units (CFU) for each plate were counted and compared to the CFU for the initial bacterial suspension. The results of this analysis showed that each buckypaper removed >99% of the E. coli present in the initial suspension, demonstrating that they were highly effective for this purpose. In contrast, when the 5 µm PTFE membrane used as a support for preparing the buckypapers was used to filter the same E. coli suspension, only 90% of the bacteria were removed. In order to further facilitate comparison of the relative effectiveness of the different buckypapers, the percentages of bacteria removed (R) were converted into values of Log Removal using Equation (1):

Log Removal = −log10(100 − R) + 2.    (1)

Table 3 presents the values of Log Removal obtained for the various buckypapers, which suggest that buckypapers containing MWNTs were more effective for filtering E. coli than their SWNT counterparts containing the same dispersant. As anticipated, incorporation of ciprofloxacin instead of Trix into both types of buckypapers reduced the number of viable E. coli in the filtrates. In the case of experiments performed using MWNT/cipro buckypapers, complete removal of bacteria was observed for each of the three samples analysed, suggesting these membranes were the most effective for removal of E. coli.

Further evidence in support of this conclusion was provided by experiments in which the filtrates obtained using MWNT/cipro and SWNT/cipro buckypapers were stained using a combination of propidium iodide (PI) and DAPI and subsequently imaged using fluorescence microscopy. Propidium iodide is internalised only by membrane-compromised (i.e., dead) bacterial cells and fluoresces red when excited with high-intensity light. In contrast, DAPI is able to enter all cells and fluoresces blue upon binding to DNA when appropriately excited with light. Figure 8 shows the fluorescence microscopic images obtained of the initial E. coli suspension, as well as those of a filtrate obtained after filtering an identical sample of bacteria across a SWNT/cipro buckypaper. The image of the initial bacterial sample (Figure 8(a)) shows, as expected, blue regions attributable to the presence of viable E. coli cells, as well as red regions due to cells that had died as a result of natural attrition. In contrast, the image of the filtrate obtained using an SWNT/cipro buckypaper (Figure 8(b)) shows only a small number of red areas, indicating that some dead bacteria had passed across the membrane. The image of a filtrate obtained using an MWNT/cipro buckypaper did not show either red or blue regions, indicating that no bacterial cells passed across this membrane. This is consistent with the results presented previously that were obtained by counting the CFU, which indicated that the MWNT/cipro buckypaper was the most effective for filtering E. coli.

The general paucity of E. coli present in the filtrates obtained using each of the buckypapers can be attributed largely to rejection of bacteria, owing to the greater size of their cells compared to the pores present on the surface and within the membranes, and to toxicity imparted to the bacteria through contact with the nanotubes or dispersant molecules. The extent of inactivation of E. coli cells trapped on the surface of the buckypapers was assessed using a fluorescence-based viability assay reported previously [12]. In short, the surfaces of the four membranes were stained using a combination of either PI and SYTO-16 or PI and DAPI after they had been used to filter the same number of bacterial cells, and were then imaged using fluorescence microscopy. Like DAPI, SYTO-16 is able to enter all cells. However, it fluoresces green instead of blue upon binding to DNA when excited with light of the appropriate wavelength. Representative images of each of the four different types of buckypapers, after they had been stained as described previously, are shown in Figure 9.
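Equation (1) above maps percentage removal onto a logarithmic scale (90% removal gives a value of 1, 99% gives 2, 99.9% gives 3). A minimal worked example with invented counts follows; note that complete removal (R = 100%) is undefined on this scale and is therefore reported separately, as in Table 3.

```python
# Minimal sketch of the log-removal conversion in Equation (1), reading R
# as the percentage of bacteria removed by the membrane. The CFU counts
# are invented for illustration only.
import math

def percent_removed(cfu_feed, cfu_filtrate):
    return 100.0 * (1.0 - cfu_filtrate / cfu_feed)

def log_removal(r):
    """Equation (1): Log Removal = -log10(100 - R) + 2."""
    return -math.log10(100.0 - r) + 2.0

feed, filtrate = 1.2e4, 35.0          # CFU/mL before and after filtration
r = percent_removed(feed, filtrate)
print(f"R = {r:.2f} %, log removal = {log_removal(r):.2f}")

for r in (90.0, 99.0, 99.9):          # sanity check of the scale
    print(f"R = {r:5.1f} % -> log removal = {log_removal(r):.1f}")
```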
Quantitative analysis of the images was performed in accordance with the method outlined by Kang et al. [10]. The results of analysis of the images presented in Figure 9 are presented in Table 4. Inspection of the data shows that there was perhaps a small difference in cell killing efficiency between the two buckypapers prepared using Trix as the dispersant, with the membrane synthesised using SWNTs appearing to be slightly more effective. Of more relevance is that the results show that both the MWNT and SWNT buckypapers displayed higher bactericidal properties when ciprofloxacin was present. For example, in the case of MWNT buckypapers, the percentage of dead bacteria on the membrane surface increased from 58 ± 13% for MWNT/Trix to 100% for MWNT/cipro.

Conclusion

In this paper we have demonstrated that ciprofloxacin can be used to assist the formation of dispersions of SWNTs. Furthermore, buckypapers obtained from SWNT/cipro or MWNT/cipro dispersions retain antibiotic molecules after preparation. Analysis of SEM micrographs and nitrogen adsorption/desorption isotherms demonstrated that significant differences exist between the surface and internal morphologies of buckypapers prepared from MWNTs and SWNTs. In addition, the data also showed that replacing Trix with ciprofloxacin as the dispersant used during buckypaper preparation had little impact on these characteristics. Each of the four buckypapers prepared removed more than 99% of the E. coli present in an aqueous suspension. This provides evidence that free-standing buckypaper membranes can be as effective for removing microbial contaminants from water supplies as the composite CNT materials investigated previously [12,13]. It was somewhat surprising that the overall bacterial filtration efficiency of MWNT buckypapers prepared using either Trix or cipro was greater than that of the corresponding SWNT membranes. This suggests that there is something inherent in the structure of MWNT buckypapers that makes them more suitable for bacterial filtration applications. Incorporation of ciprofloxacin significantly enhanced the ability of SWNT and MWNT buckypapers to impart bactericidal activity on E. coli.
This demonstrates that it is possible to incorporate into buckypapers dispersant molecules with chemical and biological properties designed to improve their effectiveness for particular applications. This added functionality will only exist while the dispersant molecules are retained by the buckypaper. We therefore used absorption spectrophotometry to monitor the leaching of ciprofloxacin from an MWNT/cipro buckypaper under conditions identical to those used for performing a bacterial filtration experiment. After one hour only 0.3 mg of ciprofloxacin had leached from the buckypaper, which is less than 1% of its total mass. Future experiments are planned which will investigate what, if any, impact the loss of ciprofloxacin has on the efficacy of buckypapers when used multiple times for bacterial filtration experiments. In addition, we also intend to examine the effectiveness of our buckypapers for filtering solutions containing larger numbers of E. coli or different types of both gram-negative and gram-positive bacteria. We believe that these studies could provide insights into the reasons behind the observed difference in bacterial filtration efficiency between MWNT and SWNT membranes presented in this paper.

Figure 2: (a) Visible absorption spectra of a solution containing 0.1% (w/v) SWNT and 1.0% (w/v) ciprofloxacin after different periods of sonication. (b) Effect of increasing sonication time on the absorbance at 660 nm of the previous SWNT/cipro dispersion.

Figure 7: Images of LB plates after overnight culture of 100 µL of (a) the initial E. coli suspension, (b) the filtrate obtained after passing the E. coli suspension across a MWNT/Trix membrane, and (c) the filtrate obtained after passing the E. coli suspension across a SWNT/Trix membrane. The initial E. coli suspension was diluted 100,000x before an aliquot was cultured. In the case of the filtrate obtained using an MWNT/Trix buckypaper, no dilution was performed before an aliquot was cultured, while the filtrate obtained using a SWNT/Trix buckypaper was diluted 100x before an aliquot was cultured. In the case of the image in (c), the positions of the cultures have been identified using a black marking pen in order to assist in counting the number of colonies present.

Figure 8: Fluorescence microscopic images of (a) an E. coli suspension in saline prior to a filtration experiment and (b) the filtrate obtained after passing an identical suspension of E. coli across a SWNT/cipro buckypaper. Both samples were stained with PI and DAPI.

Table 3: Removal of E. coli using buckypaper membranes. The experiments were performed in triplicate for SWNT/Trix and MWNT/Trix membranes, but only once each for the corresponding buckypapers containing ciprofloxacin. a Complete removal was observed for all membranes analyzed. a Values determined using one sample only.
Associations Between Fine Particulate Matter (PM2.5) and Childhood-Onset Systemic Lupus Erythematosus

Background: Fine particulate matter (PM2.5) has been linked to induction of oxidative stress as well as pulmonary and systemic inflammation. We hypothesized that ambient PM2.5 variation would be associated with the occurrence of childhood-onset systemic lupus erythematosus (cSLE). Methods: We collected data from the Taiwan National Health Insurance Research Database and linked these data to the Taiwan Air Quality-Monitoring Database. Children <18 years old, identified from January 1, 2000, were followed up until the first diagnosis of cSLE was made or until December 31, 2012. The daily average PM2.5 was categorized into four quartile-based groups (Q1-Q4). We measured the incidence rate, hazard ratios (HRs), and 95% confidence intervals for cSLE stratified by the quartiles of PM2.5 concentration using Cox proportional hazards models adjusted for age, sex, monthly income, and urbanization. Results: The incidence rate of cSLE increased with the PM2.5 exposure quartile, from 4.7 (Q1) to 21.9 (Q4) per 100,000 person-years, and the adjusted HRs for Q2-Q4 ranged from 2.74 to 4.23 compared with Q1.

Introduction

A diagnosis of childhood-onset systemic lupus erythematosus (cSLE) is made when individuals aged less than 18 years develop SLE. [1] cSLE accounts for 10%-20% of all SLE cases. Compared with adult-onset SLE, cSLE has a worse clinical course with significantly more lupus nephritis, hematological disorders, neurologic disorder, polyarthritis, mucocutaneous involvement, and photosensitivity. [2] Although the etiology of SLE remains unknown, it is multifactorial, including genetic, hormonal, immunologic, and environmental factors. Several environmental factors are reported to be associated with SLE, such as silica exposure, current cigarette smoking, exogenous estrogens, ultraviolet light, solvents, pesticides, heavy metals, and air pollution. [3] There is a growing interest in the role of air pollution in inflammatory diseases, especially concerning particulate matter (PM). Sources of PM are mostly from human activities, including traffic and industrial emissions. [4] Fine PM (with a median diameter < 2.5 µm, PM2.5) are more toxic than other inhalable particles because they can reach deeper areas of the respiratory tract and can be absorbed into the bloodstream through alveolar capillaries, resulting in a regional and even systemic inflammatory process. [5,6] Previous studies reveal that PM2.5 may be associated with acute and chronic lower respiratory diseases, cerebrovascular diseases, ischemic heart diseases, and lung cancer. [4] Several studies have demonstrated that air pollution enhances the risk of autoimmune diseases in children. Although it is not clearly known what factors play a role in the pathogenesis of cSLE, it has been reported that exposure to SO2 and O3 leads to an increase in pediatric rheumatic disease hospitalizations, and exposure to PM10, NO2, and CO may increase the risk of disease activity in cSLE. [6,7] Moreover, maternal exposure to tobacco and air pollutants during pregnancy is associated with cSLE. [7] Recently, a study from Brazil demonstrated that short-term exposure to both indoor and outdoor PM2.5 was associated with increases in airway inflammation and systemic inflammation in cSLE patients. [8] However, these studies only assess the exposure to PM2.5 and the disease activity and hospitalization over a short period of time. There are limited studies examining the association between PM2.5 variation and the incidence of cSLE over a long period of time.
Therefore, our objective was to evaluate the effects of air pollution on the risk of developing cSLE in Taiwan from 2000-2012.

Data Source

The data used in the current study were sourced from the Children file, a representative subset of data that includes data from half of all children randomly selected from the year 2000 registry of beneficiaries of the Taiwan National Health Insurance Research Database (NHIRD). The NHIRD was established in March 1995 and includes detailed information, such as outpatient visits, hospital admissions, prescriptions, procedures, and diagnosis of disease, based on the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), from 99% of the 23 million enrollees in Taiwan (http://www.nhi.gov.tw/english/index.aspx). The data were analyzed anonymously. This study has been approved by the Institute Review Board of China Medical University Hospital (CRREC-103-048) and complies with the principles outlined in the Helsinki Declaration.

Study population, outcome of interest, endpoints, and confounding factors

We identified children < 18 years old from January 1, 2000, to December 31, 2012. Children who had missing data and were diagnosed with SLE before the baseline were excluded. SLE, our outcome of interest, was defined by at least 3 records of ICD-9-CM code 710.0 made in any diagnosis field during the inpatient or ambulatory claim process. The Taiwan National Health Insurance (NHI) has classified SLE as a catastrophic illness, and the diagnosis of SLE must be confirmed by a board-certified specialist and be reviewed and approved by the Taiwan NHI. All participants were followed from baseline until the diagnosis of SLE was made, or patients withdrew from the NHI, or until December 31, 2012. In this study, the mean (standard deviation, SD) follow-up duration in SLE patients was 11.2 (2.32) years. The confounding factors were age, sex, urbanization level of residence, and monthly income. Urbanization level was defined based on population density and was stratified into four levels, from the highest density (Level 1) to the lowest density (Level 4). Monthly income was classified into 4 groups: < NT$14,400, NT$14,400-18,300, NT$18,301-21,000, and > NT$21,000.

Exposure measurement

The Taiwan

Statistical analysis

The demographic categories in the present study included age, sex, urbanization level of the residential area, and the daily average of exposure to air pollutants. To test the distributed difference among daily average concentrations of PM2.5 by quartile and urbanization, a χ2 test was used. The Kaplan-Meier method was used to estimate the proportion of study subjects who did not suffer from SLE during the follow-up period, among the different quartiles of PM2.5 level. The incidence density rate of cSLE (per 100,000 person-years) was counted by each quartile of daily average concentrations of PM2.5. A Cox proportional hazard regression was used to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs) for SLE in the Q2-Q4 levels of air pollutant concentration, compared to the lowest one (Q1). A multivariable model was adjusted for age, sex, monthly income, and urbanization. All analyses were performed using SAS 9.3 (SAS Institute Inc, Cary, NC) and the Statistical Package for the Social Sciences (Version 15.1; SPSS Inc, Chicago, IL). All statistical results were considered statistically significant when 2-tailed P values were < 0.05.
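To make the modelling step concrete, the sketch below reproduces the general shape of the analysis (quartile-based exposure groups and Cox regression adjusted for age, sex, income, and urbanization) on an entirely synthetic dataset using the Python lifelines package. It is an illustration of the method, not the study's SAS/SPSS code, and every number in it is invented.

```python
# Hedged sketch of a quartile-based Cox proportional hazards analysis.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "pm25": rng.uniform(10, 45, n),            # daily average PM2.5, ug/m3
    "age": rng.uniform(0, 12, n),
    "male": rng.integers(0, 2, n),
    "income_group": rng.integers(1, 5, n),
    "urbanization": rng.integers(1, 5, n),
    "follow_up_years": rng.uniform(1, 12, n),  # time to cSLE or censoring
    "csle": rng.integers(0, 2, n),             # 1 = diagnosed, 0 = censored
})

# Quartile-based exposure groups (Q1 = reference), as in the study design
df["pm25_q"] = pd.qcut(df["pm25"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
model_df = pd.get_dummies(
    df.drop(columns="pm25"), columns=["pm25_q"], drop_first=True, dtype=float
)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="follow_up_years", event_col="csle")
cph.print_summary()   # hazard ratios (exp(coef)) and 95% CIs per covariate
```

The Q2-Q4 dummy coefficients play the role of the quartile-specific hazard ratios reported in the paper, with Q1 as the reference level.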
Results

A total of 394 children (0.16%) were newly diagnosed with SLE among a cohort of 244,607 children from January 1, 2001 to December 31, 2012. The demographic factors of the study subjects are shown in Table 1. The mean age of participants was 6.09 years (SD, 2.99), and the proportion of boys and girls was 51.8% and 48.2%, respectively. In the present study population, more children lived in higher population density areas (65.3%).

*Chi-square test.

The urbanization level was categorized by the population density of the residential area into 4 levels, with level 1 as the most urbanized and level 4 as the least urbanized. The daily average air pollutant concentrations were categorized into 4 groups based on quartiles for each air pollutant. The incidence rate for SLE increased with PM2.5 exposure concentration, from 4.7 (Q1) to 21.9 (Q4) per 100,000 person-years (Table 3). The Kaplan-Meier plots (Fig. 1) with PM2.5 concentration stratified by quartile showed that patients exposed to higher PM2.5 concentrations had a higher cumulative incidence of SLE than did those exposed to lower PM2.5 concentrations during the 12-year observation period. In the multivariable Cox proportional hazard regression, the adjusted HRs for SLE increased with the PM2.5 exposure concentration, ranging from 2.74 to 4.23 compared with those exposed to the corresponding concentrations in the Q1 level (Table 3).

Discussion

This is the first large population study to evaluate the exposure of ambient PM2.5 and the occurrence of cSLE over a long period of time. This longitudinal study showed that higher PM2.5 exposure concentrations increased the incidence rate of cSLE in Taiwanese children and suggests that ambient PM2.5 exposures may be a trigger for the development of cSLE. Seventy years ago, the historic smog disaster, the 1948 Donora smog, killed 20 people and caused respiratory problems for 6,000 out of the 14,000 people living in Donora. [9] Since then, interest has increased regarding the harmful effects of air pollution. In 1963, the Clean Air Act was established and was last amended in 1990; it requires the Environmental Protection Agency (EPA) to set National Ambient Air Quality Standards (NAAQS) for pollutants considered harmful to public health and the environment. [10] The World Health Organization (WHO) also challenged governments around the world to improve air quality in their cities to protect peoples' health. [11] However, according to the WHO report, there are still approximately 4.2 million deaths resulting from exposure to ambient air pollution and an additional 3.8 million deaths resulting from exposure to household air pollution, every year. Moreover, several model projections indicate that the contribution of outdoor air pollution to premature mortality could double by 2050. [4] Air pollutants can be found anywhere in the air, both outdoors and indoors. Typically, the environment contains a mixture of gaseous and particulate pollutants. [12] Most air pollutants originate from human activities, and emissions of ambient air pollution from regional sources may travel long distances across national borders. [13] To protect air quality in the US, the EPA has mandated air quality standards called NAAQS for the following six air pollutants: ozone (O3), lead (Pb), total suspended particulates (TSP) including PM2.5 and PM10, carbon monoxide (CO), sulfur dioxide (SO2), and nitrogen oxides. These six air pollutants are called "criteria pollutants". [14]
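The incidence density rates quoted above are straightforward to compute once the number of new cases and the accumulated person-years in each quartile are known; a small worked example with invented counts follows.

```python
# Small illustrative calculation (invented counts, not the study data) of
# an incidence density rate expressed per 100,000 person-years, the
# measure reported for each PM2.5 quartile.
def incidence_per_100k_py(new_cases, person_years):
    return 1e5 * new_cases / person_years

# e.g., a quartile with 120 new cSLE diagnoses over 550,000 person-years
print(f"{incidence_per_100k_py(120, 550_000):.1f} per 100,000 person-years")
```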
An increasing number of epidemiological studies have demonstrated that exposure to air pollutants has harmful effects on cardiovascular and respiratory morbidity and mortality, particularly in children. [15][16][17][18] Children are known to have more adverse health effects from air pollution because of their higher minute ventilation, immature immune system, tendency to spend more time outdoors, and the continuing development of their lungs. [17][18][19][20] PM2.5 causes more of a burden than other air pollutants because these particles are composed of sulfates, metals, and other toxic substances that are adsorbed onto them. [19] The physical and chemical composition and size of airborne particulate matter vary widely with time and space. [20,21] Airborne particulate matter originates from sources such as transportation-related emissions, road/soil dust, biomass burning, and agricultural activities, which enter the atmosphere by anthropogenic and natural pathways. [22] PM2.5 is more toxic because these particles can reach deeper areas of the respiratory tract and can be absorbed into the bloodstream, resulting in local and systemic inflammation. Exposure to excessive PM2.5 results in numerous diseases such as asthma, chronic bronchitis, cancer, cardiovascular disease, diabetes, and premature death. [4,[23][24][25][26] For every 10 µg per cubic meter increase in PM2.5, all-cause mortality increases by 7.3%. [27] The associations between air pollution and immune-inflammatory responses have been noted. Exposure to air pollution may cause major autoimmune diseases such as systemic lupus erythematosus (SLE), rheumatoid arthritis (RA), multiple sclerosis (MS), and type 1 diabetes mellitus (T1DM). [28] Exposure to particulate matter (PM10), sulfur dioxide, nitrogen dioxide (NO2), ozone, and carbon monoxide was found to be associated with high disease activity in juvenile-onset SLE. [6] A recent Taiwanese study discovered a positive association of NO2 exposure with the development of SLE in adults. [29] A recent Brazilian study revealed that exposure to inhalable fine particles increases airway inflammation and systemic inflammation in cSLE patients. Although the exact mode of onset and disease progression of SLE remains elusive, the urban-rural difference in prevalence, the clustering of disease prevalence around polluted regions, and the low concordance rates among monozygotic twins with SLE (around 24%) indicate that environment has a strong impact on SLE. [30] Experimental data strongly suggest that a complex interaction between the exposome (or environmental influences) and genome (genetic material) produces epigenetic changes (epigenome) that can alter the expression of genetic material and lead to the development of SLE in susceptible individuals. [30] Our study has some limitations. First, since air pollution is a dynamic mixture of different toxicants from natural and anthropogenic sources, including PM, O3, CO, SO2, nitrogen oxides (NOx), and so on, [17] monitoring the concentration of PM2.5 exposure does not fully eliminate the co-effects of mixed air pollutants. Second, since the monitoring stations are fixed outdoors, they may not reflect the true exposure level to air pollutants in patients. Third, since this is a retrospective study, we cannot control important confounders such as genetic factors, family history of autoimmune disease, eating habits, leisure activity, sun protection habits, attitudes, body surface area, and cigarette smoking.
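As a brief aside on the magnitude of such estimates, the cited 7.3% increase per 10 µg/m³ scales multiplicatively with the size of the increment if a log-linear exposure-response relationship is assumed, as the small calculation below illustrates; both the assumption and the chosen increments are illustrative, not results from the cited work.

```python
# Illustrative scaling of a per-10-ug/m3 effect estimate to other exposure
# increments, assuming a log-linear exposure-response relationship.
per_10 = 1.073   # relative risk per 10 ug/m3 increase in PM2.5 (cited value)

for delta in (5, 10, 20, 30):   # ug/m3 increments (illustrative)
    rr = per_10 ** (delta / 10)
    print(f"+{delta:2d} ug/m3 -> relative risk ~ {rr:.3f} ({(rr - 1) * 100:.1f}% increase)")
```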
Conclusions

In conclusion, exposure to PM2.5 is a risk factor for developing cSLE. Although further studies are required to confirm these associations, our study suggests that awareness, education, and appropriate public policy for better air quality will result in a lower incidence of cSLE and will improve public health.

Ethics approval: The data were analyzed anonymously and informed consent is not applicable. This study has been approved by the Institute Review Board of China Medical University Hospital (CRREC-103-048) and complies with the principles outlined in the Helsinki Declaration.

Consent for publication: This manuscript is an original article that has not been previously published and will not be submitted to any other journal. All the authors have read this manuscript and agree that the work is ready for submission, and accept responsibility for the manuscript's contents.

Availability of data and materials: Data available on request due to privacy/ethical restrictions.

Competing interests: None

Figure 1: Kaplan-Meier plot of incidence of childhood-onset systemic lupus erythematosus (cumulative incidence rates) in patients with PM2.5 concentration stratified by quartile.
The Proliferative and Apoptotic Landscape of Basal-like Breast Cancer

Basal-like breast cancer (BLBC) is an aggressive molecular subtype that represents up to 15% of breast cancers. It occurs in younger patients, and typically shows rapid development of locoregional and distant metastasis, resulting in a relatively high mortality rate. Its defining features are that it is positive for basal cytokeratins, epidermal growth factor receptor and/or c-Kit. Problematically, it is typically negative for the estrogen receptor and human epidermal growth factor receptor 2 (HER2), which means that it is unsuitable for either hormone therapy or targeted HER2 therapy. As a result, there are few therapeutic options for BLBC, and a major priority is to define molecular subgroups of BLBC that could be targeted therapeutically. In this review, we focus on the highly proliferative and anti-apoptotic phenotype of BLBC with the goal of defining potential therapeutic avenues, which could take advantage of these aspects of tumor development.

Basal-Like Breast Cancers Are a Clinical Challenge

Breast carcinomas are a leading cause of cancer mortality and morbidity worldwide, with approximately 2.1 million diagnoses estimated in 2018 [1]. Molecular phenotyping based on gene expression profiling has revealed great heterogeneity among breast cancers. Several distinct molecular subtypes, each associated with different clinical outcomes, have been identified by array and RNA-seq studies, and include: Luminal A and B, ERBB2 overexpression (the gene for the HER2/Neu protein), and normal breast-like and basal-like breast cancers (BLBCs) [2,3]. BLBCs do not generally express ESR1 (the gene encoding the estrogen receptor (ER)) or PGR (the gene encoding the progesterone receptor (PR)) and frequently lack ERBB2 expression, but do express the basal cytokeratins (CK), KRT5 and KRT6 [4]. Unfortunately, the general lack of hormone and HER2 receptors makes this breast cancer subtype unsuitable for and unresponsive to endocrine and HER2-targeted therapies, such as tamoxifen, aromatase inhibitors, and trastuzumab. BLBC accounts for up to 15% of breast tumors and is commonly diagnosed in pre-menopausal women under the age of 40, women of African descent, and carriers of defects in the familial breast cancer gene, BRCA1 [5]. The BLBC subtype is characterized by a shorter survival following progression to metastatic disease compared to luminal subsets. Standard care for patients with BLBC includes surgery followed by post-operative (adjuvant) radiotherapy and chemotherapies (e.g., anthracycline and taxane regimens), often with severe side effects that impact quality of life (reviewed elsewhere [6,7]). Unfortunately, these tumors have a high risk of recurrence. Of all breast cancers, 10-15% have a triple-negative (TNBC) phenotype, and these represent 50% of all breast cancer deaths [16]. TNBC is not a specific subtype based on a positive distinctive marker, and as a result, confusion arises when it is assumed to be so. The immunohistochemical definition of TNBC is often used interchangeably with the gene expression based definition of BLBC, but comparative studies show not all TNBCs have basal-like patterns of gene expression, with a 75% overlap in these definitions [17] (Figure 1). For the purposes of this review, when defining in vitro models of BLBC and TNBC, we have used the molecular classification described by Prat et al. [18].
Identifying Key Hallmarks of BLBC that May Yield New Targets for Therapy

The poor survival profile and inability of BLBC to respond to anti-estrogen or anti-HER2 therapies has led researchers to search for BLBC driver mechanisms that could be targeted with new therapeutic regimes. Hampering this search is the fact that the genomic characterization of BLBC has revealed at least five subsets with basal-like characteristics [27] and significant heterogeneity with multiple potential driver genes and therapeutic vulnerabilities [28]. Through use of the current immunohistochemistry-based classification system for BLBC, it will be difficult to implement many of these findings. Molecular markers that are validated in other cancer types will be the exception to this, for example, programmed death-ligand 1 (PD-L1) expression to indicate the application of immunotherapy. Approximately 19% of BLBCs express PD-L1 on associated immune cells [29], and in the recent IMpassion130 trial of advanced TNBC, patients treated with atezolizumab (a PD-L1 antibody) in combination with nab-paclitaxel had a progression-free survival of 7.5 months compared to 5 months with nab-paclitaxel alone [30]. While this small subset of BLBC patients may benefit from immunotherapy in the near future, the majority of patients will still receive standard-of-care chemotherapy. The complexity of the genomic landscape of BLBC suggests that an important step forward could be to target common phenotypes within BLBC rather than specific molecular aberrations. BLBC has the highest proliferative index of all the breast cancers [3,31], presenting with the highest percentage of Ki67 staining [32,33] and a high mitotic count [34]. This heightened proliferative phenotype is associated with the rapid presentation and growth of BLBC [35]. Excess proliferation can be counteracted by increased apoptosis. Apoptotic cells and heightened caspase-3 activity are commonly detected in BLBC [36,37], but BLBCs have acquired the ability to subvert cell death by engaging numerous anti-apoptotic protective mechanisms. Two of the markers that can be used to identify and define BLBC, αB-crystallin and EGFR, have, ultimately, pro-survival outcomes. αB-crystallin suppresses the pro-apoptotic protease caspase-3, resulting in resistance to mitochondrial-dependent cell death [23]. Similarly, EGFR plays a role in resistance to cell death as its downstream target, the enzyme phosphatidylinositol-3-kinase (PI3K), is frequently activated in BLBC [38]. The PI3K/AKT/mammalian target of rapamycin (mTOR) pathway directly promotes survival via modulation of key members of the apoptotic machinery.

In this review, we analyze the growing body of literature that describes the disrupted proliferative and apoptotic pathways in BLBC, highlighting some of the most recent biomarker and molecular studies. We further discuss the potential to apply newly approved therapies that target the cell cycle (e.g., CDK4/6 inhibitors) and apoptosis (e.g., BH3 mimetics).

Proliferative Landscape

Cell proliferation is a tightly regulated process that is essential for the growth, development, and regeneration of eukaryotic organisms [40]. Unrestrained cellular proliferation is a fundamental feature of carcinogenesis, which manifests as changes to the regulation of the core machinery that drives proliferation, the cell cycle [41]. The cell cycle consists of two distinct phases: interphase, which comprises G1, S, and G2 phases, and mitosis (M phase), where cell division occurs.
Cell cycle progression is regulated by serine/threonine cyclin-dependent kinases (CDKs) [42] that are activated by cyclins specific to various phases of the cell cycle (Figure 2). Cell cycle entry and proliferation is initiated in the G1 phase when CDK4 and CDK6 form heterodimers with D-type cyclins, phosphorylating and inactivating the retinoblastoma (RB) tumor suppressor protein [43][44][45]. The cyclin D-CDK4/6 complex activates E2F transcription factors, promoting the expression of E-type cyclins, which then dimerize with CDK2. Cyclin E-CDK2 complexes further phosphorylate RB as well as other factors essential for DNA synthesis (S phase) [46,47]. During the later stages of DNA replication, CDK2 is activated by cyclin A to facilitate transition into the G2 phase. CDK1 subsequently forms a complex with A-type cyclins at the end of interphase to facilitate the onset of mitosis. Cyclin A is degraded following nuclear envelope breakdown in prophase, promoting the formation of cyclin B-CDK1 complexes that are responsible for driving cells through mitosis [48].

Figure 2: Each phase of the cell cycle is regulated by cyclin-dependent kinases (CDKs), their regulatory protein partners (cyclins), and CDK inhibitors. Many proteins in the G1/S phase transition of the cell cycle are specifically dysregulated in BLBC. Orange upward arrows indicate an increase in expression; orange downward arrows indicate a decrease in expression.

Deregulated Cell Cycle Control in BLBC

BLBC has an increased mitotic index and high rate of proliferation [49], and a major cause of this is the disruption of RB to allow cells to progress without impediment from G1 to S phase. RB loss is prevalent across all ER− tumors [50,51] and is particularly high in BLBC, where up to 76% of tumors show loss of heterozygosity (LOH) of the RB1 gene encoding RB [52]. RB1 LOH probably acts in concert with the methylation of RB, which is a frequent event across all TNBC [53], and RB1 LOH in BLBC is associated with worse prognosis [52]. The p16 inhibitor, which selectively inhibits CDK4 and CDK6, is also expressed at high levels in BLBC with RB loss [54]. This is because cells with a compromised RB function initiate a negative feedback loop to inhibit CDK4 with increased expression of the p16 inhibitor. The loss of RB is implicated as an early step in BLBC, as lesions of ductal carcinoma in situ often progress to basal-like breast cancer coincident with high p16 and Ki67 [54]. CCND1 (encoding cyclin D1) is often not amplified and is at lower levels in BLBC [54], consistent with cyclin D1 not being required for phosphorylation of RB [55]; however, CCND3 (cyclin D3) is frequently amplified [56], and residual disease following chemotherapy of a basal-enriched cohort of TNBC shows amplification of CCND1, CCND2, and CCND3, suggesting that the cyclin D family provides a survival advantage in drug-resistant cancers [57]. Both CDK4 [58] and CDK6 [56,59] are amplified and overexpressed in BLBC, and each is associated with poor overall survival [59]. The high expression of the components of the cyclin D-CDK4/6 complex and their association with poor prognosis is surprising given the canonical role of CDK4 and CDK6 in RB phosphorylation. This may be explained by the discovery of other critical targets for CDK4/6 in breast cancer growth and metastasis.
CDK4/6 inhibition in BLBC reduces the CD44+/CD24+ self-renewing population and blocks tumorsphere formation [58], it reduces glucose metabolism [60], and CDK4/6 phosphorylates deubiquitinating enzyme 3 (DUB3) to promote SNAIL-mediated epithelial to mesenchymal transition and metastasis [61]. As well as the loss of RB or increase in CDK4/6 activity, cells find other mechanisms to amplify G 1 /S transition. For example, the E2F5 transcription factor, which is normally released after RB phosphorylation, is significantly up-regulated by amplification in a subset of BLBC, and is associated with an increase in Ki67 staining and shorter disease-free survival [62]. The cyclin E-CDK2 complex contributes to RB phosphorylation as well as promotes the initiation of DNA replication, and multiple mechanisms converge to up-regulate this activity in BLBC. High cyclin E1 expression is characteristic of BLBC, with 26% of BLBCs presenting with elevated levels [63], and other breast cancer subtypes showing relatively low expression [64]. Increased expression may be driven by gene amplification [65] and loss of the cyclin E1 degrader and tumor suppressor protein, F-box and WD repeat domain-containing 7 (FBW7) [66]. High CDK2 activity is also driven by the loss of the CDK2/4/6 inhibitor protein, p27, or gain of S phase kinase associated protein 2 (SKP2), which targets p27 for degradation. Both p27 loss and SKP2 gain are common features of BLBC [63], and linked to poor prognosis [63,67]. Overall, these changes implicate CDK2 activity as an independent driving feature of BLBC, and this is strengthened by the observation that mouse mammary tumors that develop from mouse mammary tumor virus with constitutive Cdk2 expression, have basal-like features [68]. Other enzymes involved in the transition from G 1 /S to G 2 phase are also elevated in BLBC. The CDC25 dual specificity phosphatases, which function to activate the CDKs, are expressed at high levels in BLBC, and they are predictive of poor prognosis in breast cancer as a whole [69]. Loss of RB correlates with increased CDC25 expression, suggesting that CDC25 up-regulation occurs downstream of RB loss [69]. Topoisomerase IIα, which is critical to DNA replication by cutting coiled DNA, is elevated in BLBC, although it is not predictive of outcome [70,71]. Finally, the c-MYC oncogene, which integrates signaling responses to up-regulate cell cycle progression, is expressed at high levels in BLBC [57], and a signature of genes regulated by c-MYC are also strongly associated with the BLBC phenotype [72]. The high proliferative and mitotic indices of BLBC increase replication stress and mitotic defects, leading to an increase in DNA damage [73]. BLBCs also have a high frequency of BRCA1 mutation and promoter methylation [74], which exacerbates DNA damage by disabling homologous recombination. A consequence of this is that DNA damage checkpoints are dysregulated in BLBC to protect the cells from excessive cell death. Ataxia telangiectasia and RAD3-related protein (ATR), ataxia telangiectasia-mutated protein kinase (ATM), checkpoint kinase 1 (CHEK1), checkpoint kinase 2 (CHEK2), and G 2 checkpoint kinase (WEE1) inhibit cell cycle progression into S phase and mitosis following DNA damage, and BLBC often has high CHEK1 [75,76] and CHEK2 [77], presumably giving rise to increased sensitivity to both single-stranded and double-stranded breaks. 
Lastly, the potent tumor suppressor, p53, which causes G 1 arrest in response to DNA damage or aberrant oncogene signaling [78], is mutated in 44-82% of BLBC, resulting in abnormal cell proliferation and decreased cell death. The G 2 /M axis of BLBC also shows significant deregulation, but there is not a consistent association between up-regulated G 2 /M activity and prognosis. This perhaps reflects the generally elevated proliferative capacity of these cancers downstream of core dysregulation by either RB loss or CDK2 activity gain at the G 1 /S transition. The master CDK of G 2 /M, CDK1, is amplified in BLBC [79], but has no relationship with prognosis (C.E.C., personal communication). Mitotic genes that are increased in BLBC and are associated with good prognosis are BUB1, PDZ associated kinase, and NIMA [80], which are involved in centrosome separation and mitotic checkpoints. Conversely, MASTL, a master kinase regulator of mitosis that ensures timely inactivation of CDK1, is high in BLBC and is associated with poor prognosis [81]. Consistent with these observations, knockdown of MASTL will enhance the action of some chemotherapies [82], but not anti-mitotic chemotherapies. b-MYB, which regulates cyclin B1 among other G 2 /M genes, is also elevated in BLBC and is associated with poor prognosis [83]. Overall, BLBC presents with a heterogeneous array of cell cycle defects, but with common themes. The G 1 /S restriction point is side-stepped in these cancers either through depletion of RB or elevation of E2F/CDK2 activity. This leads to a greatly heightened S phase entry, which manifests as increased S phase activity and G 2 /M progression, which is enabled by an array of changes along those axes. Targeting BLBC via the Cell Cycle Chemotherapies have been generally effective in BLBCs as they primarily target highly proliferative cells. Anthracyclines (e.g., doxorubicin; Table 1) target G 1 /S phase of the cell cycle either by preventing DNA and RNA synthesis through intercalation with the DNA or RNA [84]; inhibiting Topoisomerase II to induce DNA damage [85]; generating free oxygen radicals that damage DNA, proteins, and cell membranes [84]; or provoking histone eviction from chromatin, leading to activation of the DNA damage repair pathways or apoptosis [86]. The G 2 /M axis is targeted by taxanes (docetaxel and paclitaxel; Table 1), which disrupt microtubule de-polymerization by reversibly binding to tubulin, resulting in stable microtubules, defects in spindle assembly, chromosomal segregation, and cell division [87,88]. This interferes with mitosis and delays the spindle assembly checkpoint which activates apoptosis [89]. While these treatments are highly effective in BLBC, the use of chemotherapies are associated with extensive cytotoxicity to non-malignant, proliferating cells, presenting as alopecia, nausea, cardiotoxicity [84,90], and neurotoxicity [91]. CDK4/6 inhibitors (palbociclib, ribociclib, and abemaciclib; Table 1) target the G 1 /S transition to produce a cytostatic, anti-proliferative effect in cancer cells. This has led to the successful transition of CDK4/6 inhibitors into the clinic for ER+ breast cancers where it is used in combination with endocrine therapy [92]. This is generally believed to be reliant on an intact RB axis [50], which has been a deterrent to the development of CDK4/6 inhibitor therapy for TNBC or BLBC. Despite this, about 50% of BLBC tumors do present with intact RB [56]. 
In addition, the CDK4/6 inhibitor, abemaciclib, has demonstrated anti-tumor activity in in vitro models of RB+ TNBC, including BLBC models [93]. In pre-clinical models, CDK4/6 inhibitor therapy has been found to synergize with PI3K/AKT/mTOR inhibitors in HCC-38 TNBC cells [60], which have a strong basal signature [18]. In addition, more recent findings have shown that the non-canonical targets of CDK4/6 activity are important in tumorigenesis, including cancer cell self-renewal, glucose metabolism, and metastasis [58,60,61]. Two studies have suggested that CDK4/6 inhibitor therapy is best targeted at the "luminal androgen receptor" subgroup of TNBC based on its RB status [56,94], but the inhibition of DUB3-mediated metastasis by CDK4/6 inhibition appears to be specifically associated with BLBC [61], as DUB3 can drive a basal-like phenotype [61]. Several clinical trials are assessing CDK4/6 inhibition across breast cancers, allowing for future assessment of applicability in BLBC. NCT03130439 (ClinicalTrials.gov) is assessing abemaciclib as a standalone agent in metastatic RB+ TNBC. Two trials are assessing paclitaxel in combination with palbociclib [NCT01320592] or ribociclib [NCT02599363] in RB+ metastatic breast cancer irrespective of hormone receptor status, and NCT03756090 is assessing the combination of palbociclib with paclitaxel, cyclophosphamide (an alkylating agent), and epirubicin (an anthracycline). The combination of CDK4/6 inhibitors with anti-mitotic therapies, such as taxanes and platinums, has shown promise in pre-clinical models [95], including BLBC models [93], but CDK4/6 inhibitors may potentially antagonize G1/S-based therapies, such as anthracyclines [95]. Likewise, co-targeting of G1/S and G2/M via the combination of alkylating agents with anthracyclines and taxanes showed no benefit in patients [96]. Collectively, these studies highlight the need to apply a degree of caution when considering combination CDK4/6 inhibitor and cytotoxic regimens that rely heavily on cell proliferation for their cytostatic and cytotoxic effects [97]. Inhibition of other cell cycle CDKs has also shown some pre-clinical promise for BLBC. The pan-CDK inhibitor, dinaciclib, inhibits the cell cycle, acting on CDK9 to reduce cyclin B1 expression and cause G2/M arrest [98], and it is synthetically lethal in MYC-amplified BLBC, causing both proliferative arrest and apoptosis [99]. The high expression of cyclin E1 in ~26% of BLBC also raises the possibility of CDK2 inhibition to target these cancers [67]. Currently, specific CDK2 inhibitors are not available, although some pan-CDK inhibitors, such as SNS032 and CYC065, do target CDK2 with a higher affinity and have progressed to Phase I clinical trials [100]. Cell cycle inhibition can also be accomplished through cell cycle checkpoint inhibitors (Table 1), providing another potential avenue for BLBC treatment. BLBC has high rates of p53 deficiency, which makes cells highly dependent on the G2 checkpoint that is mediated by WEE1 [101]. WEE1 inhibition forces S phase-arrested cells directly into mitosis without completing DNA synthesis, resulting in highly abnormal mitoses and apoptosis. This effect can be exacerbated in TNBC models through the use of CDK2 or ATR inhibition to further increase S phase arrest and increase replication stress [101,102]. Essential mitotic kinases may also provide future targets.
For example, polo-like kinase is highly expressed in BLBC, and can be targeted by volasertib [103], which shows some effect in solid tumors, including breast cancer [104].

Apoptosis: An Essential Process in Healthy Tissues

Apoptosis is a form of programmed cell death that is necessary for tissue homeostasis, regulation of the immune response, embryogenesis, and the destruction of ageing and dying cells [133][134][135]. It is characterized by a set of distinct morphological characteristics that include membrane blebbing, nuclear and DNA fragmentation, chromatin condensation, and cellular shrinkage [136]. The two main forms of apoptosis leading to caspase activation are the intrinsic (mitochondrial-dependent) pathway and the extrinsic (mitochondrial-independent) pathway [137,138] (Figure 3). The extrinsic pathway is activated by ligand-receptor interactions in the tumor necrosis factor (TNF) superfamily of death receptors containing a death domain that activates caspase-8 at the cell's surface. The intrinsic pathway is activated by various external stimuli, including growth factor deprivation, stress, ultraviolet radiation, or oncogene activation [139], and is modulated by the BCL-2 family of proteins. The intrinsic pathway is characterized by a cascade of events that lead to increased mitochondria permeability and release of cytochrome C, which binds to apoptotic protease activating factor 1 (APAF-1), leading to the formation of the mature and activated apoptosome that binds and activates the initiator, caspase-9 [136]. Activated caspase-9 then signals the cleavage and activation of caspases-3, 6, and 7, cellular proteases that lead to the destruction of cellular contents.
The BCL-2 Family and the Intrinsic Apoptotic Pathway The anti-apoptotic BCL-2 and its related family members (e.g., B-cell leukemia-extra large (BCL-XL), B-cell-like protein 2 (BCL-W), myeloid cell leukemia 1 (MCL-1), and B-cell leukemia 2-related protein A1 (BFL-1/A1)) all contain four BCL-2 homology (BH) domains (BH1-4), with BH domains 1-3 forming a hydrophobic pocket that mediates binding to the BH3-only family member proteins (e.g., BID, BAD, BIM, PUMA, NOXA, and tBID). The BH4 domain is highly conserved among the pro-survival members and is essential for providing cell survival signals [140]. The BH3-only proteins are pro-apoptotic, are activated in response to cellular stresses (e.g., cytokine deprivation, cytotoxic insult, oncogenic activation), and suppress the actions of the anti-apoptotic proteins, resulting in cell death [140][141][142]. BH3-only proteins also directly bind to and activate the pro-apoptotic effectors, BCL-2 homologous antagonist killer (BAK) and BCL-2 associated protein X (BAX), via the binding pocket formed by BH domains 1-3. Once activated, BAX/BAK localize to the mitochondrial surface where they change conformation, oligomerize, and form pores in the mitochondrial membrane, resulting in mitochondrial outer membrane permeabilization and cytochrome C release [143]. Crosstalk between the intrinsic and extrinsic apoptotic pathways occurs via the activation of caspase-8 and the cleavage of tBID. The cellular decision to live or die is tightly controlled, and one of the most potent inhibitors of cell death is the X-linked inhibitor of apoptosis protein (XIAP), belonging to the family of inhibitor of apoptosis proteins (IAPs) that include Survivin, and cellular (c)-IAP1 and c-IAP2. XIAP directly interacts with caspases-3 and 7 and prevents their activity, resulting in cellular survival (reviewed in [144]). c-IAP1 and 2 inhibit the output of the extrinsic apoptotic pathway via regulation of TNF alpha signaling [145]. Conversely, regulation of IAP activity occurs via DIABLO/second mitochondria-derived activator of caspase (Smac), which is released from the mitochondria subsequent to cytochrome C release in response to cytotoxic stress, potentiating cell death [146]. Defects in the ability to execute apoptosis can result from deregulated cell signaling pathways [136], leading to cell survival, a fundamental feature underlying every aspect of carcinogenesis. Dysregulation of Apoptosis in BLBC The pro-survival proteins are induced by multiple growth factor and cytokine signaling pathways that promote cell survival [147], and deregulation of these pathways is a common event in BLBC. The BCL-2 family of proteins can themselves be deregulated, with multiple genetic and proteomic aberrations reported in cancer cells. Some studies have shown that resistance occurs via up-regulation of pro-survival members of the BCL-2 family, including BCL-2 and MCL-1, although BCL-2 is commonly associated with ER positivity and better prognosis in breast cancer, being a favorable prognostic marker [148,149]. The exception is that high BCL-2 levels may be important in drug resistance in BLBC, as BCL-2 is a significant independent predictor of poor outcome in BLBC patients treated with anthracycline-based adjuvant chemotherapy [150].
The anti-apoptotic protein, MCL-1, is more widely expressed and is present in most BLBCs at varying levels [151,152]. MCL-1 has been shown to have an important role in BLBC cell survival and carcinogenesis [149,153], and also plays a strong role in mediating the survival and chemotherapeutic sensitivity of BLBC and TNBC models [154,155]. MCL-1 protein is normally proteasomally degraded via the ubiquitin ligases, FBW7 and MULE/ARF-BP, leading to its short half-life [156]. Interestingly, dysregulation of FBW7 has been shown to be a prognostic biomarker in breast cancer, particularly for ER− tumors and BLBCs, with the lowest levels of FBW7 found in these tumor subtypes [157]. Additionally, MCL-1 complexes with MULE are only transiently found in breast cancer cells, and this has been shown to play a significant role in increased MCL-1 protein stability [158]. High levels of MCL-1 protein are found in breast cancers independent of the subtype [152], but BLBC has the highest MCL-1 expression, and up to 20% of BLBCs are characterized by genomic amplifications of MCL-1 [159]. Importantly, a genome-wide sensitivity screen reported that BLBCs are largely dependent on proteasome function via proteasome-mediated regulation of the BH3-only protein, NOXA [153], and NOXA preferentially binds to MCL-1 [160]. There is now good evidence suggesting that MCL-1 mediates basal breast cancer cell survival and therapeutic resistance, with several studies showing the importance of this protein in therapeutic resistance, where its suppression or antagonism is important for re-sensitization to cytotoxic therapies [146,[161][162][163]. High levels of BCL-2 and MCL-1 in drug-resistant BLBC may indicate those cells that are 'primed for death', whereby surviving cancer cells with high levels of pro-survival proteins are intrinsically resistant to a wide range of chemotherapeutics but are, in fact, poised for death when exposed to an agent that competitively antagonizes the elevated pro-survival protein [164]. This likely relies upon the interactions of the pro-survival proteins with the BH3 proteins and apoptotic effectors, which are often coincidentally elevated in cancer cells. This has led some researchers to develop a 'BH3 profiling' assay that may be useful to predict dependence on the BCL-2 family of proteins for survival and chemotherapeutic resistance [165]. It remains to be seen whether BH3 profiling could be used to determine dependence on the BCL-2 family in BLBC; nevertheless, the presence of BCL-2/BIM complexes was shown to be important for sensitivity to the BCL-2 antagonist, ABT-737, in patient-derived breast cancer xenografts with basal-like characteristics [152]. Interestingly, an analysis of the METABRIC dataset also showed that high levels of MCL-1 mRNA expression predicted better overall survival in treated HER2+ and BLBCs. Conversely, in untreated cases, high expression of MCL-1 mRNA predicted poor outcome [149]. These data suggest that high levels of MCL-1, like BCL-2, could predict those BLBCs addicted to MCL-1 for survival, yet poised and ready to respond to therapy, either by cytotoxic therapies or targeted treatments. As discussed above, αB-crystallin is a defining feature of a large proportion of BLBCs [166]. Overexpression of αB-crystallin promotes epidermal growth factor- and anchorage-independent growth of immortalized mammary epithelial cells and is associated with poor breast cancer-specific survival and resistance to neo-adjuvant chemotherapy [167,168].
Small molecule antagonism of αB-crystallin can reduce tumor growth and invasiveness of breast cancer cells, in part via its function in repairing misfolded vascular endothelial growth factor signaling [169] (Table 2). However, αB-crystallin also directly modulates the output of the intrinsic apoptotic pathway, where serine-59 phosphorylated αB-crystallin was shown to directly interact with BCL-2 and prevent its translocation to the mitochondria [170]. αB-crystallin overexpression also suppresses the release of cytochrome C from the mitochondria, in part via its effects on BCL-2 [171]. Thus, either targeting the pro-survival effects of αB-crystallin directly or subverting its effects on BCL-2-mediated cell survival may be an effective strategy in BLBC. The tumor suppressor and transcription factor, TP53, is mutated in approximately 50% of human cancers, leading to increased levels of inactive p53 in cancer cells [172], which has consequences for sustained cellular survival [173]. Importantly, cancers harboring a TP53 mutation often have intact apoptotic pathway components, providing a therapeutic opportunity independent of a TP53 driver mutation, particularly as many cytotoxic drugs primarily act via p53 to induce apoptosis, which can be a point of resistance [174]. p53 controls the transcription of pro-apoptotic proteins, such as PUMA, NOXA, and BAX, in response to cytotoxic stress [175]. There has been substantial work in developing small-molecule compounds targeting mutant p53, such as the MDM2 inhibitors (MI-773301 and the Nutlins; Table 2), which block MDM2-mediated suppression of p53 function [176]. There are, however, still unresolved issues surrounding the development of p53 as a therapeutic target [177], with diseases such as breast cancer not yet benefiting from p53-directed therapy, possibly in part due to the vast heterogeneity of the disease as well as the limited efficacy of targeting transcription factor function. In a p53-mutant context, targeting members of the BCL-2 family may provide a better option for those patients with defective or deregulated p53 function by bypassing this network. Targeting Survival in BLBC The essential role of programmed cell death in carcinogenesis and the acquisition of therapeutic resistance has generated great interest in developing drugs that inhibit survival, either indirectly, via suppressing the output of signaling pathways leading to survival, or directly, via antagonizing the actions of the pro-survival proteins. Indirect inhibition of cell survival can be achieved via suppression of the pathways commonly altered in BLBC. For example, the PI3K/AKT/mTOR pathway is the most commonly deregulated pathway among breast cancers [9], and an extensive effort has been made to manipulate the output of this pathway in BLBC (reviewed recently in [178]). PI3K is also important for MCL-1 and BCL2L11 mRNA transcription, whereas mTOR is important for MCL-1 protein translation [179]. Thus, it is no surprise that manipulating the PI3K/AKT signaling axis in BLBC in vitro models has been shown to be effective, particularly in combination with the EGFR inhibitor, gefitinib [180] (Table 2). The potent IAP-inhibitory and apoptosis-promoting functions of Smac have led to the discovery and development of Smac mimetics for the therapeutic targeting of cancer (reviewed elsewhere [181]). Although not widely studied in BLBC yet, Smac mimetics can induce death in basal inflammatory breast cancer cell lines and increase the apoptotic potential of the death receptor ligand, TRAIL [182].
Interestingly, Smac and Protein Kinase C delta (PKCδ) interact in basal-like and luminal breast cancer cells, but are dissociated after taxane cytotoxic treatment [183]. Further, activation of PKC synergizes with the Smac mimetic LBW242 in BLBC cell lines [184]. TRAIL agonists have also been shown to induce apoptosis of BLBC cells. For example, the TRAIL agonist, drozitumab, a DR5-specific TRAIL receptor agonist, has been shown to preferentially kill basal and mesenchymal TNBC cell lines [185]. These early pre-clinical studies provide evidence for the potential of targeting the extrinsic apoptotic pathway in BLBC. As many cancers are also characterized by aberrant activation of intracellular kinase signaling and deregulation of p53 [186], it is now thought that targeting direct mediators of the cell survival machinery may serve to improve the cytostatic effects of oncogenic kinase inhibitors [187]. Several different classes of antagonists have been developed and studied, including peptides, peptide mimetics, and small-molecule antagonists (natural and synthetic) with varying affinity and binding profiles for individual BCL-2 pro-survival proteins [188]. Much of the research has focused on small molecules that mimic the actions of the pro-apoptotic BH3-only proteins (BH3 mimetics), which bind to the hydrophobic binding cleft of pro-survival proteins [189] with highly specific binding preferences for individual family members [160], minimizing systemic toxicities in tissues that depend on the BCL-2 family for survival. Advances in crystallography and drug design have now led to a suite of small-molecule inhibitors and BH3 mimetics with greater specificity for their targets. These include WEHI-539 targeting BCL-XL [190] and ABT-199/venetoclax targeting BCL-2 (Table 2). The development of specific antagonists of the BCL-2 family has been challenging, mainly due to the high homology of the BH domains and the incomplete understanding of how these proteins interact in specific cancer contexts [186], but antagonists of the BCL-2 pro-survival pathway have been developed. Venetoclax/ABT-199 (Venclexta) is the first BCL-2 antagonist approved for use in the United States, European Union, and Australia for chronic lymphocytic leukemia and small lymphocytic lymphoma. High BCL-2 is normally associated with ER+ breast cancer [151], and the investigation of the efficacy of ABT-199 in clinical trials focused on ER+ disease and showed a preliminary clinical benefit rate of 69% [191]. However, ABT-199 has now been shown to sensitize TNBC xenografts in vivo to doxorubicin [192]. It remains to be determined whether BCL-2 inhibition is effective in the approximately 10% of patients with BCL-2+ BLBC, building on the promise shown in pre-clinical models of BLBC. In contrast to BCL-2, most triple-negative and basal-like tumors express MCL-1 at varying levels [193], and the presence of MCL-1 is a barrier to sensitivity to BCL-2/BCL-XL inhibition [193]. Recently, there has been extensive research into the development of BH3 mimetics that antagonize MCL-1 [194]. The indole-2-carboxylic acid, A-1210477, developed by AbbVie, was first reported in 2015 and demonstrated a high affinity for MCL-1 as a competitive antagonist of the MCL-1/BIM interaction. This compound induced cell death as a single agent and synergized with ABT-263 to produce apoptosis in multiple myeloma and small cell lung cancer cell lines that were dependent on MCL-1 for cell survival [195].
Like earlier work antagonizing BCL-2 and BCL-XL, MCL-1 antagonism can result in cardiac failure, mitochondrial dysfunction, and other systemic toxicities that may be due to on-target hematopoietic toxicities and off-target effects on BCL-XL [194,196,197]. The Servier compound, S63845, showed even greater specificity and more potent activity than existing BH3 mimetics against MCL-1 and could induce cell death in MCL-1 amplified cell lines dependent on MCL-1 for survival. Importantly, S63845 is well tolerated in vivo, with efficacy against several BLBC cell lines [198]. Inhibiting MCL-1 in BLBC models can suppress metastatic progression [149] and increase TNBC sensitivity to cytotoxic therapy [148,149,199]. It is yet to be determined whether MCL-1 antagonism is effective in clinical trials of BLBCs, but there are currently two Phase I clinical trials of MCL-1 BH3 mimetics in patients with multiple myeloma, a disease with an MCL-1-dependent etiology [200]. Combination Targeting of Proliferation and Survival Chemotherapies that target critical aspects of cell proliferation, including damaging DNA, rely heavily on cell cycle checkpoints to detect these errors and trigger apoptosis [91,217]. Thus, in BLBC, a subtype of cancer with enhanced proliferation and anti-apoptotic mechanisms, the co-targeting of proliferation and apoptosis could prove particularly successful (Figure 4). Additionally, it has been recognized that cancers with p53 mutation, such as BLBC, often have intact apoptotic cascades, providing another point of weakness [174].
Taxane chemotherapies (docetaxel and paclitaxel), which target the G2/M cell cycle axis, are the standard of care for BLBC, and these have been tested in combination with BH3 mimetics to determine whether this improves efficacy. Docetaxel in combination with the BH3 mimetic, ABT-737, results in a significant improvement in animal survival and tumor control in BLBC models, but single-agent ABT-737 was ineffective at producing apoptosis [152]. Combinations of Smac mimetics and BH3 mimetics were able to increase paclitaxel efficacy in BLBC cell lines, and BH3 mimetics could also re-sensitize paclitaxel-resistant cells to paclitaxel [218]. In TNBC, which broadly overlaps with BLBC, there have also been successes in combining BH3 mimetics with taxanes. The MCL-1 inhibitor, S63845, synergized with docetaxel in a TNBC patient-derived xenograft model to decrease tumor growth [199], and ABT-263 (navitoclax), which targets BCL-2, BCL-XL, and BCL-W, showed synergy with docetaxel in a TNBC cell line [219]. The Smac mimetic, LCL161, also showed promise in a clinical trial testing its combination with paclitaxel in TNBC, where the 30% of patients with an IAP survival signature were responsive to the drug [207]. Consequently, taxane-based therapies appear to combine effectively with a range of pro-apoptotic drugs. This activity could potentially be optimized by tailoring therapy in subsets of BLBC with specific apoptotic defects, for example, MCL-1-amplified cases. The other chemotherapies routinely used in BLBC, such as anthracyclines and cyclophosphamide, have for the most part not been tested in combination with pro-apoptotic targeted therapies. A recent study showed that anthracyclines, such as doxorubicin, could synergize with BH3 mimetics in cancers that are "addicted" to BCL-2 family members [220]. Preliminary studies combining cyclophosphamide with a low-specificity BH3 mimetic also showed promise in in vitro and in vivo models of B-cell lymphoma [221]. More specific cell cycle inhibitors are yet to be trialed in combination with pro-apoptotic drugs in BLBC. The highly specific CDK4/6 inhibitors have, however, demonstrated pro-apoptotic effects in other cancer types. RB+ non-small cell lung cancer cell lines treated with CDK4/6 inhibitors demonstrated suppression of survival signaling that occurred simultaneously with high Smac and caspase-3 activity [222]. CDK2 inhibition can also be highly effective in inducing apoptosis, as it down-regulates the MCL-1 protein, leading to synergy with BH3 mimetics in various cancer cell line models [223]. Pan-CDK inhibitors that target CDK1 and CDK9 show particular promise in BLBC as they can directly induce both proliferative arrest and apoptosis. In MYC-amplified TNBC and BLBC models, dinaciclib (a CDK1/CDK2/CDK5/CDK9 inhibitor) treatment led to growth arrest, but also enhanced apoptosis through the up-regulation of the BH3 protein BIM downstream of MYC [99]. This suggests that a pan-CDK inhibitor may prove effective even in the absence of MYC amplification if it is combined with a BH3 mimetic. This strategy has already proven effective in pre-clinical studies of myeloma cells, where the pan-CDK inhibitor flavopiridol in combination with obatoclax led to decreased proliferation while preventing MCL-1 pro-survival signals and promoting the release of pro-apoptotic BIM [224].
A Phase I trial is now commencing for the CDK2/CDK9 inhibitor, CYC065, in combination with venetoclax, the BCL-2 inhibitor, in relapsed or refractory chronic myeloid leukemia [NCT03739554]. Conclusions The current strategies to treat BLBC are non-specific and need to be refined, and a detailed understanding of the molecular mechanisms underpinning this subtype of breast cancer is essential for the introduction of treatment regimens that improve survival. The existing standard of care for BLBC is highly reliant on chemotherapy, and homing in on the underlying mechanisms of these drugs may provide better specificity in the treatment of BLBC. Proliferation is a core pathway targeted by chemotherapy, but without the balanced targeting of cell survival, it is probably not sufficient to merely target cell cycle pathways. This is exemplified by the rapid tumor recurrence that is experienced in BLBC. Of BLBC patients, 38% recur with metastatic disease at an average of 2.3 years, in comparison to ER+ breast cancer, which has a 24% recurrence rate occurring at an average of 4.4 years [225]. Combining anti-proliferative and pro-apoptotic therapies in BLBC is relatively underdeveloped, but the studies to date on combination therapies are highly promising, as many different combinations across these two cancer hallmarks show potential synergy. However, the complex interplay between drugs is a serious consideration in designing new therapies for BLBC. Some targeted therapies benefit from sequential administration in order to access cells in their most vulnerable state, and this is exemplified in a study showing the effectiveness of staggered administration of EGFR inhibitors prior to anthracyclines in BLBC models, as well as other models with high EGFR activity [226]. Finally, an understanding of the heterogeneity within BLBC may also be critical in designing personalized strategies for patients which take advantage of the different proliferative and apoptotic alterations within their cancers. For example, the high rate of BRCA1 deficiency among BLBC [9] opens the possibility of combination therapy with poly (ADP-ribose) polymerase (PARP) inhibitors. PARP inhibitors already have proven efficacy as monotherapies for BRCA1-mutant advanced breast cancer, and multiple trials are now assessing possible combinations with cell cycle-based chemotherapies for TNBC (reviewed in [227]). Another recent example is the addition of immunotherapy to nab-paclitaxel in the IMpassion130 trial of TNBC, leading to a significant benefit for those patients with PD-L1-expressing immune cells [30]. Multiple other vulnerabilities, such as proteasomal dependency, NF-κB pathway activation, and BET domain inhibitor sensitivity, have been identified within some BLBC models [28]. As biomarker and clinical tools for these pathways are further developed, these may too prove effective in BLBC, especially when applied in combination with drugs that target the core proliferative and apoptotic pathways.
Change in Internal Energy and Enthalpy of Spinning Black Holes in XRBs and AGN Using Spin Parameter a* = 0.9 In the present research work, we have proposed a model for the change in the internal energy and enthalpy of spinning black holes using the first law of black hole mechanics in the case of spin parameter a* = 0.9 and calculated their values in XRBs and AGN. Introduction Classically, the black hole created after the death of a red giant star is a perfect absorber, like a black body, and does not emit anything; its temperature is absolute zero. However, in quantum theory, black holes emit Hawking radiation with a perfect thermal spectrum [1] [2] [3]. According to GR, the black hole is a solution of Einstein's gravitational field equations in the absence of matter that describes the space-time around a gravitationally collapsed star [4]. Mahto et al. proposed a model for the change in the internal energy and enthalpy of black holes using the first law of black hole mechanics, which showed that the change in internal energy and enthalpy are manifestations of the same thing at constant pressure and volume [5]. This work was extended to the case of spin parameter a* = 1/2 with calculations of the change in the internal energy and enthalpy for different candidate spinning black holes in XRBs [6]. In the present work, we have proposed a model for the change in the internal energy and enthalpy of spinning black holes using the first law of black hole mechanics for the case of spin parameter a* = 0.9 and calculated their values in XRBs and AGN. Method The change in internal energy and enthalpy of black holes with a corresponding change in the radius of the event horizon is given by Equation (1) [6]. The change in internal energy and enthalpy of spinning black holes will have different values compared with those of non-spinning black holes, because the surface gravity of a black hole is given by the Kerr solution [7]. The surface gravity (κ) can be thought of roughly as the acceleration at the horizon of a black hole, and it has the same role in black hole mechanics as the temperature in the ordinary laws of thermodynamics [4]. According to the zeroth law of classical black hole mechanics, the surface gravity (κ) of a black hole is constant on the horizon, and the surface gravity tends to zero when the magnitude of the charge of a black hole becomes equal to its mass [8]. Wang and Ding-Xiong have shown that the angular velocity (Ω) evolves in a non-monotonic way in the case of thin-disk pure accretion, attaining a maximum at a* = 0.994, and turns out to depend on the radial gradient of Ω_p near the BH horizon [9]. One black hole at the heart of the galaxy NGC 1365 is turning at 84% of the speed of light. It has reached the cosmic speed limit and cannot spin any faster without revealing its singularity [10]. For convenience, let us assume a* = 0.9. Putting this value into Equation (1) gives Equation (7), which expresses the change in internal energy as well as enthalpy, with respect to the corresponding change in the radius of the event horizon, in terms of the mass and event horizon of spinning black holes. Data in Support of the Mass of the Sun and Black Holes The mass of the Sun is M⊙ = 1.99 × 10^30 kg [11]. There are two categories of black holes, classified on the basis of their masses and clearly distinct from each other: stellar-mass black holes of ~5-20 M⊙ in X-ray binaries and supermassive black holes in Galactic nuclei [11]. The other data in support of black holes can be seen in references [12] [13] [14] [15].
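For orientation, the standard Kerr-horizon relations that such a calculation draws on can be written as below in geometrized units (G = c = 1) with dimensionless spin a* = a/M; this is only a reference sketch of textbook formulas and is not a reconstruction of the paper's Equations (1) or (7).

```latex
% Kerr horizon radii and surface gravity (geometrized units, a_* = a/M)
r_{\pm} = M\left(1 \pm \sqrt{1 - a_*^{2}}\right), \qquad
\kappa = \frac{r_{+} - r_{-}}{2\left(r_{+}^{2} + a^{2}\right)}
       = \frac{\sqrt{1 - a_*^{2}}}{2M\left(1 + \sqrt{1 - a_*^{2}}\right)} .
% For a_* = 0.9: \sqrt{1 - 0.81} \approx 0.436, so r_+ \approx 1.44\,M and \kappa \approx 0.152/M.
```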
Results and Discussion In the present work, we have derived an expression for the change in the internal energy and enthalpy of spinning black holes, taking into account the first law of black hole mechanics for the case of spin parameter a* = 0.9, calculated their values in XRBs and AGN, and plotted the graphs as per Figure 1 and Figure 2. Equation (7) shows that the change in enthalpy and internal energy with a change in mass/event horizon of the black holes is directly proportional to the event horizon and inversely proportional to the mass of the black holes. Hence these two factors, the mass (M) and the event horizon (R_bh), adjust themselves in such a way that they give constant values for the change in energy (δH) of the black holes in each case. Figure 1 and Figure 2 show the graphs plotted between the mass of different BHs and the corresponding change in enthalpy and internal energy with change in mass/event horizon of the black holes in XRBs and AGN, respectively. In addition to this explanation, the ratio of the radius of the event horizon to the mass of the black hole gives constant values for each case, whether in XRBs or AGN. The fourth column of the tables shows that the change in internal energy/enthalpy for all the spinning black holes remains constant, i.e., δH = δU = constant. This means that it follows a principle of conservation of enthalpy and internal energy of spinning black holes, just like the principle of conservation of energy. Conclusions 1) For the angular spin a* = 0.9, the change in internal energy and enthalpy of spinning black holes is expressed in terms of the mass and event horizon (Equation (7)). 2) The change in enthalpy and internal energy calculated with the help of the above equation for each black hole candidate in both categories, XRBs and AGN, is exactly the same, showing a constant change in enthalpy and internal energy equal to 1.1244 × 10^−28 joule. 3) This agrees with the principle of conservation of enthalpy and internal energy, just like the principle of conservation of energy. 4) The enthalpy and internal energy have the same role as the energy in the case of spinning black holes. 5) The enthalpy and internal energy of spinning black holes are manifestations of the same thing. Table 1. Change in enthalpy and internal energy of spinning black holes in XRBs using spin parameter a* = 0.9. Table 2. Change in enthalpy and internal energy of spinning black holes in AGN using spin parameter a* = 0.9. Figure 1. The graph plotted between the change in enthalpy/internal energy of spinning black holes and the corresponding change in the radius of the event horizon in XRBs. Figure 2. The graph plotted between the change in enthalpy/internal energy of spinning black holes and the corresponding change in the radius of the event horizon in AGN.
CLOTH BAG OBJECT DETECTION USING THE YOLO ALGORITHM (YOU ONLY LOOK ONCE) V5 —The use of plastic in modern life is increasing rapidly, causing the number of people who use plastic to increase, one occasion being when shopping. The function of plastic bags as packaging for luggage is not comparable to the impact caused by plastic waste in the years to come. Plastic bags take a long time, even hundreds to thousands of years, to completely decompose. In order to support the government's program to reduce the use of plastic bags, this study discusses how to detect cloth bags as a substitute for plastic bags. In this research, a system is implemented to detect the use of cloth bags with Roboflow and YOLO v5. After carrying out all stages of the research, it can be concluded that the goodie bag detection model has been successfully created. The detection model was created using the YOLOv5 algorithm. The dataset used consists of 102 goodie bag images. The model was trained for 100 epochs, with a training result of mAP@0.5 = 89.8%. So, in other words, it can be said that YOLO v5 can detect goodie bags very well. INTRODUCTION The use of plastic in modern life is increasing rapidly, causing the level of human dependence on plastic to be higher, one occasion being when shopping. In Indonesia, traders usually use plastic bags to hold consumers' purchases, even though it is known that plastic bags will increase plastic waste (Sardon & Dove, 2018). Plastic waste is a material that is difficult to decompose, so it can damage the environment. Many people use plastic bags because plastic is a packaging material or container that is practical, looks clean, is easy to get, durable, and cheap. The function of plastic bags as wrapping for luggage is not comparable to the effects caused by plastic waste for years to come (Zulkifley et al., 2014). Plastic bags take a long time, even hundreds of years, to completely decompose. Supporting the government's program to reduce the use of plastic bags is the right step and has a very significant impact on various parties. This is the first step for the researchers in implementing cloth bags to reduce plastic waste. Goodie bags and tote bags for shopping or daily necessities can also be used repeatedly. In addition, such bags are made of cloth, plastic, cardboard, and foam art, and are usually used as gifts, keepsakes, for celebrations, and for other household needs. The users are all age groups, with various purposes. Today the world is in the digital age, an era where almost every aspect of human life is closely related to computing technology. With the development of knowledge, humans continue to develop technology to help lighten their work. In computer vision, problems include object detection and image classification. Object detection has recently become one of the most exciting fields in computer vision and artificial intelligence. Object detection is a computer technology, related to computer vision and image processing, concerned with detecting an object in a digital image by its color and shape (Nahdi Saubari, 2019). Based on the description above, this research implements a system to detect the use of cloth bags with Roboflow and YOLO v5 (Liunanda et al., 2020). The hope is that this research will be able to properly detect the use of goodie bags and tote bags as shopping bags or for daily necessities, so that the resulting information can be helpful for those who need it.
MATERIALS AND METHODS The first step in this study was to collect a dataset of cloth bag images. The dataset is the core of all processes, because the entire subsequent process is determined by the quality and quantity of the dataset that has been collected. Datasets can be collected from various sources; currently, it is very effective to collect datasets from the internet or to take photos via smartphones (Kumari et al., 2020). Annotation is the second step in the whole research process. Annotations are performed on each dataset image containing an object of interest. Annotation is done by drawing a bounding box around each object, one by one. This box tells the machine which part of the image contains the object being searched for and which parts are to be ignored. The next step is preprocessing and augmentation of the dataset. This step is needed to improve the image quality so that it meets the standards required by the machine for learning. The images are rotated, cropped, and color-adjusted to make the model more effective and accurate. Preprocessing and augmentation are applied to the entire dataset, so it is possible to enlarge the dataset through this process. Training is carried out so that the machine learns to recognize the objects that have been annotated, preprocessed, and augmented. In this study, the training was conducted using YOLO v5. Training in YOLO v5 requires several parameters, including the image size and the number of epochs. An epoch is one full pass of the model over the training data; in general, the more epochs, the better the model can fit the data. The training process can be repeated to produce a suitable model. YOLO (You Only Look Once) is a real-time object detection algorithm (Lu et al., 2019). The basic idea behind the method is to divide the input image into a grid of cells and then, for each cell, predict the probability of the presence of an object in that cell and the bounding box coordinates for any object present. YOLO uses a single convolutional neural network (CNN) to perform object classification and bounding box regression, making it more efficient than other object detection methods that typically use two separate networks. The architecture of YOLO consists of a sequence of convolutional and max-pooling layers (Zhao et al., 2020), followed by several fully connected layers. The network's output is a tensor of shape (S, S, B × 5 + C), where S is the grid size, B is the number of bounding boxes per cell, and C is the number of object classes. Each cell in the grid produces B bounding box predictions and C class predictions, along with a confidence score for each bounding box (Liunanda et al., 2020). A key innovation of YOLO is its ability to make predictions at multiple scales by using anchor boxes of different sizes (Hurtik et al., 2022). This allows the network to detect objects of different sizes in the same image and helps improve the localization accuracy of the bounding boxes (Liunanda et al., 2020). YOLO is a very fast method for object detection and can process up to 45 frames per second (Adou et al., 2019) on a standard GPU. It is also relatively simple to train and use, making it a popular choice for many real-time object detection applications.
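To make the grid-based output concrete, the sketch below decodes a YOLO-style prediction tensor of shape (S, S, B × 5 + C) into boxes. The per-cell layout, grid size, and confidence threshold here are assumptions for illustration only and do not reproduce the exact YOLOv5 head.

```python
import numpy as np

def decode_yolo_grid(pred, num_boxes=2, num_classes=1, conf_thresh=0.5):
    """Decode a YOLO-style output tensor of shape (S, S, B*5 + C) into detections.

    Assumed layout per cell: [x, y, w, h, conf] * B followed by C class scores.
    Box centers (x, y) are offsets within the cell; w, h are relative to the image.
    """
    S = pred.shape[0]
    detections = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_scores = cell[num_boxes * 5:]
            class_id = int(np.argmax(class_scores))
            for b in range(num_boxes):
                x, y, w, h, conf = cell[b * 5: b * 5 + 5]
                score = conf * class_scores[class_id]
                if score < conf_thresh:
                    continue
                # Convert the cell-relative center to image-relative coordinates.
                cx = (col + x) / S
                cy = (row + y) / S
                detections.append((cx, cy, w, h, float(score), class_id))
    return detections

# Example: one class ("cloth bag"), a 7x7 grid, and 2 boxes per cell.
dummy = np.random.rand(7, 7, 2 * 5 + 1)
print(len(decode_yolo_grid(dummy)))
```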
After training on the dataset is completed, the resulting model needs to be tested on random images to verify whether it successfully detects the object. The research is considered complete if the model has succeeded in detecting the object; if not, the training process is repeated with different parameters (Agroui et al., 2017). Furthermore, the model's performance can be assessed with several metrics: Recall, Precision, F1, Intersection over Union, mean Average Precision, and Accuracy. Recall is obtained by calculating the ratio of the number of positive samples with correct classification results to the total number of positive samples (Fadilla et al., 2011), as shown in Equation (1); a high Recall score indicates that the class is recognized correctly (few FN). The Mean Average Precision (mAP) (Kumar & Srivastava, 2020) is the average value of the Average Precision (AP), and it forms an evaluation metric that measures the performance of an object detection algorithm. In order to measure the accuracy of the available models, a trial is carried out with random images from the testing dataset, and the analysis is carried out using Equation (4). RESULTS AND DISCUSSION The dataset used in this study consists of images of goodie bags and tote bags; the sample in this study is a set of 70 goodie bag and tote bag images. Images were obtained through camera capture and from Google Images. The images were then collected in one folder, as shown in Figure 1. The labeling begins by creating a bounding box on the goodie bag or tote bag object in the image. The bounding box is then given the class name "cloth bag" to indicate that this section is the object in question. Figure 4. Preprocessing and Augmentation Process. After all the data are annotated, the following process is preprocessing and augmentation. The annotation process aims to properly bind each labeled object even if the image is resized, rotated, or otherwise transformed. In this research, auto-orient, resize, and grayscale are used. Automatic orientation ensures that images are stored on disk in the same way that applications open them. Resize creates a consistent size for the images (in this case, a smaller size to speed up the training process). Grayscale converts an RGB color image with three channels into a black-and-white image (grayscale) with one channel (Oktavianto & Purboyo, 2018). The augmentation process used in this research consists of random flips, random 90-degree rotations, and random 10-to-30-degree rotations. The preprocessing and augmentation processes resulted in 102 images for the dataset, with the training set increased to 79 images, the validation set containing 18 images, and the testing set containing 5 images.
The final step in creating the dataset is to convert it into a format suitable for the training process in YOLOv5, as shown in Figure 5. As shown in Figure 6, training on the dataset is carried out after all images in the dataset have annotations for the object class to be detected. Several parameters are needed to perform training. The first parameter is the image size, which is set to 416 pixels; this image size is chosen to speed up the training process. The second is the number of epochs, i.e., the number of iterations over the training data, and after that the batch size, i.e., the amount of data to be learned in each iteration. These must be adjusted depending on the capabilities of the processing machine and the available time. The evaluation stage is carried out by comparing the ground truth box and the prediction box. The ground truth box is obtained from the dataset that has been labeled using the Roboflow tool. The prediction box is the detection result from the model that has been created. The dataset that has been labeled as ground truth can be seen in Figure 8 below. The model resulting from the training process creates the prediction box. In the corner of the box, there is a number that shows how confident the model is in its prediction for the image it is processing. If the number given by the model is close to 1 (100%), then the model is confident that the object is a cloth bag. The box loss (also known as the "coordinate loss") is a term that explicitly measures the localization loss. It penalizes the model when the predicted bounding box coordinates are inaccurate, encouraging it to learn to predict more accurate bounding boxes. In YOLO, the objective loss and box loss are used together to optimize the model's performance on the object detection task. The model's parameters are adjusted during training to minimize both loss functions in order to improve the model's ability to predict both the class labels and the bounding box coordinates of objects in an image. It can be seen in Figure 10 that the objective loss value decreased significantly over 100 epochs of training. This suggests that the model could learn the patterns in the training data and improve its performance over time. The optimization algorithm also appears to be working effectively, adjusting the model's parameters to minimize the loss. These findings suggest that our model can fit the training data well and make accurate predictions. Further research is needed to confirm these results and explore potential limitations or generalizability to other datasets. During training, we observed that both Precision and mean Average Precision (mAP) increased significantly over 100 epochs. Precision measures the ability of the model to correctly classify objects as positive or negative, while mAP measures the overall accuracy of the object detection task by taking into account both the Precision and the Recall of the model (Zhang et al., 2020). The improvement in Precision and mAP indicates that the model is becoming more accurate in identifying and classifying objects in the images. This is a positive sign, as it suggests that the model is learning the patterns in the training data and can generalize to new, unseen data.
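The evaluation described above compares ground-truth and predicted boxes; a minimal Intersection over Union (IoU) computation is sketched below. It assumes boxes are given as (x_min, y_min, x_max, y_max), which is an assumed box format for illustration rather than the paper's exact representation.

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x_min, y_min, x_max, y_max)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A ground-truth box and a predicted box that overlap partially.
print(round(iou((10, 10, 110, 110), (30, 30, 130, 130)), 3))
```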
CONCLUSION After completing all stages of design, testing, and analysis, it can be concluded that the goodie bag detection model has been successfully created. The detection model was created using the YOLOv5 algorithm. The dataset used consists of 102 goodie bag images. The model was trained for 100 epochs, with a training result of mAP@0.5 = 89.8%. In other words, it can be said that YOLO v5 can detect goodie bags very well. It is worth noting that the increase in Precision and mAP may not be linear throughout training. There may be fluctuations in the metric values, especially if the model is being trained using mini-batches or if the learning rate is not set optimally. However, the model will likely improve if the overall trend is upward. Further research is needed to confirm these findings and explore potential limitations or factors that may have contributed to the increase in Precision and mAP. It will also be interesting to see whether these improvements are sustained on test datasets and in real-world applications. The evaluation metrics are defined as follows. Recall, as in Equation (1), is Recall = TP / (TP + FN). Precision is calculated by dividing the number of positive samples with a correct classification result by the total number of samples predicted as positive, as in Equation (2): Precision = TP / (TP + FP). Here, True Positive (TP) means the actual value is positive and the predicted value is positive; True Negative (TN) means the actual value is negative and the predicted value is negative; False Positive (FP) means the actual value is negative but is predicted to be positive; False Negative (FN) means the actual value is positive but is predicted to be negative. When Recall is high and Precision is low, most of the positive instances are correctly recognized (low FN), but there are still a lot of False Positives (high FP) (Redmon et al., 2016). Meanwhile, if Recall is low and Precision is high, the model misses many positive samples (high FN) but produces few false positives (low FP). The F1 score is calculated from the weighted average of Precision and Recall, as in Equation (3): F1 = 2 × Precision × Recall / (Precision + Recall). Intersection over Union (IoU) is an evaluation metric to measure the accuracy of object detectors on a dataset. IoU can be used as long as there is a ground-truth bounding box on the object and a predicted bounding box for the same object; it is calculated by comparing the ground-truth bounding box with the bounding box predicted by the model that has been made. Figure 1. Goodie Bag and Tote Bag Images. The next step is to annotate all the images that have been collected. To perform annotations, the researchers used a tool, namely Roboflow Annotate, to add boxes around goodie bags and tote bags that did not have them. Figures 2 and 3 show examples of the annotation process on images in the dataset. Figure 5. Export of the dataset according to the YOLO v5 format. Training, Validation, and Testing. Figure 7. Image for Validation.
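As a concrete companion to the metric definitions above, here is a minimal sketch that computes Precision, Recall, and F1 from TP/FP/FN counts; the counts in the example are made-up values for illustration, not results from the paper.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute Precision, Recall, and F1 from raw detection counts."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return precision, recall, f1

# Hypothetical counts for the "cloth bag" class on a small test set.
p, r, f1 = precision_recall_f1(tp=18, fp=3, fn=2)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```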
After the dataset training process is complete, the next step is to carry out the validation process. This process aims to assess the performance of the trained model. Validation uses 18% of the images in the dataset; these settings are saved in the YAML file. Validation is used to evaluate the model's performance: the existing model is compared against the validation dataset to measure its level of accuracy. The results can be seen from the mAP value displayed in the evaluation process, run with the command shown in Figure 8. Figure 8. Ground Truths. If the number is close to 0, the model is not sure that the object is a cloth bag. The results of the box prediction can be seen in Figure 9 below. Figure 9. Prediction Results Box. After 100 epochs of training, the resulting model was obtained with the specifications reported above (mAP@0.5 of 89.8%).
Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective Fine-grained representation is fundamental to species classification based on deep learning, and in this context, cross-modal contrastive learning is an effective method. The diversity of species coupled with the inherent contextual ambiguity of natural language poses a primary challenge in the cross-modal representation alignment of conservation area image data. Integrating cross-modal retrieval tasks with generation tasks contributes to cross-modal representation alignment based on contextual understanding. However, during the contrastive learning process, apart from learning the differences in the data itself, a pair of encoders inevitably learns the differences caused by encoder fluctuations. The latter leads to convergence shortcuts, resulting in poor representation quality and an inaccurate reflection of the similarity relationships between samples in the original dataset within the shared space of features. To achieve fine-grained cross-modal representation alignment, we first propose a residual attention network to enhance consistency during momentum updates in cross-modal encoders. Building upon this, we propose momentum encoding from a multi-task perspective as a bridge for cross-modal information, effectively improving cross-modal mutual information, representation quality, and optimizing the distribution of feature points within the cross-modal shared semantic space. By acquiring momentum encoding queues for cross-modal semantic understanding through multi-tasking, we align ambiguous natural language representations around the invariant image features of factual information, alleviating contextual ambiguity and enhancing model robustness. Experimental validation shows that our proposed multi-task perspective of cross-modal momentum encoders outperforms similar models on standardized image classification tasks and image–text cross-modal retrieval tasks on public datasets by up to 8% on the leaderboard, demonstrating the effectiveness of the proposed method. Qualitative experiments on our self-built conservation area image–text paired dataset show that our proposed method accurately performs cross-modal retrieval and generation tasks among 8142 species, proving its effectiveness on fine-grained cross-modal image–text conservation area image datasets. Introduction Neural networks function as parameterized databases, typically driven by specific tasks, with each network dedicated to fulfilling a corresponding task. However, there are instances where our requirements transcend single-task boundaries. Consider the context of rapidly accumulating natural conservation area image data. We seek not only to retrieve a single image but also to attach essential descriptions when summoning an image. Furthermore, we aspire to employ textual descriptions as queries to sift through our image repository, locating images that align with our specific needs. This scenario necessitates simultaneous engagement with two tasks: cross-modal image-text retrieval and image captioning.
As these data accumulate over time, the volume becomes formidable. For example, the Snapshot Serengeti Project at Serengeti National Park, Tanzania deployed hundreds of camera traps to understand the dynamics of African animal species. From 2010 to 2013, the project collected 3.2 million images from 225 camera traps [1]. Manually processing the images and adding annotation labels was found to be very costly, given such a large amount of data; the project carried out by Ref. [2] required thousands of technical volunteers to work for 2-3 months to annotate image data. With the improvement in camera manufacturing technology, each camera deployed in the field can record more than 40,000 photos per day due to a single trigger event [3], and many camera traps have been deployed in related projects. Refs. [4,5] deployed hundreds of camera traps in their projects, and Refs. [6,7] deployed about 50 cameras at water sources in natural conservation areas and recorded more than 800,000 wildlife images within a few weeks. When we resort to two separate models to independently address these tasks, we encounter suboptimal outcomes. Specifically, the images retrieved through descriptive text queries may not align with the descriptive text generated by the model for the same image. In other words, these two models exhibit inconsistent encoding and decoding for the same data. Can we train a model that maintains consistency during both encoding and decoding, all while meeting the task requirements, thus mitigating semantic ambiguity within our cross-modal parameterized database? To address this, we propose a multi-task model for joint training in cross-modal image-text retrieval and image captioning. Through the collaborative optimization of parameters, we achieve cross-module information sharing, thereby facilitating semantic-consistency encoding and decoding modeling. Post-training, the encoder and decoder can be independently employed to perform cross-modal image-text retrieval and image-captioning tasks while maintaining semantic consistency between the two tasks. This is made possible because our model is constructed upon a foundation of a shared semantic-consistency representation space. Of course, the prerequisite is the construction of a dataset aligning with our specific needs and the judicious design of the model's structure. For ease of exposition, we name the proposed method ReCap (Retrieval and Captioning). As illustrated in Figure 1, we are able to retrieve corresponding images from the dataset using a customized textual input and subsequently generate descriptive text for the retrieved images. In this paper, our objective is to preserve semantic consistency in the context of fine-grained visual features and rich textual descriptions by jointly training a retriever and a captioner. Among our contributions are (3) introducing a method for information transfer through collaborative parameter solving within a multi-task module; and (4) presenting a technique for cross-modal alignment and semantic consistency preservation based on a shared representation space for cross-modal tasks.
Related Work The cross-modal semantic consistency between images and text in our research is primarily achieved through the model design and joint training of two tasks: cross-modal retrieval and image captioning. The essence of this approach lies in the optimization of the cross-modal shared space embedding of images and text. On one hand, optimization is performed from the perspective of cross-modal alignment between image and text entities. On the other hand, the model needs to reorganize tokens related to the input image representation in the shared space in an autoregressive manner and output them in natural language, thereby achieving semantic consistency between image and text descriptions at a broader and deeper semantic level. The encoder and decoder constitute the core modules of our designed model, involving popular techniques in cross-modal alignment and cross-modal representation fusion. Subsequently, the literature review will delve into both cross-modal representation alignment and cross-modal representation fusion. Cross-Modal Alignment Currently, research on the cross-modal alignment of image and text representations is predominantly centered around contrastive learning methods. These studies achieve the embedding and alignment of image and text representations in a shared cross-modal space by training encoders separately for each modality using a contrastive learning loss. ConVIRT [8] demonstrates the potential of contrastive objectives to learn image representations from text. Inspired by ConVIRT, CLIP [9] performs pre-training on a dataset containing 400 million image-text pairs and has become a milestone of vision-language models with excellent cross-modal representation. CLIP4Clip [10] demonstrates that the CLIP model achieves high performance in cross-modal retrieval. ALIGN [11] performs pre-training on massive noisy web data. The above methods all use contrastive loss, which is the most effective loss for cross-modal alignment [12][13][14][15]. Intuitively, performing cross-modal contrastive learning by treating corresponding visual and textual entities as inputs to image and text encoders, respectively, can achieve better cross-modal alignment. Therefore, some research works in this domain utilize object detection models as visual unit extractors. The extracted target pixel regions are then fed to the image encoder for contrastive learning with the text encoder, enhancing the performance of cross-modal representations. Often, these studies require the integration of a pre-trained object detection model at the front end of the visual data input [16][17][18]. An intuitive approach is to align the visual features of the region where the object is located with the label. For example, Oscar [19] uses Faster R-CNN [20] to detect the object in the image and then aligns it with the word embeddings of the object tags. However, such methods are not suitable for fine-grained cross-modal alignment, as the object tags are too limited to align the vision features suitably. With a properly designed prompt, CLIP can be used for open-vocabulary classification, which solves the problem of limited object tags. ViLD [21] designed an open-vocabulary object detection model by knowledge distillation from CLIP. Ref. [22] achieved language-driven zero-shot semantic segmentation by directly using the representation of CLIP. Groupvit [23] implements unsupervised image segmentation by using the text representation of CLIP as a pseudo label.
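To make the contrastive objective shared by these dual-encoder methods concrete, the sketch below computes a symmetric InfoNCE-style image-text contrastive loss in PyTorch. The embedding dimension, batch size, and temperature are illustrative assumptions rather than the settings used by any of the cited models.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so that the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)         # image -> matching text
    loss_t2i = F.cross_entropy(logits.t(), targets)     # text -> matching image
    return (loss_i2t + loss_t2i) / 2

# Toy batch of 8 pairs with 256-dimensional embeddings.
img = torch.randn(8, 256)
txt = torch.randn(8, 256)
print(clip_style_contrastive_loss(img, txt).item())
```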
Contrastive learning with dual encoders, while excelling in cross-modal retrieval tasks involving images and text, encounters challenges in adapting to fine-grained cross-modal retrieval tasks with natural conservation images due to the following reasons.First, certain species' visual features in natural conservation images exhibit high intra-class and interclass similarities, resulting in dense distributions of these highly similar representations in the shared space.This necessitates encoders with finer discriminative capabilities.Second, these encoders, trained on image-text pair datasets using contrastive learning, are often constrained by the representation of text descriptions alone and struggle to adapt well to cross-modal retrieval tasks where the semantics are similar but the expression methods differ. Cross-Modal Fusion With the successful application of the transformer [24] architecture in the fields of natural language processing, computer vision, and multi-modal, ViLT [25] proposes a transformer-based multi-modal encoder which focuses on cross-modal feature fusion, and takes the masked language modeling loss [26] for visual embedding as future work.This work has been achieved by VL-BEiT [27] after VIT [28] and MAE [29].From then on, a big convergence of language, vision, and multi-modal pretraining has emerged.BLIP [30] proposes a new vision-language pre-training framework that transfers flexibly to both vision-language understanding and generation tasks.The multi-way transformer proposed by BEiT-V3 [31] has achieved state-of-the-art transfer performance in both vision and vision-language tasks.FLIP [32], which is called Fast Language-Image Pre-training, presents a simple and more efficient method for training CLIP by dropping a part of masked tokens.VLMo [33] jointly learns a dual encoder and a fusion encoder with a modular Transformer network.Coca [34] is a minimalist design to pre-train an image-text encoderdecoder foundation model jointly with contrastive loss and captioning loss like CLIP and SimVLM [35], respectively. Cross-modal feature fusion is not suitable for cross-modal retrieval tasks due to the lack of effective optimization for unimodal encoders.However, when applied to imagecaptioning tasks for the same input image, this method generates descriptions that share the same semantics but have different expressions.This indicates that such methods contribute to solving cross-modal semantic consistency.Our research goal is to explore the joint application of cross-modal feature fusion and cross-modal feature alignment, aiming to leverage their respective strengths and compensate for weaknesses, fostering mutual enhancement.This objective is emphasized in the Method section for in-depth discussion. 
Design Concept and Proposed Methodology The overarching design strategy is to develop and train a pair of image-text encoders that extract representations with cross-modal semantic consistency, and the feature point distribution in the shared space accurately reflects contextual relevance.Based on this strategy, we designed a pair of encoders for cross-modal contrastive learning, consisting of an image encoder and a text encoder.After considering computational costs and performance trade-offs, we chose to obtain a pair of encoders through distillation that can be freely modified according to the experimental requirements (refer to the Appendix A.1 for detailed information).To promote cross-modal semantic consistency, we introduced the method of momentum encoding.However, the input data for cross-modal momentum encoding come from different modalities and lack mutual information, making it challenging to maintain consistency.To address this issue, we adopted a multi-task perspective and utilized a residual attention network to fully integrate representations from both modalities before outputting the momentum encoding queue.Finally, we trained the cross-modal encoder using a contrastive learning approach with the obtained momentum encoding queue to achieve fine-grained cross-modal semantic consistent representations.The overall architecture of the proposed method is illustrated in Figure 2. Before introducing the cross-modal momentum encoder, we first present the residual attention network and the design of the objective function. Residual Attention Neuro-Network Based on Reference [36], we designed a residual attention network as illustrated in Figure 3.For detailed derivation of its input and output, please refer to Appendix A.2.The primary training objective for Cross-modal Res-Att is masked language modeling (MLM).In this context, let us denote a caption as CnP and the set of randomly masked positions as M CnP .The MLM loss can be formally defined as follows: where Captioner Training Objectives The captioner is an autoregressive language generation model which operates on the principle of predicting the next token based on the input sequence and previously generated tokens.For example, given an initial input feature sequence F k = F k 1 , . . .F k u and the first token generated, denoted as C k 1 , the objective function is log p θ (C k 1 |F k ), and the generation process for , and so on.Therefore, the objective function of an autoregressive language generation model is represented by Equation ( 2): where θ represents the trainable parameters, and the input sequence F k can be visual features, language features, or a combination of both. 
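The displayed MLM loss and the autoregressive objective referred to as Equation (2) above were lost in extraction. As a hedged reconstruction only, consistent with the surrounding definitions rather than a verbatim restoration of the authors' notation, these objectives would typically take the form

$$\mathcal{L}_{\mathrm{MLM}} = -\,\mathbb{E}\Bigg[\sum_{i \in M_{C^n_P}} \log p_\theta\big(C^n_{P,i} \mid C^n_{P,\setminus M_{C^n_P}},\, V\big)\Bigg], \qquad \mathcal{L}_{\mathrm{cap}} = -\sum_{t=1}^{T} \log p_\theta\big(C^k_t \mid C^k_{<t},\, F^k\big),$$

where $V$ denotes the visual input to Cross-modal Res-Att, each masked caption token is predicted from the unmasked context, and each generated token $C^k_t$ is predicted from the preceding tokens and the input feature sequence $F^k$.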
Image-Text Contrastive Loss Function Following [8], the image-text contrastive learning (ITC) formulates the loss function according to InfoNCE [37].Let T denote a certain species class embedding and V denote its visual embedding, then we have the embedding pair (V, T).We use (V i , T i ) to denote the i-th pair of positive samples and (V i , T j )j ̸ = i to denote a pair of negative samples.The ITC training objective of ReCap consists of two loss functions to make the distance of the positive pair closer than the negative one in the embedding space.Since ITC is asymmetric for each modality, it needs to be computed separately from both directions for images and text.The contrastive loss for the i-th pair in the image → text direction: where sim(•) is the cosine similarity, i.e., sim(a, b) = a ⊤ b/(∥a∥∥b∥), and τ is a temperature parameter.Similarly, we formulate the text → image loss as: Finally, the training objective is a weighted sum: where λ ∈ [0, 1] is a hyperparameter weight, and N is the batch size. Cross-Modal Momentum Encoder In reference to the MoCo momentum encoding [12], we propose an offline encoder training method.Due to the high compression and ambiguity of textual information, compared to visual information, which is sparse, many detailed visual features are overwhelmed by dense textual information during cross-modal learning.To address this, we employ a residual attention network to repeatedly fuse visual features with textual features in a residual manner, increasing the proportion of visual information in the deep neuronetwork's forward channel.This enhances visual information redundancy to mitigate the drowning of sparse visual information during fusion with textual information.Additionally, because images contain factual information and exhibit invariance, aligning variable and ambiguous linguistic features around factual information contributes to eliminating linguistic feature ambiguity in context during cross-modal alignment.Consequently, this results in semantic consistency embedding, with visual information as the clustering center in the cross-modal representation space. Momentum Encoder In brief, the principle of the momentum encoder is that the training of the encoder in unsupervised learning can be simplified as a look-up table problem.In other words, an encoded query should have high similarity to its corresponding key and low similarity to other keys.This simplifies the entire process to minimizing the contrastive loss.During the solving process, contrastive learning requires a queue containing keys for both positive and negative samples to look up for queries.To maintain the consistency of encoding for positive and negative samples in the queue, the momentum encoder is employed. 
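The displayed image-to-text and text-to-image contrastive losses and their weighted sum, referenced in the Image-Text Contrastive Loss Function subsection above, did not survive extraction. A standard InfoNCE-style form matching the description (cosine similarity, temperature $\tau$, weight $\lambda$, batch size $N$) is given here as a hedged reconstruction:

$$\ell_i^{v \to t} = -\log \frac{\exp\big(\mathrm{sim}(V_i, T_i)/\tau\big)}{\sum_{j=1}^{N} \exp\big(\mathrm{sim}(V_i, T_j)/\tau\big)}, \qquad \ell_i^{t \to v} = -\log \frac{\exp\big(\mathrm{sim}(T_i, V_i)/\tau\big)}{\sum_{j=1}^{N} \exp\big(\mathrm{sim}(T_i, V_j)/\tau\big)},$$

$$\mathcal{L}_{\mathrm{ITC}} = \frac{1}{N}\sum_{i=1}^{N}\Big[\lambda\,\ell_i^{v \to t} + (1-\lambda)\,\ell_i^{t \to v}\Big].$$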
The encoding update rule for the momentum encoder is shown in Equation (6), where the momentum parameter m ∈ [0, 1) is used.The query encoding θ q is updated based on gradient back propagation, while the key encoding θ k is updated using momentum.Typically, m takes a value greater than 0.9, which is equivalent to taking a moving average of the encoding updates.The slow-changing momentum encoder reduces the difference between the encoding of positive and negative samples in the queue, thereby improving the cross-task transfer performance of the encoder optimization process based on momentum in contrastive learning: Offline Cross-Module Information Propagation The cross-module joint solving of parameters constitutes the inter-module propagation of information.Deep learning models are essentially parameterized databases, with relationships among data implicitly encoded within the model's parameters.Therefore, cross-module operations on parameters represent the propagation of information across modules. Firstly, as illustrated in Figure 4, we feed the image-text paired dataset to the unimodal encoders, obtaining image encodings (Ve0, Ve1, Ve2 . ..) through the ITC loss.Subsequently, as depicted in Figure 5, we feed (Ve0, Ve1, Ve2 . ..) to the Res-Att module.Based on the momentum encoding method proposed in Section 3.4.1, a cross-modal momentum encoding queue is obtained using the joint loss function shown in Equation (7).Specifically, we obtain a visual momentum encoding queue (Vm0, Vm1, Vm2 . ..) and a language momentum encoding queue (Tm0, Tm1, Tm2 . ..).Then, as shown in Figure 6, we feed back the momentum encodings to update the unimodal encoders: The loss function for the visual encoder in this training stage is represented by Equation (8), while the loss function for the language encoder is represented by Equation ( 9).The overall objective function is depicted by Equation (10): where λ ∈ [0, 1] is the hyperparameter weight, and N is the batch size.Repeating these steps forms a closed loop for cross-modal momentum encoder training, which can be conducted offline.It should be noted that our proposed offline training method needs to be accompanied by the decoupling of the momentum encoding queue we adopted.This decoupling allows for independent settings of batch size and the length of the momentum encoding queue.For instance, during training, we used a batch size of 32 and a queue length of 4096.This enabled us to contrast more negative samples, facilitating the model to learn representations closer to the domain distribution.The length of the queue can be adjusted based on computational resources.In summary, the decoupling + offline strategy balances computational resources and model performance. 
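To make the momentum update rule (6) and the decoupled queue concrete, the following is a minimal PyTorch-style sketch, not the authors' code. The encoder modules, embedding dimension, and temperature are illustrative placeholders; only the momentum coefficient (0.995), batch size (32), and queue length (4096) are taken from the text.

```python
import torch
import torch.nn.functional as F

m = 0.995          # momentum parameter from the text
queue_len = 4096   # queue length, decoupled from the batch size
dim = 256          # embedding dimension (illustrative assumption)

encoder_q = torch.nn.Linear(512, dim)               # stand-in query encoder
encoder_k = torch.nn.Linear(512, dim)               # stand-in key (momentum) encoder
encoder_k.load_state_dict(encoder_q.state_dict())   # start from identical weights

queue = F.normalize(torch.randn(queue_len, dim), dim=1)  # momentum-encoding queue
ptr = 0

@torch.no_grad()
def momentum_update():
    # theta_k <- m * theta_k + (1 - m) * theta_q  (Equation (6))
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

@torch.no_grad()
def enqueue(keys):
    # Overwrite the oldest entries; the queue length is independent of the batch size.
    global ptr
    idx = (torch.arange(keys.shape[0]) + ptr) % queue_len
    queue[idx] = keys
    ptr = (ptr + keys.shape[0]) % queue_len

# One contrastive training step on a batch of 32 (placeholder unimodal features as input):
batch = torch.randn(32, 512)
q = F.normalize(encoder_q(batch), dim=1)
with torch.no_grad():
    momentum_update()
    k = F.normalize(encoder_k(batch), dim=1)
logits = q @ torch.cat([k, queue]).t() / 0.07   # positives on the diagonal of the first block
loss = F.cross_entropy(logits, torch.arange(q.shape[0]))
loss.backward()                                  # only encoder_q receives gradients
enqueue(k)
```

In the offline variant described above, the enqueued keys would come from the Res-Att fused momentum encodings rather than directly from a unimodal key encoder; the sketch illustrates only the update and queue mechanics.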
Why Contrastive Learning and Momentum Encoding The objective of contrastive learning is specifically to distinguish between positive and negative samples.If the encodings of positive and negative samples come from different encoders or different training stages of the same encoder, the model may learn more about the differences between the encoders rather than the differences between the data.To ensure a fair comparison between positive and negative samples and optimize the features extracted by the encoder, consistency in the encoding of positive and negative samples needs to be maintained in a long queue.For example, in our model training, the length of the momentum encoding queue is set to 4096.Essentially, contrastive learning treats each sample as a multi-class classification task, thereby enhancing the flexibility of embedding cross-modal contextual information in a shared space.However, due to the inherent diversity and ambiguity of natural language expressions, ambiguity is inevitable.This challenge is particularly pronounced in image-text paired datasets, where the same image can be interpreted from various perspectives, leading to significantly different language descriptions with varying semantics.Therefore, there are significant challenges to achieving cross-modal semantic consistency in representation.From a model structure perspective, cross-modal representation is determined by the encoder, and the compression of data information by the encoder inevitably leads to information loss.This requires a balance between encoding efficiency and encoder performance. Image captioning is a standardized task for cross-modal understanding, where the model generates corresponding language descriptions based on input image representations.The task inherently involves calculating the similarity between image and text representations, necessitating a shared semantic space for image-text cross-modal semantic alignment, similar to cross-modal retrieval tasks.In other words, sharing a semantic space is a fundamental prerequisite for both cross-modal generation and cross-modal retrieval tasks.When the factual information and diversity/ambiguity of natural language descriptions in images are projected into a shared semantic space, the goal is to enhance the mutual information between the two modal representations.As the mutual information between modal representations increases in this space, the performance of cross-modal retrieval and cross-modal generation models based on this representation space improves.To optimize cross-modal representation and shared space embedding for the captioner's cross-modal understanding, we propose a multi-task perspective involving the joint training of image-captioning and cross-modal retrieval tasks.This approach ensures primary consistency in the shared representation space between the two tasks, thus facilitating improved cross-modal mutual information.If the two tasks are trained separately, although they may project into the same-dimensional space, they contain different information without an information-sharing process between the modalities, thus failing to effectively reduce discrepancies between the modalities.To establish an information channel, we employ dual momentum encoders.However, directly comparing the image momentum encoder and the text momentum encoder through contrastive learning faces challenges in ensuring cross-modal semantic consistency due to the different data properties between the two modalities.To synchronously and consistently 
update the momentum encodings of both modalities across modes, we propose using a residual attention network as a channel for cross-modal information exchange.Considering the sparsity of image data and the abstract nature of language data, we ensure that sparse data contribute proportionately to the information during the deep network's feedforward process by using image features as residuals.Through cross-modal information fusion and momentum encoding, we obtain momentum encoding queues with higher cross-modal mutual information, resulting in better performance of the image-text encoder in cross-modal semantic consistency. Experiments Our method is named ReCap (Retrieval and Captioning).In this section, we primarily validate the effectiveness of our proposed method on standardization tasks using public datasets.Specifically, these tasks encompass image captioning and image-text retrieval on the COCO dataset, classification tasks on the iNaturalist2018 dataset, as well as imagecaptioning and image-text retrieval tasks on the iNaturalist2018 dataset. Dataset Settings We utilized the Karpathy split [38] of the MSCOCO dataset [39], comprising 123,000 images, with each image accompanied by five sentences as annotations.The iNaturalist2018 dataset comprises 8142 distinct species, each serving as an individual image classification category.It encompasses a total of 437,513 training images and 24,426 validation images.As this dataset initially lacked caption annotations, we conducted a comprehensive annotation effort, providing five sentences of description for each image.Furthermore, we annotated both the common name and the Latin name for each species.The specific process of enhancing the INaturalist2018 dataset is detailed in Appendix A.3.In Table 1, we present some examples of our annotated data. Images Captions Two geese are walking on the shore of a pond. A bunch of yellow flowers are sitting in a field. A Catasticta nimbice is sitting on an Ageratum houstonianum in the sun. An Aepyceros melampus grazing in a field. Implementation Details We utilized eight NVIDIA 3090 24G GPUs for the image-text encoder contrastive learning training process, with a queue length set to 4096 and a momentum parameter of 0.995.We employed the AdamW optimizer with a decay weight set to 0.02.The learning rate was the warm-up set to 1 × 10 −4 for the first 1000 iterations and decayed in a cosine function manner to 1 × 10 −5 for the subsequent iterations.The total training duration for the model was approximately 127 h. Evaluation Metrics The image caption model employs four widely recognized evaluation metrics, namely, BLEU (Bilingual Evaluation Understudy) [40], METEOR (Metric for Evaluation of Translation with Explicit ORdering) [41], CIDEr (Consensus-based Image Description Evaluation) [42], and SPICE (Semantic Propositional Image-Captioning Evaluation) [43].Among these, BLEU4 segments sentences into four-word chunks to gauge the descriptive accuracy of the model-generated captions.METEOR, building on the foundations of BLEU, addresses the issue of excessive word matching while emphasizing word recall and precision. 
CIDEr, primarily applied in the domain of image description, employs TF-IDF (Term Frequency-Inverse Document Frequency) to weigh each sentence fragment.It encodes the frequency (E r ) of a fragment in the reference description and the frequency (E c ) in the generated description.Subsequently, it computes the similarity between E r and E c to generate an evaluation score for the model.SPICE, on the other hand, is an evaluation metric based on scene graphs and semantic concepts.It assesses the extent to which the model-generated description aligns with the entities, attributes, and relationships present in the image. The image classification task on iNaturalist has only one label for each picture, denoted as g i .The result predicted by the model is denoted as p i , and the error rate is where and the total score is score = 1 Experiment Project Selection The core idea of our proposed method is briefly summarized as follows.Firstly, through the joint training of cross-modal retrieval and image-captioning tasks, we obtain a momentum-encoded queue with a contextual understanding of image-text pairs.This serves as an information bridge to train a cross-modal image encoder and a cross-modal text encoder using contrastive learning methods.This pair of encoders forms the basis for cross-modal fine-grained semantic consistency, as they determine the extraction and embedding of representations of various modal data into a shared cross-modal semantic space distribution.After training, our model yields an image encoder, a text encoder, and a captioner, which are the three key modules of ReCap.Due to the absence of a standardized task on a common dataset that can comprehensively evaluate our proposed method, we selected several standardized tasks on public datasets to individually test the performance of the three key modules of ReCap.Conducting experiments on standardized tasks on public datasets facilitates comparison with state-of-the-art (SOTA) methods on leaderboards, which, on the one hand, validates the effectiveness of the proposed method and, on the other hand, allows for a level measurement through comparison.Specifically, the experimental section validates the effectiveness of the captioner through the imagecaptioning task on the MSCOCO dataset as shown in Table 2.The effectiveness of the image encoder and text encoder's cross-modal representations is verified through cross-modal retrieval tasks as shown in Table 3.The effectiveness of the image encoder is validated through the image classification task on the iNaturalist 2018 dataset as shown in Table 4. Additionally, Tables 2-4 in the experimental section reflect the proposed method's multitask perspective.Table 3. Quantitative analysis of cross-modal retrieval on MSCOCO dataset (%). 
Evaluation on the MSCOCO Dataset We trained models on the MSCOCO dataset to perform image captioning and imagetext retrieval tasks in order to validate the effectiveness of the proposed method.Table 2 presents the performance comparison of ReCap with state-of-the-art models in the context of image captioning.Here, B4 denotes BLEU-4, C represents CIDEr, M stands for METEOR, and S corresponds to SPICE.Further details are provided in Section 4.3.Table 3 illustrates the performance comparison of ReCap in image-text retrieval tasks against high-level models.Here, I2T denotes image-to-text retrieval, while T2I represents text-to-image retrieval.R@1, R@5, and R@10 respectively indicate recall rates for the top 1, top 5, and top 10 retrieval recommendations.The experimental results demonstrate that ReCap outperforms several state-of-the-art models, thereby validating the efficacy of the proposed method. Based on the comparative data in Table 2, it is evident that ReCap demonstrates improved performance compared to others.Taking the scores in the B4 column as an example, the ReCap score is increased by nearly seven points.This improvement can be attributed to two main enhancements: firstly, the incorporation of an open vocabulary, meaning there is no restriction on the number of categories; and secondly, the Res-Att network excels in the fusion of cross-modal features, effectively emulating the representation style of the dataset.This results in a higher overlap between the generated captions and the ground truth. As shown in Table 3, in the retrieval task of image to text, the R@1 score exhibits an improvement of approximately 8 to 14 percentage points compared to others.In the text-toimage retrieval task, there is an improvement of approximately 1 to 12 percentage points compared to others.This indicates a significant effect of the proposed method in the crossmodal alignment of image and text features.The improvement in text-to-image retrieval performance is relatively challenging due to the high information compression in textual data and the sparse nature of image data.When calculating mutual information, the same textual representation often exhibits similarity to a larger number of image representations.For instance, different models of cars appearing in images with similar backgrounds would have high similarity.To effectively differentiate between the brand and model of cars in the image, a finer-grained cross-modal alignment is required for text-to-image retrieval.Therefore, adopting an open vocabulary approach during the training of the image encoder is essential, as it avoids the limitations to a finite set of categories and proves crucial in the cross-modal modeling tasks involving image and text. Evaluation on the iNaturalist Dataset In accordance with the introduction, the motivation behind this study is to address the need for the cross-modal processing of vast quantities of imagery data from natural conservation.In order to assess the cross-modal alignment of the model's representations between images and text, we opted to employ the image classification task on the iNatural-ist2018 dataset.This section's experiments were conducted independently using the image encoder and text encoder.Notably, the image encoder was originally designed without a classification head.To achieve classification, we employed a method that involves compar-ing the representations output by the image encoder with the prompt encodings generated by the text encoder. 
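A minimal sketch of this classification-by-similarity procedure is shown below, using the prompt template described immediately afterwards. The text_encoder and image_feats interfaces are illustrative placeholders rather than actual ReCap APIs, and the scoring function is a hedged reconstruction of the error-rate and score formulas dropped from the Evaluation Metrics subsection (one label g_i per image, prediction p_i, e_i = 1 when p_i differs from g_i, and score = 1 - (1/N) sum of e_i).

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_feats, class_names, text_encoder):
    # image_feats: (B, D) image-encoder outputs; class_names: the dataset's category names.
    prompts = [f"a photo of {name}" for name in class_names]
    text_feats = F.normalize(text_encoder(prompts), dim=1)   # (C, D), assumed interface
    image_feats = F.normalize(image_feats, dim=1)
    sims = image_feats @ text_feats.t()                      # cosine similarities (B, C)
    return sims.argmax(dim=1)                                # most similar category per image

def classification_score(predictions, labels):
    # Hedged reconstruction of the metric: e_i = [p_i != g_i], score = 1 - mean(e_i).
    errors = (predictions != labels).float()
    return 1.0 - errors.mean().item()
```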
The format of the prompts used is 'a photo of <category>', where 'category' corresponds to the category names in the dataset.In other words, for as many categories as there are in the dataset, there are corresponding prompts.In essence, our image classification approach assigns an image to the category with the highest similarity to its image representation.Specific experimental results are presented in Table 4.The experimental outcomes demonstrate that ReCap outperforms several state-of-the-art models, thereby confirming the the proposed method's cross-modal alignment capability between image features and textual representations for species. As shown in Table 4, ReCap demonstrates a performance improvement of approximately 1 to 7 percentage points compared to others.This indicates that our proposed method, employing an open vocabulary approach, is capable of handling image classification tasks on the iNaturalist Dataset.The experimental results not only affirm the effectiveness of our method in cross-modal representation alignment but also validate the feasibility of applying this approach to open vocabulary image classification tasks. Evaluation on the NACID Dataset After verifying the effectiveness of the above, the model was trained on the NACID dataset Appendix A.3 and the two tasks of image captioning and image-text retrieval were evaluated.The model performance scores are shown in Tables 5 and 6.Through all the experimental results, it can be seen that the model has the ability to perform image captioning and image-text retrieval on the enhanced INaturalist2018 image-text pair dataset, which verifies the effectiveness of the ReCap model proposed in this paper. Qualitative Evaluation Next, we conducted qualitative experiments on cross-modal retrieval and generation using the NaCID test set.Additionally, to validate the effectiveness of the proposed method on natural protected area image datasets, we selected three image datasets from natural protected areas for zero-shot experiments. The top 5 results for text-to-image retrieval are illustrated in Figure 7.Both non-target images and target images contain relevant content related to grassland and the target species.From the perspective of our application, we seek relatively open-ended retrieval results.This approach allows the model to continuously improve through small-sample learning in real-world applications.If the model were confined to strict one-to-one retrieval, it would lack practical utility. As shown in Table 7, the captions generated by the model align well with the content of the test images, and the species names are consistent with the Latin names used in the training set.This intuitively demonstrates the model's learning capability in the domain of image-text cross-modal alignment.In the fourth prediction, the bear species (Ursus arctos horribilis) occurred 24 times in the training set, but there were no caption annotations for "cubs" in the training data prior to GPT-2 fine-tuning.This underscores the importance of pre-existing knowledge within NLP models for image-captioning tasks, as it can provide additional information that is subsequently expressed in the form of generated descriptions.In the context of our approach, aligning image representations cross-modally in the pretrained NLP decoder representation space leverages the rich knowledge of the NLP decoder for a deeper understanding of the images.There are some red Castilleja indivisa in the grass. A Libellula quadrimaculata is flying over the water. 
A Ursus arctos horribilis and her cubs on a green field. We conducted zero-shot experiments using three datasets related to natural conservations; refer to Appendix A Table A2.The experimental procedure was as follows: Firstly, we designed sentences resembling "A photo of <species>" based on the dataset content.Subsequently, we performed text-to-image retrieval with these sentences and provided the retrieved images to the captioner for generating descriptive text.The experimental results are presented in Table 8.The experimental results indicate that the species names on the retrieval side, the species within the images, and the species names on the generation side are all consistent.This observation underscores that the features extracted by the image encoder and text encoder are aligned, and the semantics of the encoder and decoder are in harmony, visually demonstrating the model's capabilities in cross-modal alignment and semantic consistency between text and images.Examining the generated captions reveals the decoder's capacity for systematic descriptions of foreground and background elements.This is a result of the combined influence of the model's prior knowledge and fine-tuning.A close-up of an Aglais io is sitting on top of a flower. Ablation Study The results of the ablation experiments are presented in Table 9.In the table, the term "C+C" indicates a direct connection between the encoder and captioner, where the visual representations generated by the encoder are used as input for the captioner."C+R+C" signifies the bridging of encoder and captioner through the Res-Att module.From the experimental results in the "C+C" row, it can be observed that the I2T and T2I performance on both datasets is relatively consistent, maintaining an average level.In comparison to the performance of ReCap, there is a slight decrease in T2I, while I2T and image captioning exhibit more substantial performance degradation.This suggests that when the encoder and decoder operate independently, the model's performance heavily relies on the knowledge inherited from pre-trained models and the training process.However, without a channel for information transfer between them, they cannot leverage distinct task perspectives from each other to enhance each other's performance. Looking at the experimental results in the "C+R+C" row, there is a noticeable improvement in the performance of image captioning compared to the "C+C" row.This indicates that after a finer-grained cross-modal alignment of image and text representations at the micro-level, it becomes more favorable for the captioner to generate descriptions for images.It is evident that the Res-Att module significantly contributes to the optimization of cross-modal representation alignment and the refinement of shared semantic space embedding for text and images. ReCap and the "C+R+C" configuration only differ in the presence of a momentum feedback loop in their model structures.From the experimental results, it is evident that there are overall performance improvements in the model, particularly in the I2T and image-captioning tasks.This suggests that the feedback information on the decoding side significantly aids in enhancing the performance of the encoder, resulting in substantial gains in the cross-modal alignment of image and text representations. 
The improvement in image-captioning performance further illustrates that, after optimizing the encoder's performance, it is possible to further enhance the decoder's performance.From the perspective of data propagation, the encoder is at the front end, and the captioner is at the back end.With the addition of momentum feedback and Res-Att-based cross-modal fusion, the two form a feedback loop for mutual optimization. Conclusions The image-text representation initially undergoes coarse alignment through the encoder, followed by fine-grained alignment by the decoding side consisting of Res-Att and the captioner.Subsequently, the encoder is momentum updated based on the decoding side information, forming feedback from the decoding side to the encoding side, enhancing the quality of both the encoder and caption generation.The essence of this process lies in the sharing of a semantic space, where the decoder imparts its understanding of embedding similarities and categorization to the encoder.These insights are propagated to the encoder's network parameters through momentum-based backpropagation.Furthermore, contrastive learning on the encoding side plays a crucial role.As mentioned earlier, the classification in contrastive learning is open-ended, with as many categories as there are samples.Such a classification method has no upper limit on granularity, compelling the encoder to learn subtle distinctions among samples as much as possible.Achieving this solely from the encoding side would be information bottlenecked, and this is where feedback from the decoding side effectively bridges the information gap.Experimental results also confirm the contribution of prior knowledge in the decoder during this process.In summary, the feedback from the decoding side, the prior knowledge in the decoder, and momentum updates collectively enhance the quality of feature extraction in the encoder.All of this coalesces into a shared semantic space embedding for the encoder-decoder, where both entities possess a shared and aligned embedding space, embodying the essence of semantic consistency. The performance of both cross-modal retrieval in image-text pairs and generative models fundamentally depends on the quality of shared space embeddings.The main contribution of our proposed method lies in the effective fusion of the advantages of both tasks in the cross-modal shared space embedding of images and text through thoughtful model design.This approach is particularly suitable for scenarios where there are strict alignment requirements between the objects in the image and the vocabulary in the text.Moreover, it demands that the model can further associate the input image representation with a more extensive and semantically rich textual description along a longer logical chain.Our proposed method is well suited for such scenarios. Figure 1 . Figure 1.An application instance of the ReCap model.The contributions of this work include (1) the creation of a dataset of image-text pairs for natural conservation; (2) proposing a combined offline and online training approach;(3) introducing a method for information transfer through collaborative parameter solving Figure 2 . Figure 2.An overview of training ReCap for cross-modal semantics consistency. Figure 7 . Figure 7. Examples of text-to-image retrieval on validation dataset. Table 1 . Samples of nature conservation image-text pair dataset. Table 7 . Examples sentences generated by ReCap for test images. Table 8 . Examples of ReCap zero-shot retrieval and captioning. 
Table 9. Ablation study of ReCap on the MSCOCO and iNaturalist 2018 datasets.
A thin-film extensional flow model for biofilm expansion by sliding motility

In the presence of glycoproteins, bacterial and yeast biofilms are hypothesized to expand by sliding motility. This involves a sheet of cells spreading as a unit, facilitated by cell proliferation and weak adhesion to the substratum. In this paper, we derive an extensional flow model for biofilm expansion by sliding motility to test this hypothesis. We model the biofilm as a two-phase (living cells and an extracellular matrix) viscous fluid mixture, and model nutrient depletion and uptake from the substratum. Applying the thin-film approximation simplifies the model, and reduces it to one-dimensional axisymmetric form. Comparison with Saccharomyces cerevisiae mat formation experiments reveals good agreement between experimental expansion speed and numerical solutions to the model with O(1) parameters estimated from experiments. This confirms that sliding motility is a possible mechanism for yeast biofilm expansion. Having established the biological relevance of the model, we then demonstrate how the model parameters affect expansion speed, enabling us to predict biofilm expansion for different experimental conditions. Finally, we show that our model can explain the ridge formation observed in some biofilms. This is especially true if surface tension is low, as hypothesized for sliding motility.

Introduction

Micro-organisms can form colonies with fascinating and complex spatio-temporal patterns. As these colonies are readily grown in experiments, bacteria and fungi are often used as model organisms to investigate the mechanisms of pattern formation in large collections of cells. Identifying the contributions of different candidate mechanisms to the self-organization process is an important problem in developmental biology [1]. For example, Turing [2] and Keller & Segel [3] famously showed that heterogeneous patterns can develop from a homogeneous initial state as a result of reaction and diffusion of chemicals. Murray [1] proposed a more general mechanochemical theory, where chemical signals combine with mechanical interactions between cells and their environment to give rise to spatial patterns.
As these mechanisms can interact in a complex manner, pattern formation in micro-organisms continues to be an active field of research. Reynolds & Fink [4] showed that the bakers' yeast Saccharomyces cerevisiae can form mats when grown on semi-solid agar. These mats consist of cells embedded in a self-produced extracellular matrix (ECM), and established S. cerevisiae as a useful model organism for fungal biofilm formation. We previously showed that a minimal reaction-diffusion model for nutrient-limited growth alone could reproduce the floral pattern observed in mat formation experiments [5]. However, experimental observations also led Reynolds & Fink [4] to hypothesize that yeast biofilms expand by sliding motility. This involves a sheet of cells spreading as a unit due to the expansive forces of cell growth [6], and reduced friction between the cells and substratum [7], and is not considered in previous models. In this work, we use a combination of mathematical modelling and experiments to investigate the extent to which sliding motility contributes to yeast biofilm formation. In §1a,b, we review the existing literature on yeast biofilms and the mathematical modelling thereof. In §2, we derive a two-phase (living cells and the ECM) mathematical model for biofilm expansion. We then exploit the thin biofilm geometry to obtain a one-dimensional, radially symmetric thin-film approximation to the general model in §3. We compute numerical solutions to the thin-film model in §4, and show that it can reproduce the expansion speed observed in experiments. We confirm that cell proliferation drives expansion in sliding motility, and demonstrate how the movement, uptake and consumption of nutrients affect expansion speed. We close the paper in §5, concluding that sliding motility is a plausible mechanism for biofilm formation in yeast. (a) Biological background A biofilm is a slimy community of micro-organisms existing on a surface, in which cells adhere to each other and reside within a self-produced ECM. An estimated 80% of bacteria in nature exists in biofilm colonies [8]. For this reason, they have been described as the 'oldest, most successful and widespread form of life on Earth' [9], and have attracted significant research attention. Our main objective is to better understand the mechanisms of yeast biofilm expansion. Yeasts are single-cell fungal organisms that have well-known everyday uses, for example, in baking and brewing. However, yeast species such as the pathogenic Candida albicans, often form biofilms on indwelling medical devices [10]. These biofilms are a leading cause of infections in clinical settings, and can be up to 2000 times more resistant to anti-fungal agents than planktonic cells [8]. Inability to remove fungal biofilms can lead to candidiasis, which is an invasive disease estimated to affect around 0.2% of the population per year. Due to its high resistance to treatment, candidiasis has a mortality rate of 30-40% in immunocompromised people [11]. However, despite these significant impacts on human health, fungal biofilms are much less widely studied than bacterial biofilms [12]. The ECM is a distinguishing feature of biofilms. It consists of water, which forms up to 97% of matrix material [13], and various extracellular polymeric substances (EPS). Although the composition and function of the ECM may differ between species, it provides biofilm colonies with several advantages over planktonic cells, as summarized by Flemming & Wingender [9]. 
For yeast biofilms specifically, the ECM has been observed to assist the transportation of nutrients [14], and prevent penetration of harmful external substances [15]. The ECM also influences biofilm rheology. Although biofilms are viscoelastic in general, on time scales longer than the order of minutes they tend to behave as viscous fluids [9,16,17]. The budding yeast Saccharomyces cerevisiae has emerged as a useful model for fungal biofilm growth in cell biology research [4]. A major advantage of using S. cerevisiae in experiments is that its genome has been sequenced [18], and a wide variety of genetic tools such as mutant libraries are available. As it is closely related to C. albicans [19], it has assumed an important role in the identification of new targets for anti-fungal therapy [4,20]. Furthermore, as a eukaryotic organism its basic cellular processes also have a lot in common with human cells [20]. Due to this, S. cerevisiae has also been used as a model for understanding the division of cancer cells [12]. Reynolds & Fink [4] were the first to perform mat formation experiments with S. cerevisiae, and similar methods have been used in subsequent studies [5,12,21]. In these experiments, yeast cells are inoculated on semi-solid (0.3%) agar plates. They initially form a thin round biofilm, which over time expands and forms a complex mat structure, characterized by petal-like features at its edge. This transition is illustrated in figure 1. A notable finding of Reynolds & Fink [4] is that the glycoprotein Flo11p is required for mat formation. Similar glycopeptidolipids are prerequisites for biofilm formation in Mycobacterium smegmatis. This is because they increase cell surface hydrophobicity, which results in weak adhesion between the biofilm and substratum [7]. Furthermore, S. cerevisiae cells are nonmotile [22], making them unable to respond actively to nutrient or chemical gradients. Reynolds and Fink subsequently hypothesized that sliding motility, which is a form of passive growth, is the driving mechanism of yeast biofilm formation. Recent studies on bacterial biofilms have also revealed that osmotic swelling is another potential mechanism for biofilm expansion [23,24]. This requires production of EPS, which creates an osmotic pressure difference between the biofilm and environment. The biofilm then physically expands by taking up water from the agar [23]. The extent to which sliding motility and osmotic swelling contribute to expansion depends on the microbial species and environment [24]. For example, in some bacterial biofilms including Bacillus subtilis, in which ECM fraction is commonly 50-90% [25] and can be as high as 95-98% [13,26], osmotic swelling is the primary mechanism [23]. By contrast, we observe that ECM fraction is approximately 10% in S. cerevisiae mats, suggesting that cell proliferation and sliding motility will play a larger role. However, no detailed study into whether sliding motility is the mechanism of yeast biofilm expansion has been performed. Investigating this is the subject of our paper. (b) Previous models of biofilm formation Owing to their ubiquity and importance to infections, biofilms have attracted significant attention in the applied mathematics community. Previous models have incorporated a wide variety of approaches (see Mattei et al. [25] for a comprehensive recent review). 
These include agent-based or hybrid models [27][28][29], and reaction-diffusion systems [5,12,[30][31][32], both of which model the spread of cells, and movement and consumption of nutrients. However, a limitation of both of these approaches is that it is difficult to include the effect of colony mechanics, such as extracellular fluid flow. As modelling sliding motility requires considering the ECM mechanics, we restrict our attention here to models that incorporate the extracellular fluid. In the literature, many authors include external fluid flow when modelling biofilm growth. A common approach is to consider biofilms immersed in a liquid culture medium, growing perpendicular to non-reactive, impermeable substrata. These models then incorporate the hydrodynamics of bulk fluid in the medium [33][34][35]. We focus primarily on another promising approach, in which biofilm constituents are themselves treated as fluids [36,37]. Under this framework, biofilms are typically modelled as multi-phase mixtures of cells, EPS and external liquid [17,26,[38][39][40][41][42][43]. Applying conservation of mass and momentum for each fluid phase then enables the mechanics of each fluid, and interactions between phases, to be taken into account. We aim to model S. cerevisiae mat formation experiments, which involve a biofilm spreading radially by sliding motility, with nutrients supplied from the agar substratum. The radius of these yeast biofilms significantly exceeds their height, which makes thin-film models well suited to this problem. In most previous models that adopt the thin-film approximation in multi-phase fluid models, the authors derive a fourth-order generalized lubrication equation for the evolution of the biofilm height [22,23,[44][45][46][47][48][49][50]. These models can then incorporate additional features, to investigate the effects of nutrient supply [44,45,47,50,51], osmotic swelling [23,47,50], quorum sensing [46] and surface forces [47][48][49] on biofilm growth. However, a common feature of these models is that the derivation of a generalized lubrication equation requires the assumption of strong adhesion between the biofilm and substratum. As a result, flow is driven by a large pressure that must be balanced with a comparatively large surface tension. By contrast, sliding motility involves increased cell surface hydrophobicity, and hence weak adhesion between the biofilm and agar. Modelling S. cerevisiae mat formation therefore requires an alternative approach. The model of Ward & King [46] is of particular interest to the problem of biofilm expansion by sliding motility. They treat a bacterial biofilm as a multi-phase mixture of cells and water, and use an extensional flow thin-film reduction to derive a model for the early time spread of the colony. This approach assumes weak adhesion between the biofilm and substratum and is therefore well-suited to modelling the sliding motility mechanism. However, in their model the biofilm is immersed in a nutrient-rich liquid culture medium. This is unlike S. cerevisiae mats, which receive nutrients from the agar substratum; their ability to spread therefore depends on the supply of a depleting nutrient, which is also relevant to biofilm growth in nature or in a human host [52]. Ward & King [46] also only consider early biofilm development, and thus neglect ECM production and spatio-temporal variation in the cell volume fraction, which become important on the time scale of our experiments. 
Furthermore, multi-phase fluid models have also only previously been applied to bacterial biofilms, rather than the fungal biofilms considered here. Based on these considerations, we aim to extend the thin-film model of Ward & King [46], to model S. cerevisiae mat formation experiments. Mathematical model We consider growth of a yeast biofilm in cylindrical co-ordinates (r, θ, z), and assume radial symmetry from the outset. The biofilm occupies the region 0 < r < S(t) and 0 < z < h(r, t), where the leading edge of the biofilm S(t) is termed the contact line, and h(r, t) represents the biofilm-air interface, which is a free surface. We define H b and R b to be the characteristic height and radius of biofilm growth, respectively. The biofilm grows on a substratum, which has depth H s and is assumed rigid. A sketch of the problem domain, which closely resembles that of Ward & King [46], is shown in figure 2. We adopt a macroscopic continuum model, and treat the biofilm as a mixture of two viscous fluid phases. These are a living cell phase denoted with the subscript n, and an ECM phase denoted with the subscript m. We define the volume fractions of living cells and ECM to be φ n (r, z, t) and φ m (r, z, t), respectively, and assume that the mixture contains no voids, that is In defining these volume fractions, we note that it is not possible for both species to occupy the same space simultaneously. Throughout this work, we implicitly assume that an appropriate averaging process has taken place, and do not discuss the details here. We direct the reader to the paper by Drew [53] for further information. A novelty of our approach is that we combine a thin-film extensional flow model for sliding motility with nutrient uptake from a depleting supply in the substratum. To enable this, we introduce g s (r, z, t), the nutrient concentration in the substratum defined for −H s < z < 0, and g b (r, z, t), the nutrient concentration in the biofilm, defined for 0 < z < h(r, t) and 0 < r < S(t). After deriving the governing equations, we impose the initial and boundary conditions required to close the model in §2b. Nutrients can enter the biofilm across the biofilm-substratum interface, at which point they become available for consumption by the cells. This, combined with boundary conditions for the fluid flow, completes our description of sliding motility in biofilms. (a) Governing equations We derive the governing equations of our general model using conservation of mass and momentum. For the mass balances, we assume that the density of each fluid phase is constant, and that the mass flux of each phase is entirely advective. The mass balance equations then read where u α = (u rα , u zα ) for α = n, m, are the fluid velocities. The J α terms represent the net volumetric source of phase α. For these terms, we adapt the bilinear forms used in Tam et al. [5] to include cell death. Assuming that dead cells immediately become part of the ECM, we write where ψ n is the cell production rate, ψ m is the ECM production rate and ψ d is the cell death rate, all of which are constant. In (2.3), cell death is proportional to cell density only, while production of both living cells and ECM increases with local cell density and nutrient concentration. This is consistent with experimental observations, which show that cellular components and ECM are both formed by catabolism of cellular synthesized glucose [54]. 
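The displayed source terms (2.3) were lost in extraction. Based on the description above (bilinear production in local cell density and nutrient concentration, death proportional to cell density, and dead cells immediately joining the ECM), a consistent form, offered as a plausible reconstruction rather than a verbatim restoration, is

$$J_n = \psi_n \phi_n g_b - \psi_d \phi_n, \qquad J_m = \psi_m \phi_n g_b + \psi_d \phi_n,$$

so that mass lost from the living-cell phase through death reappears in the ECM phase.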
Despite not being considered here, our model also retains the possibility of incorporating more complicated mechanisms, for example, ECM production regulated by quorum sensing [39]. We assume that nutrients disperse by diffusion in the substratum, and by both diffusion and advection with extracellular fluid inside the biofilm. As in Tam et al. [5], we assume that the rate at which nutrients are consumed is proportional to the local density of cells and nutrients. The mass balance equations for the nutrients in the substratum and biofilm, respectively, then read where D s and D b are the nutrient diffusivities in the substratum and biofilm, respectively, and η is the maximum nutrient consumption rate. Since the biofilm spreads as a unit in sliding motility, we follow O'Dea et al. [55] in assuming strong interphase drag between the cells and the ECM, so that both phases move with the same velocity u n = u m = u. Then, for simplicity we assume that the cells and ECM have the same dynamic viscosity μ, so effectively the mixture can be treated as a single viscous fluid. We denote the stress tensor for the mixture by σ , and since inertial effects are negligible (Re 1) on the time and length scales of biofilm growth, it satisfies the momentum balance equation Owing to cell proliferation and death, and ECM production, the stress components for the mixture will include terms involving ∇ · u, which commonly vanish. In cylindrical geometry, the relevant components of the stress tensor are where p is the pressure [56]. Note that we have invoked Stokes' hypothesis, giving the standard coefficient −2μ/3 for the divergence terms in (2.7) [17,46,57,58]. Substituting (2.7) into (2.6), we find that the momentum balances in the r-and z-directions, respectively, are Given appropriate initial and boundary conditions, these momentum balance equations (2.8), together with the mass balance equations (2.2), (2.4), (2.5), define a closed system of governing equations for the fluid pressure, fluid velocity and nutrient concentrations. (b) Initial and boundary conditions To close the system of governing equations, we require initial and boundary conditions for all of the physical variables. When constructing the general model, we will leave the initial conditions arbitrary. We obtain the first boundary condition by noting that nutrient cannot pass through the base of the substratum. As the substratum is assumed rigid, the no-flux condition is When the cells are plated, there is no nutrient in the biofilm. Therefore, the nutrient concentration is initially discontinuous across the biofilm-substratum interface. To enable cell proliferation and expansion, the biofilm takes up nutrients from the substratum. We assume that the flux of nutrients across the biofilm-substratum interface is proportional to the local concentration difference, and expect that in general consumption of nutrients in the biofilm will sustain this 7 In equations (2.10), the constant Q is the nutrient mass transfer coefficient, which indicates the permeability of the biofilm. To obtain a condition for the fluid velocity on the biofilm-substratum interface, we use the hypothesis that sliding motility increases surface hydrophobicity, causing weak adhesion between the biofilm and substratum [6]. To model this, we impose zero tangential stress on the biofilm-substratum interface instead of the more common no-slip condition. The boundary condition readst wheret is any unit tangent vector, andn is the unit outward normal vector. 
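The displayed form of this slip (zero tangential stress) condition did not survive extraction; in the notation just introduced it would read, as a hedged reconstruction,

$$\hat{\boldsymbol{t}} \cdot \boldsymbol{\sigma} \cdot \hat{\boldsymbol{n}} = 0 \quad \text{on } z = 0, \; 0 < r < S(t),$$

which for a flat biofilm-substratum interface reduces to $\sigma_{rz} = \mu\left(\partial u_r/\partial z + \partial u_z/\partial r\right) = 0$ on $z = 0$.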
For the boundary conditions on the free surface, we first observe that nutrient cannot pass through the biofilm-air interface. This yields the no-flux condition (2.12) On each fluid phase, we also impose the kinematic condition which states that fluid particles on the free surface must remain there. Finally, we obtain stress boundary conditions by noting that a free surface is subject to zero tangential stress, and normal stress that is proportional to its local curvature. In general, these conditions read where γ is the surface tension coefficient, and κ = ∇ ·n, for the free surface normal vector n = (−h r , 1)/(1 + h 2 r ) −1/2 (where subscripts here denote partial differentiation), is the mean free surface curvature. This completes the boundary conditions associated with the model. Extensional flow thin-film approximation In this section, we use a thin-film approximation to obtain a simplified approximation to the model derived in §2. A key observation is that the radius of a biofilm significantly exceeds both its height and the depth of the substratum. This allows us to assume that the aspect ratio In §3a, we non-dimensionalize the governing equations with this in mind. The choice of scaling regime depends on the physics most relevant to the problem. For sliding motility in which surface tension is reduced [6], it is appropriate to model the biofilm as an extensional flow, which was considered by Ward & King [46]. In §3b,c, we adopt this approach, and use a thin-film approximation to simplify the governing equations and boundary conditions considerably. We then propose parameter values and source terms in §3d, yielding a one-dimensional axisymmetric model that we can compare with experimental results. (a) Scaling and non-dimensionalization To non-dimensionalize the equations, we use the initial biofilm radius, R b , as the length scale, and scale time by the cell production rate, ψ n , and initial nutrient concentration, G. The scaled Under this scaling, the governing equations (2.2), (2.4), (2.5) and (2.8) become, after dropping hats and eliminating φ m by summing (2.2) over both phases and applying (2.1) where we have introduced the dimensionless constants In (3.3), Ψ m and Ψ d are the dimensionless ECM production and cell death rates, respectively. The parameter D is the coefficient of diffusion for nutrients in the substratum, scaled by the cell production rate and biofilm radius. The Péclet number, Pe, is the ratio of the rates of advective transport to diffusive transport within the biofilm. The parameter Υ is the dimensionless nutrient consumption rate. We scale Υ differently to the corresponding term in Ward & King [46]. In their model, the biofilm was immersed in a nutrient-rich liquid culture medium, and hence they balanced nutrient consumption with diffusion in the z-direction. By contrast, S. cerevisiae mats grow on a nutrient-limited thin substratum, making it appropriate to balance nutrient consumption with the temporal derivative and in-plane advection and diffusion. Applying the same scaling (3.1), the dimensionless boundary conditions are where κ * is the dimensionless mean free surface curvature. The dimensionless parameters are all assumed to be O (1). The mass transfer parameters Q s and Q b are the nutrient depletion rate (from the substratum), and nutrient uptake rate (by the biofilm), respectively. The dimensionless surface tension coefficient (or inverse capillary number), γ * , is the ratio of surface tension forces to viscous forces. 
Equations (3.2), and the boundary conditions (3.4), then complete the dimensionless extensional flow model, on which we apply the thin-film reduction. (b) Thin-film equations We now use a thin-film approximation to simplify the dimensionless extensional flow model derived in §3a. This involves expanding the dependent variables in powers of ε 2 and so on, where series for p, u r , u z , g s and g b take the same form as (3.6b). Substituting (3.6) into the dimensionless governing equations (3.2), at leading order we obtain and where the rightmost term in (3.8d) incorporates κ * = ∇ 2 h 0 , which is the leading-order local free surface curvature. Equations (3.7c), (3.7d) and the associated boundary conditions (3.8a), (3.8b) demonstrate that g s 0 , g b 0 and u r0 are independent of z, as is characteristic of extensional flows [59]. In a similar way to, for example King & Oliver [60], we exploit this by integrating the governing equations with respect to z across the biofilm depth to derive a one-dimensional closed system of equations for the leading-order variables. First, we introduce the depth-averaged cell volume fraction Integration of (3.7a), (3.7b) with respect to z then yields, after application of Leibniz's integral rule in (3.7b) where subtracting (3.10a) from (3.10b) gives To obtain equations for the leading-order nutrient concentrations, we need to consider the higher-order correction terms to the governing equations (3.2c) and (3.2d). Upon substituting the expansions (3.6), the O(1) balances are and (3.12b) Using (3.4a), (3.4b) and (3.4e), we can also obtain higher-order corrections to the boundary conditions, giving and Integrating (3.12a) and (3.12b) with respect to z across the substratum and biofilm depth, respectively, and applying the boundary conditions (3.13), we obtain and Pe h 0 14b) for 0 < r < S(t). We also need to take into account that the nutrient concentration in the substratum can be non-zero outside of the biofilm domain. Outside of the biofilm, the nutrient will disperse (3.15) where R = R p /R b , and R p is the radius of the Petri dish. We then seek a solution for g s 0 such that the nutrient concentration and its first spatial derivative are both continuous at the contact line. Equations (3.14) and (3.15) then constitute the leading-order nutrient balance equations for our thin-film model. Finally, we consider the higher-order correction term in the radial momentum equation (3.2e) to obtain equations for the leading-order radial velocity. Using the conservation of mass equation (3.2a) to simplify, the relevant term is Similarly, the higher-order corrections to the boundary conditions (3.4c), (3.4f) are To evaluate (3.16), we need to solve for the pressure p 0 . As u r0 is independent of z, integration of (3.7e) with respect to z yields, after applying (3.8d) and using (3.7a) Now, integrating (3.16) with respect to z across the biofilm depth, and applying the boundary conditions (3.17), we obtain Equations (3.10a), (3.11), (3.14), (3.15) and (3.19) then form a closed system for the leading-order biofilm height, (depth-averaged) cell volume fraction, nutrient concentrations and radial fluid velocity. These equations form our one-dimensional, thin-film extensional flow model. (c) Initial and boundary conditions We use experimental observations to propose initial and boundary conditions for the onedimensional axisymmetric model. The experiments and procedure used in this work are described by Tam et al. [5]. 
In the experiments, the Petri dish is initially filled uniformly with nutrient, and a small droplet containing cells and fluid is inoculated in the centre of the dish using a pipette. The fluid in the droplet is rapidly absorbed into the agar substratum, leaving a thin layer of cells, which we assume adopts a parabolic profile. Experiments of C. albicans show that extracellular material only emerges in mature biofilm [61], hence we assume the biofilm is initially made up of cells only. Appropriate initial conditions are therefore S(0) = 1, h 0 (r, 0) = H 0 1 − r 2 ,φ n0 (r, 0) = 1, g s 0 (r, 0) = 1 and g b 0 (r, 0) = 0, (3.20) where H 0 is the initial biofilm height, which we expect to be O(ε). In specifying (3.20), we note that we have chosen the characteristic length scales to be the initial biofilm height and radius, and scale both nutrient concentrations by the initial concentration in the substratum. For the boundary conditions, we first assume that the biofilm and nutrient concentration are radially symmetric, and that the centre of the biofilm is fixed. This yields the conditions In addition, we know that the contact line position S(t) evolves according to the local fluid velocity, that is dS dt = u r0 (S(t), t) . (3.22) To close the one-dimensional axisymmetric model, we now require an additional boundary condition for each of the nutrient concentrations, and the fluid velocity. For the nutrient concentration in the substratum, it is natural to impose the no-flux condition at the boundary of the Petri dish. Regarding the nutrient concentration in the biofilm, we note that the leading edge of the biofilm is rounded by a meniscus, where the height changes over a region in r with O(ε) size [62]. This meniscus is not captured under the original thin-film scaling. With this in mind, close to the contact line we consider a re-scaling of the original variables With this scaling, the leading-order balance for the flux boundary condition (2.12) becomes (dropping daggers) At the contact line, the left-hand side of (3.25) vanishes due to (3.8a), and in general h 0 can depend on r. The boundary condition on the biofilm nutrient concentration is therefore To close the momentum equation (3.19), we impose that the biofilm experiences zero radial stress at the contact line, that is σ rr (S(t), t) = 0. Using (3.18) to eliminate the pressure, we find that Integrating (3.27) over the biofilm depth, or noting thatφ n0 → φ n0 as h → 0, we then obtain (d) Parameters To obtain a set of parameters to use when comparing the model with S. cerevisiae mat formation experiments, we require estimates for all dimensional quantities in (3.1), (3.3) and (3.5). To assist with this, we first set Ψ m = 1/9 to ensure thatφ n will approach 0.9, as is consistent with experimental observation. For comparison purposes, we also set Ψ d = 0, as cell death rate is difficult to measure, and images from the end of the experiments show that the proportion of dead cells is low. Furthermore, as reduced surface tension is a characteristic of sliding motility [6], we initially consider γ * = 0. The experimental design then enables us to estimate all other dimensional parameters, with the exception of ψ n and η, which we subsequently fit to experimental data. We then obtain the dimensionless parameters listed in Results and discussion In this section, we compare the thin-film extensional flow model derived in §3 with experimental data, and then investigate the dependence of the parameters on the speed of biofilm expansion. 
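As a brief aside before the results, the initial state (3.20) and the contact-line law (3.22) stated above translate directly into a numerical set-up; a minimal sketch is given below. The grid resolution and the value of H0 are illustrative placeholders rather than the paper's values.

```python
import numpy as np

# Minimal sketch of the initial conditions (3.20) and contact-line update (3.22).
N  = 801                      # radial nodes across the biofilm (placeholder)
H0 = 0.1                      # initial dimensionless biofilm height, expected O(eps) (placeholder)

r     = np.linspace(0.0, 1.0, N)     # initial biofilm occupies 0 <= r <= S(0) = 1
S     = 1.0                          # contact-line position S(0) = 1
h     = H0 * (1.0 - r**2)            # parabolic initial height profile
phi_n = np.ones(N)                   # biofilm initially made up of cells only
g_s   = np.ones(N)                   # substratum nutrient scaled to 1 initially
g_b   = np.zeros(N)                  # no nutrient in the biofilm at t = 0

def advance_contact_line(S, u_r_at_S, dt):
    """Explicit Euler update of dS/dt = u_r0(S(t), t)."""
    return S + dt * u_r_at_S
```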
To achieve this, we undertake the numerical solution of (3.10a), (3.11), (3.14), (3.15) and ( table 1 confirms that sliding motility can reproduce experimental results. In §4b,c, we then vary the parameters, including cell death rate and surface tension coefficient, to predict the expansion speed and biofilm shape in different conditions. (a) Numerical solutions and comparison with experiments We use a front-fixing method [67] to solve the one-dimensional axisymmetric model. This involves introducing the new variables so that the biofilm always inhabits ξ ∈ [0, 1], and the interval ξ o ∈ [0, 1] represents the remainder of the Petri dish not occupied by the biofilm. We then use a Crank-Nicolson scheme to discretize the model. For all nonlinear terms, we linearize using data from the previous time step. At each time step, we solve the governing equations in the same order as they are derived in §3b. When solving for the nutrient concentration in the substratum, we use data from the previous time step as an initial guess for g s (S, t) at the current time step. We then solve both (3.14a) and (3.15), and use Newton's method to correct the initial guess, and ensure that the first spatial derivative of g s is continuous at r = S(t), which corresponds to ξ = 1 and ξ o = 0. We compute solutions using an equispaced grid with ξ = ξ o = 1.25 × 10 −4 and t ≈ 1 × 10 −4 , which ensures adequate convergence with grid spacing and time step size. Further details on the numerical method are provided in the electronic supplementary material. We compute solutions for the parameters given in table 1 to facilitate comparison with experiments. There is good agreement between the numerical contact line position and the measured radius of the S. cerevisiae mats, as shown in figure 3a. Unlike the reaction-diffusion model of Tam et al. [5], figure 3b shows that the extensional flow model produces a nonconstant expansion speed. The velocity profile resembles the experimental B. subtilis biofilms of Srinivasan et al. [50], featuring an initial period of acceleration followed by a deceleration. A likely explanation of the acceleration observed early in biofilm growth is that cells initially proliferate in nutrient-rich conditions. With abundant nutrients, both existing and newly produced cells are able to proliferate, accelerating expansion. However, as time passes nutrients become depleted in the centre of the colony, as shown in figures 3c,d. When this occurs, cell proliferation is mostly confined to the leading edge (figure 3f ), which slows the expansion of the colony. This phenomenon also dictates the shape a biofilm attains as it expands. As figure 3e shows, our model predicts that the biofilm will expand vertically and radially when nutrients are abundant. When nutrients deplete and growth is concentrated near the leading edge, the biofilm stops thickening and can only expand radially. The model even predicts that the height at the centre of the biofilm will begin to decrease slightly, as the advection of mass with the fluid exceeds the net production rate. species and environmental conditions. To predict biofilm growth by sliding motility in a range of experimental conditions, we compute numerical solutions for 5 days of growth. For each set of solutions, we use the default parameters given in table 1, and vary one parameter at a time over a realistic range. This allows us to isolate the effect of each parameter on biofilm size, and consequently expansion speed. 
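Before turning to the comparison itself, the sketch below illustrates two ingredients of the numerical method described at the start of this section: an assumed form of the front-fixing change of variables (the original definitions were lost in extraction) and the Newton correction that enforces continuity of the first spatial derivative of g_s at the contact line. The helper callable and the stand-in residual used in the demo are hypothetical.

```python
import numpy as np

# Assumed front-fixing map, consistent with "the biofilm always inhabits xi in [0, 1]":
#   xi   = r / S(t)                 inside the biofilm,  0 <= xi   <= 1
#   xi_o = (r - S(t)) / (R - S(t))  outside the biofilm, 0 <= xi_o <= 1
def to_fixed_frame(r, S, R):
    inside = r <= S
    xi = np.where(inside, r / S, np.nan)
    xi_o = np.where(inside, np.nan, (r - S) / (R - S))
    return xi, xi_o

def newton_match_interface(residual, g0, tol=1e-10, max_iter=50):
    """Newton iteration on the interface value of g_s so that the mismatch in
    d(g_s)/dr across r = S(t) vanishes. `residual` is a user-supplied callable
    wrapping the two linearized (Crank-Nicolson) solves inside and outside the biofilm."""
    g = g0
    for _ in range(max_iter):
        res = residual(g)
        if abs(res) < tol:
            break
        dres = (residual(g + 1e-8) - res) / 1e-8   # finite-difference derivative
        g -= res / dres
    return g

# Self-contained demo with a stand-in residual (root of g^2 - 2).
print(newton_match_interface(lambda g: g**2 - 2.0, g0=1.0))   # ~1.41421
```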
Of the dimensionless parameters, we found that the Petri dish size R and surface tension coefficient γ * had negligible effect on the biofilm size. Results for other dimensionless parameters and the cell production rate, ψ n , are shown in figure 4. A vast range of behaviour is possible while keeping dimensionless parameters within one order of unity. Figure 4a,b describes how fluid production and cell death affect expansion speed. As expected, higher rates of fluid (either living cells or ECM) production result in larger biofilms. However, unlike the production of ECM, the production of new cells facilitates increased cell proliferation in the future, and therefore cell production rate is a stronger determinant of size than ECM production rate. This verifies that expansion in sliding motility is mostly driven by cell proliferation. In addition, figure 4b shows that increasing the cell death rate decreases biofilm size, which is expected as fewer living cells are subsequently available to proliferate. The remaining plots in figure 4 show how the dimensionless parameters affect expansion speed. The effect of nutrient movement and consumption is revealed in figure 4c. Increasing the nutrient diffusion coefficient D will result in more uniform nutrient concentrations across the Petri dish than seen in figure 3c,d. This promotes thickening of the biofilm as opposed to radial expansion. In addition, increasing the nutrient consumption rate Υ results in larger quantities of nutrient being required to produce a new cell, thereby slowing expansion. The Péclet number indicates how readily nutrients advect radially with the extracellular fluid. Larger values of Pe increase nutrient supply to the proliferating rim, enabling faster expansion. However, the slender biofilm and substratum geometries are such that nutrient availability close to the leading edge depends more strongly on uptake from the substratum than advection in the biofilm. Therefore, the Péclet number has a weaker effect on expansion speed than the nutrient depletion and uptake rates, as figure 4d illustrates. Larger values of nutrient depletion rate Q s decrease nutrient access to the cells, which slows expansion. Conversely, increasing nutrient uptake rate Q b aids cell production, as more nutrients become available for consumption. A common theme in all of these results is that expansion speed depends on the ability of cells close to the leading edge to consume nutrient and proliferate. The results presented here are relevant to clinical settings, where expansion speed correlates with the invasiveness of infection. Our model describes environmental conditions that result in decreased expansion speed. (c) Predicting biofilm shape: ridge formation and surface tension In addition to the size, our model also predicts the shape a growing biofilm will attain. Although not observed in S. cerevisiae mat formation experiments, some bacterial biofilms [50] and yeast colony biofilms [54] develop a ridge structure close to the leading edge. To observe ridge formation in our model, we compute a numerical solution with the experimental parameters given in table 1, except with D = 1.5, Υ = 10 and Pe = 10. Compared to the experimental parameters, this combination of decreased nutrient diffusion, and increased nutrient consumption and advection leads to faster nutrient depletion behind the proliferating rim. 
Cell proliferation then becomes concentrated close to the leading edge, which in conjunction with increased advection of mass outwards from the biofilm centre, creates the noticeable ridge seen in figure 5a. To quantify ridge formation, we compute the normalized ridge height I r (t) = (max h(r, t))/h(0, t) in the new numerical solution, and compare with the experimental case. Figure 5b shows the normalized ridge height increasing faster than the base solution with experimental parameters. Although we do not investigate the mechanisms of ridge formation in detail, our model shows that interplay between sliding motility and nutrient-limited growth can initiate ridge formation. Importantly, this can occur without the need to invoke other mechanisms such as osmotic swelling or mechanical blistering. Finally, we investigate the effect that non-zero surface tension would have on the biofilm shape. This surface tension represents the strength of cell-cell adhesion at the free surface, which we assumed weak when comparing with experiments. To investigate its effect, we compute numerical solutions with the parameters as in figure 5, while varying the surface tension coefficient over the range γ * ∈ [0, 2]. These results are shown in figure 6. We observe that increasing the surface tension coefficient reduces the extent of the ridge, and that γ * = 2 is sufficient to prevent ridge formation. As surface tension appears only in the momentum equation (3.19) and boundary condition (3.28), we expect the fluid velocity profile to explain this behaviour. Figure 6c shows that increasing γ * decreases the radial velocity near the centre of the biofilm. This decreases movement of fluid and nutrients towards the leading edge of the biofilm, thereby inhibiting ridge formation. However, we do not observe ridge formation in S. cerevisiae mat formation experiments nor the solution with experimental parameters ( figure 3). This supports the hypothesis of low cell-cell adhesion in sliding motility, and justifies setting γ * = 0 when comparing the model with experiments. Summary In this paper, we developed a mathematical model to better understand how mechanics affect yeast biofilm expansion. We were particularly interested in the role of sliding motility and nutrient limitation, features hypothesized to be relevant to mat formation experiments of the budding yeast S. cerevisiae. To investigate this, we derived a general multi-phase model for biofilm expansion, treating the biofilm as a mixture of living cells and extracellular fluid. We systematically reduced the model to a one-dimensional axisymmetric form by employing an extensional flow thin-film reduction. By computing numerical solutions, we showed that the thin-film model could reproduce the expansion speed of S. cerevisiae mat biofilms. We then confirmed the hypothesis that cell production rate is the strongest determinant of biofilm size in sliding motility. By varying model parameters, we showed that increasing the ability for cells close to the leading edge to consume nutrients and proliferate promotes faster expansion. This can be achieved by decreasing the rates of nutrient diffusion, consumption and depletion, or by increasing the nutrient uptake rate. Finally, we showed that sliding motility is a possible explanation for the ridge formation observed in bacterial or yeast colony biofilms. We found that surface tension slows the movement of cells and nutrients towards the biofilm rim, and thus inhibits ridge formation. 
Our model confirms that sliding motility is a plausible mechanism for yeast biofilm expansion, and offers a way of quantitatively predicting biofilm growth for other microbial species and environmental conditions. In addition to these results, our model offers an opportunity to study further biological questions. For example, there are potential links between the characteristic floral morphology of S. cerevisiae mats and the stability of solutions to azimuthal perturbations. This provides one avenue for further investigation. Depending on the desired application, the general model also retains the possibility of investigating different mechanisms. For example, the model could be re-scaled to investigate expansion driven by strong adhesion and increased surface tension, rather than sliding motility. A more detailed model could also consider the agar substratum as viscoelastic, rather than solid. We could then impose continuity of shear stress at biofilmsubstratum interface, instead of the zero tangential stress assumed here. The model can also incorporate more complicated cell production mechanisms, for example, ECM production regulated by quorum sensing. It is also possible to include more complicated mechanical behaviour, for example, biofilm viscoelasticity or expansion driven by osmotic swelling. We intend to tackle some of these scenarios in future work, to shed further light on the mechanisms governing biofilm expansion.
Comparison of RANS and LES in the Prediction of Airflow Field over Steep Complex Terrain The present study compared the prediction accuracy of the three CFD software packages for simulating airflow around a three-dimensional, isolated hill with a steep slope: 1) WindSim (turbulence model: RNG k-ε RANS), 2) Meteodyn WT (turbulence model: k-L RANS), which are the leading commercially available CFD software packages in the wind power industry and 3) RIAM-COMPACT (turbulence model: standard Smagorinsky LES), which has been developed by the lead author of the present paper. Distinct differences in the airflow patterns were identified in the vicinity of the isolated hill (especially downstream of the hill) between the RANS results and the LES results. No reverse flow region (vortex region) characterized by negative wind velocities was identified downstream of the isolated hill in the result from the simulation with WindSim (RNG k-ε RANS) and Meteodyn WT (k-L RANS). In the case of the simulation with RIAM-COMPACT natural terrain version (standard Smagorinsky LES), a reverse flow region (vortex region) characterized by negative wind velocities clearly forms. Next, an example of wind risk (terrain-induced turbulence) diagnostics was presented for a large-scale wind farm in China. The vertical profiles of the streamwise (x) wind velocity do not follow the so-called power law wind profile; a large velocity deficit can be seen between the hub center and the lower end of the swept area in the case of the LES calculation (RIAM-COMPACT). Introduction We have developed an unsteady and non-linear wind synopsis simulator called RIAM-COMPACT (Research Institute for Applied Mechanics, Kyushu Univer-sity, Computational Prediction of Airflow over Complex Terrain) in order to simulate the airflow on a microscale, i.e., a few tens of km or less [1]- [13].In RIAM-COMPACT, a large-eddy simulation (LES) has been adopted for turbulence modeling.LES is a technique in which the structures of relatively large eddies are directly simulated and smaller eddies are modeled using a sub-grid scale model.Efforts have been made to promote RIAM-COMPACT, mainly in the wind power industry (e.g., private wind power providers, local governments, and wind turbine manufacturers) in Japan.Computation time had been an issue of concern for the RIAM-COMPACT software, which focuses on unsteady turbulence simulations (LES).The present fluid simulation solver is compatible with multi-core CPUs such as the Intel Core i9 and also with GPGPU, which has drastically reduced the computation time, leaving no appreciable problems in terms of the practical use of the RIAM-COMPACT software. 
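As a concrete illustration of the sub-grid scale modelling mentioned above, the following generic sketch evaluates the standard Smagorinsky eddy viscosity. The coefficient Cs = 0.1 matches the value quoted later for RIAM-COMPACT, but the wall-damping function used there is omitted, so this is a textbook sketch rather than the software's actual implementation.

```python
import numpy as np

def smagorinsky_eddy_viscosity(dudx, dudy, dudz, dvdx, dvdy, dvdz,
                               dwdx, dwdy, dwdz, delta, Cs=0.1):
    """Standard Smagorinsky subgrid-scale eddy viscosity
    nu_sgs = (Cs * delta)**2 * |S|, with |S| = sqrt(2 S_ij S_ij)."""
    S11, S22, S33 = dudx, dvdy, dwdz
    S12 = 0.5 * (dudy + dvdx)
    S13 = 0.5 * (dudz + dwdx)
    S23 = 0.5 * (dvdz + dwdy)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + S33**2
                           + 2.0 * (S12**2 + S13**2 + S23**2)))
    return (Cs * delta) ** 2 * S_mag

# Example: simple shear du/dz = 1 with filter width delta = 0.01 gives nu_sgs = 1e-6.
print(smagorinsky_eddy_viscosity(0, 0, 1.0, 0, 0, 0, 0, 0, 0, delta=0.01))
```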
On another front, commercially available CFD software such as STAR-CCM+ [14] and ANSYS (CFD, Fluent, CFX) [15] has developed mainly as an engineering tool primarily in the automobile and aviation industries until the present time.Recently, some of the above-mentioned general purpose thermal fluid analysis software has started being adopted in the wind power industry.In the previous study [16] [17] [18], the simulation results obtained from the RIAM-COMPACT software were compared to those from STAR-CCM+, one of the leading commercially available CFD software packages.The results of the comparison are discussed.In addition, open-source CFD software packages are more widely used than in the past.One of the most widely used software packages is OpenFOAM (OpenField Operation And Manipulation) [19].Open-FOAM is an open-source CFD toolbox which has been released and distributed under the GNU GPL (General Public License) [20] by the OpenFOAM Foundation, a non-profit organization.In the previous study [21], the simulation results obtained from the RIAM-COMPACT software were also compared to those from OpenFOAM, and the results of the comparison are discussed. The wind power industry has on its own developed and distributed CFD software designed for selecting sites appropriate for the installation of wind turbine generators.One such leading software package is Meteodyn WT [22], which has been developed by Meteodyn in France.Meteodyn WT is a CFD software package which incorporates a RANS turbulence equation with a one-equation closure scheme (k-L turbulence model; here, k and L refer to turbulence energy and the turbulence length scale, respectively).On October 12, 2017, "WT6.0", the latest version of the software, was released.Another one of the most widely used software packages is WindSim [23] by Norway-based WindSim AS.WindSim is a CFD software package which uses a RANS turbulence model and has been designed specifically for wind resource assessment.In December 2016, the latest version of the company's software package, WindSim 8.0, was released.These two CFD software packages are specialized for wind power resource assessment as well as the RIAM-COMPACT CFD software package. In the present study, numerical simulations are performed for airflow over and around a three-dimensional, isolated hill with a steep slope angle using the three CFD software packages (WindSim, Meteodyn WT and RIAM-COMPACT).The results of the comparison are discussed.Next, the numerical simulations for airflow over a large-scale wind farm in China [21] are performed with RIAM-COMPACT, which is based on an LES turbulence model, and Meteodyn WT, which is based on a RANS turbulence model.The numerical wind simulations in the present study are conducted for high Reynolds number airflow over and around a three-dimensional, isolated hill with a steep slope angle and a large-scale wind farm in China.Table 1 shows the simulation set-ups adopted in the two software packages which are used in 1 and Figure 2 illustrate the full computational grid used for WindSim and an enlarged view of the grid in the vicinity of the isolated hill, respectively.Figure 3 shows the inflow profile used for all of the simulations in the present study.Figure 4 shows the characteristic wind velocity and length scales which are employed for the simulations with RIAM-COMPACT. 
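The characteristic velocity and length scales of Figure 4 are not reproduced in this extraction. As an orientation only, the sketch below evaluates the Reynolds number Re = U_in h/ν used later in the paper, together with an assumed power-law inflow shape; the numerical values and the profile exponent are illustrative placeholders, not the values of Figures 3 and 4.

```python
# Orientation sketch only: placeholder values, not the paper's data.
U_in = 5.0        # inflow wind speed at the hill height [m/s]   (assumed)
h    = 100.0      # isolated hill height [m]                     (assumed)
nu   = 1.5e-5     # kinematic viscosity of air [m^2/s]

Re = U_in * h / nu
print(f"Re = {Re:.2e}")   # order 10^7, the regime quoted for the RANS simulations

def inflow_profile(z, U_ref=U_in, z_ref=h, alpha=1.0 / 7.0):
    """Assumed power-law inflow shape U(z) = U_ref * (z / z_ref)**alpha (illustrative only)."""
    return U_ref * (z / z_ref) ** alpha
```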
Overview of the In RIAM-COMPACT, a collocated grid in a general curvilinear coordinate system is used in order to numerically predict local wind flow over complex terrain with high accuracy while avoiding numerical instability.For the numerical simulation method, a FDM is adopted, and an LES model is used for the turbulence model.For the computational algorithm, a method similar to a FS method [24] is used, and a time marching method based on the Euler explicit method is adopted.The Poisson's equation for pressure is solved by the SOR method.For discretization of all the spatial terms in the governing equations except for the convective term in the Navier-Stokes equation, a second-order central difference scheme is applied.For the convective term, a third-order upwind difference scheme is used.An interpolation technique based on four-point differencing and four-point interpolation by Kajishima [25] is used for the fourth-order central differencing that appears in the discretized form of the convective term. For the weighting of the numerical diffusion term in the convective term discretized by third-order upwind differencing, α = 3.0 is commonly applied in the Kawamura-Kuwahara scheme [26].However, α = 0.5 is used in the present study to minimize the influence of numerical diffusion.For the LES subgrid-scale modeling, the standard Smagorinsky model [27] is adopted with a model coefficient of 0.1 in conjunction with a wall-damping function.For further details of the numerical simulation techniques, refer to Uchida [1]- [13]. Regarding the boundary conditions adopted for the simulations with RIAM-COMPACT, the same inflow profile as used for the simulations with WindSim (Figure 3) is given at the inflow boundary.At the side and upper boundaries, free-slip conditions are applied, and convective outflow conditions Open Journal of Fluid Dynamics are applied at the outflow boundary.On the ground surface, a non-slip boundary condition is imposed.For the simulation at Re (=U in h/ν) = 10 7 , the number of grid points is changed to 101 in the vertical direction, and the minimum vertical grid spacing in is set to Δz min /h = 4 × 10 −7 according to the equation below (see Table 1): In contrast to RIAM-COMPACT, WindSim uses RANS models.In the present study, the RNG k-ε RANS model is selected for the simulations.Refer to [23] for the numerical simulation methods used in WindSim and other details about this software.Figure 9. Method adopted in Meteodyn WT for setting the inflow profile and the inflow profile generated for the present study. 
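To make the convective-term treatment described above concrete, the sketch below implements a third-order upwind discretization of u ∂f/∂x with an adjustable numerical-diffusion weight α. Writing the added fourth-difference term with a 1/(12Δx) factor is one common convention under which α = 3 recovers the original Kawamura-Kuwahara scheme and α = 0.5 is the reduced-diffusion choice quoted here; the exact normalization used in RIAM-COMPACT should be checked against the cited references.

```python
import numpy as np

def third_order_upwind_convection(f, u, dx, alpha=0.5):
    """Kawamura-Kuwahara-type discretization of u * df/dx on a uniform grid.
    Interior points only; the two nodes at each end are left untouched and
    would be handled by boundary conditions in a real solver."""
    conv = np.zeros_like(f)
    i = np.arange(2, len(f) - 2)
    central4 = (-f[i + 2] + 8.0 * f[i + 1] - 8.0 * f[i - 1] + f[i - 2]) / (12.0 * dx)
    fourth_diff = (f[i + 2] - 4.0 * f[i + 1] + 6.0 * f[i]
                   - 4.0 * f[i - 1] + f[i - 2]) / (12.0 * dx)
    conv[i] = u[i] * central4 + alpha * np.abs(u[i]) * fourth_diff
    return conv

# Usage: interior values approximate d(sin x)/dx = cos x.
x = np.linspace(0.0, 2.0 * np.pi, 129)
approx = third_order_upwind_convection(np.sin(x), np.ones_like(x), dx=x[1] - x[0], alpha=0.5)
```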
Comparison of the Simulation Since simulations for a flow with Re (=U in h/ν) = 10 7 were not feasible with the RIAM-COMPACT natural terrain version software because of the time step, a numerical wind simulation is performed at Re = 10 6 , which is an order of magnitude smaller than the flow simulated with Meteodyn WT.For this simulation, the number of grid points in the vertical direction is set to 101 (37 for the simulation with Meteodyn WT), and the minimum vertical spacing is set to Δz min /h = 10 −4 based on the Equation (1) (Δz min /h = 5.0 × 10 −3 for the simulation with Meteodyn WT, see Table 2).At the inflow boundary, an inflow profile which is almost identical to the inflow profile used for the simulation with Meteodyn (Figure 9) is used.Free-slip conditions are applied at the side and upper boundaries, and convective outflow conditions are applied at the outflow boundary.At the surfaces of the ground and the isolated hill, non-slip conditions are imposed.The time step is set to Δt = 10 −5 h/U in (refer to Table 2).The results from the simulations are compared. Comparison of the Simulation Results from the Two CFD Software Packages (RIAM-COMPACT and Meteodyn WT) in the Case of a Three-Dimensional, Isolated Hill with a Steep Slope Angle The vibration problem of turbine T12 was investigated by the operator Open Journal of Fluid Dynamics YUDWPC and a report was issued in April 2014 [28].Stated in the report was that high vibration data was recorded only when the wind was blowing from the southwest.Wind direction on the ground level was observed to be in the reverse direction from that recorded by the nacelle anemometer.Analysis of the vibration data indicates the vibration is in the vertical direction.This suggests the vibration is associated with abnormal vertical wind shear across the wind turbine rotor.As shown in Figure 15, a figure extracted from the report, it was deduced that the presence of the small hill located about 150 m upstream from turbine T12 was causing the onset of turbulence and reverse flow which led to the vibration recorded. For LES simulation, the RIAM-COMPACT natural terrain version software package was employed.The software uses a standard Smagorinsky turbulence model.For the simulation, SRTM 90 m data was used for elevation data.Wind direction is set to true north at 247 degrees and the computational domain constructed is shown in Figure 16 with the following details: To increase calculation accuracy, the mesh is concentrated around the turbine positions in both the x and y direction as shown in Figure 16.No roughness consideration is given in the present LES simulation.Atmospheric stability is set to neutral stability.After the calculation has been stabilized, numerical results in the calculation domain are output for a real time of ten minutes with an interval of one second.In this study, the commercial software Meteodyn WT (turbulence model: k-L RANS) was employed and its results were compared with the results calculated by the RIAM-COMAPCT.The calculation parameters are shown in Table 3. 
The calculation domain is a radius of 10 km in the x-y direction with turbine T12 as the center; the z direction has a maximum of 200 m. The wind direction is set at 247 degrees, with the minimum vertical and horizontal resolutions set to 5 m and 2 m, respectively. Atmospheric stability is set to neutral. The calculation was completed smoothly, with computation convergence recorded at 99.3%. The wind speed figures in Table 4 are plotted in Figure 20, and the resulting wind shear profile is compared with the shear profile (average values) predicted by RIAM-COMPACT. It can be seen from Figure 20 that the shapes of the two profiles are similar from 50 m upwards but distinctly different below 50 m. Meteodyn WT does not seem to predict any flow separation or reverse flow region; therefore, there is no significant wind speed reduction between 25 m and 50 m, and also no negative wind speed values below 25 m, as predicted by RIAM-COMPACT. Numerical comparison results are shown in Table 5. Referring to Table 5, across the wind turbine rotor face, RIAM-COMPACT predicted a large wind speed difference, with a shear exponent exceeding the IEC standard value of 0.2 by a large margin. In sharp contrast, Meteodyn WT predicted a small wind speed difference, with a shear exponent of 0.025, which is significantly below the IEC standard.

Summary
Simulations were performed for airflow around a three-dimensional, isolated hill with a steep slope angle in order to compare the flow pattern simulated in the vicinity of the hill by three software packages. For the simulations, three software packages were used: 1) WindSim (turbulence model: RNG k-ε RANS) and 2) Meteodyn WT (turbulence model: k-L RANS), which are the leading commercially available CFD software packages in the wind power industry, and 3) RIAM-COMPACT (turbulence model: standard Smagorinsky LES). Comparisons of the simulated results revealed a distinct difference in the simulated flow patterns in the vicinity of the isolated hill (especially downstream of the hill). No reverse flow region (vortex region) characterized by negative wind velocities was identified downstream of the isolated hill in the results from the simulations with WindSim (RNG k-ε RANS) and Meteodyn WT (k-L RANS). In the case of the simulation with RIAM-COMPACT (standard Smagorinsky LES), a reverse flow region (vortex region) characterized by negative wind velocities clearly formed downstream of the hill. Thus, the flow pattern which forms in the vicinity of the isolated hill also varies significantly according to the velocity boundary conditions applied at the surfaces of the ground and the isolated hill.

Figure 3. Inflow wind velocity profile used in the present study.
Figure 4. Characteristic wind velocity and length scales used in the simulation with RIAM-COMPACT.
Figure 5. Ensemble-averaged flow fields from the simulations with WindSim (turbulence model: RNG k-ε RANS). In neither of these simulations is a reverse flow region identified downstream of the hill.
Figure 6. Instantaneous flow fields from the simulations with RIAM-COMPACT (turbulence model: standard Smagorinsky LES) (Re = 5 × 10⁴ and 1 × 10⁷). An examination of these simulation results reveals the clear presence of a reverse flow region (vortex region), in which the values of the streamwise wind velocity are negative, downstream of the isolated hill.
Figures 10-12 show results from the simulation with the Meteodyn WT software package (turbulence model: k-L RANS). These results (for a flow at Re = 10⁷) indicate that a reverse flow region (vortex region) characterized by negative values of wind velocity does not form downstream of the isolated hill, and a pattern resembling potential flow is present there. Figure 13 shows the results from the simulation with the RIAM-COMPACT natural terrain version software package (turbulence model: the standard Smagorinsky LES). Examination of the results reveals that a reverse flow region (vortex region) characterized by negative values of wind velocity clearly exists downstream of the isolated hill in the simulated flow at Re (= U_in h/ν) = 10⁶.

Figure 12. Velocity vectors at the center of the span (y = 0) in the vicinity of the isolated hill, Meteodyn WT, k-L RANS model, Re = 10⁷.
Figure 13. Streamwise (x) wind velocity distribution at the center of the span (y = 0) in the vicinity of the isolated hill, RIAM-COMPACT, standard Smagorinsky LES, Re = 10⁶. (a) Instantaneous flow field; (b) Time-averaged flow field.
Figure 15. Deduction made on the airflow upstream and in the vicinity of turbine T12.
Figure 16. Computational domain and grid used for the simulation with RIAM-COMPACT.

Figure 17 shows an instantaneous vector plot across turbine T12. This picture clearly shows that flow separation occurred at the small hill located 140 m upstream from the turbine, and the onset of the formation of the recirculating vortex behind the hill. The turbulent flow extends downstream, forming a reverse flow region characterized by negative values of wind speed covering the lower part of the wind turbine rotor. The simulation results also indicate that the wind flow is relatively undisturbed above hub height level. The U-component wind speed time series during the ten-minute simulation at the rotor top (106.3 m), hub center (65 m), rotor bottom (23.7 m) and surface level (10 m) positions are plotted in Figure 18. Referring to Figure 18, it is obvious that the wind speed at the surface (10 m) and rotor bottom is significantly lower and shows more fluctuation than the wind speed at the hub and top part of the rotor. Wind speed varies between 15.0 and 20.0 m/s at hub height and rotor top, whereas at the rotor bottom the wind speed fluctuates between −6.2 m/s and 2.0 m/s. Negative wind speed indicates the presence of reverse flow.

Figure 18. Time series of U-component wind speed at rotor top, hub, rotor bottom and ground surface level at turbine T12.
Figure 19. Vertical shear profile predicted by RIAM-COMPACT: average, minimum and maximum of U-component wind speed variation with height.

Meteodyn WT's calculation output includes the speed-up factor from a height of 20 m to 200 m at an interval of 20 m at turbine T12. These values are shown in Table 4. The speed-up factor is the wind speed ratio at the given height referencing the wind speed at a height of 10 m; the speed-up factor therefore resembles the vertical shear profile. Assuming a wind speed of 9.5 m/s at 10 m height, the wind speeds at the other heights can be calculated.

Figure 20. Comparison of vertical shear profile between the wind flows simulated by RIAM-COMPACT and Meteodyn WT.
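To make the speed-up-factor and shear-exponent comparison above concrete, the sketch below converts speed-up factors into wind speeds (referencing 9.5 m/s at 10 m, as assumed in the text) and evaluates the power-law shear exponent between two heights. The speed-up values in the example are placeholders, since Table 4 itself is not reproduced here; 0.2 is the IEC reference value quoted above.

```python
import numpy as np

U_10 = 9.5   # reference wind speed at 10 m height [m/s], as assumed in the text

# Placeholder speed-up factors (wind speed at height z divided by wind speed at 10 m);
# the actual values are listed in Table 4 of the paper.
speedup = {20: 1.05, 40: 1.10, 60: 1.13, 80: 1.15, 100: 1.17, 120: 1.18}
wind_speed = {z: f * U_10 for z, f in speedup.items()}

def shear_exponent(u1, z1, u2, z2):
    """Power-law shear exponent alpha from U(z2)/U(z1) = (z2/z1)**alpha."""
    return np.log(u2 / u1) / np.log(z2 / z1)

# Example across the rotor face (rotor bottom 23.7 m, rotor top 106.3 m) with
# made-up speeds; values above 0.2 would exceed the quoted IEC reference.
alpha = shear_exponent(u1=10.0, z1=23.7, u2=10.4, z2=106.3)
print(f"shear exponent = {alpha:.3f}")   # ~0.026 for this illustrative pair
```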
Figure 22. Streamwise (x) wind velocity distribution in the vicinity of the isolated hill at the center of the span (y = 0), RIAM-COMPACT, standard Smagorinsky LES, Re = 10⁴. (a) Case 1: Simulation result from the case in which non-zero wind velocities are applied as a Dirichlet boundary condition; (b) Case 2: Simulation result from the case in which all three wind velocity components are set to zero as a Dirichlet boundary condition.
Table 3. Numerical simulation methods, parameters, and settings for Meteodyn WT.
Table 4. Speed-up factor at turbine T12 and calculated wind speed.
White Paper for Global Palliative Care Advocacy: Recommendations from a PAL-LIFE Expert Advisory Group of the Pontifical Academy for Life, Vatican City Abstract Background: The Pontifical Academy for Life (PAV) is an academic institution of the Holy See (Vatican), which aims to develop and promote Catholic teachings on questions of biomedical ethics. Palliative care (PC) experts from around the world professing different faiths were invited by the PAV to develop strategic recommendations for the global development of PC (“PAL-LIFE group”). Design: Thirteen experts in PC advocacy participated in an online Delphi process. In four iterative rounds, participants were asked to identify the most significant stakeholder groups and then propose for each, strategic recommendations to advance PC. Each round incorporated the feedback from previous rounds until consensus was achieved on the most important recommendations. In a last step, the ad hoc group was asked to rank the stakeholders' groups by order of importance on a 13-point scale and to propose suggestions for implementation. A cluster analysis provided a classification of the stakeholders in different levels of importance for PC development. Results: Thirteen stakeholder groups and 43 recommendations resulted from the first round, and, of those, 13 recommendations were chosen as the most important (1 for each stakeholder group). Five groups had higher scores. The recommendation chosen for these top 5 groups were as follows: (1) Policy makers: Ensure universal access to PC; (2) Academia: Offer mandatory PC courses to undergraduates; (3) Healthcare workers: PC professionals should receive adequate certification; (4) Hospitals and healthcare centers: Every healthcare center should ensure access to PC medicines; and (5) PC associations: National Associations should be effective advocates and work with their governments in the process of implementing international policy framework. A recommendation for each of the remaining eight groups is also presented. Discussion: This white paper represents a position statement of the PAV developed through a consensus process in regard to advocacy strategies for the advancement of PC in the world. Background E very year, over 25.5 million people die with serious health-related suffering (SHS) associated with lifelimiting and life-threatening conditions. An additional 35 million live with these conditions and SHS. 1 Yet the vast majority of the world does not have access to adequate treatment and care and social support. Palliative care (PC) helps relieve SHS by providing physical, psychosocial, and spiritual care to patients and their families. PC relieves ''total pain'' by shifting the often overly technical modern medical model to a holistic person-centered model of care. 2 Estimates of unmet PC needs worldwide are around 26.8 million per year. 3 Other data suggest an even greater need of up to 40 million people per year, 4 with estimates reaching 61 million people around the globe suffering from SHS. 1 Various additional studies have shown a deficit of PC demand to PC supply, [5][6][7][8][9] highlighting a lack of access to PC as a major global health inequity issue. 10,11 There has been a rising burden of noncommunicable diseases (NCDs) worldwide, and globally, NCDs cause 70% of all deaths 12 and generate 93% of adult PC need, and nearly 80% of the global PC need is in low-to-middle income countries. 
3,5 Furthermore, the global population is aging, and this, partnered with the increased prevalence of NCDs and the persistence of other debilitating chronic and infectious diseases, reflects an alarming increase in need for PC provision at the global scale. 4 In fact, studies estimate that by 2040, the proportion of people worldwide in need of PC will increase from 25% to 47%. 13 This growing need is recognized by global health organizations; the World Health Organization (WHO) recently approved the 13th General Program of Work recognizing the ''limited availability of [PC] services in much of the world and the great avoidable suffering for millions of patients and their families'' 4,14 and concluded with several recommendations for further PC development and support for global PC advocacy campaigns. Although research has shown that PC has steadily grown at the global level, the demand far outstrips supply, and this growth has been very uneven, with some countries having progressed very little over the past decade. [4][5][6][7][8][9] The Catholic Church's appreciation for the PC as an approach to take care of the vulnerable is evident in its catechism, which includes the following statement ''[Palliative care] represents a special form of disinterested charity, and as such, should be encouraged'' (Catechism of the Catholic Church, n. 2279). Recently, Pope Francis shared with health professionals meaningful words on PC: ''I encourage professionals and students to specialize in this type of assistance which is no less valuable for the fact that it is not life-saving. [PC] accomplishes something equally important: it values the person.'' 15 The Pontifical Academy for Life (PAV) is an academic institution of the Holy See (Vatican) dedicated to the promotion of human life, and, among other specific topics, the study issues in medical ethics. In 2017, the PAV launched an international project called ''PAL-LIFE: An International Advisory Working Group on diffusion and development of palliative care in the world'' to advise on how the Catholic Church could assist in continued PC development at the global level. 16 This white paper represents a position statement of the PAV regarding PC, intended to be used for advocacy with local governments, healthcare organizations, leaders on the ground, and faith-based communities. Design A process was developed to generate consensus among 13 PC experts on key recommendations for major stakeholders' groups, including ranking both the recommendations and the stakeholders' groups by importance, as well as providing suggestions for implementation. The study was submitted and approved by the Clinical Research Ethics Committee of the University of Navarra. Selection of experts and definition of the process The expert group was selected considering and balancing diverse geographical regions and professional backgrounds. Members included clinicians, ethicists, and health administrators working in academic centers or international and regional PC organizations and professing different faiths. The PAV initially chose three experts in PC advocacy with global expertise in PC development. In a second step, additional experts were added to the group through a snowball process of recommendations by at least 2 peer experts reaching a total of 13 persons considered to be experts in PC advocacy (''ad hoc group''). Two additional experts were invited in the last stage of the process based on the suggestion of several members of the group. 
Table 1 shows the members of the ad hoc group. An initial face-to-face meeting was conducted at the venue of the PAV in Rome, in March 2017. The purpose of the meeting was to define the strategy and methodology for identification of the key recommendations to be determined by the ad hoc group. It was outlined as the project for a draft of a position statement (''white paper'') on PC advocacy containing recommendations for health policy planning and providing guidance to different stakeholder groups on how to advance the development of PC in countries and regions. For the purposes of this project, the ad hoc group used the WHO definition of PC. The group also adopted the WHO public health strategy framework for PC. 17 Identification of stakeholder groups In Round 1, experts of the ad hoc group were invited by email to identify the most relevant stakeholder groups to which the recommendations would be directed to. These stakeholder groups were identified based on their key roles in their ability to promote PC development at national or regional levels in healthcare and/or society. From the initial list, through a Delphi consensus process, members of the ad hoc group suggested new stakeholder groups or modified ones already in the list, resulting in a final list of 13 groups. Based on the field of expertise, each expert was assigned to a specific stakeholder group. Table 2 shows the stakeholder groups agreed upon by the ad hoc group. Consensus process for the recommendations In Round 2, each member was contacted by e-mail and requested to provide two to three recommendations for his/ her corresponding stakeholder group. Each recommendation was accompanied by a statement, including rationale for the proposal, up to a maximum of 200 words. Recommendations were built upon the previous work and experience of the experts within their own or several other institutions to ensure best possible recommendations per stakeholder group. In Round 3, all the recommendations were shared with the entire ad hoc group through an online survey tool (https:// es.surveymonkey.com), and each member of the ad hoc group was asked to rank each recommendation on a Likert scale from 1 to 5 (1 being ''not important at all'' and 5 being ''extremely important''). The average number of points for each recommendation was calculated. The results were preliminarily presented in a PC conference organized by the PAV in Rome in March 2018 and, subsequently, discussed by the ad hoc group in a new face-to-face meeting with a subset of the experts. During this meeting, a thorough review of the recommendations and suggestions for implementation was conducted to improve wording. In Round 4, 12 members of the ad hoc group reviewed their previous ratings and conducted another round of rankings of the stakeholder groups based on their perceived importance for PC development. Using a 13-point scale, points were assigned to each, according to the ranking given by each member [range: 12 (worst = 1 point per expert) to 156 (best = 13 points per expert)]. An exploratory K-means cluster analysis provided a classification of the stakeholders in different levels of importance for PC development. As final step in this Round 4, members of the ad hoc group were asked to provide suggestions for implementation for each of the recommendations. PAV endorsement of the recommendations and presentation of outcomes The resulting recommendations from each stakeholder group were revised and agreed upon, then endorsed by the Board of Directors of the PAV. 
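As an illustration of the Round 4 scoring and the exploratory cluster step described above, the sketch below tallies 13-point rankings from 12 experts into total scores (range 12 to 156) and splits the stakeholder groups into two importance levels using a simple one-dimensional two-means clustering. The ranking matrix is randomly generated for illustration and is not the ad hoc group's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_groups = 12, 13

# Illustrative data only: each expert ranks the 13 stakeholder groups,
# awarding 13 points to the most important group and 1 to the least.
rankings = np.array([rng.permutation(np.arange(1, n_groups + 1)) for _ in range(n_experts)])
total_scores = rankings.sum(axis=0)          # per-group totals, each between 12 and 156

def two_means_1d(x, n_iter=100):
    """Lloyd's algorithm for k = 2 clusters on one-dimensional scores."""
    centres = np.array([x.min(), x.max()], dtype=float)
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centres[k] = x[labels == k].mean()
    return labels, centres

labels, centres = two_means_1d(total_scores.astype(float))
print("cluster means:", centres)                               # the paper reports 52.4 and 103.4
print("higher-importance groups:", np.where(labels == centres.argmax())[0])
```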
The endorsement will be announced during the plenary session of the annual meeting of the PAV ( June 2018) as the official position of the Academy and as the recommendations of PAL-LIFE. In this article we present the five highest ranked recommendations (''first-line'') with concrete suggestions for implementation. The additional eight recommendations for the remaining stakeholders' groups are presented as second-line interventions. All the recommendations are accompanied by a description, rationale, and bibliographic references. Additional Scores on relative importance (range 1-156) and K-means for cluster analysis. recommendations discussed by the group, but not ranked within the highest scores for importance, are included in a report on the PAV website (www.academyforlife.va). Results Thirteen stakeholder groups and 43 recommendations resulted from the first round, and of those, 13 recommendations were chosen as the most important (1 for each stakeholder group) and are presented in this study. These, plus the additional 30 recommendations are available in the PAV website. Table 2 indicates the stakeholder groups and the total scores each received through the ranking (range 1-156). The K-means cluster analysis confirmed the existence of two levels in the ranking of the stakeholders' groups as follows: five groups had higher scores (closer to K-mean 103.4) and nine stakeholder groups lower ones (closer to K-mean 52.4). First-line stakeholder groups for advocacy are (1) policy makers (in all units of government); (2) academia (universities and colleges); (3) healthcare workers; (4) hospitals and healthcare centers; and (5) PC professional associations. The recommendations for first-line stakeholders are included in Table 3 along with suggestions for their implementation. Second-line stakeholder groups (Table 4) are (6) international organizations; (7) mass media; (8) philanthropic organizations and charities; (9) pharmaceutical authorities; (10) patients and patient groups; (11) spiritual care professionals; (12) non-PC professional associations and societies; and (13) pharmacists. The recommendations can be seen in Table 4. Reflections for the advocacy of PC to first-line stakeholders' groups Policy makers. Patients with chronic progressive diseases, such as cancer, congestive heart failure, chronic obstructive pulmonary disease, and HIV/AIDS, develop severe physical, psychosocial, and spiritual symptoms before death. 1,18 There is strong evidence that PC is beneficial in reducing much of this suffering in patients, as well as psychosocial and spiritual or existential distress in families. 19 There is strong evidence that these benefits are accompanied by a reduction in the total cost of care. 20 Cost savings are achieved mainly by preventing unnecessary disease-oriented investigations and treatments, as well as hospitalizations in acute care hospitals and intensive care units. [21][22][23][24][25] Value in healthcare results from the balance between benefits and costs. PC has demonstrated impact on both components of value. Academia (universities and colleges). According to the UN Committee on Economic, Social and Cultural Rights (CESCR), Member States are required to ensure universal access to PC. This obligation includes the duty to ensure that healthcare workers meet appropriate standards of education. 4 Accordingly, the WHO urges Member States to integrate basic PC training into all undergraduate medical and nursing professional education. 
11 In other words, international law stipulates that governments and universities of Member States provide adequate training of healthcare workers pursuant to the principles laid out by the WHO. 26 Studies also suggest that early and continuous student exposure to PC education is associated with positive attitudes and increased satisfaction toward PC among undergraduate medical students. 27 Studies also demonstrate undergraduate nursing students' belief that PC training should be an essential component of their education, contributing favorably to both their personal and professional development. 4 Complete integration of PC courses into all undergraduate curricula for future healthcare workers is both an obligation under international law and an evidence-based educational strategy. Healthcare workers. In addition to requiring basic-level PC training for all undergraduate medical and nursing professional education, the WHO urges Member States to ensure intermediate-level training to all healthcare workers who routinely encounter patients with life-threatening illnesses and to fully integrate PC into healthcare in every setting, specifically highlighting community settings, and throughout the course of advanced illnesses. Member States are also required to provide specialist-level training to prepare healthcare professionals who will engage in more than routine PC practice. 4 This means that healthcare workers must receive appropriate certification, acquiring competences that are required by the proper standards of certification. Specialist-level training is of particular importance in places where the role of PC specialists has not yet been institutionalized. Hospitals and healthcare centers. Modern medical science, unfortunately, based increasingly on technology, has become so disease oriented as to neglect the human being. Health-related suffering is often ignored. Persistent attempts at treating the disease, even in the face of futility of treatment, cause, in addition to physical, social, and mental suffering, financial difficulties and spiritual distress. In his address to participants in the Plenary of the PAV (Clementine Hall, March 5, 2015) Pope Francis said, ''I therefore welcome your scientific and cultural efforts to ensure that PC can reach all those who need it. I encourage professionals and students to specialize in this type of assistance, which has no less value on account of fact that it does not save lives. PC recognizes something equally important, the value of person.'' 15 The World Health Assembly in its landmark Resolution of 2014 4 called upon all Member States to integrate PC in Healthcare at all levels (primary, secondary, and tertiary) across the continuum of care (from the time health-related suffering starts until the death of the patient and continuing thereafter in the form of bereavement support for the family). PC associations. Patients who require PC often have diverse and overlapping illnesses and may be staying at home, in long-term care facilities, nursing homes, and hospitals. Delivery of holistic services to patients requires multidisciplinary teams which may work in the national/public health system, the church, or nongovernment sectors. 4 These teams need to plan their interventions based on the needs of the patient, whether adult or child, and the patient's family. 
28 To develop the skills and improve their knowledge, the members of the multidisciplinary teams rely on guidelines and recommendations from PC associations and societies that often work with governments, other civil society agencies, donors, and promoters of PC to set up functional 1392 CENTENO ET AL. Ensure training in the trainer courses, also in primary healthcare teaching. 3. Healthcare workers: Healthcare professionals working in palliative care should receive appropriate certification while actively participating in continuing education to maintain the adequate competency levels Suggestions for implementation: Reach out to the national boards of medicine and nursing and the Ministries of Health and education through National Associations to advocate for the recognition of palliative care as a specialty. Establish a working group among members of the board of medicine and the board of nursing with palliative care experts in the country to determine the minimum level of competencies, knowledge and skills in palliative care, and years of dedication required to be recognized as palliative care professional. Standardize health professional education with basic and specialty certification programs according to each country's process of healthcare professional official certification 4. Hospitals and healthcare centers: Every hospital and healthcare center should ensure affordable access to palliative care medicines included in the WHO Model List of Essential Medicines, particularly to immediate-release oral morphine. It also should accept palliative care provision as a moral and ethical imperative. Suggestions for implementation: Ensure training of all staff in the fundamentals of palliative care Define a palliative care integration strategy for the hospital or Health Center To establish a minimum dataset to monitor the quality of care in advance disease and end of life 5. Palliative care associations: Representatives of national associations should be effective advocates and work with their governments in the process of implementing international policy framework, including Conventions, Resolutions, and Declarations in their countries (i.e., UNGASS outcome document, Agenda 2030, WHA Resolution). Suggestions for implementation: Implement advocacy workshops with representatives of national associations to empower representatives of civil society so that they adopt the skills to do effective advocacy campaigns and strategies. National associations have the power and legitimacy to request and demand from their governments the implementation of the international policies and frameworks which call for the inclusion of palliative care in the national policies and programs, the strengthening of NCD programs, and the adoption of the SDGs in the Agenda 2030. Work to set national standards in palliative care, including primary and specialist palliative education, and training and work with both governmental and nongovernmental stakeholders to develop a national palliative care strategy integrated into universal healthcare. NCDs, noncommunicable diseases; SDGs, sustainable development goals; WHO, World Health Organization. capacity building, service delivery, and research networks. 29 These build a system that can reach even the most disadvantaged communities not reached by conventional healthcare systems. 30 Reflections for the advocacy of PC to second-line stakeholders' groups International organizations. 
Recognizing that more than 75% of the world has no access to PC services, WHO Member States unanimously adopted WHA Resolution 67/19 in 2014. In 2015, UN Member States unanimously adopted Agenda 2030 for Sustainable Development with the pledge to ''leave no one behind.'' Leaving no one behind means that UN Member States and agencies must collaborate to develop integrated, human rights-based policies and procedures to realize their key public health outcomes. Human rights-based public health policies make integrated person-centered services available to all citizens, migrants, and refugees of all ages in all settings: home, hostel or hospice, rural or urban clinic, hospital, and long-term settings such as nursing homes and prisons. 3,16,[31][32][33][34][35][36][37][38] Mass media. There is a misconception about PC, both among the general public and among healthcare professionals, that it is synonymous with end-of-life care. 39 PC is not just for the dying. With this understanding comes an imperative for patients to receive PC earlier in their disease trajectory. 40 This requires a cultural shift that starts with physicians and extends to the general population. Philanthropic organizations and charities. PC must be integrated into national health systems around the world. National governments have not provided adequate financing to support PC development, and nongovernmental organizations, professional organizations, foundations, faith-based organizations, charities, charitable trusts, and development agencies have played important roles in the development of hospice and PC at the international and community levels, providing both medical and social support. With the potential for governments to provide universal health coverage (UHC) and a basic package for PC, all donor organizations must work with PC providers to develop innovative educational and social support systems. [41][42][43][44] Pharmaceutical authorities. Morphine is recommended by the WHO as the first-line strong opioid for the management of moderate-to-severe cancer pain in adults and children. [45][46][47][48] Although it is available in different formulations, 49 the availability of cheap immediate-release oral morphine is recommended as a priority for reasons such as affordability and flexibility in use. 50 Although other, newer strong opioids should also be made available, their availability should not be considered a replacement for the availability of morphine.
[Recommendations table, continued:]
To the mass media: Mass media should be involved in creating a culture of understanding around advanced illness and the role of palliative care throughout the life course and as a component of UHC.
8. To philanthropic organizations and charities: Individuals and organizations involved in palliative care must engage, educate, and advocate for philanthropic organizations and charities to support palliative care development and implementation of services.
9. To pharmaceutical authorities: Morphine (preferably in an immediate-release oral formulation) is the preferred medication for the treatment of moderate/severe cancer pain and palliative care and should be made available and accessible. No government should approve modified-release morphine, transdermal fentanyl patches, or slow-release oxycodone without also guaranteeing widely available immediate-release oral morphine.
10. To patients and patient groups: Patients and patient groups could be of great help in developing and demanding a health literacy campaign for all patients with PC needs and their families to increase the knowledge and understanding of PC and its role in the decision-making process.
11. To spiritual care professionals: Religious institutions and spiritual care groups should work to include spiritual care (including ongoing assessment of spiritual distress and spiritual well-being) integrated into guidelines of care and as a component of routine palliative care provision.
12. To professional associations and societies other than palliative care: Non-palliative care professional associations and societies should encourage human rights organizations to consider existing declarations and to implement strategies whose aim is advancing palliative care development worldwide within a human rights framework.
13. To pharmacists: Pharmacists should play an active role in palliative care teams by assessing the appropriateness of the medicines prescribed to patients, by ensuring timely dispensation, by educating the team members about pharmacological interactions, and by ensuring that patients and caregivers understand the prescribed regimen to ensure adherence to treatment.
PC, palliative care; UHC, Universal Health Coverage.
Patients and patient groups. Health illiteracy, even in countries where PC is well developed, is an obstacle to the early integration of PC, which improves therapy. 40 Mistakenly, some patients may perceive that alleviating symptoms is a way to hasten death. 51 There is significant health illiteracy, and patients and families are not aware that PC can be given concurrently with active, disease-oriented therapies. Education targeted to these groups can help to dispel the myths that PC hastens death or is only a care approach for dying patients. Spiritual care professionals. The WHO has recognized spiritual care as a required element of PC. Spiritual distress (spiritual or existential suffering) needs to be addressed by all members of the team to provide the best quality care and to help relieve the suffering of patients and families. Several U.S. and international consensus conferences have developed definitions and models for addressing spiritual distress in the clinical setting. 52 Religious leaders should advocate for the inclusion of interprofessional spiritual care in PC and for appropriate training of all clinicians in providing spiritual care to patients and families, as well as for developing, training, and helping to sustain adequate staffing of healthcare chaplains in all health settings. 53,54 Non-PC professional associations and societies. Acknowledgment of pain relief and PC as a human right has been widely declared by many institutions and organizations. 4,10,49,55,56 Pharmacists. PC patients often need to take multiple medications simultaneously and, as a result, have an increased risk of drug interactions and drug-related problems with the essential medicines for PC. Pharmacists have more knowledge of medications and their effects than any other member of the healthcare team and are, therefore, the best equipped to detect possible problems and make the appropriate recommendations. 57,58 Discussion This article presents the consensus of 13 PC experts from around the world, in line with the PAL-LIFE objectives, on what are considered the most important recommendations to 13 groups of stakeholders to help advance PC development.
Some of the recommendations are applicable to several stakeholder groups (e.g., the recommendation for pharmaceutical authorities on morphine availability should also be directed to lawmakers, administrators, pharmaceutical manufacturers, dealers, and PC advocates, and the recommendation to universities should also be presented to healthcare workers and educators). Many of the items presented in this study require a coordinated approach. Globally, a majority of patients die with severe pain without having ever received a single dose of morphine or other opioid analgesic. To address this tragic situation, it is important to harmonize the need for increased access to opioids for pain treatment, while taking into consideration the abuse potential and adverse effects. This requires a coordinated approach among policymakers, universities, pharmacists, and professional associations so that safety measures are put in place for the goals to be achieved. The recommendations in this White Paper focus on crucial issues. However, optimal situations may require more comprehensive and broader recommendations (e.g., the recommendation for pharmaceutical authorities on morphine availability should be accompanied by a statement clarifying that more than one low-priced opioid is needed since up to 80% of patients may need opioid rotation at some point, even though only morphine is specifically recommended in this study). The critical issue is that governments should take the necessary steps to ensure access to PC medicines included on the WHO Model List of Essential Medicines, including morphine as the gold standard and all the others in the List. Similarly, some recommendations may not capture the importance of spiritual care, which is equally important to the physical and psychosocial domains. Spiritual, religious, and existential concerns are also dimensions which require care and should be addressed, registered, monitored, and managed by the PC team. One limitation of this article is that it is based on the consensus of a small group (13) of PC experts and was later approved by the board of directors of the PAV. A larger group would probably have resulted in additional stakeholder groups, which would have broadened the scope of this position article. For these reasons, the group strongly recommends considering the recommendations broadly, while taking into account the additional 30 agreed-upon recommendations available on the website of the PAV (www.academyforlife.va). This white paper represents a position statement of the PAV with regard to PC. Caring for the sick has been part of the missionary activity of the Catholic Church since its inception. The Church refers to PC as ''a special form of disinterested charity. As such it should be encouraged'' (Catechism of the Catholic Church, n. 2279). The Magisterium of the Catholic Church has intervened several times in recent years to emphasize the dignity and preciousness of each human being, even of those who are afflicted with serious or terminal illnesses. Recently, Pope Francis described PC as ''an expression of the truly human attitude of taking care of one another, especially of those who suffer. It is a testimony that the human person is always precious, even if marked by illness and old age. [...] Thus, I appreciate your scientific and cultural commitment to ensuring that palliative care may reach all those who need it. I encourage professionals and students to specialize in this type of care that is no less valuable for the fact that it 'is not life-saving.'
PC accomplishes something equally important: it values the person.'' 15 The Christian movement, consistent with the teachings of Jesus of caring for the destitute, the vulnerable, and the poor, has developed and built large care networks which include hospitals, clinics, and health centers throughout the world. Faith-based hospitals and healthcare institutions, from local clinics to tertiary research institutions, are all sites where PC fits in as part of the concept of care and solidarity, as well as a component of care within the health system. In many countries, regardless of the most prevalent professed faith, a significant number of healthcare facilities are operated by the Catholic Church and other Christian denominations. With such a large network, the Church has the opportunity to lead a major movement to relieve the suffering of millions of patients and their families. This White Paper may be used as a checklist for countries or regions to identify and implement basic strategies to improve the care for patients and families with PC needs. It can also serve as the basis for development of a more comprehensive list of recommendations adapted to the institutions or groups within each stakeholder group or specific geographical contexts. It will be undoubtedly useful for advocacy with local governments, faith-based communities, and others. In summary, this White Paper emphasizes the responsibility of healthcare systems and stakeholders to recognize access to pain relief and PC as a basic right of the person and the family and the responsibility of all elements of the healthcare system. For this, it is necessary to recognize health as not only an absence of disease but also as physical, emotional, social, and spiritual well-being, which can be optimized only by making essential PC medicines available, governments integrating PC into their healthcare plans and UHC, and developing public and professional education, as well as clear frameworks for implementing this care to prevent needless suffering. The support of faith-based and philanthropic organizations, nongovernmental and governmental actors, and human rights organizations is needed to support PC integration. In short, a civil society response is needed.
2018-10-02T01:19:39.545Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "e9c0bb0cba146b10a18cfaa4a3386df998748ff1", "oa_license": "CCBY", "oa_url": "https://www.liebertpub.com/doi/pdf/10.1089/jpm.2018.0248", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5a9e5c22ec94edc89a125d244c7535b37ca1b35f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14768928
pes2o/s2orc
v3-fos-license
New Insights into Polychaete Traces and Fecal Pellets: Another Complex Ichnotaxon? Neoichnological observations help refine paleoichnological records. The present study reports extensive observations on the distribution, morphology, occurrence and association of burrows and fecal pellets of the polychaete Nereis diversicolor in the Kundalika Estuary on the west coast of India. Our holistic study of these modern-day traces suggests it to be a complex trace arising from domichnial, fodinichnial and possibly pascichnial behavior of polychaetes. The study for the first time reports extensive fecal pellet production, distribution and their preservation as thick stacks in modern estuarine environment. These observations testify the fossilization potential of pellets and provide an explanation to their origin in the geological record. Their occurrence as strings associated with mounds not only suggests pascichnial behaviour of polychaetes but also allows the assignment of post-Paleozoic Tomaculum to the activity of polychaete worms. The production of fecal pellets in such large quantities plays a major role in increasing the average grain size of the substrate of these estuarine tidal flats, thereby improving aeration within the substrate. Introduction Fecal pellets comprise a vital group of trace fossils, especially when found in conditions where preservation of soft bodied organisms is absent or poor. They have been used to reconstruct almost all types of paleoenvironments and paleoecologies throughout time [1][2][3][4]. However, systematic neoichnological studies have provided modern analogues to calibrate interpretations of the fossil record. This has led to a better understanding of organismal media interactions, their behavior and preservation potential into the geological record. The present study describes the occurrence, distribution and architecture of a complex trace of polychaetes, giving new insights to pellet-burrow associations reported from the fossil record. Polychaete burrows associated with mounds of pellets are the most common traces visible on the vast tidal flats of the Kundalika Estuary. The Kundalika is a major river meeting the Arabian Sea at Revdanda in the Central West Coast of India (Fig 1A), which originates at an altitude of 820 m above sea level about 150 km southeast of Mumbai. It flows in a southeastnorthwest direction, has a funnel shaped mouth and the estuary is dominated by semi-diurnal tides. Of the total 40 km length of the estuary, the lower 27 km are the tidal stretch. The width of the channel increases considerably from the upper (150 m) to the lower reaches (600 to 700 m). Upstream the estuary is a drowned valley that opens up into a wide channel with expansive tidal flats in its middle reaches. The tidal flats in the upper estuary (50 to 150 m wide) are characterized by marshlands, whereas those in the middle and lower estuary (100 to 900 m wide) support dense mangrove vegetation (Fig 1B). The study area experiences tropical warm, humid climate throughout the year. The temperature ranges between 25°C and 35°C and the average annual rainfall is 3750 mm. Methodology In order to study the ichnoactivity in the Kundalika estuary, traverses were taken through the tidal flats, in the lower, middle and upper reaches of the estuary. Variation in the distribution of polychaete burrows was observed and recorded in field. The polychaetes which created the burrows were isolated wherever possible and narcotized using pure Ethanol for identification. 
Pellets associated with the burrows were collected for detailed microscopic observation. Sediment samples were collected in order to determine sediment textures supporting the burrows. Thermo Scientific Orion Star A329 portable, multiparameter meter was used to record environmental parameters such as temperature, salinity, pH and dissolved oxygen (Table 1) on water samples collected at the same sites where shallow cores were extracted for the study. Undisturbed sediment cores, both circular (length 1 m; diameter 11.5 cm) and box-shaped (20 x 18 x 22 cm 3 ) (Fig 2A), were extracted from the tidal flats for laboratory studies. The cores were cut open after a week or two, after little desiccation, to enable observations. The dried sediment cores and pellets were observed under the microscope. Scanning Electron Microscopic (SEM) imaging of pellets was done on a Zeiss EVO MA 15 machine in order to describe their surface ultrastructure. In order to determine the grain size distribution of the substrate,~15g of sediments collected in three replicates from different locations studied in the estuary were dried overnight at 60°C. Each dried sample was weighed and soaked in distilled water. They were subsequently treated with 10 ml of 10% sodium hexa-metaphosphate to dissociate the clay particles followed by 5 ml of 10% hydrogen peroxide to oxidise the organic matter, if present. The treated samples were wet sieved through 63 μm (250 mesh) size sieve. The sand residue retained over the sieve was dried at 60°C to get the weight of the sand fraction. The filtrate collected in a measuring cylinder was used for pipette analysis to determine the silt and clay fraction in each sample. It is declared that the field area was in public domain and was not part of any protected area / sanctuary, nor was it any private property. So no permissions were required to sample the study area. Polychaetes are not protected/endangered/scheduled animals. Observations Circular openings of polychaete burrows, visible to the naked eye, dominate the tidal flats in the lower reaches of the estuary. They become more conspicuous in occurrence as well as size, away from the low tide line (~40 m from LTL), as the substrate becomes firm. They are often associated with dispersed fecal pellets (difficult to identify and associate in the field) or with aggregated pellet mounds, located very close to the burrow openings (Fig 2B and 2C). The sediment texture is silty-mud (78% Mud = Silt 48% + Clay 30%). The consistency of the substrate is also controlled by 40-50% water content, causing the pellet mounds to splay. In the vegetated part of the tidal flat the water content of the sediment is low and the ground is comparatively firm. The burrow openings are well defined, circular in surface manifestation, and vary between 0.8 to 1.2 mm in diameter. The polychaete isolated from the tubular burrows is identified as Nereis diversicolor. As observed in field and from sediment cores, the worm makes tubular, branched burrows of uniform width throughout their length. These burrows are associated with a light halo throughout their length. These straight to slightly sinuous burrows extend to a depth of about 20 cm. (Fig 2D). They may or may not be associated with pellet aggregates that reach a maximum size of about 3 mm ( Fig 2E). Depending upon their overall size, the pellet aggregates consist of 25 to 1000 pellets each. The individual pellets are compact, well defined and elliptical in shape, with blunt ends. 
Their long axes range in length from 310 to 450 μm and their short axes from 130 to 220 μm. The length/width ratio (L/W) ranges from 2 to 2.5 (Avg. = 2.3±0.20). The surface of the pellets show inclined, fine striations bifurcating away from the long axes, which could be attributed to their ejection through the anus. (Fig 2F and 2G). The attachment scar along which pellets adhere to one another, is manifested as a broad, shallow furrow ( Fig 2F). However, they do not exhibit any differentiated internal structures. The pellets commonly show surface scars and minimal deformation due to compaction. Microscopic observation of the box core surface revealed that the entire surface of the sediment core is covered with a continuous layer of randomly oriented pellets (Fig 2H). The pellets occur as small mounds only when associated with a burrow opening. They also sometimes occur as strings across the surface. These strings commonly connect burrows openings (Fig 2I). Close observation of the side walls of the box cores shows compact layering of pellets, with their circular transverse sections stacked tightly upon each other. The entire surface constituting the lateral side of a core is stacked with pellets bound together ( Fig 2J). Deformation of the pellets due to compaction is not observed. The internal structure of the pellets is identical to the texture of the surrounding sediments. The dense accumulation of pellets seems to alter the original texture of the sediments on the tidal flat, from fine grained muddy, to coarser pelleted sands. Examination of the internal, desiccation parting surfaces of the core also reveals randomly oriented, but laminate stacking of pellets, parallel to the tidal flat surface. Upon drying, each of these pellets shows a dark colored core with an external, ultrafine, light colored, shiny lining comprised of crystalline, saline precipitates (Fig 2E). Polychaete burrows are not evident on the surface of the intertidal mudflats in the middle (12 km inland) and upper reaches (24 km inland) of the Kundalika estuary. In the upper reaches, the sediment texture is more muddy (Clay 74% + Silt 25% + Sand 1%) than that in the lower reaches. The flats, though exposed during low tide, are characterized by dense marsh and are always water logged. However, abundant polychaete burrows were observed in the sediment cores obtained at these locations. Drying or loss of water content in the sediments, post collection, could have enabled burrowing and / or these observations. The top surface of the sediment core showed a network of burrows, parallel to the bedding plane. Similar networks of burrows were observed along the periphery of the sediment cores, perpendicular to the sediment surface, up to a depth of about 16 cm (Fig 3A). Most of the burrows were stuffed with ellipsoidal fecal pellets, which showed no preferred orientation (Fig 3B and 3C). At some places, the inner surface of the burrow showed well-defined annulations (Fig 3D and 3E). The polychaete burrows either occur as circular openings on the tidal flat surface, or as a network of unlined, tubular and irregularly branching burrows. The internal surfaces of these burrows also occasionally display annulations. Very commonly these burrow openings are associated with pellet mounds / aggregates. These burrows are also seen packed with pellets, where the clay percentage and water-logging are considerably high. 
These pellets also occur as strings across the surface of the tidal flat, which is itself covered with a blanket of fecal pellets. As seen in Fig 2H, the strings often connect two pellet mounds associated with a burrow. Stacking of numerous laminae constituted by pellets characterizes the cross section of the tidal flat in the lower reaches of the estuary. Discussion Polychaete burrows associated with pellet mounds have been widely studied and reported from modern settings [5][6][7][8]. However, the present study for the first time reports them as a composite trace. The observations compiled above provide a holistic view of the intense bioturbation by polychaetes and have ichnological, sedimentological and paleoenvironmental implications. Ichnological implications The individual components constituting this complex trace, when seen in isolation, can be referred to various ichnotaxa / fossil analogues. The burrows can well be compared with the ichnogenus Trichichnus, which is reported to be eurybathic, associated with fine grained sediments, and attributed to marine meiofaunal deposit-feeders [9]. However, the same burrow featuring annulations can be compared with Planolites annularius [10]. Burrows filled with pellets are correlatable with the ichnogenus Alcyonidiopsis [10], although this ichnogenus and all its species are characterized by burrows with a diameter of 5 to 7 mm and pellets with diameters of 400 to 600 μm, which are much larger than those observed in the present study. The pellets in the present study and their aggregates are most comparable with Tibikoia [11], though the pellets described here are again much smaller in size. Tibikoia is an ichnogenus used to describe oblong-shaped fecal pellet aggregates only. They have been attributed to polychaetes and are about 1 mm in length and 0.5 mm in diameter; individual aggregates attain a maximum diameter of 20 mm. Tibikoia is now regarded as the junior synonym of Coprulus [12]. Baluk and Radwanski [13] described a new ichnospecies Tibikoia santacrusensis and attributed the pellets to polychaete annelids, presumably related closely to the present-day species Heteromastus filiformis (Claparède). The burrows of H. filiformis are also single aperture burrows showing subsurface branching identical to those found in the present study and are also reported from estuarine mud areas [14][15]. The strings of pellets, interrupted by small mounds or aggregates seen on the surface of the tidal flat, can be compared with the ichnogenus Tomaculum [16]. This ichnogenus describes strings of elliptical fecal pellets up to 10 cm long and 1 to 2 cm wide and lying on the bedding plane. Pellets therein (1-5 mm in length and 0.5 to 1.5 mm in diameter) occur in clusters which are loosely strung together and are attributed to trilobites or annelid worms [17]. However, fossil fecal pellet strings identifiable as Tomaculum in post-Paleozoic rocks cannot be attributed to trilobites. In contrast to our existing knowledge of organisms creating such pellets, post-Paleozoic Tomaculum can now also be attributed to the activity of polychaete worms, based on the observations of the present study. Additionally, the fact that the pellet strings recorded on the surface of the tidal flats often connect two pellet mounds associated with a burrow is suggestive of the pascichnial behavior of the polychaetes.
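Because several of these comparisons hinge on pellet dimensions, a small dimensional screen may help make the reasoning concrete. The size envelopes below are taken, or rounded where marked, from the figures quoted in this discussion; Alcyonidiopsis is omitted because only its pellet diameter is quoted. This is purely illustrative: a genuine ichnotaxonomic assignment rests on far more than size (burrow association, aggregation style, preservation).

```python
# Rough dimensional screen only: checks which of the quoted size envelopes a
# measured pellet falls into. All values are in micrometres.

PELLET_SIZE_ENVELOPES_UM = {
    # name: (min_length, max_length, min_width, max_width)
    "present study (Nereis diversicolor)": (310, 450, 130, 220),
    "Tibikoia (approx.)":                  (800, 1200, 400, 600),    # ~1 mm long, ~0.5 mm wide
    "Tomaculum":                           (1000, 5000, 500, 1500),  # 1-5 mm long, 0.5-1.5 mm wide
}

def matching_envelopes(length_um, width_um):
    """Names of all size envelopes that contain the given pellet dimensions."""
    return [
        name
        for name, (lmin, lmax, wmin, wmax) in PELLET_SIZE_ENVELOPES_UM.items()
        if lmin <= length_um <= lmax and wmin <= width_um <= wmax
    ]

if __name__ == "__main__":
    length_um, width_um = 380, 165                        # a typical Kundalika pellet
    print("L/W ratio:", round(length_um / width_um, 2))   # ~2.3, as reported above
    print("Size envelopes matched:", matching_envelopes(length_um, width_um))
```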
Sedimentological implications Although beds of pelleted muds have not been reported from modern environments, early diagenetic, syndepositional pellet sands retaining their original texture have been reported from Danish Tertiary sediments ( [3] and references therein). In that study, Friis suggested the fast deposition of a 10 cm layer of pelleted mud in an environment dominated by relatively high and slightly variable energy conditions. This was attributed to the lack of bioturbation, silt grains intermingling with the pellets, and current conditions as indicated by the slight imbrication of the pellets. It was argued that the production of fecal pellets by deposit feeders cannot explain the deposition of mud, and instead only leads to the redeposition and rearrangement of the mud in current-bedded sediment. Instead, the pelleted sands were considered to result from the transport of pelleted mud from areas of suspension deposition into areas of more regular current deposition, and to represent the recycling of mud that was originally deposited by suspension feeders as their fecal pellets, as described by Pryor [18]. Pelleted laminations accumulated in the box cores extracted from tidal flat sediments of the Kundalika Estuary, offer renewed insights into several such geological records. The polychaete pellets deposited as mounds and strings on the surface of the tidal flats get splayed across the sediment surface as the water level gently rises through the high tide cycle. It is emphasized here that the coarse sand sized pellet aggregates seem to obliterate or overshadow the presence of associated burrows due to their concentration and/or their cementation in no particular alignment. This modern analogue could explain the formation and preservation of pelleted mud laminations in the Oligocene Vejle Fjord Formation of the Danish Tertiary. The fossilization of such fecal pellet accumulations seems possible only in the event of them being subjected to early cementation. In the Kundalika Estuary, the compact and cohesive fecal pellets are bound together in stacks by organic matter and salt matrix deposited by interstitial waters, facilitating the retention of the original shape of the pellets at this stage of early cementation. These modern and fossil occurrences of pellet beds testify the preservation potential of polychaete pellet sands in the geological record. Another important sedimentological connotation to be emphasized is the modification of sediment texture due to the aggregation of clay and silt sized particles into sand sized sediments, thereby improving the porosity of the sediment, leading to enhanced aeration in the surficial sediments. Paleoenvironmental implications The formation of pellet sands has biogeochemical implications. The alteration of sediment textures leads to better oxygenation of tidal mudflats, greatly reducing the preservation of organic matter. Interstitial oxygen in tropical, estuarine, tidal mudflats is crucial in the preservation of organic matter and preservation of peat. In this context the depth of bioturbation by the polychaetes is significant. Polychaete burrows have been reported up to a depth of 50 cm in sandy intertidal zones [7] where interstitial aeration is better. In the present study, where the substrate comprises 90 to 98% mud, the polychaete burrows are limited to a maximum depth of 20 cm within the sediment column, suggesting severe depletion in interstitial oxygen. 
The light halo consistently associated with these burrows represents an oxidized zone resulting from polychaete respiration. Kędzierski et al. [19] have reported light haloes lining only those Trichichnus burrows which are pyritized. They attribute the halo to the oxidation associated with pyritization. The complexity and distribution of the traces of the deposit feeding polychaetes discussed above suggests a developmental comparison with other composite traces, which attain different morphologies with progressive stages of development (e.g. Hillichnus lobosensis [20][21]). Thus, it is proposed that the making and distribution of these polychaete traces need to be revisited. Either this trace has in the past been observed in parts due to limitations in preservation, observations and/or due to ignorance about apparently inconspicuous association of its different elements. Conclusions Fecal pellets and associated burrows observed in the study area can be considered as modern analogues of different the ichnogenera, namely Alcyonidiopsis, Planolites, Coprulus, Tomaculum and Trichichnus. Here they are reported as complex traces [22] because the pellets occur as aggregates, strings, beds, and also stuffed within burrows, which are branched, unlined, characterized by a halo and occasionally annulated within. They are created by a combination of the feeding and dwelling activities of polychaetes. The present neo-ichnological study confirms the interpretation of Alcyonidiopsis / Tomaculum as indicators of feeding behavior of polychaetes [23][24]. However, that they represent pascichnial behavior [25] could not be ascertained due to lack of enough evidence. during field work is gratefully acknowledged. The help by Dr. Hemant Ghate, Emiritus Scientist in Zoology, Modern College of Arts Science and Commerce, Pune, in identification of the polychaete is greatly appreciated. We thank Dr. K.M. Paknikar, the Director of Agharkar Research Institute for supporting this study. We are thankful to Dr. Steffen Kiel, Dr. Rene Hoffman and another anonymous reviewer for their detailed, helpful reviews.
2017-04-30T16:01:11.571Z
2015-10-06T00:00:00.000
{ "year": 2015, "sha1": "06038865ec15fcb8bc81573cf961c2294973dbbb", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0139933&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "06038865ec15fcb8bc81573cf961c2294973dbbb", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15130820
pes2o/s2orc
v3-fos-license
MACiE: exploring the diversity of biochemical reactions MACiE (which stands for Mechanism, Annotation and Classification in Enzymes) is a database of enzyme reaction mechanisms, and can be accessed from http://www.ebi.ac.uk/thornton-srv/databases/MACiE/. This article presents the release of Version 3 of MACiE, which not only extends the dataset to 335 entries, covering 182 of the EC sub-subclasses with a crystal structure available (∼90%), but also incorporates greater chemical and structural detail. This version of MACiE represents a shift in emphasis for new entries, from non-homologous representatives covering EC reaction space to enzymes with mechanisms of interest to our users and collaborators with a view to exploring the chemical diversity of life. We present new tools for exploring the data in MACiE and comparing entries as well as new analyses of the data and new searches, many of which can now be accessed via dedicated Perl scripts. INTRODUCTION Enzymes make the wonderful diversity of life possible, from thermophiles that exist under incredibly harsh conditions to the complexity of higher organisms, such as humans. However, despite their importance and our continued fascination with these often complex proteins we still have a relatively limited understanding of how they function. Since 1964, when the Enzyme Commission (EC) first published their rules for enzyme nomenclature and their system to classify the overall reaction that an enzyme performs (1), there have been over 5000 EC numbers assigned, although 836 have been subsequently either transferred to other EC numbers, or deleted (data correct as of June 2011). The first proteins with a fully defined sequence and assigned identifier from the curated portion of UniprotKB (Swiss-Prot) (2) were deposited in the 1980s, and the first crystal structures relating to an enzyme were deposited in the wwPDB (3) in the early 1970s. Since then, the growth of information has been persistent ( Figure 1A); however, there are still some significant gaps in our knowledge ( Figure 1B). Of the 4528 currently active EC numbers, only 2792 have a sequence in Swiss-Prot that has a fully assigned EC number (i.e. a catalytic activity with all four levels of the EC number assigned), and of those only 1761 also have an associated structure deposited in the wwPDB, although not all of these will have a reliable mechanism published in the primary literature. Despite this apparent lack of data, there is a great deal of knowledge available, including structures, gene sequences, mechanisms, metabolic pathways and kinetic data. However, these data tend to be spread between many different databases and throughout the literature. Most web resources relating to enzymes [such as BRENDA (4), KEGG (5), SABIO-RK (6), the IUBMB Enzyme Nomenclature website (1) and IntEnz (7)] focus on the overall reaction, accompanied in some cases by a textual or graphical description of the mechanism. MACiE (8,9), which stands for Mechanism, Annotation and Classification in Enzymes, is a collaboration between the Thornton group (EMBL-EBI), Mitchell group (University of St Andrews, Scotland) and Bertini group (University of Florence, Italy) and was designed to provide a computational description of mechanism by including detailed stepwise mechanistic information for a wide coverage of both chemical space and the protein structure universe. 
First published in 2005 (9), MACiE usefully complements both the mechanistic detail of the Structure-Function Linkage Database (SFLD) (10) which provides information for a small number of rather 'promiscuous' enzyme superfamilies, and the wider coverage with less chemical detail provided by EzCatDB (11) and the Catalytic Site Atlas (CSA) (12). Entries in MACiE are linked, where appropriate, to all of these related data resources. MACiE is also proving a useful resource for understanding how enzymes catalyse the vast array of chemistry with such a (relatively) limited repertoire of catalytic entities (13)(14)(15)(16). This new release of MACiE retains all the original features of previous releases, but includes enriched data content through the extension of data entries (next section), new tools for exploring the diversity of biochemical reactions in MACiE ('New Methods for Characterizing and Comparing Enzyme Mechanisms' section) as well as new searches and database statistics (see Supplementary Data). Each biologically meaningful search allows the user to not only access the individual entries, but also view the data in a comparative overview page. Many of these are now available as separate links and visualization of the database online has also been updated ('Updates to MACiE Website' section). DATA CONTENT AND NEW ANNOTATIONS IN MACIE This release of MACiE represents the addition of 133 new entries since the previous major release (bringing the total number of entries to 335). We now cover >90% (182) of the EC sub-subclasses with an available crystal structure, representing 321 distinct EC numbers. When we include related enzymes as defined using the distant homology described in the CSA, MACiE covers over 800 distinct EC numbers and over 17 000 PDB codes; with a stricter definition, statistically significant similarity using SSEARCH, an implementation of Smith-Waterman, MACiE covers over 600 EC numbers and over 7000 PDB codes. We have also incorporated new annotations, which will be described in the following subsections. With the incorporation of many homologues and functional analogues into MACiE, we have constructed some pre-defined datasets for users interested in specific aspects of MACiE, including datasets relating to the EC classification, diversity in structure and function, mechanistic diversity and other aspects such as cofactor requirements. For more detail on these, please see the Supplementary Data. Cofactors in MACiE In previous releases of MACiE (8,9), cofactor annotation was largely neglected. This has now been addressed, and there are two basic types of cofactors which are annotated in MACiE: metal and organic cofactors. Metal cofactors are primarily handled by Metal-MACiE (17), a sister database and collaboration with the Bertini group at CERM in Florence, Italy. Approximately half of all the entries in MACiE contain at least one metal ion (182 MACiE entries, covering 178 distinct EC numbers, have a corresponding Metal-MACiE entry, a complete list can be found at: http://www.ebi.ac.uk/thornton-srv/ databases/cgi-bin/MACiE/listBy.pl?by=metal). There is significant cross-talk between Metal-MACiE and MACiE, with Metal-MACiE relying upon MACiE for the mechanism annotation, and MACiE taking the metal cofactor annotation from Metal-MACiE. We have created a detailed overview page for each metal involved in a reaction that displays the structural and chemical data for a specific metal ion on a single page. 
It is possible to retrieve a Metal-MACiE entry directly from MACiE, and also to go directly to the Metal-MACiE entry for a given metal ion from the overview page within MACiE.
[Figure caption: The large pie-chart shows the percentage of EC numbers covered by the wwPDB (purple) and Swiss-Prot (light blue); the inset (small pie-chart) represents the percentage breakdown of the orphan enzymes (those with no sequence or structure) by EC class (the oxidoreductases (EC 1) in green, the transferases (EC 2) in red, the hydrolases (EC 3) in yellow, the lyases (EC 4) in blue, the isomerases (EC 5) in orange and the ligases (EC 6) in magenta).]
We now handle organic cofactors (those small molecules which are mainly composed of non-metal atoms) in a manner analogous to the amino acid residues in MACiE. Thus, we have annotated the function of the cofactor in the individual steps within MACiE, and these data are now displayed on the overview and step information pages as with the catalytic amino acid residues. As part of this remediation process, we have developed the CoFactor database (18) and, where appropriate, MACiE links out to CoFactor from the overview page, which describes the 27 different cofactors currently identified in detail from the perspective of the cofactors themselves, rather than the enzymes in which they function. Structural data and displaying MACiE in 3D In order to begin to understand how the local environment of the catalytic amino acid residues affects their function, we have added information on the protein structure. This section (accessible from the overview page under the 'Structural Overview' option on the side menu or from the 'Display structure information' in the general information section of the overview page) displays the biological unit representative crystal structure for the MACiE entry in an animated Jmol (19) applet (which is distinct from the reaction animations available for some entries) that shows the catalytic domains and catalytic species as a movie. We also identify the different catalytic sites present in the protein [from the CSA (12)]. For each catalytic residue, the residues contacting it have been calculated using HBPlus (20), and are shown, again in a Jmol applet with the display centred on the catalytic residue in focus. The contact information generated using HBPlus has also been used to create a query that allows a user to identify catalytic dyads and triads present in MACiE. Furthermore, we describe the flexibility of the catalytic residues; this is assessed using the B factors as a crude measure of flexibility. Each residue in the representative PDB code is assigned an average B factor by taking the mean B factor of all the atoms in the residue. In order to cope with the large potential variation in the average B factors and to report these data in a consistent manner, the normalized B Factor (a value between 0 and 1) within the protein structure is created by ordering the average B factors in increasing size and then dividing the ranked position by the total number of residues (21). Both the average B factor and the ranked value are displayed. This section also describes the relative solvent accessibility (RSA) of the residue. This is calculated using NACCESS (22) and is shown as a percentage. Both the B factor and RSA have been added to update the analysis previously performed on a much smaller sub-set of enzymes (21). Finally, this section includes information on the number of hydrogen bond acceptor and donor contacts to the catalytic residue.
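The normalized B factor described above is a simple rank transform. A minimal sketch in Python, assuming per-residue average B factors have already been computed from the structure file, might look as follows; the residue identifiers and values are invented for illustration, and whether the most rigid residue maps to 1/N or to 0 is a convention the text does not fix.

```python
from typing import Dict

def normalized_b_factors(avg_b: Dict[str, float]) -> Dict[str, float]:
    """Rank-normalize per-residue average B factors to (0, 1].

    avg_b maps a residue identifier (e.g. 'A:Ser195') to the mean B factor of
    its atoms. Residues are ordered by increasing average B factor and each
    residue's rank is divided by the total number of residues, as described in
    the text. Ties are broken arbitrarily by the sort.
    """
    ordered = sorted(avg_b, key=avg_b.get)   # most rigid residues first
    n = len(ordered)
    return {res: (rank + 1) / n for rank, res in enumerate(ordered)}

# Toy example with three residues of a hypothetical chain
example = {"A:Ser195": 12.4, "A:His57": 18.9, "A:Asp102": 9.7}
print(normalized_b_factors(example))
# {'A:Asp102': 0.333..., 'A:Ser195': 0.666..., 'A:His57': 1.0}
```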
Other new annotations Each reaction now has a reversibility tag added to the overall reaction, which makes no inference on the biological reversibility of the reaction. This reversibility is determined automatically and depends on whether one or more steps are annotated as being unknown, irreversible, or reversible. If one or more steps are annotated with the 'unknown' reversibility tag, then the overall reaction is annotated with an unknown reversibility, irrespective of what annotations the other steps have. If one or more steps are annotated with the irreversible tag, then the overall reaction is listed as irreversible, otherwise (i.e. if all steps are annotated as reversible) the overall reaction is listed as reversible. We have also manually added a brief, textual description of the events of a reaction step. This is displayed from the entry overview page and above the image of the step's reaction on the step page. Furthermore, we have automated the annotation of CATH domains, based upon the latest release of CATH (v 3.4.0) (23) and the links to both EzCatDB and the SFLD. NEW METHODS FOR CHARACTERIZING AND COMPARING ENZYME MECHANISMS MACiE is unique in containing detailed information not only on the overall reaction being performed by an enzyme, but also in the step-wise mechanism and the catalytic residues and cofactors involved in that transformation. The criterion for inclusion into MACiE is that the enzyme is distinct at some level of one or more of these aspects (mechanism, overall reaction or catalytic machinery). In order to define the similarity between enzyme reactions we thus first define similarity (calculated using a Tanimoto similarity score) for each of these three aspects separately, and then combine them to get an 'overall' entry similarity. Defining similarity Catalytic machinery similarity. The catalytic machinery that is carrying out the reaction is defined for the purposes of this measure as the catalytic residues and those residues binding the metal cofactor ions (to include those cases where there are only metal ions acting in the mechanism). We do not currently include the metal and organic cofactors themselves due to the fact that they are often not present in the representative crystal structure used for the 3D superimposition. The simplest method to compare this machinery is to consider the complement of the catalytic amino acid residues. However, due to the variation in the number of amino acid residues annotated as catalytic (from no amino acid residues in M0204 up to 13 in M0143 with the average entry containing only four) a simple fingerprint, in which each amino acid residue type is considered independently and counted, can produce skewed results. In order to compensate for this, we also compare the 3D coordinates of the catalytic machinery by performing a superimposition of the residues using IsoCleft (24). The final similarity is calculated by combining both the complement and superimposition measures in a 9:1 ratio. Overall reaction similarity. MACiE contains the manual annotation of the bonds formed, cleaved and 'changed in order' for the overall and step reactions, and we have turned this annotation into a weighted (i.e. we count both the number and type of bond changed) bond change fingerprint. We have created two types of fingerprint, one that is direction-dependent (i.e. it is important that we know the C-O bond is formed), and another that is essentially direction-independent (i.e. 
we don't distinguish the exact nature of the bond change, just that the C-O bond is modified during the reaction). At this point stereochemistry is only annotated at the overall reaction level. The fingerprints describing the bond changes in the overall reaction can then be compared between entries to give an estimation of overall reaction similarity. We currently do not include any measure of the substrate/product similarity, as this information is encoded in the EC number to some extent, and it is interesting to observe the cases where very different EC numbers result in almost identical bond change profiles, or cases where similar EC numbers contain very different bond change profiles independent of the substrate/product similarity. Mechanism similarity. While the similarity of the overall reactions is relatively trivial to calculate, the similarity of the 'mechanism' is more difficult. In order to simply capture the similarity between two entries at the step level, we consider the 'mechanism' as the sum of all the bond changes involved in all the steps, which we call the 'composite bond change' fingerprint. We use this, rather than the more complicated approaches used previously (25)(26)(27), as this calculation can be performed quickly on the fly, and also effectively hides differences in how annotators have marked up the reaction, e.g. an elimination followed by a proton transfer happening in two successive steps in one entry and in a concerted manner in another, and reaction sequence timings, e.g. two reactions occur in parallel in the biological system but are annotated as occurring in sequence in MACiE. In the following, when we refer to composite reaction similarity, it is this measure to which we are referring. Defining the 'overall' entry similarity. Each fingerprint thus created can be compared using a Tanimoto similarity score for continuous variables (28), which may take a value between 0 and 1, where 0 indicates no bits in common and 1 indicates that the two fingerprints are identical. The final similarity between two entries is then calculated as a weighted combination of these component scores, in which the mechanism is considered the most important, followed by the catalytic machinery and finally the overall reaction chemistry. The weights chosen are arbitrary and are designed to base the similarity mostly on the composite reaction information, while also being informed by the catalytic machinery and the overall reaction. However, each of the measures of similarity can also be investigated individually, and all four measures are displayed on the comparative overview pages. Exploring the data in MACiE In order to examine the differences between such sets of entries, we have developed the dataset overview pages, which display a comparative analysis of the data available within MACiE for all the entries in the set. This includes an overview of the CATH domains annotated, the number of steps involved, the catalytic machinery and overall reactions as well as the composite reaction similarity and involvement of cofactors. Each entry now includes a section detailing sequence homologues to the current MACiE entry using the homologues as determined by the CSA [the same as previously reported (8)] and also now using a non-iterative search [using SSEARCH (29)] for a stricter definition of homology (see Supplementary Data for more detail).
Furthermore, this section includes details on other MACiE entries with the same EC number (identical to the fourth level) and CATH domains where entries have at least one catalytic CATH domain in common. We also offer the option to view all similar reactions using the overall reaction bond change similarity and the composite reaction similarity, which is available from the side bar menu. Where there are similar entries at the EC or CATH domain level the similarity at the composite reaction and catalytic machinery level is shown and there is the option to compare two reactions, or to view the dataset comparison (where there are three or more entries available). All entries in MACiE now also include links to view similar reactions from the overview page (for the overall reaction and composite bond change perspectives) and step details page (for the reaction steps). In all cases, only reactions with a Tanimoto similarity score of greater than or equal to a specific cut-off are shown. In the case of the individual reaction fingerprint, this cut off is 0.75, in the case of the composite reaction fingerprint, this cut-off is 0.65. These cut-off values are somewhat arbitrary and have been chosen to show the most similar reactions only. The cut-off value is one of the parameters of the Perl-CGI display script, and so can be altered in the HTML address of the results page by the user. In the following subsections, specific examples are used to highlight some of the new features available for the comparative overview of sets of entries. The Diversity within an EC number-the chloroperoxidases (EC:1.11.1.10). Recently (30) we investigated the number of evolutionary families present in each EC number, and found that on average each EC number had emerged approximately twice independently. Thus, there is potentially a great deal of mechanistic variability within a single EC number. While some of this variability might be related to substrate specificity for those EC numbers that are somewhat generic (e.g. EC 2.7.11.1), there are also cases where the mechanism and catalytic machinery are obviously very different. One such example is the chloroperoxidases (EC 1.11.1.10), for which there are three MACiE entries (M0014, M0248 and M0250), representing three evolutionarily unrelated families. For this set of entries, the dataset overview pages do not display the overall reaction analysis as all these are identical, the coverage of the EC classification and the mechanisms, some of which is shown in Figure 2. In the MACiE entries for EC 1.11.1.10 the exact method of producing the hypohalous acid (the common reactive intermediate) from a halide and hydrogen peroxide is different in all three cases. Each enzyme utilizes different catalytic CATH domains and different catalytic machinery, both in terms of amino acid residues and cofactors. These differences are reflected in the composite bond change fingerprints which fall in a relatively wide range (0.3-0.58), despite the overall reactions being identical. Table 1 shows a selection of homologues to the entry M0248 (one of the chloroperoxidases in MACiE, UniProtKB accession 031168) within UniProtKB. The protein sequence used is taken from the PDB code used as the representative in MACiE (1a7u) and the sequence is fully annotated with the catalytic residues, their location of function and activity, the results of which can highlight where changes in the residues annotated might be related to a change in EC number and hence protein function. 
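To make the similarity scores used in examples such as the chloroperoxidase comparison above (composite bond change similarities of 0.3-0.58) more concrete, a minimal sketch follows. A bond-change fingerprint is modelled as a mapping from a bond-change type to the number of times it occurs; the continuous Tanimoto score and the 9:1 machinery combination follow the descriptions in the text, but the exact weights of the overall-entry formula are not reproduced in this excerpt, so the weight values below are placeholders for illustration only.

```python
def tanimoto(fp_a: dict, fp_b: dict) -> float:
    """Continuous Tanimoto similarity between two weighted fingerprints."""
    keys = set(fp_a) | set(fp_b)
    ab = sum(fp_a.get(k, 0) * fp_b.get(k, 0) for k in keys)
    aa = sum(v * v for v in fp_a.values())
    bb = sum(v * v for v in fp_b.values())
    denom = aa + bb - ab
    return ab / denom if denom else 0.0

def machinery_similarity(complement_sim: float, superposition_sim: float) -> float:
    """Catalytic-machinery score: residue complement and 3D superposition, read as 9:1."""
    return 0.9 * complement_sim + 0.1 * superposition_sim

def overall_entry_similarity(mechanism: float, machinery: float, reaction: float,
                             weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted combination with mechanism > machinery > overall reaction.
    The weights here are illustrative placeholders, not MACiE's published values."""
    w_mech, w_mach, w_rxn = weights
    return w_mech * mechanism + w_mach * machinery + w_rxn * reaction

# Example: two hypothetical composite bond-change fingerprints
fp1 = {"form C-O": 2, "cleave O-H": 1, "form O-H": 1}
fp2 = {"form C-O": 1, "cleave O-H": 1, "order change C-C": 1}
print(round(tanimoto(fp1, fp2), 2))  # 0.5
```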
[Figure 2 caption: Similarity of the chloroperoxidase entries in MACiE. The top panel shows the 3D crystal structures with the catalytic residues and cofactors shown in ball and stick notation [images created using Jmol (20)]. The catalytic CATH domains are shown in bold text, the non-catalytic CATH domains in grey. The bottom panel shows the similarity matrices generated for these entries.]
The diversity within a catalytic motif: entries in MACiE containing a catalytic triad. One of the new searches added to MACiE (among several other new searches described in Supplementary section S.2 of the Supplementary Data) allows the user to search for catalytic dyads and triads. These motifs are defined as two or three residues which are hydrogen bonded to one another, and are determined automatically using HBPlus. One potential application of this search might be to identify all the entries in MACiE that utilize a Ser-His-Asp triad, as described below. There are five entries in MACiE with a Ser-His-Asp triad where at least one of the residues is annotated as being catalytic. While the majority of these entries are in the hydrolase class of enzymes (EC 3), there are examples in the oxidoreductases (the cofactor-free chloroperoxidase, EC 1.11.1.10, M0248) and lyases (hydroxynitrilase, EC 4.1.2.37, M0217). Despite the fact that all these entries contain a Ser-His-Asp triad, these enzymes perform a distinct set of overall reactions (at the bond change only level) and have different catalytic machinery profiles, as can be seen from Figure 3. The difference in catalytic machinery may be partly related to the fact that although all these enzymes have an oxyanion hole (to stabilize the covalently attached oxyanion tetrahedral intermediate), this hole is usually made up of main chain amide groups (except in the case of M0094 where the side chain of Asn104 is one of the residues making up the oxyanion hole), and the actual identities of these residues vary widely (including Met, Phe, Leu, Glu, Gly and Tyr). Except for the lyase example (M0217) the mechanisms are similar, and indeed contain at least four identical steps: formation of the enzyme-substrate covalently attached tetrahedral intermediate, initial elimination to re-form the carbonyl group, addition of water to the covalently attached intermediate, followed by cleavage of the product from the enzyme. The variation is often either in the following steps (as with the chloroperoxidase) or in the substrates involved. However, in the case of hydroxynitrilase, the catalytic triad is not acting in this manner, nor does it appear to have the standard oxyanion hole, with the substrate lacking the common carbonyl group of the other reactions' reactants. Indeed, in this enzyme the serine is simply acting as a proton shuttle and not in covalent catalysis. The diversity within an evolutionarily related family. Another question that we can now address is the diversity of entries relating to a family of enzymes. We have recently shown, using the phosphatidylinositol-phosphodiesterase and Ntn-type amide hydrolase families (N. Furnham et al., submitted for publication), that there is often a good deal of variability within a family of enzymes (as represented by a single CATH domain) at the overall reaction level, as well as the structural level. This variability can be analysed in terms of the overall reaction, mechanism, composite reaction and catalytic machinery using the new overview pages.
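The dyad/triad query described earlier in this section reduces to a simple search over hydrogen-bonded residue pairs. The sketch below takes a hypothetical, pre-computed list of hydrogen-bonded residue contacts (in MACiE these come from HBPlus) and returns Ser-His-Asp triples in which Ser-His and His-Asp are both hydrogen bonded, the classical charge-relay arrangement; residue names and numbers in the example are invented.

```python
from itertools import product

def find_ser_his_asp_triads(hbond_pairs):
    """hbond_pairs: iterable of ((resname, resid), (resname, resid)) contacts."""
    # Build an undirected adjacency map keyed by (resname, resid)
    adj = {}
    for a, b in hbond_pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    triads = []
    for his in (r for r in adj if r[0] == "HIS"):
        sers = [r for r in adj[his] if r[0] == "SER"]
        asps = [r for r in adj[his] if r[0] == "ASP"]
        for ser, asp in product(sers, asps):
            triads.append((ser, his, asp))
    return triads

# Toy contact list loosely modelled on a serine-protease-like active site
contacts = [
    (("SER", 195), ("HIS", 57)),
    (("HIS", 57), ("ASP", 102)),
    (("SER", 214), ("ASP", 102)),   # not a triad: no Ser-His link
]
print(find_ser_his_asp_triads(contacts))
# [(('SER', 195), ('HIS', 57), ('ASP', 102))]
```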
We are also starting a long-term collaboration with the SFLD, a database of 'promiscuous' enzyme superfamilies, so that all reactions in that database that fulfil the criteria for inclusion in MACiE are incorporated into our dataset. Version 3 of MACiE already incorporates a total of 26 entries from the SFLD, with all 10 structurally characterized families in the crotonase superfamily included in MACiE. UPDATES TO MACiE WEBSITE Version 2.0 of MACiE (8) was based on static HTML pages. We have since moved to a model in which all the pages relating to the data content of MACiE (i.e. the lists of entries by EC number, PDB code, CATH code or MACiE identifier) are generated, on the fly, by Perl CGI scripts and thus are updated automatically whenever the database is updated. Other minor changes to the online content of MACiE include the addition of mouse-over descriptions of the amino acid residue functions, mechanisms and mechanism components. These descriptions are linked to the MACiE dictionaries. We have added navigation buttons to the reaction steps, to allow users to cycle through the steps. Finally, we have added GO terms for each entry, based on the primary PDB code and the associated UniProt accession code (31). FUTURE DEVELOPMENTS MACiE is an actively developing resource, and we are continuously extending its coverage. As part of this, and as mentioned before, we are working closely with the SFLD to extend the coverage in MACiE of evolutionarily related superfamilies. We are beginning to work towards a new data entry system, which will be online and as automated as possible, and will allow the enzyme community to add data to MACiE. We are also working on allowing users to search the intermediates in the database as well as the substrates and products, not only textually (as is currently the case) but also through substructure similarity. Furthermore, we are working on ways to handle alternative mechanisms and enzyme promiscuity more robustly. Finally, we will continue to use MACiE to attempt to understand enzymes and how they function.
2016-01-13T18:10:52.408Z
2011-11-03T00:00:00.000
{ "year": 2011, "sha1": "917cc897ec8b0bc279c9f45908313f77fa98282a", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/40/D1/D783/16955986/gkr799.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2272116437c759d606c0b4de34274a39ef76c9b8", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Computer Science", "Medicine" ] }
233800199
pes2o/s2orc
v3-fos-license
Methods of Implementing the Space Survey of a Parametric Profiler The article presents a theoretical justification of a method for implementing the space survey of a parametric profiler. For all the advantages of using nonlinear effects and hydroacoustic systems in a parametric transmitting mode for profiling the bottom and bottom sediments, a disadvantage is the small area of the bottom surface scanned by the transmitting array of the profiler, which makes it necessary to choose the scanning method and sensing modes of the parametric profiler for a vertical profiling scheme. The analysis of the main scanning methods, namely scanning in two planes with a phased receiving-transmitting array, the use of several phased arrays that each scan a sector, and scanning by the movement of the antenna carrier, allowed us to determine the main advantages and disadvantages of each survey method. The results of the theoretical study allowed us to evaluate the method of scanning by the movement of the antenna carrier, to obtain expressions for the stages of the full scanning cycle, and to calculate the follow-up period in the transmission mode for the numbers of wave periods of the difference frequency of the radiated acoustic signal and for the distances from the transmitting antenna to the bottom that are most common in real conditions of profiling bottom structures. Introduction The possibility of using nonlinear effects and adaptive hydroacoustic systems with a parametric transmitting mode for profiling the bottom and bottom sediments attracts the attention of both domestic and foreign experts. The use of parametric arrays in hydroacoustic equipment makes it possible, owing to their high directivity and low side field level, to increase the information content and accuracy in detecting and determining the coordinates of underwater objects, and to obtain additional features for recognition [1][2][3][4][5][6][7][8][9]. Problem Relevance To date, many results have been published concerning methods for studying the surface and the bottom structures of the sea floor using various acoustic systems of single-beam, multi-beam and side-scan type. It is recognized that acoustic methods are a useful research tool, since the attenuation of sound waves in water is lower than that of the waves used by other methods, such as optical and magnetic ones. Acoustic methods are successfully used to determine a number of oceanological parameters. The study of acoustic scattering mechanisms and of the characteristics of echo signals reflected from bottom structures is important for stratifying layers and determining their spatial characteristics. In practice, when profiling bottom structures [10,11], various methods of viewing space are used, such as scanning in two planes with a phased receiving-transmitting array, the use of several phased arrays that each scan a sector, and scanning by the movement of the antenna carrier. Problem Statement For all the advantages of using nonlinear effects and hydroacoustic systems with a parametric transmitting mode for profiling the bottom and bottom sediments, a disadvantage is the small area of the bottom surface scanned by the transmitting array of the profiler, which makes it necessary to choose the scanning method and sensing modes of the parametric profiler for a vertical profiling scheme while taking the critical angles into account [12,13].
Mathematically, the condition for exceeding the critical angle can be expressed through the compensation angles θl and θt in the longitudinal and transverse planes. The same condition, expressed in terms of the direction numbers nl and nt of the two planes, takes the form 4 < nl + nt < 30. The values of the compensation angles of the beam pattern (BP) during scanning can be calculated from the opening angle θmax of the scan and the width 2θ0.7 of the BP. The resulting angle values are valid for scanning in both planes. It should be noted that simultaneous compensation of the directivity characteristic in both planes at the maximum angles is unacceptable, since it leads to exceeding the critical angle of penetration of acoustic waves into the bottom soil. The use of a receiving-transmitting phased array in the case of offshore profiling is often impossible, since at shallow depths the radiation time may exceed the propagation time. Receiving the difference frequency wave brings additional difficulties. Since the receiving antenna for the difference frequency wave is non-directional, spatial selection of the acoustic signals is impossible. Time selection does not give accurate information about the direction of arrival, because multipath signals may overlap one another. The solution to this problem lies in the choice of the optimal method for viewing space with a parametric profiler in a particular situation. Scanning in two planes by a phased receiving-transmitting array One of the ways to provide scanning above the object over a space angle of at least 40°, symmetrical with respect to the vertical axis, is sequential scanning in the longitudinal and transverse planes with a phased array. The number of scanned space sectors is calculated as the ratio of the full scan angle to the width of a single directional characteristic [14][15][16]. When a wide viewing sector must be covered with a phased array whose partial directional characteristic is narrow, it is necessary to form a set of n transmitters in the longitudinal plane and n transmitters in the transverse plane, so that in total n^2 sectors of space have to be scanned. The scanning angle is limited by the critical angle of penetration of acoustic waves from water into bottom sediments, which is approximately 22.1° for longitudinal waves. For the case n = 16, calculations of the deviation angles showed that it was impossible to scan at the maximum deviation angles in 24 of the 256 scanned sectors. The scanned space will have the form shown in Figure 1. The receiving-transmitting array performs a sequential scan of each sector. For ease of technical implementation, an equi-signal space survey method is usually chosen. The maximum propagation time for vertical probing is calculated as tmax = 2H/(cw·cos θmax) + 2h/(cs·cos θs), where H is the maximum operating depth, h the maximum thickness of the bottom layer, cw and cs the sound speeds in the water and sediments, respectively, θmax the maximum deviation angle during scanning, and θs the angle of propagation of the acoustic waves in the sediments, calculated from Snell's law. As the depth decreases, the time for viewing space decreases linearly. The advantages of scanning in two planes with a phased array are the small size of the antenna system (the high-frequency and low-frequency antennas), the need for only a single processing path, and easily implemented spatial filtering.
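As a rough numerical illustration of the sector count and the propagation-time expression above, the sketch below (Python) combines assumed beam deflections in the two planes into a resultant off-vertical angle, counts the sectors that would exceed the 22.1-degree critical angle, and evaluates the two-way propagation time; the partial beam step, the sound speeds and the rule used to combine the two deflections are assumptions, so the counts are illustrative rather than a reproduction of the 24-of-256 figure quoted above.

import math

N = 16                     # beams per plane, as in the n = 16 example above
CRITICAL_ANGLE_DEG = 22.1  # critical penetration angle for longitudinal waves
BEAM_STEP_DEG = 2.5        # assumed partial beam width (not given in the text)

def beam_centre(i):
    """Deflection (degrees) of beam i, spread symmetrically about the vertical."""
    return (i - (N - 1) / 2) * BEAM_STEP_DEG

def resultant_angle(theta_l_deg, theta_t_deg):
    """Combine deflections in two orthogonal planes into one off-vertical angle (assumed rule)."""
    tl = math.tan(math.radians(theta_l_deg))
    tt = math.tan(math.radians(theta_t_deg))
    return math.degrees(math.atan(math.hypot(tl, tt)))

blocked = sum(
    1
    for i in range(N)
    for j in range(N)
    if resultant_angle(beam_centre(i), beam_centre(j)) > CRITICAL_ANGLE_DEG
)
print(blocked, "of", N * N, "sectors exceed the critical angle")

# Two-way propagation time at maximum deflection: t = 2H/(cw*cos(theta_max)) + 2h/(cs*cos(theta_s)).
H, h = 100.0, 10.0           # water depth and sediment layer thickness, m (assumed)
c_w, c_s = 1500.0, 1700.0    # sound speeds in water and sediments, m/s (assumed)
theta_max = math.radians(20.0)
theta_s = math.asin(math.sin(theta_max) * c_s / c_w)  # Snell's law
t_max = 2 * H / (c_w * math.cos(theta_max)) + 2 * h / (c_s * math.cos(theta_s))
print("maximum propagation time:", round(t_max * 1000, 1), "ms")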
Disadvantages: the need to form a phase distribution in two planes greatly complicates the transmitting path; the optimal carrier speed is low; and although the desired scan time can be achieved by reducing the number of partial antennas, this leaves gaps in the viewing band. Application of several phased arrays scanning by sector The second way of providing scanning is to use several receiving-transmitting phased arrays located along the ship, each of which scans in the transverse plane with a fixed deviation angle in the longitudinal plane. The geometrical dimensions and layout of the profiler antenna system for a complete version of the series-parallel method of viewing space are shown in Fig. 2. In the static position, scanning of each phased array in the transverse plane provides information about the sea ground within the solid angle determined by the maximum compensation angle. Spatial filtering of the difference frequency wave is also provided by the receiving linear antenna. Advantages: short scanning time, easily implemented spatial filtering, and the potential to meet the requirements with respect to roll, trim, increasing depth and speed. Disadvantages: large weight and size, the need for several transmitting and receiving paths, and the need for separate measures to exclude the mutual influence of the transmitting and receiving antennas [17][18][19]. Scanning by moving the antenna carrier To obtain a picture of the bottom structures with a parametric profiler, a survey method can be chosen in which scanning is provided by the movement of the antenna carrier, and hence of the receiving-transmitting antenna system of the profiler. Simultaneous scanning with the directional characteristic in the transmitting mode in the transverse plane of the transmitting antenna system makes it possible to survey a swath on the bottom surface. The bottom structure scanning scheme corresponding to this variant of the profiler implementation is shown in Fig. 3. This way of scanning is possible only when the antenna carrier is moving. Since scanning in the longitudinal plane is carried out by the movement of the carrier, to prevent gaps in the scanned surface the distance traveled by the carrier during one scan should not exceed the diameter of the spot scanned by one BP. The advantages of this method are simplicity of implementation, small dimensions, and the need for only one transmitting and receiving path. Disadvantages: a long scan time, the inability to survey without moving the carrier, and the possibility of coverage gaps at high speeds. Theoretical evaluation of the method of scanning by the movement of the antenna carrier In the static position of the carrier, scanning in the transverse plane provides information about the bottom structures within the solid angle determined by the sum of the directional characteristics of the individual transmitting antennas. The translational displacement of the profiler antenna system in the direction of the carrier movement makes it possible to obtain information about the internal structure of the bottom within the field of view. To ensure sufficient angular resolution in the transverse plane and to save the energy resources of the profiler, the viewing sector in the transverse plane of the parametric profiler can be covered by successively switching (rolling) several partial BPs.
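For the carrier-motion survey method just described, the gap-free coverage condition (the distance travelled during one scan must not exceed the footprint of one BP) translates into a simple speed limit; the sketch below evaluates it for assumed values of the bottom distance, beam width and follow-up period.

import math

H = 20.0                # distance from the antenna to the bottom, m (assumed)
beamwidth_deg = 3.0     # width of one partial beam pattern, 2*theta_0.7, degrees (assumed)
n_beams = 5             # partial beam patterns per transverse cycle, as in the text
pulse_period = 0.05     # follow-up period per beam, s (assumed)

spot_diameter = 2.0 * H * math.tan(math.radians(beamwidth_deg) / 2.0)
cycle_time = n_beams * pulse_period
v_max = spot_diameter / cycle_time   # the carrier must not outrun one footprint per cycle

print("spot diameter:", round(spot_diameter, 2), "m")
print("full transverse cycle:", round(cycle_time, 2), "s")
print("maximum carrier speed:", round(v_max, 2), "m/s")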
In the case of five partial directional characteristics, the complete sensing cycle includes the steps shown in Table 1. To simplify the scan scheme (the equal-interval option), the maximum value of the interval between the previous and the current probing pulses can be selected. In this case, the only variable parameter entering the expression is the distance H of the radiating surface of the antenna from the bottom. Therefore, the period of the probing pulses may vary from cycle to cycle depending on this distance. The calculated values of the follow-up period in the short-pulse transmission mode, for pulses of one, two or three periods of the difference frequency F = 10 kHz, for distances from the bottom of 5, 20 and 100 m and a maximum penetration into the ground of 10 m, are shown in Table 2. Taking into account the features of the nonlinear method of creating the field of probing signals, a phased array must be used in the transmission mode to form the spatial transmission-reception channels of the profiler [20]. Conclusion The analysis of the scanning methods for a parametric profiler and the calculated expressions allow the following conclusions to be drawn: with a vertical profiling scheme, the method of viewing space and the probing modes of the parametric profiler must be chosen with the critical angles taken into account; the use of a receiving-transmitting phased array in the case of profiling on the shelf is often impossible, since the radiation time may exceed the propagation time at shallow depths; and for the practical implementation of a parametric profiler, the most suitable method is scanning by the movement of the antenna carrier.
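As a hedged illustration of how the follow-up periods reported in Table 2 could be estimated, the sketch below adds the pulse duration (one to three periods of the 10 kHz difference frequency) to the two-way travel time through the water column and through the 10 m of sediment; the sound speeds are assumed values, so the numbers are indicative only.

F = 10_000.0               # difference frequency, Hz (as in the text)
c_w, c_s = 1500.0, 1700.0  # assumed sound speeds in water and sediments, m/s
h = 10.0                   # maximum penetration into the ground, m (as in the text)

def follow_up_period(H_m, n_periods):
    """Minimum interval between probing pulses for bottom distance H_m, in seconds."""
    pulse_duration = n_periods / F
    two_way_travel = 2.0 * (H_m / c_w + h / c_s)
    return pulse_duration + two_way_travel

for H_m in (5.0, 20.0, 100.0):
    for n in (1, 2, 3):
        t_ms = follow_up_period(H_m, n) * 1000.0
        print("H =", H_m, "m,", n, "period(s): T =", round(t_ms, 2), "ms")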
2021-05-07T00:04:29.916Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "db3fc4b1b8008a49578e3b7b8c1963789303188d", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/666/4/042097", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8821f110fda1df22529056951dd35ec668f7c5d6", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
84182595
pes2o/s2orc
v3-fos-license
Geographic and Temporal Patterns of Antimicrobial Resistance in Pseudomonas aeruginosa Over 20 Years From the SENTRY Antimicrobial Surveillance Program, 1997–2016 Abstract Background The SENTRY Antimicrobial Surveillance Program was established in 1997 and encompasses over 750 000 bacterial isolates from ≥400 medical centers worldwide. Among the pathogens tested, Pseudomonas aeruginosa remains a common cause of multidrug-resistant (MDR) bloodstream infections and pneumonia in hospitalized patients. In the present study, we reviewed geographic and temporal trends in resistant phenotypes of P. aeruginosa over 20 years of the SENTRY Program. Methods From 1997 to 2016, 52 022 clinically significant consecutive isolates were submitted from ≥200 medical centers representing the Asia-Pacific region, Europe, Latin America, and North America. Only 1 isolate per patient per infection episode was submitted. Isolates were identified by standard algorithms and/or matrix-assisted laser desorption ionization-time of flight mass spectrometry. Susceptibility testing was performed by Clinical and Laboratory Standards Institute (CLSI) methods and interpreted using CLSI and European Committee on Antimicrobial Susceptibility Testing 2018 criteria at JMI Laboratories. Results The most common infection from which P. aeruginosa was isolated was pneumonia in hospitalized patients (44.6%) followed by bloodstream infection (27.9%), with pneumonia having a slightly higher rate of MDR (27.7%) than bloodstream infections (23.7%). The region with the highest percentage of MDR phenotypes was Latin America (41.1%), followed by Europe (28.4%). The MDR rates were highest in 2005–2008 and have decreased in the most recent period. Colistin was the most active drug tested (99.4% susceptible), followed by amikacin (90.5% susceptible). Conclusions Over the 20 years of SENTRY Program surveillance, the rate of MDR P. aeruginosa infections has decreased, particularly in Latin America. Whether the trend of decreasing resistance in P. aeruginosa is maintained will be documented in future SENTRY Program and other surveillance reports. The SENTRY Antimicrobial Surveillance Program was established in 1997 and encompasses over 750 000 bacterial isolates from more than 400 medical centers worldwide. Among the pathogens tested in the SENTRY Program, Pseudomonas aeruginosa remains a common cause of multidrug-resistant ([MDR] nonsusceptible [NS] to at least 1 antimicrobial in 3 or more drug classes) bloodstream infections and pneumonia in hospitalized patients. Zilberberg et al [1] found that MDR P. aeruginosa was much more common in bloodstream infections (14.7%) and pneumonia (22.0%) than carbapenem-resistant Enterobacteriaceae from bloodstream infections (1.1%) and pneumonia (1.6%), which makes treatment of serious P. aeruginosa infections more challenging. Furthermore, delaying appropriate antimicrobial therapy has been associated with increased morbidity and mortality [2]. Patients with MDR P. aeruginosa have a higher 30-day mortality than patients with non-MDR P. aeruginosa [3]. Frequently, MDR P. aeruginosa isolates are resistant to carbapenems and other β-lactams, which is mediated through multiple mechanisms, including acquisition of metallo-β-lactamases, increased chromosomal AmpC production, extended spectrum β-lactamases, increased efflux, or changes in membrane permeability [4,5]. In the present study, we reviewed geographic and temporal trends in resistant phenotypes of P. 
aeruginosa over the 20 years of the SENTRY Antimicrobial Surveillance Program. During the period from 1997 to 2016, 52 022 clinically significant P. aeruginosa isolates were submitted for testing in the SENTRY Program from ≥400 medical centers representing the Asia-Pacific (excluding China and India), European (including Turkey and Israel), Latin American, and North American regions. Participating centers submitted bacterial clinical isolates (1 isolate per patient per infection episode) that were consecutively collected by infection type according to a common protocol. The common SENTRY Program protocol established the number of isolates for the target infection types and the time period each year during which the isolates should be collected. Each institution contributed a specified number of isolates per year with approximately 50 isolates per target infection type. Infection types included bloodstream infection (BSI), pneumonia in hospitalized patients, skin and skin structure infection (SSSI), intra-abdominal infection, and urinary tract infection. Isolates were identified by the submitting laboratory's standard algorithms and/or matrix-assisted laser desorption ionization-time of flight mass spectrometry and confirmed at JMI Laboratories (North Liberty, IA). Susceptibility (S) testing was performed at JMI Laboratories by the Clinical and Laboratory Standards Institute (CLSI) broth microdilution method and interpreted using CLSI and European Committee on Antimicrobial Susceptibility Testing (EUCAST) 2018 criteria [6,7]. The antimicrobials tested included amikacin, cefepime, ceftazidime, ciprofloxacin, colistin (tested 2006-2016), meropenem, piperacillin-tazobactam, and tobramycin. Gentamicin, imipenem, and levofloxacin were also tested for resistant phenotype determination. Resistant phenotypes analyzed using EUCAST criteria were as follows: MDR (NS to at least 1 antimicrobial in ≥3 drug classes), extensively drug-resistant ([XDR] NS to at least 1 agent in all but ≤2 drug classes), and pan drug-resistant (PDR), according to Magiorakos et al [8]. Ceftazidime-NS and meropenem-NS were determined according to EUCAST interpretive criteria. Infection Types The most common infection type from which P. aeruginosa was isolated was pneumonia in hospitalized patients (44.6%, n = 23 227) followed by BSI (27.9%, n = 14 539) and SSSI (19.1%, n = 9952) as shown in Table 1. The number of isolates from each of the 4 regions by infection type is shown in Figure 1. Pseudomonas aeruginosa was most frequently isolated from pneumonia in all 4 regions. Antimicrobial Susceptibility Pneumonia had a higher rate of isolates with MDR and XDR (27.7% and 19.0%, respectively) than BSIs (23.7% and 17.4%, respectively) as shown in Table 1. Multidrug-resistant rates over time are shown in Table 2. Geographic Resistance Trends Isolates with the MDR phenotype were most frequently isolated in Latin America with 41.1%, followed by Europe with 28.4%, North America with 18.9%, and Asia-Pacific with 18.8% (Figure 2). Table 4 shows the percentage susceptibility of the antimicrobials by 4-year period for all regions and for each individual region. Susceptibilities for North American and European isolates were relatively stable with variations within 10% between each period. The largest shift in susceptibility for North American isolates was the decrease in meropenem susceptibility from 85. DISCUSSION Over the 20 years of SENTRY Program surveillance, the rates of MDR and other resistant phenotypes for P.
aeruginosa were highest in 2005-2008 and decreased in the most recent period. Latin America showed the sharpest decrease in MDR rate, which was associated with a rise in susceptibility to aminoglycosides and β-lactams. The metallo-β-lactamase SPM-1 that has been reported in multiple Brazilian institutions may be contributing to the meropenem resistance reported there [9,10]. Among the 6722 P. aeruginosa isolates collected from the Latin American region, 3057 (45.5%) were collected from Brazilian medical centers. The results could have been directly influenced by any changes in the epidemiology within Brazilian medical centers. The high carbapenem resistance rates found in Brazilian hospitals have been mainly caused by the spread of the XDR P. aeruginosa ST277 clone, which chromosomally encodes SPM-1 and RmtD, a 16S ribosomal ribonucleic acid (rRNA) methylase [10,11]. This clone also possesses mutations in the quinolone resistance-determining regions of gyrA and parC, and it harbors sul1 and aminoglycoside-modifying enzyme-encoding genes such as aac(6′)-Ib-cr and aadA7 [11]. In general, SPM-1-producing P. aeruginosa ST277 isolates are susceptible only to polymyxins. Although no studies that include isolates from all Brazilian regions have been carried out, studies evaluating isolates from specific regions or single institutions have shown a decrease in the frequency of SPM-1-producing P. aeruginosa isolates [12,13]. These studies may support the increase in the antimicrobial susceptibility rates in Latin America, especially for aminoglycosides and carbapenems, observed by the SENTRY Program study in the 2013-2016 period. Cacci et al [13] also found that as the frequency of the SPM-1 clone decreased, carbapenem-resistant isolates displayed the more commonly observed resistance mechanisms overall, including porin loss and efflux overproduction [14]. The Asia-Pacific region had an overall lower frequency of MDR P. aeruginosa than Latin America and Europe (Figure 2). The region saw an increase in MDR P. aeruginosa from 15.6% in 1997-2000 to 24.7% in 2005-2008, which then decreased to 15.0% in 2013-2016. A study by Pfaller et al [15] found that the frequency of meropenem-resistant P. aeruginosa varied by country in the period 2013-2015, with South Korea having the highest rate (46.3%). Studies in the Asia-Pacific region have shown an increasing prevalence of metallo-β-lactamases and carbapenemases in P. aeruginosa, particularly the ST235 clone, which may explain the increase in MDR seen in 2005-2008 [16][17][18]. Because strain typing was not performed in this study, it is unknown whether the decrease in resistance is due to a decrease in the prevalence of ST235 or to other causes. The European and North American medical centers had a stable frequency of MDR P. aeruginosa, with the North American and European rates ranging from 17.5% to 21.2% and 25.6% to 30.9%, respectively, over the 20-year period. Isolates from both regions showed a decrease in meropenem susceptibility in 2009-2012, although susceptibility improved in the most recent time period for both regions. The ST235 clone and others that have been globally disseminated may have contributed to the increase in meropenem resistance and, perhaps, to the variations observed [19]. CONCLUSIONS This study has shown variation in the resistance rates over time and over geography; however, MDR P. aeruginosa remains a cause of serious infections. The improved activities of newer agents, such as ceftolozane-tazobactam and ceftazidime-avibactam, against P.
aeruginosa, including MDR isolates, have been published elsewhere, and those agents may be effective treatment options, especially for patients with infections caused by meropenem-resistant isolates [20,21]. Whether the trend of decreasing resistance in P. aeruginosa is maintained will be documented in future SENTRY Program and other international surveillance studies.
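As a purely illustrative aside, the MDR/XDR/PDR category definitions quoted in the Methods (after Magiorakos et al) amount to a counting rule over drug classes; the sketch below is not the SENTRY analysis code, and the drug-class groupings and example isolate are simplified assumptions, but it shows how per-agent susceptibility calls could be mapped to these phenotypes.

# Simplified, assumed antipseudomonal drug-class groupings (not the full Magiorakos list).
DRUG_CLASSES = {
    "aminoglycosides": ["amikacin", "gentamicin", "tobramycin"],
    "carbapenems": ["imipenem", "meropenem"],
    "cephalosporins": ["cefepime", "ceftazidime"],
    "fluoroquinolones": ["ciprofloxacin", "levofloxacin"],
    "penicillin combinations": ["piperacillin-tazobactam"],
    "polymyxins": ["colistin"],
}

def classify(calls):
    """calls maps agent name to 'S' (susceptible) or 'NS' (nonsusceptible)."""
    ns_classes = 0
    total_classes = 0
    for agents in DRUG_CLASSES.values():
        tested = [a for a in agents if a in calls]
        if not tested:
            continue
        total_classes += 1
        if any(calls[a] == "NS" for a in tested):
            ns_classes += 1
    if calls and all(v == "NS" for v in calls.values()):
        return "PDR: nonsusceptible to all agents tested"
    if ns_classes >= 3 and ns_classes >= total_classes - 2:
        return "XDR: nonsusceptible in all but <=2 classes"
    if ns_classes >= 3:
        return "MDR: nonsusceptible in >=3 classes"
    return "non-MDR"

example = {
    "amikacin": "S", "meropenem": "NS", "imipenem": "NS",
    "ceftazidime": "NS", "cefepime": "NS", "ciprofloxacin": "NS",
    "piperacillin-tazobactam": "NS", "colistin": "S",
}
print(classify(example))   # prints the XDR label for this hypothetical isolate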
2019-03-22T16:08:18.559Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "ddc78f47117c6220f646a36f7e5643d37dd6857b", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/ofid/article-pdf/6/Supplement_1/S63/33591134/ofy343.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ddc78f47117c6220f646a36f7e5643d37dd6857b", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
233388677
pes2o/s2orc
v3-fos-license
Phytotoxic Secondary Metabolites from Fungi Fungal phytotoxic secondary metabolites are substances poisonous to plants that are produced by fungi through naturally occurring biochemical reactions. These metabolites exhibit a high level of diversity in their properties, such as structures, phytotoxic activities, and modes of toxicity. They are mainly isolated from phytopathogenic fungal species in the genera Alternaria, Botrytis, Colletotrichum, Fusarium, Helminthosporium, and Phoma. Phytotoxins are either host specific or non-host specific. Up to now, at least 545 fungal phytotoxic secondary metabolites, including 207 polyketides, 46 phenols and phenolic acids, 135 terpenoids, 146 nitrogen-containing metabolites, and 11 others, have been reported. Among them, aromatic polyketides and sesquiterpenoids are the main phytotoxic compounds. This review summarizes their chemical structures, sources, and phytotoxic activities. We also discuss their phytotoxic mechanisms and structure-activity relationships to lay the foundation for the future development and application of these promising metabolites as herbicides. Introduction Phytotoxic secondary metabolites from fungi (also called fungal phytotoxins) are compounds toxic to plants that are produced by fungi, especially by fungal plant pathogens responsible for serious diseases of agricultural and forest plants causing significant economic losses [1]. Fungal phytotoxins play an important role in the development of plant disease symptoms, including leaf spots, wilting, chlorosis, necrosis, and growth inhibition and promotion [2,3]. Their chemical and biological characterization, as well as their structure-activity relationships and modes of action, can help us to investigate plant-pathogen interactions in depth. Fungal phytotoxins are either host specific (HST) or non-host specific (NHST) toxins. Host-specific phytotoxins (also called host-selective toxins) are active only towards plants that are hosts of the toxin-producing fungi, and are essential for pathogenicity [4]. Host-specific toxins are mainly produced by plant pathogenic fungi of Alternaria, Colletotrichum, and Helminthosporium [5,6]. In some cases, host sensitivity is mediated by gene-for-gene interactions, and phytotoxin sensitivity is mandatory for disease development [7]. Contrarily, non-host specific phytotoxins (also called non-host-selective toxins) are not primary determinants of host range and are not essential for pathogenicity, although they may contribute to virulence. These phytotoxins have a broader range of activity, causing symptoms not only on hosts of the pathogenic fungi, but also on other plant species [8]. Fungal phytotoxins belong to different classes, such as polyketides, phenols and phenolic acids, terpenoids, and nitrogen-containing metabolites, based on their biosynthetic pathways and structural characters. To our knowledge, no detailed reviews of the phytotoxic secondary metabolites from all fungal species have been published. This review describes fungal phytotoxic metabolites, their structures, producing fungi and phytotoxic activities. Furthermore, the probable roles played by fungal phytotoxins in the induction of plant disease symptoms, structure-activity relationships, phytotoxic mechanisms, as well as their potential applications in agriculture, are also discussed. Polyketides Polyketides are an extremely important class of bioactive secondary metabolites.
They are produced by repetitive Claisen condensations of an acyl-coenzyme A (CoA) starter with malonyl-CoA elongation units in a fashion reminiscent of fatty acid biosynthesis. The biosynthesis of polyketides from acyl-CoA thioesters is catalyzed by polyketide synthase (PKS), a multi-enzyme complex that is highly homologous to fatty acid synthase (FAS). The diverse structures of polyketides can be explained as being derived from poly-β-keto chains, formed by the coupling of acetic acid units via condensation reactions. Although sharing a similar synthetic process, PKSs can be classified into three types, namely type I PKS, type II PKS, and type III PKS. Type I PKSs are multifunctional peptides containing linearly arranged and covalently fused domains. The type I PKSs can be further classified into iterative type I PKSs (iPKSs) and modular type I PKSs (mPKSs). Type II PKSs are multi-enzyme complexes composed of monofunctional proteins. Type III PKSs are simple homodimers that use CoA rather than acyl carrier protein (ACP) as an anchor for chain extension. In addition, both type II and type III PKSs are iterative [29]. Most fungal phytotoxic metabolites are polyketides. They are mainly divided into aromatic and aliphatic polyketides. Aromatic Polyketides Aromatic polyketides are characterized by their polycyclic aromatic structures. The biosynthesis of aromatic polyketides is usually accomplished by the type II polyketide synthases (PKSs), which produce highly diverse polyketide chains by sequential condensation of the starter units with extender units, followed by reduction, cyclization, aromatization and tailoring reactions [29]. Many fungal phytotoxic polyketides are aromatic polyketides, which mainly include benzopyrones, dibenzopyrones, benzophenones, naphthopyrones, azaphilones, naphthalenes, anthraquinones, perylenequinonoids, and aromatic macrolides. Monocerin (6) was isolated from Exserohilum turcicum (syn. Drechslera turcica), the leaf pathogen of the noxious weed Johnson grass (Sorghum halepense). This metabolite possessed non-specific phytotoxic activity, inhibiting root and shoot growth of Johnson grass and cucumber seedlings [34]. The structures of the fungal phytotoxic benzo-γ-pyrones (chromenones) are shown in Figure 2. Chloromonilinic acids B (13), C (14), and D (15) were isolated from the liquid cultures of Cochliobolus australiensis, the leaf pathogen of the weed buffelgrass (Pennisetum ciliare). These three chloromonilinic acids were toxic to buffelgrass in a seedling elongation assay, with significantly delayed germination and dramatically reduced radicle growth [40]. Coniochaetone A (16) and rabenchromenone (17) were isolated from the culture filtrates of Fimetariella rabenhorstii, an oak-decline-associated fungus in Iran. They were toxic, causing necrotic lesions 0.2-0.7 cm in diameter in a leaf puncture assay on tomato and oak leaves [41]. Many fungal dibenzo-α-pyrones possess a wide spectrum of biological activities such as cytotoxic, phytotoxic, and antimicrobial activities [42]. The structures of phytotoxic dibenzo-α-pyrones produced by fungi of the genus Alternaria are shown in Figure 3. Both alternariol (18) and alternuisol (22) were isolated from the cultures of Alternaria sp., the pathogen of the invasive weed Xanthium italicum. They inhibited shoot and root growth of Pennisetum alopecuroides and Medicago sativa in a seedling growth assay [43].
Further studies on the mode of action showed that alternariol (18) and alternariol-9-methyl ether (also called AME, 19) from Alternaria alternata inhibited the photosynthetic electron transport chain in isolated spinach chloroplasts [44]. Benzophenones Benzophenones share a common phenol-carbonyl-phenol skeleton. They are considered derivatives of xanthones [45]. The A-ring is derived from the shikimic acid pathway, and the B-ring is derived from the acetate-malonate pathway [46]. The structures of phytotoxic benzophenones from fungi are shown in Figure 5. Two benzophenones named daldinalds A (25) and B (26) were isolated from Daldinia concentrica. Both metabolites inhibited root growth in a rice seedling assay [47]. Moniliphenone (27) and rabenzophenone (also called chloromoniliphenone, 28) were isolated from the culture filtrates of Fimetariella rabenhorstii, an oak-decline-associated fungus in Iran. They were active, causing necrotic lesions 0.2-0.7 cm in diameter in a leaf puncture assay on tomato and oak leaves [41]. These two benzophenones were also isolated from the solid culture of Alternaria sonchi, the leaf pathogen of sowthistles (Sonchus spp.). Both metabolites were toxic to the leaves of couch-grass (Elytrigia repens) and sowthistle (Sonchus arvensis) in a punctured leaf disc assay [48]. Azaphilones Azaphilones (also called azaphilonoids) are a structurally variable family of fungal polyketide metabolites possessing a highly oxygenated pyranoquinone bicyclic core, usually known as isochromene, and a quaternary carbon center. They belong to a large group of fungal pigments that turn red in the presence of primary amines owing to an exchange of the pyran oxygen for nitrogen, which arises from the affinity of the 4H-pyran nucleus for substitution by primary amines to form the corresponding vinylogous γ-pyridones. Some fungal azaphilones showed phytotoxic activities. However, most azaphilones have not been screened for their phytotoxic activities [52]. The structures of phytotoxic azaphilones from fungi are shown in Figure 7. Acetosellin (37) was isolated from the mycelia of Cercosporella acetosella, the pathogen causing leaf spots of the cosmopolitan weed Rumex acetosella. It inhibited root elongation of Lepidium sativum and Zea mays at 6.4 × 10^-4 M [53]. Ascochitine (38) was produced as a main phytotoxin from Ascochyta fabae and A. pisi, two pathogens that caused the so-called 'brown spots' disease in broad bean and necrotic lesions on pea leaflets [54]. This compound was later isolated from the cultures of Phoma clematidina, the pathogen of leaf spot-wilt disease of Clematis sp. This metabolite was toxic to the leaves of Clematis sp. in a leaf disc assay [55]. Lunatoic acid A (41) was isolated from Cladosporium oxysporum DH14, a fungus residing in the gut of the locust Oxya chinensis. This metabolite exhibited significant inhibition against radicle growth of Amaranthus retroflexus seedlings [24]. Spiciferinone (42) was isolated from the culture filtrates of Cochliobolus spicifer, the pathogen of leaf spot disease in Gramineae. This metabolite was phytotoxic to wheat cotyledons in a protoplast viability assay [56]. Naphthalene Derivatives Phytotoxic fungal naphthalene derivatives include naphthols, naphthoquinones, and naphthalenones. One naphthol and seven naphthoquinones with phytotoxicity were found in fungi. Their structures are shown in Figure 8.
Agropyrenal (43), a naphthol, was isolated from the liquid cultures of Ascochyta agropyrina var. nana. When the leaves of several weed plants (i.e., Mercurialis annua, Chenopodium album and Setaria viridis) were assayed, agropyrenal (43) proved to be phytotoxic, causing the appearance of necrotic lesions [57]. Further, 2-hydroxyjuglone (48) was isolated from the culture broth of Ceratocystis fimbriata f.sp. platani, the canker pathogen of plane tree (Platanus orientalis). This compound induced large necrotic lesions in stem explants of plane tree, as was observed in vivo [60]. Lentiquinone A (49) was isolated from Ascochyta lentis, the pathogen of lentil (Lens culinaris). It exhibited strong phytotoxicity in punctured-leaf and seed germination assays on host and non-host plants [61]. Anthraquinones Anthraquinones are a group of polyketides assembled from eight C2 units; the anthraquinone carbon skeleton, apart from the two carbonyl oxygens of the central ring, is generated in turn by three aldol-type condensations [74]. The structures of fungal phytotoxic anthraquinones are shown in Figure 11. Two anthraquinones, namely altersolanols A (73) and J (74), were isolated from the pathogen Phomopsis foeniculi (teleomorph: Diaporthe angelicae) of fennel (Foeniculum vulgare). They showed a modulated phytotoxicity on detached tomato leaves [75]. Altersolanol A (73) was also isolated from Alternaria porri. This compound inhibited growth of lettuce and stone-leek seedlings [76]. Neoanthraquinone (75) was isolated from Neofusicoccum luteum, the causal agent of Botryosphaeria dieback in Australia. Neoanthraquinone (75) showed an obvious toxic effect, causing severe shriveling and withering of grapevine in a leaf assay [37]. Catenarin (78) was produced by the necrotrophic fungus Pyrenophora tritici-repentis (anamorph: Drechslera tritici-repentis), the causal agent of tan spot, a foliar disease of wheat. Catenarin (78) induced necrosis on the leaves of wheat. It was also associated with infection of wheat kernels, causing a red discoloration known as red smudge [79]. Dothistromin (85) was isolated as the main phytotoxin produced by Dothistroma pini, the pathogen causing a necrotic disease characterized by the formation of red bands on the infected needles of Pinus radiata and other pines [82]. Lentiquinones B (88) and C (89) were isolated from Ascochyta lentis, the pathogen of lentil (Lens culinaris). Both compounds caused severe leaf necrosis when applied to the punctured leaves of host and non-host plants [61]. Rhodolamprometrin (99) was isolated from Fusarium proliferatum ZS07, an endophytic fungus residing in the gut of the long-horned grasshopper Tettigonia chinensis. This compound exhibited inhibitory activity on the radicle growth of Amaranthus retroflexus seeds, showing its potential as a biocontrol agent in agriculture [22]. Perylenequinonoids Perylenequinonoids are a class of aromatic polyketides characterized by a pentacyclic conjugated chromophore. Fungal perylenequinones are photoactivated phytotoxins that act by absorbing light energy and generating reactive oxygen species that damage host plant cells [92]. The structures of phytotoxic perylenequinonoids from fungi are shown in Figure 12. Alterlosins I (103) and II (104) were isolated from the cultures of Alternaria alternata, the pathogen of spotted knapweed (Centaurea maculosa), a major weed pest in rangelands of the northwestern United States and southwestern Canada.
Both metabolites induced necrotic lesions on knapweed in a leaf puncture assay. Alterlosin I (103) induced larger necrotic lesions compared to the small flecks induced by alterlosin II (104) [93]. Calphostin C (107) was isolated from the plant pathogen Cladosporium cladosporioides. This metabolite is a protein kinase C (PKC) inhibitor that competes at the binding site for diacylglycerol and phorbol esters. As a specific inhibitor of PKC, calphostin C (107) would be very useful as a pharmacological tool and potential drug [95]. Cercosporin (108) was isolated from cultures of Cercospora nicotianae, and was tested for toxic effects on suspension-cultured cells of tobacco. Cercosporin (108) was toxic to tobacco cells only when it was incubated under light [96]. It was found that cercosporin (108) can be produced by a few pathogenic fungi in the genus Cercospora. It was toxic to plants through the generation of activated oxygen species, particularly singlet oxygen. Cercospora fungi penetrate host tissues through the stomata and colonize the intercellular spaces. Production of the membrane-damaging cercosporin (108) would allow for cell breakdown and leakage of nutrients required by the fungi for growth and sporulation in the host plant [97]. Isocercosporin (109) was isolated from Scolecotrichum graminis, the causal fungus of a leaf streak disease of orchardgrass. This metabolite was more toxic than cercosporin (108) in a lettuce seedling growth assay [98]. Elsinochrome A (110) was isolated from Stagonospora convolvuli, a biocontrol fungus for bindweed (Convolvulus arvensis). This metabolite inhibited root elongation of tomato in a seedling growth assay, and was toxic to bindweed and grapevine leaves in a wounded-leaf assay [99]. Aromatic Macrolides Aromatic macrolides are a class of fungal polyketides possessing a macrolide core structure fused to an aromatic ring. The typical metabolites are benzenediol lactones. They have various biological activities such as phytotoxic, cytotoxic, and nematicidal activities. The structures of phytotoxic aromatic macrolides from fungi are shown in Figure 13. Curvularin (113) and α,β-dehydrocurvularin (114) were isolated from the cultures of Curvularia intermedia, the leaf pathogen of Pandanus amaryllifolius. Both metabolites were toxic to lettuce (Lactuca sativa) and bentgrass (Agrostis stolonifera) in a seed germination assay [100]. In addition, α,β-dehydrocurvularin (114) was isolated from the culture filtrates of Alternaria zinnia, the fungus causing leaf necrosis of Xanthium occidentale. It was toxic to the test plants in a leaf puncture assay [101]. α,β-Dehydrocurvularin (114) was also isolated from Nectria galligena, the apple canker pathogen in Chile. This compound significantly reduced elongation and epicotyl growth of lettuce seedlings [102]. Simple Furan and Furanone Analogues The structures of phytotoxic furan and furanone analogues from fungi are shown in Figure 14. (−)-Botryodiplodin (122) was isolated from the cultures of Botryodiplodia theobromae, the pathogen of soybean charcoal rot disease. (−)-Botryodiplodin (122) is a simple lactol analogue which was toxic to soybean and duckweed (Lemna paucicostata) [106]. This compound has been synthesized by using stereoselective radical cyclizations of acyclic esters and acetals [107]. Nigrosporione (127) was isolated from Neofusicoccum luteum, the causal agent of Botryosphaeria dieback in Australia.
It showed a phytotoxic effect, causing severe shriveling and withering of grapevine in a leaf assay [37]. Papyracilic acid (128) is a 1,6-dioxaspiro[4.4]nonene isolated from the solid culture of Ascochyta agropyrina var. nana, the leaf pathogen of quack grass (Elytrigia repens). This compound was toxic to the host plant and to a number of non-host plants of the fungus. It was considered a potential mycoherbicide for control of E. repens [110]. Penicillic acid (129) from Malbranchea aurantiaca showed significant inhibition of radicle growth of Amaranthus hypochondriacus seedlings with an IC50 value of 3.86 µM [111]. Quercilactone A (130) was isolated from Raffaelea quercivora, the pathogen of Japanese oak wilt disease. This compound exhibited weak phytotoxic activity by inhibiting root growth of lettuce seedlings [66]. Sapinofuranones A (131) and B (132), belonging to the 5-substituted dihydrofuranones, were isolated from liquid cultures of Sphaeropsis sapinea, a pathogen causing a wide range of disease symptoms on conifers such as Cupressus macrocarpa and C. sempervirens. The two metabolites are diastereomers of each other. Bioassay of sapinofuranones A (131) and B (132) gave epinasty and brown discoloration on petioles of tomato leaves, sapwood stain on inner cortical tissues of the stem of cypress seedlings, and yellowing and needle blight on pine seedlings [112]. Aromatic-Free Pyrones Phytotoxic aromatic-free pyrones include α-pyrones and γ-pyrones. Most of them are α-pyrones. The structures of phytotoxic aromatic-free α-pyrones from fungi are shown in Figure 15. ACRL toxins I (140), II (141), III (142), IV (143), and IV' (144) were isolated from the culture broth of Alternaria citri, the fungal pathogen causing brown spot disease of rough lemon (Citrus jambhiri) and Rangpur lime (Citrus limonia). They were toxic to the host plants rough lemon and Rangpur lime in leaf puncture and electrolyte leakage assays. These ACRL toxins were considered host-specific phytotoxins [118,119]. Alternaric acid (145) was isolated from the culture filtrates of Alternaria solani, the pathogen of early blight and collar rot diseases on tomato plants. Alternaric acid (145) was toxic to tomato seedlings [120]. Simple α-pyrones are often lactone derivatives of fatty acids. Diplopyrone (147) is a phytotoxic metabolite of Diplodia corticola [122] and Diplodia mutila [123], phytopathogenic fungi causing different forms of cork oak canker on Quercus suber with heavy economic losses. Diplopyrone (147) was toxic to cuttings of cork oak and tomato, causing necrosis and wilting. The absolute configuration of diplopyrone (147) was determined by direct comparison of experimental and simulated spectra [124]. Pestalopyrone (152) is a pentaketide phytotoxin isolated from Pestalotiopsis guipinii, a twig pathogen of hazelnut (Corylus avellana). This compound was toxic to a few non-host plants such as Cirsium arvense, Sonchus oleraceus, and Chenopodium album, causing extensive necrosis on the test plant leaves [127]. Solanapyrones A (160) and B (161) were isolated from the culture filtrates of Alternaria solani, the causal organism of early blight disease of tomato and potato. Both metabolites induced necrotic lesions on leaves of the host plants [132]. Solanapyrone A (160) was later isolated from the culture filtrates of Ascochyta rabiei grown in Czapek-Dox medium supplemented with an aqueous seed extract of the host plant chickpea.
Solanapyrone A (160) was toxic to the cultured cells of chickpea [133]. Three phytotoxic aromatic-free γ-pyrones (Figure 16), namely spiciferones A (162), B (163) and C (164), were isolated from the fungus Cochliobolus spicifer. Among them, spiciferone A (162) was the most toxic to wheat cotyledon protoplasts, spiciferone C (164) was the least, and spiciferone B (163) had no activity. This indicated that the substitution on the γ-pyrone ring of spiciferone A (162) affected its phytotoxicity, and that the methyl at C-2 was also essential to its phytotoxicity [134,135]. Furopyran and Pyranopyran Analogues The structures of phytotoxic furopyran and pyranopyran analogues from fungi are shown in Figure 17. Three dihydrofuropyran-2-ones, afritoxinones A (165) and B (166) and oxysporone (167), were isolated from Diplodia africana, the causal agent of branch dieback on Juniperus phoenicea. The three compounds showed phytotoxic activity on host (Phoenicean juniper) and non-host plants (holm oak, cork oak and tomato) in cutting and leaf puncture assays. Among them, oxysporone (167) was the most phytotoxic compound [136]. Biscopyran (168) is a phytotoxic hexasubstituted pyranopyran isolated from the liquid culture filtrates of Biscogniauxia mediterranea, the pathogen of cork oak (Quercus suber). This compound caused epinasty on cork oak cuttings, and wilting on non-host tomato [137]. Luteopyroxin (170) was isolated from Neofusicoccum luteum, the causal agent of Botryosphaeria dieback in Australia. This compound showed a phytotoxic effect, causing severe shriveling and withering of grapevine in a leaf assay [37]. Macrolide Analogues The structures of phytotoxic aromatic-free macrolides from fungi are shown in Figure 18. Brefeldin A (171) is a bicyclic lactone isolated from the culture filtrates of Alternaria zinnia, which was used as a biocontrol agent of Xanthium occidentale (Compositae). Brefeldin A (171) was toxic to a series of test plants, such as Chenopodium album, Cirsium arvense, Mercurialis annua, Nicotiana tabacum, Sonchus oleraceus, and Xanthium occidentale, at 10^-4 M in a leaf puncture assay [101]. Cladospolides A (172) and B (173) are isomers isolated from the culture broth of Cladosporium cladosporioides. Cladospolide A (172) inhibited root elongation of lettuce and rice seedlings. However, cladospolide B (173) promoted root elongation of lettuce seedlings. It was interesting that these isomers had different plant growth regulatory activities [139]. Cladospolide C (174), a diastereomer of cladospolide A (172), was isolated from the culture filtrate of Cladosporium tenuissimum. Cladospolide C (174) inhibited shoot elongation of rice seedlings [140]. Cladospolide B (173) and myxotrilactone A (180) were isolated from the solid-substrate cultures of the endolichenic fungus Myxotrichum sp. Both compounds significantly inhibited shoot elongation of Arabidopsis thaliana in a seedling growth assay [141]. Luteoxepinone (179) was isolated from Neofusicoccum luteum, the causal agent of Botryosphaeria dieback in Australia. It showed a phytotoxic effect, causing severe shriveling and withering of grapevine in a leaf assay [37]. Putaminoxin (187) was isolated from the liquid culture filtrates of Phoma putaminum, the causal agent of leaf necrosis of Erigeron annuus. Putaminoxin (187) was toxic to a wide range of host and non-host plants, with the leaves of E. annuus being the most sensitive [148]. Putaminoxin C (188) was isolated from the liquid culture filtrates of Phoma putaminum.
This compound showed toxic effects similar to putaminoxin (187) [149]. Pyrenophorin (189) was isolated from the cultures of Pyrenophora avenae. It depressed radicle growth of oat (Avena sativa) seedlings [150]. (−)-Dihydropyrenophorin (190) was isolated from the liquid culture of Drechslera avenae, the causal agent of leaf blotch of oats. This compound caused sunken lesions on oats and a variety of other plants at 3.2 × 10^-4 M [151]. Pyrenophorol (191) was later isolated from D. avenae and was toxic to oats [152]. Seiricuprolide (192) was isolated from Seiridium sp., the pathogen causing canker disease of cypress. It showed minor inhibition of the test plants in a cutting assay [153]. Sorbicillinoids Sorbicillinoids (also called vertinoids) are hexaketide metabolites in which cyclization has taken place at the carboxylate terminus. They have a variety of biological activities including cytotoxic, antioxidant, antiviral, antimicrobial and phytotoxic activity [154,155]. Four phytotoxic sorbicillinoids (Figure 19), namely bisvertinolone (193), demethyltrichodimerol (194), trichodimerol (195), and trichotetronine (also called bislongiqinolide, 196), were isolated from the rice solid cultures of Ustilaginoidea virens (teleomorph: Villosiclava virens), the pathogen of rice false smut disease. These compounds were evaluated for their phytotoxic activity, and showed strong inhibition against the radicle and germ elongation of rice and lettuce seedlings. Among these compounds, bisvertinolone (193) displayed the strongest inhibition [156]. Linear Polyketides The structures of phytotoxic linear polyketides from fungi are shown in Figure 20. Three AF-toxins, AF-toxins I (197), II (198), and III (199), have been reported; they were produced by Alternaria alternata, the pathogen of black spot of strawberry. They are host-specific toxins. AF-toxin I (197) also showed toxicity towards pear. AF-toxin III (199) was highly toxic towards strawberry and less toxic to pear, while AF-toxin II (198) was toxic to pear [4,157]. Depudecin (200) was isolated from the weed pathogen Nimbya scirpicola. This metabolite produced necrotic lesions on kuroguwai, cowpea, and kidney bean in a leaf puncture assay, and inhibited the root elongation of lettuce seedlings. It did not show significant effects on the other test plants, which indicated that depudecin (200) is a host-specific toxin [158]. Three host-specific toxins, namely drechslerols A (201), B (202), and C (203), were successively isolated from the culture filtrate of Drechslera maydis, the pathogen of leaf blight disease of Costus speciosus. They all caused necrotic and chlorotic lesions on the leaves of C. speciosus, and inhibited root growth of wheat seedlings [159][160][161]. Three host-specific toxins, namely PM-toxins A (204), B (205), and C (206), were isolated from the corn pathogen Phyllosticta maydis. They are linear polyketides with phytotoxicity toward the tissues and mitochondria obtained from susceptible corn varieties [162]. Spencer acid (207) is a diacrylic acid derivative isolated from Spencermartinsia viticola, the causal agent of Botryosphaeria dieback on grapevine in Australia. It exhibited strong phytotoxicity on Vitis labrusca and V. vinifera cv. Shiraz in a grapevine leaf assay [163]. Phenols and Phenolic Acids Phenols and phenolic acids are of mixed biosynthetic origin. Most phenol and phenolic acid derivatives are of polyketide origin, such as the salicylaldehyde analogues.
Other biosynthetic origins include the shikimic acid and mevalonic acid pathways [164]. The structures of phytotoxic phenols and phenolic acids from fungi are shown in Figure 21. Agropyrenol (208) is a dihydroxypentenyl-substituted salicylaldehyde isolated from the liquid cultures of Ascochyta agropyrina var. nana. When the leaves of several weed plants (i.e., Mercurialis annua, Chenopodium album, and Setaria viridis) were assayed, agropyrenol (208) proved to be phytotoxic, causing the appearance of necrotic lesions in a leaf puncture assay [57]. Ascosalitoxin (209) is a trisubstituted salicylic aldehyde derived from a methylated hexaketide via the polyketide biosynthetic pathway [165]. This metabolite was isolated from Ascochyta pisi var. pisi and showed phytotoxic activities on the leaves and pods of pea and bean, as well as on tomato seedlings [166]. Moreover, 2,4-dihydroxy-3,6-dimethylbenzaldehyde (210) was isolated from Leptosphaeria maculans, a pathogen virulent on canola. This metabolite strongly inhibited root and hypocotyl growth of lettuce seedlings [167]. p-Hydroxybenzoic acid (213) was isolated from Alternaria dauci, the causal agent of Alternaria leaf blight. It showed marked phytotoxic activity when tested in the leaf-spot assay on parsley (Petroselinum crispum), in the leaf infiltration assay on tobacco (Nicotiana alata) and marigold (Tagetes erecta), and in the immersion assay on parsley and parsnip (Pastinaca sativa) leaves. It might play an important role in the pathogenicity of the fungus [169]. Diorcinol (also called 3,3'-dihydroxy-5,5'-dimethyldiphenyl ether, 224) was isolated from Diplodia corticola, an oak pathogen. This metabolite was toxic to the leaves of Quercus afares, Q. suber, Q. ilex and Celtis australis at 1 mg/mL, causing necrotic lesions [175]. Diorcinol (224) was also isolated from the endophytic fungus Epichloe bromicola obtained from Elymus tangutorum grass. It displayed obvious inhibition of the root and shoot growth of Lolium perenne and Poa crymophila seedlings, and was as active as the positive control glyphosate [176]. p-Methoxyphenol (235) was isolated from the culture filtrates of Ascochyta lentis var. lathyri, the causal agent of Ascochyta blight of grass pea (Lathyrus sativus). p-Methoxyphenol (235) caused clear necrosis on leaves of seven test plants, and inhibited seed germination and rootlet elongation of the parasitic weed Phelipanche ramosa [178]. Phomozin (237) is an ester of orsellinic acid and dimethylglyceric acid. It was isolated from Phomopsis helianthi, the causal agent of leaf necrosis and stem cankers of sunflowers. Phomozin (237) was considered a host-specific phytotoxin on the basis of leaf puncture and cutting assays [179]. Stemphol (249) was isolated from Stemphylium botryosum, the pathogen of oilseed rape. This metabolite was toxic to the cells of oilseed rape and chickpea in a cell viability assay [184]. Sesquiterpenoids Many sesquiterpenoids from fungi showed phytotoxic activities. Their structures are shown in Figure 23. Two drimane-type sesquiterpenoids, named altiloxins A (257) and B (258), were isolated as the main phytotoxins from Phoma asparagi, the causal agent of stem blight disease on asparagus. When tested on root elongation of non-host lettuce seedlings, both compounds showed a weak inhibitory activity. Meanwhile, in the same assay carried out on the host plant at 10 µg/mL, they inhibited root elongation by 48.2% and 48.5%, respectively [187].
Aspterric acid (259) was previously isolated from Aspergillus terreus and found to inhibit the pollen development of Arabidopsis thaliana. However, the mode of action was not clear [188]. This compound was later found to inhibit dihydroxy acid dehydratase (DHAD), which is an essential and highly conserved enzyme among plant species that catalyses β-dehydration reactions to yield the α-keto acid precursors to isoleucine, valine and leucine. DHAD, along with two other enzymes, acetolactate synthase (ALS) and acetohydroxy acid isomeroreductase (KARI), constitutes the plant branched-chain amino acid (BCAA) biosynthetic pathway, which is essential for plant growth [189]. Prehelminthosporol (295) was isolated from Drechslera sorokiniana (syn. Helminthosporium sativum, Bipolaris sorokiniana). This metabolite was a plant growth regulator that promoted shoot growth of rice seedlings but inhibited the coleoptile growth of wheat seedlings [201]. Prehelminthosporol (295) and dihydroprehelminthosporol (296) were isolated from the culture filtrates of a Bipolaris species which was the pathogen of Johnson grass (Sorghum halepense), one of the worst weeds in tropical and subtropical areas of the world. Both metabolites were toxic towards sorghum (Sorghum bicolor) in a leaf spot assay [202]. Prehelminthosporolactone (297) was later isolated from the culture filtrates of the Bipolaris species and was toxic to the leaves of sorghum and sicklepod (Cassia obtusifolia) [203]. Pyrenophoric acid (313) and pyrenophoric acids B (314) and C (315) were isolated from Pyrenophora semeniperda, a seed pathogen of cheatgrass (Bromus tectorum). The three metabolites showed phytotoxic activity by reducing coleoptile elongation of cheatgrass seedlings [209,210]. Among the three metabolites, pyrenophoric acid B (314) was the most phytotoxic, acting on the abscisic acid (ABA) biosynthesis pathway at the level of the alcohol dehydrogenase ABA2 to reduce seed germination of cheatgrass [211]. Seiricardines A (319), B (320), and C (321) were separately isolated from the culture filtrates of Seiridium cardinale, S. cupressi, and S. unicorne, all of which were associated with canker disease of cypress (Cupressus sempervirens) in the Mediterranean area [213,214]. The solution of seiricardine A (319) at 0.3 mg/mL was absorbed by severed twigs of cypress and caused leaf yellowing and browning. Subperidermal injection of the solution of seiricardine A (319) at 0.1 mg/mL into young cypress trees caused necrotic lesions on the stem and a diffuse yellowing of adjacent twigs [213]. Seiricardines B (320) and C (321) were epimeric diastereomers. They showed similar phytotoxic activity to seiricardine A (319) [214]. Sorokinianin (322) was isolated from the culture broth of Bipolaris sorokiniana, the pathogen of barley. This compound inhibited germination of the seeds of barley (Hordeum vulgare) [215]. Chenopodolin (331) was an unrearranged ent-pimaradiene diterpene isolated from the pathogen Phoma chenopodiicola, which was proposed for the biological control of Chenopodium album, a common worldwide weed of arable crops such as sugar beet and maize. At a concentration of 2 mg/mL, the compound caused necrotic lesions on the leaves of Mercurialis annua, Cirsium arvense, and Setaria viridis [220]. Fusicoccin A (332) and dideacetylfusicoccin A (333) were diterpene glycosides produced by the plant pathogenic fungus Fusicoccum amygdali (syn. Phomopsis amygdali) with a unique O-prenylated glucose moiety. They stimulated seed germination of the parasitic weeds Orobanche spp. [221].
Further mechanistic investigation showed that fusicoccin A (332) bound to a hydrophobic cavity in plant 14-3-3 proteins and stabilized the interaction with the C-terminal phosphorylated domain of the plasma membrane H+-ATPase, thereby promoting stomatal opening and eventually leading to plant death [222]. Triterpenoids Phytotoxic triterpenoids are mainly isolated from fungi of the Basidiomycetes. Their structures are shown in Figure 26. Three lanostane triterpenoids, namely aeruginosols A (361), B (362) and C (363), were isolated from the fruiting bodies of Stropharia aeruginosa. Among them, aeruginosol C (362) showed root growth inhibitory activity on lettuce seedlings [238]. Meroterpenoids Meroterpenoids are natural products that are partially derived from terpenoid biosynthetic pathways. Phytotoxic meroterpenoids usually involve monoterpene, sesquiterpene, or diterpene biosynthetic pathways. Meroterpenoids Containing Monoterpene Biosynthetic Pathways The structures of fungal phytotoxic meroterpenoids containing monoterpene biosynthetic pathways are shown in Figure 27. Foeniculoxin (367), a geranylhydroquinone, was isolated from Phomopsis foeniculi, the fungal pathogen of fennel (Foeniculum vulgare subsp. vulgare) that causes necrosis of stems, leaves and inflorescences, leading to a marked decrease in fruit production [242]. Guignardone A (368) was isolated from the culture filtrates of Macrophomina phaseolina, which was the charcoal rot pathogen of many crops. It was toxic to the non-host plant tomato in a leaf puncture assay. However, it did not show phytotoxic activity to the host plant soybean [243]. Phyllostictones A-C (369-371) and E (372) were isolated from the endophytic fungus Phyllosticta capitalensis derived from the plant Cephalotaxus fortunei. These compounds inhibited shoot and root growth of Lactuca sativa and Lolium perenne seedlings [21]. Phomentrioloxin (373), a phytotoxic geranylcyclohexenetriol, was isolated from the liquid culture of Phomopsis sp. (teleomorph: Diaporthe gulyae) which was isolated from symptomatic saffron thistle (Carthamus lanatus). Phomentrioloxin (373) caused the appearance of necrotic spots when applied to the leaves of both host and non-host plants. It also caused growth and chlorophyll content reduction of the fronds of Lemna minor and inhibition of tomato rootlet elongation [244]. The structure-activity relationship study showed that the hydroxy groups at C-2 and C-4 appeared to be important features for the phytotoxicity, as well as an unchanged cyclohexenetriol ring and the unsaturations of the geranyl side chain [245]. Phomentrioloxins B (374) and C (375) were isolated from Diaporthe gulyae, the pathogen causing stem canker of sunflower (Helianthus annuus). Phomentrioloxin B (374) caused small but clear necrotic spots on a number of plant species when assayed at 5 mM on punctured leaf disks of weedy and crop plants [125]. Meroterpenoids Containing Sesquiterpene Biosynthetic Pathways The structures of fungal phytotoxic meroterpenoids containing sesquiterpene biosynthetic pathways are shown in Figure 28. 4β-Acetoxytetrahydrobotryslactone (376) was isolated from the culture broth of Botrytis cinerea. This lactone compound showed a phytotoxic effect on Phaseolus vulgaris when tested up to 250 µg/mL in a leaf disk assay. It was speculated that the biosynthetic origin of this compound belonged to the sesquiterpene-polyketide pathway [246].
Four meroterpenoid quinones, cochlioquinones A (377) and B (378), isocochlioquinone A (379), and stemphone (384), were isolated from the cultures of Bipolaris bicolor, the pathogen of gramineous plants such as rice and millet. They inhibited the root growth of the seedlings of finger millet and rice [247]. Their absolute configurations were further elucidated by spectroscopic data interpretation, single-crystal X-ray diffraction analysis, chemical transformations, and biosynthetic considerations [248]. According to genome sequence analysis of Bipolaris sorokiniana, they belong to the polyketide-sesquiterpenoid hybrid compounds biosynthesized through a type I polyketide gene cluster [249]. Phyllostictone D (382) was isolated from the endophytic fungus Phyllosticta capitalensis derived from Cephalotaxus fortunei. This compound inhibited shoot and root growth of Lactuca sativa and Lolium perenne seedlings [21]. Meroterpenoids Containing Diterpene Biosynthetic Pathways The structures of fungal phytotoxic meroterpenoids containing diterpene biosynthetic pathways are shown in Figure 29. Three meroterpenoids, namely colletotrichin (also called colletotrichin A, 386), colletotrichin B (387) and colletotrichin C (388), were isolated from the cultures of Colletotrichum nicotianae. Their structures all contained a norditerpene and a polysubstituted γ-pyrone. When applied to tobacco leaves, these compounds induced symptoms similar to those of the tobacco anthracnose caused by C. nicotianae [121]. They were also toxic to lettuce and rice seedlings [251]. Cyclic Peptides Cyclic peptides are cyclic compounds formed mainly by amide bonds between either proteinogenic or non-proteinogenic amino acids. Phytotoxic cyclic peptides from fungi mainly include ester bond-containing cyclic peptides (also called cyclic depsipeptides) and cyclic peptides without ester bonds. Cyclic Depsipeptides Cyclic depsipeptides (CDPs) are cyclopeptides in which amide groups are replaced by corresponding lactone bonds due to the presence of a hydroxylated carboxylic acid in the peptide structure [252]. The structures of phytotoxic cyclic depsipeptides from fungi are shown in Figure 30. AM-toxins I (389), II (390) and III (391) belong to the cyclic tetradepsipeptides. They were host-specific phytotoxins isolated from Alternaria mali, the pathogen of apple blotch disease [253,254]. It was found that AM-toxin I (389) inhibited photosynthetic O2 evolution in a host-specific manner [255]. Destruxin congeners are cyclic hexadepsipeptides belonging to the host-specific phytotoxins. Destruxin A (392) was isolated from the culture broth of Alternaria linicola, the seed-borne pathogen of linseed (Linum usitatissimum). The infected seeds caused poor germination and damping-off of the seedlings. Alternaria linicola also caused leaf spotting on seedling and adult plants, and a form of head blight in the seed capsules which resulted in a loss of yield and reduction in oil quality [256]. Three cyclic hexadepsipeptides, namely destruxin B (393), desmethyldestruxin B (395) and homodestruxin B (396), were isolated from the culture filtrates of Alternaria brassicae, the pathogen responsible for the black spot of canola. They were assayed on the leaves of host and non-host plants. Destruxin B (393) induced symptoms ranging from severe chlorosis and necrosis to almost no visible chlorosis. Destruxin B (393) was proved to be a host-specific phytotoxin [257].
Both destruxin B (393) and homodestruxin B (396) could be transformed to hydroxydestruxin B (394) and hydroxyhomodestruxin B (397), respectively, by host plants. The hydroxylated products (394 and 397) were less phytotoxic than their corresponding destruxins. This was considered a detoxification strategy of canola against Alternaria fungi [258]. Two destruxin E derivatives, namely destruxin E chlorohydrin (398) and [β-Me-Pro]destruxin E chlorohydrin (399), from Beauveria felina were screened and found to have phytotoxic activity against the radicle growth of Amaranthus retroflexus seedlings. The structure-activity study showed that the chlorine atom played an important role in their phytotoxic activity [28]. Phytotoxic enniatin derivatives included enniatins A (400), A1 (401), B (402), and B1 (403). They belong to the class of cyclodepsipeptides found in various Fusarium species, and consist of alternating residues of D-2-hydroxyisovaleric acid and a branched-chain N-methyl-L-amino acid, linked by peptide and ester bonds. Enniatins are host non-specific toxins which caused wilt and necrosis during infection of the host, probably related to their ionophoric properties [259]. Enniatins from Fusarium tricinctum reduced the germination of wheat seeds [260]. Enniatins might act synergistically as a phytotoxin complex, which caused wilt and necrosis of plant tissue [261]. Enniatin B (402) and acetamido-butenolide (515) isolated from Fusarium avenaceum, the pathogen of spotted knapweed (Centaurea maculosa), also acted synergistically to cause necrotic lesions on the leaves of different plant species [262]. Two bicyclic lipopeptides, gramillins A (404) and B (405), were isolated from Fusarium graminearum. They were produced in planta in maize silks and promoted fungal virulence on maize, but had no discernible effect on wheat head infection. Leaf infiltration of the gramillins induced cell death in maize, but not in wheat. This indicated that gramillins were host-specific phytotoxins which were deployed as virulence agents by F. graminearum in maize [263]. Phomalide (406) was a host-selective phytotoxin isolated from virulent isolates of Leptosphaeria maculans. It was a cyclic pentadepsipeptide with three α-amino acids and two α-hydroxy acids. Phomalide (406) caused disease symptoms (necrotic, chlorotic, and reddish lesions) on canola, but not on either brown or white mustard [264]. Roseotoxin B (407) was isolated from Trichothecium roseum, an apple pathogen. This metabolite was able to penetrate apple peel and produce chlorotic lesions, as shown by a kinetic fluorescence imaging method. This was direct evidence of the phytopathogenic activity of roseotoxin B (407) of Trichothecium roseum on apple [265]. Phomalirazine (412) was isolated from Leptosphaeria maculans, the pathogen of canola. This compound was toxic to canola and brown mustard in a leaf puncture assay [270]. HV-toxin M (437) was another host-specific phytotoxin isolated from the culture broth of Helminthosporium victoriae, the causal agent of Victoria blight disease of oat [281]. Phomopsin A (438) was a cyclic tripeptide with a tripeptide side chain isolated from Phomopsis leptostromiformis. This compound inhibited seedling growth of lupins [282,283]. Ustiloxins A (442), B (443) and G (444) were isolated from Ustilaginoidea virens (teleomorph: Villosiclava virens), the pathogen of rice false smut disease. They showed strong inhibition on the radicle and germ elongation of rice seedlings.
At a concentration of 200 µg/mL, the inhibitory ratios of radicle and germ elongation were more than 90% and 50%, respectively, the same effect as that of the positive control (glyphosate). They also induced abnormal swelling of the roots and germs of rice seedlings [267]. Noncyclic Oligopeptides Fungal phytotoxic noncyclic oligopeptides are linear compounds composed of several amino acids (Figure 33). AS-I toxin (446) was a phytotoxic tetrapeptide (Ser-Val-Gly-Glu) isolated from the culture filtrates of Alternaria alternata, a pathogen of sunflower (Helianthus annuus) that causes necrotic leaf spots. AS-I toxin (446) was toxic to sunflower. Nontoxic or only very slight toxic effects were observed on the other tested plants, which indicated that AS-I toxin (446) was a host-specific phytotoxin [288]. Depsilairdin (447), produced by Leptosphaeria maculans, possessed a tripeptide coupled with a sesquiterpene moiety. Depsilairdin (447) caused disease symptoms similar to those caused by the pathogen. Plant leaves of brown mustard treated with depsilairdin (447) showed strong necrotic and chlorotic lesions, but such symptoms were not observed in canola at a wide concentration range from 1 µM to 1 mM [289]. Cytochalasin Congeners Cytochalasins belong to the class of perhydroisoindolyl macrocyclic lactones. The structures of phytotoxic cytochalasin congeners from fungi are shown in Figure 34. Cytochalasin B (448) was isolated from the culture filtrates of the plant pathogens Drechslera wirrenganensis and D. campanulata, and was toxic to the leaves of faba bean in a leaf puncture assay [290]. Cytochalasins C (449) and D (450) were isolated from the endophytic fungus Xylaria cubensis associated with Eugenia brasiliensis (Myrtaceae). Both cytochalasins showed phytotoxic activity on wheat coleoptiles [291]. Cytochalasin D (450) was also isolated from the culture filtrates of Ascochyta rabiei (teleomorph: Didymella rabiei), the pathogen of chickpea. This metabolite was toxic to the leaflet cells of chickpea [292]. Three cytochalasins named phomacins D (455), E (456) and F (457) were identified from the wheat pathogen Parastagonospora nodorum by genomics-driven discovery. Both phomacins D (455) and E (456) markedly inhibited wheat seed germination at 100 µg/mL. Phomacin F (457) had only weak inhibitory activity on wheat seed germination. Interestingly, phomacin D (455) did not show any inhibition of seed germination against the dicots Arabidopsis thaliana and Lepidium sativum, which indicated that the seed germination inhibition of phomacin D (455) could be specific to monocots [294]. Pyrichalasin H (458) was isolated from the cultures of Pyricularia grisea, the causative fungus of blast disease in crabgrass (Digitaria sanguinalis). This compound strongly inhibited growth of rice seedlings at 1 µg/mL [295]. Lactams The structures of phytotoxic lactams from fungi are shown in Figure 35. Cichorine (459), zinnimidine (485) and Z-hydroxyzinnimidine (486) were isolated from the fungus Alternaria cichorii, the pathogen of foliar blight disease of Russian knapweed (Acroptilon repens). These compounds were toxic to the leaves of Russian knapweed in an in vitro leaf puncture assay [185]. Four oxazatricycloalkenones, phyllostictines A (467), B (468), C (469) and D (470), isolated from Phyllosticta cirsii were phytotoxic to Cirsium arvense. Phyllostictine A (467) proved to be highly toxic.
Phyllostictines B (468) and D (470) were slightly less toxic than phyllostictine A (467), whereas phyllostictine C (469) was almost non-toxic, which indicated a clear structure-activity relationship between the phytotoxic activity and the structural features characterizing the phyllostictine group. Phyllostictine A (467) could be a potential mycoherbicide for Cirsium arvense biocontrol [300]. Porritoxin (471) was first identified as a benzoxazocine derivative from the culture broth of Alternaria porri, the causal pathogen of black spot disease in stone-leek and onion [301]. The structure of porritoxin (471) was then revised to an isoindol-1-one congener [302]. This compound inhibited shoot and root growth of lettuce seedlings at 10 µg/mL [301]. Another isoindol-1-one, namely porritoxin sulfonic acid (472), was later isolated from A. porri. A structure-phytotoxicity investigation showed that the N-alkyl and hydroxyl groups contributed to the phytotoxicity, but that this activity became weak with sulfonation [286]. Triticones A (481) and B (482) were two spirocyclic γ-lactams isolated from Drechslera tritici-repentis, the causal agent of reddish brown spots on wheat (Triticum vulgare). The two compounds, tested as a mixture, showed phytotoxicity on the leaves and protoplasts of wheat [314]. Indole Derivatives The structures of phytotoxic indole derivatives from fungi are shown in Figure 36. Chlamydosporin (487) was isolated from the endophytic fungus Fusarium chlamydosporum residing in the roots of Suaeda glauca. This indole derivative exhibited significant phytotoxic activity against the radicle growth of Echinochloa crusgalli seedlings, with an inhibition rate of more than 80% even at a concentration of 1.25 µg/mL [316]. Colletophyrandione (488) was a tetrasubstituted indolylidenepyrandione isolated from the culture filtrates of Colletotrichum higginsianum. It was toxic to four plant species (Sonchus arvensis, Helianthus annuus, Convolvulus arvensis, and Ambrosia artemisiifolia) in a leaf puncture assay [317]. Crypticin C (489) was isolated from the culture filtrates of Diaporthella cryptica, an emerging hazelnut pathogen. This compound was active in the tomato cutting assay [173]. Pyridine Derivatives The structures of phytotoxic pyridine derivatives from fungi are shown in Figure 37. Ascosonchine (491), the enol tautomer of 4-pyridylpyruvic acid with herbicidal activity, was produced by Ascochyta sonchi, the leaf pathogen of Sonchus arvensis, a perennial herbaceous weed occurring throughout the temperate regions of the world. Ascosonchine (491) was toxic to Sonchus arvensis and showed selective herbicidal properties [319]. Fusaric acid (also called 5-butylpicolinic acid, 492) was isolated from Fusarium oxysporum. It was toxic to tobacco leaves in a puncture assay [320]. Fusaric acid (492) was produced by several Fusarium species which commonly infected cereal grains and other agricultural commodities [321]. Both fusaric acid (492) and 9,10-dehydrofusaric acid (493) were isolated from Fusarium nygamai, which caused large leaf and stem necrosis on the host Striga hermonthica. These two compounds caused wide chlorosis and necrosis in the punctured area of tomato leaves as well as strong inhibition of root elongation of tomato seedlings [322]. Luteoethanones A (502) and B (503), two 1-substituted ethanones, were isolated from Neofusicoccum luteum, the causal agent of Botryosphaeria dieback on grapevine.
Both metabolites caused large necrotic spots, severe shriveling, and distortion of the leaf lamina of grapevine in a detached leaf assay [39]. Solanapyrone C (528) has been isolated from the culture filtrate of Alternaria solani, the causal organism of early blight disease of tomato and potato [132], and the culture filtrates of Ascochyta rabiei, the pathogen of chickpea [133]. Solanapyrone C (528) was toxic to the leaves of the host plants. Other Nitrogen-Containing Metabolites The structures of other fungal phytotoxic nitrogen-containing metabolites are shown in Figure 39. Brasicicolin A (529) was isolated from Alternaria brassicicola, the dark leaf spot pathogen of Brassica species. Brasicicolin A (529) was a polyester of mannitol esterified with two α-isocyanoisopentanoyl, two α-hydroxyisopentanoyl and two acetyl residues. It was a mixture of diastereomers due to the epimerizable protons adjacent to the isocyano group. Brasicicolin A (529) was a host-specific phytotoxin, causing chlorosis and necrosis on the leaves of brown mustard (Brassica juncea cv. Cutlass, susceptible) at 0.5 mM, but no detectable damage on the leaves of white mustard (Sinapis alba cv. Ochre, resistant) [335]. Maculansin A (530) was isolated from Leptosphaeria maculans (anamorph: Phoma lingam) cultured in potato dextrose broth (PDB) at high temperature. The structure of maculansin A (530) was similar to that of brasicicolin A (529). Maculansin A (530) was more toxic to the resistant plant (brown mustard) than to the susceptible plant (canola) [167]. (S)-Ornidazole (531), a nitroimidazole, was isolated from the solid culture of Penicillium purpurogenum derived from soil. This compound inhibited root and hypocotyl growth of radish seedlings at 100 µM [26]. β-Nitropropionic acid (532) was isolated from Septoria cirsii, the pathogen of the weed Canada thistle (Cirsium arvense), which grows in virtually all temperate areas of the world. This compound inhibited seed germination and root elongation, and caused the typical symptoms of chlorosis and necrosis on the leaves of Canada thistle and other test plants [336]. Two isoquinoline derivatives named pyrenolines A (533) and B (534) were isolated from the cultures of Pyrenophora teres, the pathogen of barley. Both compounds were toxic to both monocots and dicots in a leaf puncture assay [337]. Miscellaneous The structures of miscellaneous phytotoxic metabolites from fungi are shown in Figure 40. Crypticin A (535), a phenylpropanoid, was isolated from the culture filtrates of Diaporthella cryptica, the pathogen of hazelnut. This compound was also called 2-hydroxy-3-phenylpropanoate methyl ester. It was found to be inactive at 1 mg/mL on the leaves of cork oak, grapevine, hazelnut, and holm oak in a leaf puncture assay [173]. Two cyclohexene epoxides named (+)-epiepoxydon (536) and PT-toxin (544) were isolated from Pestalotiopsis longiseta and P. theae. They induced leaf necrosis on the test leaves [338]. Phaseocyclopentenones A (540) and B (541) are penta- and tetrasubstituted cyclopentenones isolated from the culture filtrates of Macrophomina phaseolina, the charcoal rot pathogen of many crops. Both compounds were toxic to the non-host plant tomato (Solanum lycopersicum) in leaf puncture and seedling cutting assays. However, they did not show phytotoxic activity to the host plant soybean (Glycine max) [243]. Phenylacetic acid (542) was isolated from the liquid culture filtrates of Biscogniauxia mediterranea, the pathogen of cork oak (Quercus suber).
This compound caused epinasty on cork oak cuttings, and wilting on non-host tomato [137]. Stagonosporyne G (545) was an oxygenated acetylenic cyclohexanoid isolated from Parastagonospora nodorum SN15, the pathogen of wheat. This compound displayed significant phytotoxic activity, killing Arabidopsis thaliana seedlings [339]. Conclusions and Future Perspectives Due to the long-term co-evolution of pathogenic fungi and their host plants, fungi have evolved strategies for successful infection of the host. Among these strategies is the production of phytotoxins. Fungal phytotoxins play an important role in the process of pathogenesis as mediators of virulence [12]. They are either host-specific or non-host-specific. This is why many phytotoxins have been identified from phytopathogenic fungi. In fact, fungal phytotoxins play diverse roles in plant disease, from influencing symptom expression and disease progression to being required for pathogenesis. Some phytotoxins are directly toxic, killing plant cells and allowing infection of the dead cells. Others interfere with the induction of defense responses, or induce programmed cell death-mediated defense responses in order to generate the necrosis required for pathogenicity [7,167,335]. In recent years, more and more phytotoxic metabolites have been discovered from other fungi such as plant and animal endophytic fungi, soil-derived fungi, and marine-derived fungi [21-24,27,28]. Most of these phytotoxic metabolites are non-host-specific. They are suitable for the development of broad-spectrum herbicides. In addition, the phytotoxins from weed pathogens are a very promising source for the development of specific herbicides for weed control [25,340,341]. An example of success is the discovery of the ophiobolins, which have shown their potential as herbicides [229,230]. Transformation of phytotoxin-resistance genes from the fungus into plants is another strategy. Aspterric acid (259) is a fungal phytotoxin. Aspterric acid-producing fungi have the self-resistance gene astD, whose product was validated to be insensitive to aspterric acid (259). The fungal self-resistance gene astD has been deployed as a transgene in plants to create aspterric acid-resistant crops. Aspterric acid (259) is thus a promising lead for development as a broad-spectrum commercial herbicide [189]. The current review describes the phytotoxic activities of secondary metabolites from fungi. In fact, many fungal phytotoxins have other biological activities in addition to their phytotoxic activity. For example, emodin (86) exhibits antioxidant [342], antitumor [343], phytotoxic [86], insecticidal [344], antimicrobial [344], and acetylcholinesterase (AChE) and glutathione S-transferase (GST) inhibitory [344] activities. In addition, many other isolated fungal metabolites have not been evaluated for their phytotoxic activities, either because phytotoxicity assays were not performed or because insufficient amounts of the compounds were isolated for toxicity assays. These metabolites remain to be further tested for their phytotoxicities. For example, herbarin (47) is a naphthoquinone congener. It was previously isolated from a few fungal species such as Anteaglonium sp. FL0768 [345] and Corynespora sp. BA-10763 [346], and was later shown to have obvious phytotoxic activity [59]. In addition to their potential as herbicides, fungal phytotoxins have other potential applications in agriculture [1].
As different fungal species produce specific phytotoxins, this characteristic could be used to develop rapid, simple and specific methods to recognize plant diseases, such as practical kits (e.g., rapid test strips) to be used directly in the field by farmers. Furthermore, phytotoxins could be used to select plant varieties resistant to disease instead of using whole plant-pathogen systems. In this way, disease resistance breeding can be accelerated.
2021-04-26T05:15:48.422Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "6d5bcba0e7825d2fd0bf03fc07668de96374aa1a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6651/13/4/261/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6d5bcba0e7825d2fd0bf03fc07668de96374aa1a", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233236646
pes2o/s2orc
v3-fos-license
Changes Over Time and Predictors of Online Gambling in Three Norwegian Population Studies 2013–2019 Objectives: To investigate changes over time and identify predictors of online gambling among gamblers by using three Norwegian representative samples covering a 6-year (2013–2019) period. We also aimed to identify different characteristics (including video game participation and video gaming problems) of online compared to offline gamblers. Methods: Data from gamblers (N = 15,096) participating in three cross-sectional surveys (2013, 2015, and 2019) based on random sampling from the Norwegian Population Registry were analyzed. Participants were asked how frequently they engaged in online gambling on different platforms (e.g., mobile phone). Data on sociodemographics, games gambled, gambling problems, gaming, and problem gaming were collected and analyzed by logistic regression analyses. Results: Overall, an increase in online gambling from 2013 to 2015 was found (a larger percentage of gamblers reported having gambled online at least once during the last year), and an increase in online gambling from 2015 to 2019 was found (more gamblers reported having gambled online at least once last year and at least once per week). The increase was largest for gambling on mobile phone. Consistent predictors of online gambling (at least once last year and at least once per week) were male gender, high income, being unemployed, being on disability pension, having work assessment allowance, being a homemaker or retiree, number of games gambled, and gambling problems. Conclusions: Online gambling, especially on mobile phones, has increased significantly during the last 6 years in Norway. Hence, gambling availability seems to have grown, which may pose a risk for development of gambling problems. Compared to offline gamblers, online gamblers were more likely to be men, young, not working or studying, gambling on several games, and having gambling problems. Responsible gambling efforts aiming at preventing or minimizing harm related to online gambling should thus target these groups. INTRODUCTION During recent decades, we have witnessed a sharp rise in Internet use. The partial substitution of many offline activities, including gambling, with online analogs is probably both a cause and a consequence of this increase. Online gambling is however assumed to be more addictive than offline gambling, as the former entails greater availability (both in terms of time and location), anonymity, ease of betting, and enabling of games with high gambling speed (1)(2)(3)(4). Online gambling is also cheaper to operate, often leading to higher payout ratios, which may also intensify gambling behavior. In line with this, a German study estimated that replacing 10% of offline gambling with online gambling would increase an individual's likelihood of being a problematic gambler by 8.8-12.6% (5). So far, few studies (6,7) have investigated which mode of access online gamblers use. However, one study of treatmentseeking gamblers showed that mobile phones were the most commonly used platform for gambling online (7). Whether the prevalence rates of online gambling through mobile phones have changed over time in line with the development of smart phone technology has previously not been investigated, hence this should be elucidated empirically. The vast majority of studies to date show that online gamblers report more gambling problems than offline gamblers (8)(9)(10)(11)(12)(13)(14). 
However, one study found an inverse relationship between online gambling and gambling problems when controlling for the number of gambling activities (15). Consequently, it is recommended that controlling for the latter is important when investigating whether online gambling actually is associated with gambling problems (16)(17)(18). Another consistent finding, in addition to a higher prevalence of gambling problems among online compared to offline gamblers, is that online gambling is associated with the male gender and young subjects (9,13,19). The following factors have also been associated with online (as opposed to offline) gambling in at least one study: being single, consuming more alcohol, welleducated and in managerial/professional occupations, tobacco use, fewer gambling fallacies, being employed, more positive attitudes toward gambling, higher gambling expenditure, not being Asian, illicit drug use, higher household income, being engaged in a higher number of gambling activities, and being more likely to bet on sports (9,19,20). Still, the number of studies identifying predictors of online gambling is rather limited, and few such studies have been conducted using national representative samples of gamblers. Hence, more studies identifying characteristics of online gamblers are warranted. Another pertinent topic in terms of online gambling concerns the relationship with video game playing. Although one study showed that consumers perceive clear market boundaries between online gambling and gaming products (21), it has nevertheless been suggested that video games with perceived gambling elements may initiate the process of normalizing and increasing the interest in gambling (22). Studies have attested to this notion, showing a positive relationship between online gambling and Internet gaming disorder (23), and a longitudinal study showed problematic video gaming to be a predictor of later problematic gambling (24). A link between problematic gambling and purchase of loot boxes in video games has also been documented (25). Against this backdrop, the aim of the present study was to investigate changes over time and identify predictors of online gambling among gamblers by using three Norwegian representative samples covering a 6-year (2013-2019) period. We also aimed to identify different characteristics (including video game participation and video gaming problems) of online compared to offline gamblers. The following research questions were formulated: (1) Is there an overall difference in the proportion of Internet gamblers (gambling online either at least once last year and at least weekly) between 2013, 2015, and 2019? (2) Is there a difference in the proportion of gamblers using stationary computers, laptops, tablets, and mobile phones for gambling purposes (either at least once last year and at least weekly) between 2013, 2015, and 2019? (3) Across all time points, which factors (gender, age, marital status, children in household, educational level, income, occupational status, country of birth, gambling problem category, number of games gambled, video game participation, and video game problems category) can predict online gambling (either at least once last year and at least weekly)? These questions are important, as answering them can inform gambling operators, regulatory authorities, and treatment agencies about the development of online gambling and identify characteristics of gamblers engaged in online gambling. 
The potential added value of the present study to the research field pertains in particular to the use of national representative samples of gamblers, cross-sectional data covering a 6-year period, and the ability to characterize online gamblers on central sociodemographic and gambling characteristics. Procedures The data were collected as part of three national surveys about gaming and gambling problems in Norway. The first survey was conducted during autumn 2013. Here, 24,000 persons aged 16-74 years were randomly selected from the Norwegian Population Registry. They were sent a questionnaire with a prepaid return envelope and an information letter explaining the purpose of the study. Up to two reminders with a new questionnaire were sent to those who did not respond. The respondents could also answer on the Internet. A total of 10,081 answered, of whom 6,034 had gambled during the last year. Another national survey using a similar approach (albeit only based on paper-based questionnaires) was conducted during autumn 2015, entailing a gross sample of 14,000. A total of 5,485 took part in the survey, of whom 3,232 had gambled during the last year. A third survey was conducted in 2019, also using a similar procedure, except that the questions initially could only be answered online. However, both reminders in the 2019 survey included a paper-based questionnaire together with a prepaid return envelope. In the 2019 survey, the gross sample size was 30,000. A total of 9,248 participated, of whom 5,830 had gambled during the last year. When adjusting for those not able to answer (wrong addresses, dead, abroad, sick, or not able to understand Norwegian), the response rates of the three surveys were 43.6, 40.8, and 32.7%, respectively. In terms of inclusion criteria in the surveys, no other requirement than having an address in Norway and being between 16 and 74 years old was enforced. For participating in the present study, the only additional inclusion criterion was that the participant needed to have participated in gambling at least once last year. In order to keep the response rate as high as possible, recommended approaches such as keeping the questionnaire relatively short, printing it in color with a unique ID number, arranging a lottery with gift cards (worth 500 NOK ≈ €50) for those who replied, showing the researchers' university affiliation, and highlighting confidentiality were emphasized in all three surveys (26). The first two surveys were approved by the Regional Committee for Medical and Health Research Ethics (REK vest 2013/120), whereas the third survey was approved by the Norwegian Center for Research Data (No. 528056). Instruments The same or similar items were used in all three surveys. Gambling Participation Participants were asked to report their gambling participation on an item defining gambling ("staking money on the outcome of an event or draw where one can win money") and asked if they had participated in gambling (in any form) during the last 12 months ("no"/"yes"). Online Gambling Respondents were asked how often they gambled online using: (a) stationary PC, (b) laptop, (c) tablet, and (d) mobile phone. Each of these four items could be answered: "never," "less often than once per month," "about once per month," "about once per week," and "about once per day." Hence, online gambling was in the present paper defined as any type of gambling (e.g., from placing odds online to gambling on online interactive games) involving the use of the Internet.
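To make this operationalization concrete, the following is a minimal illustrative sketch, not the authors' actual code, of how the four platform items could be collapsed into the two outcomes analyzed later in the paper: having gambled online at least once during the last year, and gambling online at least weekly on any platform. The column names and the exact response labels are assumptions of the sketch.

import pandas as pd

# Assumed response labels, ordered from least to most frequent
FREQ_ORDER = ["never", "less often than once per month", "about once per month",
              "about once per week", "about once per day"]
PLATFORMS = ["stationary_pc", "laptop", "tablet", "mobile_phone"]  # hypothetical column names

def derive_online_gambling_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Collapse the four platform-frequency items into two binary outcomes:
    ever gambled online (any response above 'never') and frequent online
    gambling (at least 'about once per week' on any platform)."""
    rank = {label: i for i, label in enumerate(FREQ_ORDER)}
    ranks = df[PLATFORMS].apply(lambda col: col.map(rank))
    out = df.copy()
    out["online_ever"] = (ranks.max(axis=1) >= 1).astype(int)
    out["online_weekly"] = (ranks.max(axis=1) >= 3).astype(int)
    return out

Taking the maximum frequency across the four platforms is also what yields the "any mode of access" variable referred to in the Results.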
Problem Gambling Severity Index The Problem Gambling Severity Index (PGSI) assesses gambling problems and comprises nine items, each consisting of a description of a problem gambling behavior or a consequence which the participants are asked to rate according to occurrence frequency, ranging from "never" (0) to "always" (3). Based upon the composite score across the nine items, each participant is assigned to one of four gambling categories: Non-problem gambling (sum score of 0), low-risk gambling (sum score of 1 or 2), moderate-risk gambling (sum score of 3-7), and problem gambling (sum score of 8-27) (27). Cronbach's alpha across the nine items was 0.90, 0.88, and 0.91 for the 2013, 2015, and 2019 survey, respectively. Gambling on Specific Types of Games A list of different types of gambling was provided, and the participants were asked to select the specific types of games they had participated in during the last 12 months. The number and types of games listed changed somewhat across the three surveys due to changes in the gambling marked. In order to compare gambling from survey to survey, only the types of gambling presented in all surveys were included in the present study. These amounted to 17 different games: "paper-based scratch card, " "online-based scratch card, " "bingo in bingo premises, " "data bingo, " "Belago (slot machine in bingo premises), " "online bingo in bingo premises, " "Multix (slot machine), " "gambling on ferries, " "online poker, " "online casino gambling offshore, " "horse racing, " "sport betting, odds games offshore, " "sport betting, odds games state monopolist, " "pool betting, " "number games, " "private gambling, " and "other games." Participation in Video Gaming One item defined video gaming (electronic games played on PC/Mac, tablets, mobile phone, or different game consoles like Playstation, Xbox, PS Vita, Nintendo 3DS, and the like), and the respondents were asked if they had participated in video gaming during the last 6 months ("yes"/"no"). Game Addiction Scale for Adolescents The Game Addiction Scale for Adolescents (GASA) has seven items reflecting the six core addiction (salience, mood modification, tolerance, withdrawal symptoms, conflict, and relapse) components (28) as well as one item related to problems generated by gaming. The response alternatives range from "never" (1) to "very often" (5). According to the instructions, the responses should reflect experiences and behavior during the last 6 months (29). A common approach to identify problem gamers based on GASA is to categorize those scoring 3 or more (i.e., "sometimes" or more often) on 3-6 items as problem video gamers and those scoring 3 or more on all seven items as addicted to video games. In the present study, Cronbach's alpha for the GASA was 0.85, 0.86, and 0.87 for the survey conducted in 2013, 2015, and 2019, respectively. Table 1 presents an overview of the distributions or mean scores and standard deviations for the study variables collected in the three surveys for those who had gambled at least once last year (weighted according to the distribution of age, gender, and county of the general population). Somewhat more men than women were present among the gamblers. Most were married or had a common-law partner, and most lived in households with no children they had caretaker responsibilities for. Bachelor's degree and 400,000-599,999 NOK were the most frequently reported educational and income level, respectively. 
The majority of the respondents were full-time employed and born in Norway. Among the online gamblers, the largest proportion accessed the Internet via a laptop in 2013 (15.4 vs. 12.4% for mobile phone), while the vast majority of online gamblers used a mobile phone (48.7 vs. 16.2% for laptop) for this purpose in 2019. About four in five of the gamblers were non-problem gamblers. Less than half of the gamblers had participated in video gaming during the last 6 months, and more than 90% were categorized as non-gamer/normal gamer. Statistical Analysis Data were analyzed with IBM SPSS Statistics, version 25. In all analyses, data were weighted in terms of age, gender, and resident county to adjust for any discrepancies between the full sample and the Norwegian population in the age range of 16-74 years. Adjusted logistic regression analyses (adjusting for gender, age group, and problem gambling category) were conducted in order to investigate whether online gambling on any of the following: stationary PC, laptop, tablet, mobile phone, or any of these platforms, had changed in the period 2013-2019. Year 2015 was used as a reference category. One analysis was performed for having gambled online at least once during the last 12 months (ever), and one analysis was performed for frequent (at least weekly) online gambling. Furthermore, adjusted logistic regression analyses were conducted to investigate characteristics associated with online gambling. Gambled online at least once last year across all modes of Internet access (ever gambled online) and gambled online at least once per week across all modes of Internet access (frequent online gambling) comprised the dependent variables. In both logistic regression models, the independent variables were gender, age group, marital status, children in household, educational level, income, occupational status, country of birth, problem gambling category, number of games gambled, gaming participation, and problem gaming category. RESULTS The first research question concerned the proportion of gamblers gambling over the Internet (either at least once last year or at least weekly) in 2013, 2015, and 2019. For any mode of access, the probability of gambling online at least once during the last year was lower in 2013 than in 2015 and higher in 2019 than in 2015 (Figure 1 and Table 2). For any mode of access, the probability of gambling online at least weekly was higher in 2019 compared to 2015 (Figure 2 and Table 2). The second research question concerned the proportion of gamblers gambling over the Internet broken down by mode of access. For gambling at least once last year, no changes by year were found for either stationary PC or laptop. The probability of gambling online on a tablet was, however, significantly higher in 2019 than in 2015. For mobile phone, the probability of ever gambling online was significantly higher in 2019 than in 2015 (Figure 1 and Table 2). For online gambling at least weekly, the probability of gambling on a laptop was lower in 2015 than in 2013, whereas the probability of at least weekly online gambling using a mobile phone was higher in 2019 than in 2015 (Figure 2 and Table 2). The third and last research question addressed differences between online and non-online gamblers. Table 3 presents the results of the logistic regression analysis predicting online gambling at least once during the last year.
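Before turning to those results, note that the analyses described in the Statistical Analysis subsection were run in SPSS; purely as an illustrative sketch and not the authors' code, a comparable adjusted model with 2015 as the reference year could be specified as follows in Python. The variable names, the weight column, and the use of frequency weights as a stand-in for the survey weighting are all assumptions of the sketch.

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per gambler with the binary outcome
# 'online_ever', the survey year, gender, age group, PGSI category and a
# post-stratification weight 'w' (age, gender, and county of residence).
model = smf.glm(
    "online_ever ~ C(year, Treatment(reference=2015)) + C(gender) + C(age_group) + C(pgsi_category)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["w"]),  # rough stand-in for the SPSS weighting procedure
).fit()

odds_ratios = np.exp(model.params)  # odds ratios of the kind reported in the tables, e.g., 2013 vs. 2015 and 2019 vs. 2015

Exponentiating the coefficients gives adjusted odds ratios comparable in form, though not necessarily in value, to those reported for the year contrasts.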
The model was significant (χ 2 = 2669.6, df = 30, p < 0.001), and the predictors explained between 17.9% (Cox and Snell R 2 ) and 24.2% (Nagelkerke R 2 ) of the variance. The model with the intercept only correctly classified 60.1% of the respondents, whereas the model including all predictors correctly classified 70.6% of the respondents. Significant predictors of online gambling at least once last year were male gender and young age. Those with three or more children in the household had a lower probability of online gambling at least once during the last year than those with no children in the household. Those with high school or bachelor's degree had a higher probability of online gambling at least once during the last year than those not having completed mandatory school or with mandatory school only. Those with higher income than the lowest class (0-199,999 NOK) had a higher probability of online gambling at least once during the last year. Compared to respondents with a full-time position, those working part-time, being unemployed/on disability pension/on work assessment allowance, and homemakers/retirees had a higher probability of online gambling at least once during the last year. Country of birth was unrelated to online gambling at least once during the last year. Those categorized as a low-risk gambler, moderate-risk gambler, and problem gambler all had a higher probability of online gambling at least once during the last year compared to those in the non-problem gambler category. Number of games gambled was positively associated with online gambling at least once during the last year. Participating in video gaming (as opposed to not participating) during the last 6 months was associated with an increased probability of online gambling at least once during the last year, whereas the category of video game problems was unrelated to online gambling at least once during the last year. Table 3 also presents the findings for the results of the logistic regression analysis predicting frequent (at least once per week) online gambling. The model was significant (χ 2 = 1039.2, df = 30, p < 0.001), and the predictors explained between 7.4% (Cox and Snell R 2 ) and 15.8% (Nagelkerke R 2 ) of the variance. The model with the intercept only correctly classified 90.5% of the respondents. Classification was not improved by the model including all predictors. Men gambled more frequently online than women. The respondents in the age range of 46-55 years had a higher probability of frequent online gambling compared to ones in the age range of 66-74 years. Marital status and children in the household were unrelated to frequent online gambling. Those with a master's degree/PhD had a lower probability of frequent online gambling than those who had not completed or only completed mandatory school. People earning 200,000-999,999 NOK had a higher probability of frequent online gambling than those with the lowest (0-199,999 NOK) income. Those being unemployed/on disability pension/on work assessment allowance as well as homemakers/retirees had a higher probability of frequent online gambling compared to the reference group (full-time employed). Country of birth was not related to frequent online gambling. Low-risk gamblers, moderate-risk gamblers, and problem gamblers all had a higher probability of frequent online gambling compared to nonproblem gamblers. Number of games gambled increased the probability of frequent online gambling. 
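Both models' explained variance was summarized above with Cox and Snell and Nagelkerke pseudo-R2 values; for reference, these statistics follow standard formulas based on the log-likelihoods of the null (intercept-only) and fitted models, as in the following hypothetical helper, which is not part of the original analysis.

import numpy as np

def pseudo_r2(ll_null: float, ll_model: float, n: int):
    """Cox & Snell and Nagelkerke pseudo-R2 from model log-likelihoods."""
    cox_snell = 1.0 - np.exp((2.0 / n) * (ll_null - ll_model))
    nagelkerke = cox_snell / (1.0 - np.exp((2.0 / n) * ll_null))
    return cox_snell, nagelkerke

Nagelkerke's statistic simply rescales the Cox and Snell value so that its maximum attainable value is 1, which is why it is always the larger of the two.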
Neither involvement with video games nor gaming problems were associated with frequent online gambling. Taken together, online gambling, especially on mobile phones, has increased significantly from 2013 to 2019. Consistent predictors of online gambling (both ever and frequent) were male gender, young age, earning high income, not working or studying, having gambling problems, and number of games gambled. In the 2013 survey, 6.3% responded via Internet and 93.7% responded via a paper questionnaire. Of these, 50.4 and 25.2% (χ2 = 111.9, df = 1, p < 0.001, continuity correction) had gambled online, respectively. The data collection of the 2015 survey was exclusively conducted via paper-based questionnaires. In the 2019 survey, 65.6% responded via Internet and 34.4% responded via a paper questionnaire. Of these, 62.9 and 48.1% (χ2 = 118.1, df = 1, p < 0.001, continuity correction) had gambled online, respectively. DISCUSSION Overall, online gambling among gamblers had increased during the last 6 years in Norway, both in terms of ever (at least once during the last 12 months) and frequent (at least once per week) online gambling. This increase is attributable to increased online gambling on mobile phones, which now, by far, seems to be the most used mode of Internet access by gamblers. Another study showed, however, that online gambling via computers was the most frequent online gambling mode (20), whereas a more recent study of help-seeking gamblers attested to mobile phones as the preferred mode of Internet access for gambling purposes (7). Taking publication year into consideration, these findings overall suggest that the mobile phone seems to have become the prevailing mode of accessing the Internet for gambling purposes. This development may be worrisome as, in line with the accessibility hypothesis, those gambling online on mobile phones more often report gambling problems than those who gamble on a computer (30). Online gambling (ever and frequent) was more common among men than women. This is in line with several other studies (9,13,19,20,31) and most likely reflects that men generally are more involved in gambling than women (32). Young subjects had a higher probability of gambling online compared to older ones (especially at least once during the last 12 months). This also runs in tandem with previous findings (9,19,20,31) and suggests that younger people in general are more familiar with Internet use than older people (33) and may also be more attracted to the games available there. For frequent online gambling, the only significant finding related to age was that the age group 46-55 years had a higher probability of such gambling than those 66-74 years old. Unlike other studies, marital status was unrelated to online gambling. One explanation for this is that the present study controlled for several sociodemographic variables simultaneously. Those with three or more children in the household had a lower probability of having gambled online during the last year compared to those with no children in the household. This may imply that a high childcare responsibility load, probably due to time constraints, prevents online gambling. Those with a high school education and a bachelor's degree had a higher probability of online gambling (at least once during the last year) than those not having completed any education beyond mandatory school. Similarly, those with a master's degree exhibited a lower probability of engaging in frequent online gambling.
These findings are in accordance with a study from Sweden showing that a higher proportion of those with a medium (as opposed to low or high) level of education gambled online (31). One explanation for this finding is that those with low education are more Internet-illiterate than those with a higher education (34). Those with the highest education were less inclined to frequent online gambling compared to those not having completed any education beyond mandatory school. This may reflect that the former group is less interested in online gambling due to being less influenced by cognitive biases (35) and may thus perceive gambling in a more realistic way. Those in the lowest income class had a lower probability of Internet gambling (both ever and frequent) than those with higher incomes. This runs counter to two other studies showing no relationship between income and online gambling (9,36). The present finding most likely reflects that people with a low income have limited amounts of money to spend on gambling. Regarding occupational status, the results showed that unemployed people, people on disability pension or work assessment allowance, homemakers, and retirees were overrepresented among online gamblers (both ever and frequent) compared to full-time employees. The reason for this is not clear, but it may reflect that those in the former groups have more free or available time to gamble than those employed full-time (19). Country of birth was unrelated to online gambling. Overall, the most consistent predictor of online gambling was gambling category, showing that low-risk gamblers, moderate-risk gamblers, and problem gamblers all had a higher probability of online gambling (both ever and frequent) than non-problem gamblers (while controlling for all other variables including number of games gambled). This is in contrast to a former study showing that gambling problems were inversely related to online gambling when controlling for the number of games gambled (15). The discrepancy between the current finding and the findings of Philander and MacKay (15) may relate to the year of the surveys, as Philander and MacKay's (15) data were collected in 2010, while the current study's data were collected in 2013, 2015, and 2019. It is conceivable that online gambling was more uncommon and less advanced in 2010 and that the association between problem gambling and online gambling in 2010 could be explained by problem gamblers seeking out a larger number of different games (online as well as offline). By 2013 and later, however, online gambling had become more common, including more advanced games containing "addictive features." Thus, the association between problem gambling and online gambling can no longer be explained solely by the number of games played and may instead perhaps be explained by features of online gambling facilitating the development of problem gambling. The finding that those with gambling problems were more involved (both ever and frequent) in online gambling than non-problem gamblers is further in line with the majority of studies on this topic (8)(9)(10)(11)(12)(13)(14). Having played video games during the last 6 months was associated with an increased probability of having gambled online at least once during the last year but was unrelated to frequent online gambling. This may suggest that a common denominator between gaming and online gambling is the use of relevant technology.
The fact that gaming problems were not related to the probability of online gambling, neither ever nor frequent, supports this notion and does not support previous findings showing a positive relationship between online gambling and Internet gaming disorder (23). It may appear contradictory that both high income and unemployment of some sort (e.g., disability pension) were associated with online gambling. However, each association was adjusted for all of the other included variables, thus it makes sense that individuals who are not at work may engage in more online gambling (and gambling in general) when income level is held constant and vice versa-that higher income may be associated with more gambling when employment status is held constant. Both in the 2013 and in the 2019 survey, a correspondence between answering format (via paper or Web) and participation in online gambling (no vs. yes) was found. This seems reasonable and suggests that people's general online usage is associated with online gambling. Still, as the sample was drawn from the National Population Registry, the mode of answering should not influence the overall representativeness of the sample as a whole. Limitations and Strengths A limitation of the present study is the mediocre response rates, which may limit the generalizability of the findings, although it could be argued that the response rates are reasonable, taking the general falling response rate to surveys worldwide into account (37). The cross-sectional design of the study, although based on three surveys conducted over a 6-year span, prevents conclusions about directionality and causality. Regarding the numbers of games controlled for, it should be noted that some categories were broad and contained more than one game (e.g., number games), whereas other games were represented by more than one category (sports betting offshore or with state monopolist). Another limitation is that the present study did not differentiate between online gambling in terms of just placing bets (e.g., sports betting and number games) and online gambling (e.g., online casino games) where the games themselves unfold on the Internet. Still, in both cases, it is arguable that online gambling increases availability, hence the current operationalization is justifiable from such point of view. The second regression model explained less variance than the first. This most likely reflects differences in base rate (in this case, proportion of those who have gambled online) between the two models (0.391 and 0.093), as the outcome in cases where the base rate is close to 0 or 1 is already much determined in contrast to outcomes in which the base rate is close to 0.5 (38). Strengths of the present study are the high number of respondents, the representative samples of gamblers drawn from the National Population Registry, and the use of validated instruments to assess gambling (27), as well as gaming problems (29). The fact that the relationship with online gambling and relevant correlates was analyzed using a multivariable approach, controlling for several confounders is also an asset of the present study. As far as we know, the present study is the first elucidating change over time in terms of online gambling in representative samples. Conclusions Among gamblers, online gambling, especially on mobile phones, has increased significantly from 2013 to 2019. 
The consistent predictors of online gambling (both ever and frequent) were male gender, young age, high income, not working or studying, having gambling problems, and a higher number of games gambled; responsible gambling initiatives aimed at preventing or minimizing harm related to online gambling (e.g., responsible gambling tools) should therefore target these groups. In terms of policy implications, the results showing a significant increase in online gambling suggest that gambling operators should use this as an opportunity to increase their focus on mandatory registered gambling and responsible gambling initiatives, as both are more feasible to implement in online than in offline gambling settings (39). DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Regional Committee for Medical and Health Research Ethics (REK vest 2013/120). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS SP, RM, AM, and EE designed the study and collected the data. SP drafted the first version of the manuscript and conducted the analyses. RM, AM, JE, PK, and EE critically revised the manuscript. All authors approved the final version of the manuscript submitted for publication. FUNDING This study was funded by the Norwegian Competence Center for Gambling and Gaming Research.
2021-04-15T13:41:54.798Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "e20b71ae45d8d89252cd792ee5e92f0926710e71", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2021.597615/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e20b71ae45d8d89252cd792ee5e92f0926710e71", "s2fieldsofstudy": [ "Psychology", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
252817752
pes2o/s2orc
v3-fos-license
Research Gaps in Fragile X Syndrome: An Updated Literature Review to Inform Clinical and Public Health Practice ABSTRACT: Objective: The phenotypic impact of fragile X syndrome (FXS) has been well-documented since the discovery of the fragile X messenger ribonucleoprotein 1 gene 30 years ago. However, gaps remain in clinical and public health research. The purpose of this literature review was to determine the extent to which these gaps have been addressed and identify targeted areas of future research. Methods: We conducted an electronic search of several scientific databases using a variety of key words. The search focused on 5 areas identified as research gaps by an earlier review: (1) diagnosis, (2) phenotypic presentation, (3) familial impact, (4) interventions and treatments, and (5) life span perspectives. Inclusion criteria included publication between 2014 and 2020, focus on human subjects, and publication in English. A total of 480 articles were identified, 365 were reviewed, and 112 are summarized in this review. Results: Results are organized into the following categories: (1) FXS phenotype and subtypes (FXS subtypes, medical profile, cognitive/developmental profile, social and behavioral profile); (2) needs of adults; (3) public health needs (clinical diagnosis and newborn screening, health care needs, and access); (4) treatment (treatment priorities, pharmacological treatments, and behavioral and educational interventions); and (5) families (economic burden and mother-child relationship). Conclusion: Despite the progress in many areas of FXS research, work remains to address gaps in clinical and public health knowledge. We pose 3 main areas of focused research, including early detection and diagnosis, determinants of health, and development and implementation of targeted interventions. Fragile X syndrome (FXS) is the most common inherited single-gene cause of intellectual and developmental disabilities. Expansions of the trinucleotide repeat cytosine-guanine-guanine (CGG) in the 5′ untranslated region of the fragile X messenger ribonucleoprotein 1 (FMR1) gene, located on the X chromosome, affect the production of fragile X messenger ribonucleoprotein (FMRP), which is crucial for brain development. Individuals with more than 200 CGG repeats have the full mutation, or FXS, whereas those with 55 to 200 repeats are carriers of FXS, also referred to as having the FMR1 premutation. Typically, male patients are more severely affected given the protective factor of a second X chromosome in female patients. The phenotypic impact of FXS has been well-documented since the discovery of the FMR1 gene 30 years ago, including our own public health literature review that summarized research between 1991 and 2014. 1 The goal of the previous review was to identify what was known about FXS in the areas of development, social-emotional well-being, medical needs, treatment options, and the impact on the family. Five gaps were identified as areas in need of further research: (1) identification of FXS subtypes; (2) needs of adults with FXS; (3) public health needs, such as access to health care services; (4) efficacy of educational, behavioral, and pharmacological treatments; and (5) impact on families of individuals with FXS. The purpose of this updated literature review was to examine the progress made in addressing these gaps and to summarize recent research. These updated findings can be used to inform both future research initiatives and clinical and behavioral health services for individuals with FXS and their families. 
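As a worked illustration of the repeat-length categories just described (full mutation above 200 CGG repeats; premutation at 55 to 200 repeats), the short sketch below classifies a CGG repeat count. It is only an illustration: the function name is ours, and counts below 55 are simply reported as falling below the premutation range rather than being subdivided further.

```python
def classify_fmr1_repeats(cgg_repeats: int) -> str:
    """Classify an FMR1 CGG repeat count into the categories used in the text:
    more than 200 repeats = full mutation (FXS); 55-200 repeats = premutation
    (carrier); anything lower is reported only as below the premutation range."""
    if cgg_repeats > 200:
        return "full mutation (FXS)"
    if cgg_repeats >= 55:
        return "premutation (carrier)"
    return "below premutation range"


for count in (30, 80, 450):
    print(count, "->", classify_fmr1_repeats(count))
```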
METHODS A list of search terms are provided in Table 1. We limited our search to articles published since our earlier review (2014-2021) with samples of human subjects. Language was limited to English. The following databases were searched: PubMed, CINAHL, EBSCOhost, Psy-cINFO, and Web of Science. Articles were reviewed using Covidence, 2 a systematic review management program designed to organize complex literature reviews. The results were uploaded into Covidence and screened by the study team to remove articles that were not applicable (e.g., animal models, individuals with FMR1 premutation) or were covered in the previous literature review. The remaining studies were double-coded for inclusion based on relevance. Any discrepancies between reviewers were discussed and resolved. A total of 112 articles were included in the current review. Figure 1 depicts the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 3 flow diagram for this review. RESULTS The results from the systemic review are organized based on the research gaps identified in the previous literature review. Medical Profile Classic clinical features of fragile X syndrome (FXS) include an elongated face, broad forehead, prominent ears, and flat feet. Hyperflexibility of joints and connective tissue problems are also common. New evidence suggests that as adults, individuals with FXS may be at increased risk for a number of other medical problems, such as obesity, hypertension, and gastrointestinal disorders. In addition, seizures are a common comorbid feature. Female patients with FXS have a less severe presentation given the presence of a second unaffected X chromosome. 4 A review of neuroanatomical studies of individuals with FXS has shown dendritic spine abnormalities and changes in brain structure, and electroencephalogram (EEG) studies show alterations in gamma waves which correlate with clinical symptomology. 5 Cognitive/Developmental Profile Our earlier article 1 provides a detailed overview of the cognitive development of individuals with FXS. A recent review furthers this work by summarizing what is known about executive function in FXS. 6 Verbal and nonverbal working memory is impaired in individuals with FXS across the life span, although performance on tasks was dependent on cognitive load. Challenges with inhibitory control, cognitive flexibility, and processing speed emerge early in childhood and persist into adulthood. Attention is one of the core deficits in FXS, with impairments in both auditory and visual sustained attention when compared with chronologically age-matched and mental age-matched peers. In a study of infants with FXS, delays in overall development were seen as early as 6 months when compared with typically developing peers. 7 Language development in FXS is well studied. Recent work adds to the knowledge base in receptive, expressive, and pragmatic development. Children with FXS show signs of receptive and expressive language delays as early as 12 months. 8 Early use of consonants during babbling and intentional communication, including the use of gestures, is predictive of later expressive language. 9 By the time of preschool, though, children with FXS have the ability to repair a breakdown in communication (i.e., when there is a miscommunication with a conversational partner), indicating well-developed language and social skills. In addition, maternal language plays a role in language development. 
Maternal commenting is associated with receptive, but not expressive, vocabulary from early childhood into adolescence. 10 Maternal pragmatic language is associated with receptive and expressive vocabulary in adolescents and young adults. 11 Although receptive language ability is lower in children and adolescents with FXS than those who are typically developing, it is a relative strength when compared with ability levels of those with autism spectrum disorder (ASD) or Down syndrome. 12,13 Children with FXS continue to make improvements in their functional skills into middle childhood, at which point cognitive development begins to slow. However, those with lower nonverbal developmental scores or more ASD symptoms had different growth patterns and less overall skill attainment when examining raw scores. 14 Over time, male patients with higher adaptive behavior age-equivalent scores in early childhood had fewer aberrant behaviors at age 10 years. 15 When compared with typically developing children, those with FXS show plateaus or declines in their adaptive behavior over time, suggesting that those with FXS overall have a lower rate of skill attainment. 16 Cross-sectional studies have found similar patterns when comparing young children with FXS with their same-aged peers. 17 Another study, though, showed that adaptive skills, in particular standard scores for communication and social ability, improved over time. 18 Social and Behavioral Profile In a recent review of behavioral concerns, prevalence estimates across studies for individuals with FXS were 48.8% for self-injury, 35.8% for aggression, and 24.5% for destruction, with male patients significantly more likely than female patients to engage in any type of challenging behavior. 19 When compared with male patients with mixed-etiology intellectual disability, male patients with FXS had higher rates of self-injury and specific forms of aggression, such as scratching or biting others. 20,21 Functional analysis of these behaviors has shown that social or environmental factors, including gaining attention or access to preferred objects or escaping from social demands, reinforce and maintain these challenging behaviors. 22 Other behavior problems include restrictive and repetitive behaviors and sensory issues. Cross-sectional studies across the life span show peaks in sensory-motor behaviors between ages 2 and 12 years and in restricted and repetitive behaviors between ages 7 and 12 years. 23 Restricted interests, such as being strongly attached to one specific object and fascination with one subject or activity, were rated as moderate to severe. 24 Studies differ, however, on whether restrictive and repetitive behaviors are related to an individual's intelligence quotient (IQ). 23,24 Recent studies exploring sensory processing indicate that hypersensitivity to visual, auditory, or tactile stimuli may be the underlying issue in behavioral challenges in individuals with FXS. 25,26 Social avoidance is a key characteristic that emerges in male patients as early as infancy, with increases into middle childhood but steadying in later years. 27 During observational assessments, eye contact avoidance, in particular, was evident in adolescent and young adult male patients, especially in those with more ASD symptoms. Alternate measurement techniques of social avoidance have been developed, including caregiver-completed assessment scales, 28 other observational rating scales, 29 and eye-tracking techniques. 
30 Fragile X Syndrome Subtypes One of the gaps we identified in our earlier article was lack of information regarding subtypes of the FXS phenotype. Since then, our understanding of the FXS phenotype has continued to grow, with increased focus on genotype-phenotype associations and neuroanatomical correlates of clinical features. Mosaicism (i.e., variation in CGG repeats across different cell types) and methylation status (i.e., variation in FMRP levels across different cell types) are now well-known molecular indicators of phenotypic impact. 31,32 Research uncovering novel mutations and mosaic genetic presentations on the FMR1 gene has helped to identify disruptions on specific exons and untranslated regional variations, which have implications for the function of FMRP and the variable phenotypic expression seen in FXS. 33,34 Studies have explored how FMRP acts as a regulator and affects clinical features in individuals with FXS. 35,36 In 1 study, as FMRP levels decreased below 70% of the mean for unaffected individuals, individuals with FXS had declining IQ scores. An average FMRP level of 35% below the mean was needed for individuals with FXS to reach an IQ of 85 or 1 SD below average. 37 Given the role FMRP plays in cognitive functioning, studies have suggested it as a therapeutic target. 38 However, challenges remain in accurately measuring FMRP in different sample types (e.g., blood versus buccal cells). 35 Work on identifying phenotypic subgroups in FXS has continued to emerge. One study examined how best to diagnose ASD in individuals with FXS using FX-specific versions of the Social Communication Questionnaire and the Social Responsiveness Scale. 39 Additional studies have examined behavioral differences in those with FXS only compared with those with FXS and ASD. In keeping with earlier work, individuals with FXS and ASD show more severe behavioral problems, challenges with receptive and expressive language and social interaction, and greater cognitive impairment. 40-42 A second subgroup, those with both the FXS and the Prader-Willi phenotype, is marked by obesity, hyperphagia, and delayed puberty, with about half also diagnosed with ASD. 43 Needs of Adults with Fragile X Syndrome A second gap in the literature we identified in our earlier review was a lack of understanding of the needs of young adults, middle-aged adults, and seniors living with FXS. Individuals with FXS live, on average, to age 87 years, 44 underscoring the importance of understanding the trajectory of adult needs in this population. Over the past several years, there have been a handful of studies describing the phenotype of adults, although mostly in young to middle adulthood. Receptive language skills continued to be a relative strength in adults when compared with expressive ability. 45 In a large-scale survey study, young adults aged 18 to 21 years had more ASD symptoms than middle-aged adults. Eye contact continues to be a deficit for adults with FXS. 46 Despite these challenges, adolescents and adults with FXS were more likely to engage in hobbies, spend time with friends or neighbors, and have a mutual friend when compared with those with ASD. 47 Positive findings also emerge when functional and behavioral skills are examined into adulthood. Although some studies have found plateaus or even declines in functional skills, a longitudinal study of adolescents and adults showed some growth in adaptive behavior over time. 
Behavior and internalizing problems were inversely related to age. Maternal criticism was a significant predictor of behavior problems and externalizing problems. 48 and executive function 50 have been shown to be strongly associated with functional skills. Public Health Needs Another major gap in knowledge previously identified was the public health needs of individuals with FXS. These needs were categorized into (1) clinical diagnosis and newborn screening and (2) health care needs and access. Clinical Diagnosis and Newborn Screening Traditionally, diagnosis of FXS is performed clinically after the onset of symptoms. 51 However, this process can be long and arduous for families, with a mean age of diagnosis around 32 months and fewer than 20% of children receiving a diagnosis within the first year of seeking medical attention. 52 There have been efforts to decrease the diagnostic odyssey and improve the process for receiving a clinical diagnosis, including carrier screening of women before or during pregnancy, [53][54][55] preimplantation genetic testing, 56,57 and newborn screening. Newborn screening for FXS in both male patients and female patients is supported by most developmental and behavioral pediatricians. However, voluntary screening is preferred over mandatory. 58 Voluntary screening in hospitals shortly after birth is possible, but challenging. In a multisite US study of 28,000 newborns, two-thirds of parents were willing to have their child screened, but educational materials were essential in supporting informed decision-making. 59 A smaller-scale study in Australia found higher rates of consent, with 94% agreeing to screening. 60 More recently, a statewide, voluntary newborn screening study conducted in North Carolina was implemented as a partnership between public health staff and researchers. One of the main considerations was how to best recruit families. [61][62][63] Barriers to full-scale newborn screening for FXS include public health burden to conduct the screening; the need for inexpensive, high-throughput screening methodologies; lack of demonstrated treatment for asymptomatic children; and insufficient capacity for long-term followup. 64,65 Health Care Needs and Access Indirect and direct health care costs are sizable for individuals with FXS. Using Medicare/Medicaid administrative claims data, annual all-cause health care costs ranged from $2222 to $9702, with higher costs in the Medicaid cohort. Main cost drivers included medical procedures, both routine (e.g., office visits, immunizations) and nonroutine (e.g., laboratory tests, therapies, anesthesia), followed by hospitalizations in a subset of individuals, and finally medications. 66 Studies in Europe have also found high health care and caregiver burden costs, with non-health care costs (e.g., informal care) being the main contributor. 67,68 Direct and indirect health care costs are higher for those with FXS and their families than those without FXS. 69,70 In a survey study of over 600 caregivers of individuals with FXS, 20% reported having difficulty accessing specialty services, and nearly 40% indicated that their child's primary care provider was not knowledgeable about FXS. 71 Parent-reported data of preventive care services have shown that 92% of individuals with FXS met immunization guidelines and 75% met dental care guidelines, but only 55% met influenza vaccination guidelines, and just 24% met physical activity guidelines. 
72 Treatments for Fragile X Syndrome At the time of our 2017 literature review, a noted gap was the lack of research exploring the impact of symptom-based pharmaceuticals and behavior-based interventions. In the intervening years, however, there have been extensive studies focused on identifying treatment priorities, pharmacological treatments, clinical trials, and educational and behavioral interventions. Treatment Priorities Stakeholder involvement from the beginning of treatment development is considered best practice. 73 As such, understanding the perspectives of parents and individuals with FXS has been a key goal over the past few years. In a study of treatment targets, 439 family members of at least 1 individual with FXS indicated that medications to address anxiety, learning, and behaviors such as tantrums and aggression were their top priorities. 74 Similarly, Cross et al. 75 found behavior and self-care to be the most important targets for treatment for caregivers across age groups. Parents are generally supportive of pharmacological clinical trials, yet there may be concerns about safety and long-term implications for their child in the decision process. 76 Concerns about side effects, swallowing tablets, blood tests, financial costs, and travel can be barriers to participation in clinical trials, 77 as are misunderstanding of the objectives of pharmacological clinical trials 76 and the likelihood that their child will experience a direct benefit. 78 Pharmacological Treatment Psychotropic medications are used by more than two-thirds of adolescents and adults with FXS, whereas a quarter are likely to use nonpsychotropic medications. 79 Once on a medication, individuals with FXS were more likely to stay on medication over a 3-year period. Those with more autism symptoms, behavioral challenges, and greater family incomes were more likely to use psychotropic medications. In a review of psychopharmacological management in FXS, Eckert et al. 80 analyzed the use of medications to address irritability, aggression, agitation, and self-injury in 415 individuals with FXS. The most commonly used medications identified were the antipsychotics aripiprazole and risperidone (used by 37% and 27%, respectively), with most users experiencing no side effects from these medications. Psychopharmacological management tended to be accessed more often by older male patients who had more significant impairments. A retrospective analysis of the use of risperidone in conjunction with other nonantipsychotic medications to target irritability in 32 male patients with FXS found a 33% responder rate, 81 leading to a conclusion that monoantipsychotic treatment with risperidone is limited in FXS. Clinical Trials Clinical trials of gamma-aminobutyric acid (GABA) modulators, including ganaxolone (a GABA A receptor agonist) and arbaclofen (a GABA B receptor agonist), and the amino-terminal tripeptide of insulin-like growth factor 1 (i.e., trofinetide) have been used to target the core pathophysiology of FXS. A randomized, double-blind, placebo-controlled trial of ganaxolone 82 in children with FXS did not meet primary (i.e., Clinical Global Impression-Improvement scale) or secondary (e.g., Pediatric Anxiety Rating Scale-Revised) study end points. However, post hoc analyses showed a trend toward reduced anxiety, hyperactivity, and social avoidance for a subset of participants who entered the study with higher anxiety or lower IQ scores. 
A phase 3 trial of arbaclofen 83 demonstrated positive change in children who were on the highest dose (10 mg TID), with lower irritability and parenting stress scores; scores for social avoidance and hyperactivity trended toward statistical significance. More recently, a phase 2 trial of trofinetide demonstrated benefit, with individuals in the treatment group having lower clinician-reported and parent-reported symptom scores on 3 core efficacy measures. 84 Other trials targeting cognition and/or behavior have found moderate success. A small study of individuals with FXS who received between 2.5 and 10.0 mg of donepezil, an acetylcholinesterase inhibitor, did not demonstrate change on cognitive or behavioral outcomes. 85 However, after 12 weeks of treatment, improvements were seen on direct versus averted gaze as measured by functional magnetic resonance imaging. In a randomized, double-blind, placebo-controlled trial of sertraline, 86 a selective serotonin reuptake inhibitor, children with FXS showed improvements in secondary end points, including improved motor and visual perception skills and social participation. Two case series of individuals with FXS examined the use of metformin, typically used to treat type 2 diabetes, obesity, or glucose intolerance. 87,88 The results showed improvements in behavior and language development. Finally, cannabinoids have been used to reduce anxiety and aberrant behavior and improve language skills and overall quality of life. 89,90 Behavioral and Educational Interventions In 2015, Moskowitz and Jones 91 published a systematic review of 31 behavioral intervention studies. The findings suggest that behavioral approaches are promising for addressing a variety of disruptive behaviors or functional outcomes in individuals with FXS. In a 12-week trial of a telehealth-delivered, function-based behavior analytic intervention, rates of problem behavior decreased significantly in 8 of 10 children with FXS (ages 2 to 10 years). 92 In another intervention study, 20 boys with FXS, ages 8 to 18 years, were randomized to receive discrete trial instruction plus relaxation training administered at 1 of 2 prescribed doses over a 2-day period. 93 Levels of social gaze behavior increased significantly across blocks of training trials for 6 boys (60%) who received the high-dose behavioral treatment and for 3 boys (30%) who received the low-dose behavioral treatment. Therapeutic physical exercise 94 and a specific diet 95 have also been used to address behavioral or socioemotional challenges in FXS. Both of these studies suggest diet and exercise may be helpful for individuals with FXS, but each approach requires further study. A series of studies evaluated the use of Cogmed, a validated computer-based training program designed to improve working memory and executive functioning in children with FXS. The results from the first study demonstrated the feasibility of a 5-week Cogmed training. 96 A follow-up study indicated significant improvement in working memory, some domains of executive function, and parent-reported and teacher-reported behaviors during the treatment period, with many changes maintained at follow-up after 3 months without training. 97 In a subsequent "deep dive" into the data, those with a higher IQ or mental age at baseline showed greater gains. 98 Communication is a significant challenge for many individuals with FXS, and frustration with communication failures is likely a major contributor to problematic behaviors. 
In 2015, a multiple baseline study of a delayed video feedback intervention for a mother and her 31month-old son showed that the behavior support strategies used increased appropriate requesting and reduced the frequency of the child's self-injurious behavior. 99 Two pilot studies of a parent-mediated language intervention found that mothers were able to increase their use of strategies to help focus their child's attention and communicative acts. 100,101 A follow-up randomized trial found that mothers in the treatment group used these strategies at posttreatment significantly more often than mothers of children in the comparison group. When compared with those in the control group, children in the treatment group were more likely to show increased duration of engagement, use more utterances that maintained the topic of the story, and use prompted inferential language. 102 Similar results were found in sample of younger boys and their mothers. 103 A combined pharmacological and language treatment study used a randomized, double-blind trial design and assigned families to the same parent-mediated language intervention plus lovastatin (10-40 mg/d) or an intervention group plus placebo. 104 Both groups demonstrated improvements in all primary outcome measures, including direct assessment and parent report measures, further supporting the efficacy of the language intervention but not providing evidence for the benefit of the addition of medication. For a subsample of individuals with FXS, verbal expressive language does not develop with enough capacity for efficient communication. For these individuals, augmentative or alternative communication (AAC) techniques may be beneficial to provide an avenue for communicating their wants and needs. Schladant and Dowling recently conducted a qualitative study to explore parental acceptance, use, and engagement of AAC in 4 FXS affected mother-child dyads. The investigation exposed 3 main systemic gaps that may limit the successful integration of AAC in the home including (1) failure to consider unique aspects of the family context, (2) limitations of AAC technologies, and (3) inadequate knowledge of FXS and AAC among practitioners. 105 Families The initial public health literature review identified several gaps regarding the impact on families of individuals with FXS given the complex nature of the FMR1 gene expansions. Families of individuals with FXS are unique; in most cases, there is more than 1 family member affected. Given the hereditary patterns of FMR1 expansions, biological mothers of children with FXS nearly always have an FMR1 premutation or may have a full mutation themselves. Given the high penetrance of symptoms among those with a premutation, mothers of children with FXS are at high risk for fragile X-associated physical or mental health issues. 106 In addition, many families have a second child with FXS before the first is diagnosed and/or choose to have additional children even after knowing the risk for FXS. Some families will have multiple members across generations with fragile X-associated disorders. Mothers of children with FXS frequently reported behaviors such as defiance, tantrums, inattention, stereotypy, aggression, and social inappropriateness in their children and describe these as having a major negative impact on family life. 
41 Relative to other neurogenetic conditions (e.g., Williams syndrome), parents of children with FXS were more likely to indicate restrictions on the family and a less positive perspective of parenting because of their child's behavioral or psychiatric conditions. 107 Challenging behaviors of children over time negatively affected maternal mental health, which in turn influenced the quality of the mother-child relationship. 108 Similar results were found in another study, with child challenging behavior serving as a predictor of changes in maternal depression over time. 109 Moreover, high levels of child challenging behavior were related to increased feelings of maternal closeness toward the child over time. In a study exploring expressed emotion, mothers with FXS were described as having high levels of worry and emotional overinvolvement. 110 This increased anxiety may have a negative impact on the outcomes of the child; lower closeness in the mother-child relationship and higher maternal distress were found to be associated with higher levels of withdrawal. In female patients with FXS, the closeness of the mother-child relationship predicted greater fluid and crystallized intelligence. 36 Maternal responsivity, 1 factor in the mother-child relationship, has also been found to be an important predictor of outcomes for individuals with FXS. Sustained maternal responsivity has a significant positive impact on the trajectories of communication and, to a lesser extent, other adaptive behavior domains through middle childhood, with many effects remaining significant after controlling for autism symptoms and developmental level. 111,112 Family resources and social supports are important predictors of quality of life, well-being, and overall impact of the diagnosis 113 ; however, little research has explored strategies to improve access to these necessary supportive resources. How and when the diagnosis is obtained and shared with family members remains another important topic for future exploration. 114,115 DISCUSSION Our earlier review identified 5 gaps in the fragile X syndrome (FXS) literature. Although progress has been made in the past few years, there is still work to be done to fill these gaps. In this study, we summarize the 3 main areas that remain in need of more focused clinical and public health research. Early Detection and Diagnosis The American Academy of Pediatrics provides guidance to pediatricians on the evaluation and referral for genetic testing of children with intellectual disability of unknown etiology. 116 Similarly, for children diagnosed with ASD, there are recommendations for fragile X testing. 117 However, children with FXS are typically diagnosed around age 3 years, after the clinical onset of symptoms. 118 Thus, the early emergence of symptoms in infants and toddlers with FXS has not been well described. Without universal screening for FXS, it is likely that children with FXS are underdiagnosed. This makes it challenging to determine the timing and degree of symptom onset and delays access to early intervention or genetic services. Early detection and diagnosis of FXS is critical to ensuring optimal health and well-being. Additional work remains on determining how best to identify and diagnose young children with FXS, such as through newborn screening or routine developmental surveillance. Further work is needed to understand the early phenotype of infants and toddlers with FXS. 
Finally, research documenting the impact of early intervention services on child and family outcomes will provide evidence on best practices to supporting development. Examining Determinants of Health There are many factors that affect health outcomes for all individuals, including personal, genetic, and environmental/social factors. Although the literature to date has provided a wealth of knowledge about the FXS phenotype, there is still much to learn. Given the wide variation in individuals with FXS, it is important to better understand personal or individual determinants of health. Studies are needed to document individual differences in FXS, such as phenotypic variation within male patients and female patients, and identify subgroups of FXS beyond simply classifying those with FXS only or those with FXS and comorbid ASD. Moreover, longitudinal studies into adulthood are needed to determine the lifelong impact of FXS on individuals. For example, little is known about the physical and mental health needs of adults with FXS. This information is helpful not only for further understanding the wide variation in functioning in individuals with FXS but, importantly, for determining inclusion/exclusion criteria for clinical trials and the potential impact of therapeutics. Despite being a single-gene disorder, there is much to learn about the genetics of FXS. Genotype-phenotype association studies, in particular, are needed to understand how molecular and biochemical functioning has differential impact on individuals with FXS. For ex-ample, recent work has shed light on the relationship between FMRP and cognitive and behavioral functioning. However, more studies are needed to understand how the transcription of the FMR1 gene and downstream regulation affect the daily life of individuals with FXS. Additional research is needed on the extent to which cell or methylation mosaicism contributes to development and functioning and how behavioral characteristics cluster together and are related to molecular indicators. There is a significant lack of knowledge regarding the environmental and social determinants of health in individuals with FXS. Only a handful of studies have examined the access to health care services and the educational needs of individuals with FXS. Critically, there remains a lack of information about the potential differential needs and access to health care for those from minority backgrounds. Racial inequalities for individuals with intellectual and developmental disabilities have been described for other conditions but are lacking in FXS. More information is needed to understand the community and social factors and their impact on individuals with FXS. These could include studies on access to leisure activities and the quality of friendships for individuals of all ages with FXS. Finally, research on employment and housing of adults with FXS would provide additional insights into the impact of aging. Developing and Implementing Pharmacological and Other Interventions It is clear that a large focus over the past 6 years has been on the development of targeted therapeutics and identification of appropriate outcome measures for use in clinical trials. Advances in understanding molecular pathways have paved the way for promising new treatments. However, there is more work to do in understanding the molecular biology and systemic function of FMRP. 
119,120 A deeper understanding of the expression of FMR1 and the role FMRP plays in the regulation of cellular activity will provide new insights into potential pharmacological treatment targets. Moreover, many outcome measures still require additional evaluation to determine their sensitivity to change, a quality necessary for being able to assess the efficacy of a given treatment. Research into effective educational and behavioral interventions is also needed. The handful of studies conducted to date provide a foundation of acceptable, feasible, and efficacious approaches. However, many are focused on small samples and target very specific skills or behaviors. Next steps should include large-scale replication studies to build the evidence base and demonstrate scale-up of implementation. In addition, continued development and testing of new interventions that address broad areas of development are needed to complement targeted interventions. Ideally, the field will create a variety of intervention approaches to support individuals with FXS and their families. CONCLUSION Despite significant progress made in addressing the clinical and public health research gaps identified in our earlier review, much work remains for the FXS field. Early diagnosis and understanding of the FXS phenotype, from infancy through adulthood, will provide insights into variation in individuals with FMR1 mutations. Detailing the impact of personal, genetic, family, social, and environmental factors on individuals with FXS will not only advance our understanding but also enable more targeted treatment approaches. With additional research, clinicians and other professionals will be well poised to meet the needs of individuals with FXS and their families.
2022-10-12T06:18:03.703Z
2022-10-11T00:00:00.000
{ "year": 2022, "sha1": "d23e25faa75efdf10532d310f747b2ea63789896", "oa_license": "CCBYNCND", "oa_url": "https://journals.lww.com/jrnldbp/Fulltext/9900/Research_Gaps_in_Fragile_X_Syndrome__An_Updated.63.aspx", "oa_status": "HYBRID", "pdf_src": "WoltersKluwer", "pdf_hash": "ee47713165e1a741ac72f969c436c237844941b1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119204561
pes2o/s2orc
v3-fos-license
CHIANTI - An atomic database for emission lines. X. Spectral atlas of a cold feature observed with Hinode/EIS In this work we report on a cold, bright portion of an active region observed by Hinode/EIS. The emitting plasma was very bright at transition region temperatures, and the intensities of lines of ions formed between 10^5-10^6 K were enhanced over normal values. The data set constitutes an excellent laboratory where the emission of transition region ions can be tested. We first determine the thermal structure of the observed plasma, and then we use it 1) to develop a spectral atlas, and 2) to assess the quality of CHIANTI atomic data by comparing predicted emissivities with observed intensities. We identify several lines never observed before in solar spectra, and find an overall excellent agreement between CHIANTI predicted emissivities and observations. Introduction The CHIANTI atomic database (Dere et al. 1997, 2009) provides up-to-date, assessed atomic data for most astrophysically useful ions as well as software for deriving emission line emissivities and synthetic spectra. It has been used to model and interpret emission from a wide range of objects in astrophysics including the Sun's outer atmosphere, the Jupiter-Io plasma torus (Steffl et al. 2008), T Tauri stars (Günther & Schmitt 2008), the interstellar medium (Sallmen et al. 2008) and supernova remnants (Reyes-Iturbide et al. 2008). A vital part of maintaining CHIANTI is the assessment of data quality through comparisons of the atomic models with observed spectra. The Sun's atmosphere is a natural target for such studies as there have been several high resolution spectrometers flown on both rockets and satellites that have produced high signal-to-noise data over large wavelength ranges in the ultraviolet and X-ray regions. In addition, the wide range of structures offered by the Sun - coronal holes, quiet Sun, active regions, flares - yield very different spectra that allow particular atomic models to be studied in different physical conditions. Three previous data assessments have been performed: the comparison of the SERTS-89 rocket flight spectrum with version 1 of CHIANTI by Young et al. (1998); the Landi et al. (2002a,b) comparisons of version 3 of CHIANTI with off-limb quiet Sun spectra obtained with the SUMER and CDS instruments, respectively, on board the SOHO satellite; and the study of an X-ray spectrum obtained with the Flat Crystal Spectrometer on board the Solar Maximum Mission by Landi & Phillips (2006) using version 5 of CHIANTI. For the present work, an unusual spectrum obtained with the EUV Imaging Spectrometer (EIS, Culhane et al. 2007) on board the Hinode satellite (Kosugi et al. 2007) has been found that shows strongly enhanced lines from the upper transition region corresponding to temperatures log (T /K) = 5.0-5.9. EIS takes high resolution spectra in the wavelength ranges 170-211 and 246-291 Å and the data represent an excellent opportunity to study atomic physics properties of a group of ions that normally emit weak lines, yet yield valuable information about the emitting plasma. The paper is structured as follows. First, details of the observation and the procedure for extracting, calibrating, and fitting the spectrum are described. Using lines from all the ions observed by EIS, we determine a first, approximate, differential emission measure (DEM) curve. 
This is used, together with the L-function method of Landi & Landini (1997), to identify blends or atomic physics issues and select a set of lines free from problems. These are used to determine a more accurate DEM curve. This new curve is used 1) to derive a synthetic spectrum that is used to confirm line identifications in the atlas, and 2) to compare CHIANTI emissivities and observed line intensities for all the ions in the log T = 5.0-5.9 temperature range that have lines identified in the atlas. The comparison between CHIANTI emissivities and EIS observations will be split between three separate papers. In the present paper we will consider all elements except iron. In a second paper we will discuss the three iron ions Fe vii-ix whose emission is very prominent in the EIS observations we use here, but which require special attention due to the large number of lines and of new identifications we made. In a third paper we will carry out the comparison between the CHIANTI emissivities for coronal ions and another set of observations, carried out with a special observing sequence and on a solar target specifically chosen to enhance coronal emission. Thus, we will not consider coronal ions in the present dataset. Observation The dataset studied here is the same as that analysed by Young (2009) - a single EIS raster obtained on 2007 February 21 and pointed at active region AR 10942. The complete EIS spectral range is obtained over a 128′′ × 128′′ spatial area with a 25 s exposure at each slit position. Young (2009) selected a spatial area where newly-identified lines of Fe ix were especially strong. For the present work we choose a region of 30 pixels that corresponds to a bright point apparent in Fe viii images centered at X=-335′′, Y=-30′′ (Fig. 1). The bright point appears to be related to the footpoint regions of coronal loops. A SOT magnetogram obtained at 02:00 UT shows that the bright point lies within a unipolar plage region. The data were calibrated using the standard calibration routine EIS PREP which is available in the Solarsoft software distribution. EIS PREP is described in detail by , and the routine has been expanded since that work with the following features. Anomalously bright pixels referred to as warm pixels are now directly removed by EIS PREP through comparison with warm pixel maps obtained by regular engineering studies. For the present observation the warm pixel map was obtained on 2007 March 3. Previously, warm pixels were flagged via the cosmic ray detection algorithm. For full CCD spectra such as those analysed here, the method for estimating the CCD background (consisting of the pedestal and dark current) is different to that described by . The two 2048×1024 CCDs that measure the two different EIS wavelength bands are each read out as two halves of size 1024×1024, the four 'halves' being referred to as quadrants. For each quadrant an area 46 pixels wide in the CCD X-direction (corresponding to wavelength) has been identified as being relatively free of emission lines. In each case these areas are where the effective area of the instrument is low. The median of the data number (DN) values of the pixels in these 46-pixel-wide regions is treated as the CCD background for that quadrant, and is thus subtracted from the data by EIS PREP. 
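A minimal numpy sketch of the quadrant-by-quadrant background estimate just described is given below: the median DN in a 46-pixel-wide, line-free region of each quadrant is taken as the background and subtracted. The array shape, the chosen line-free columns, and the function name are placeholders, not the values used by EIS PREP.

```python
import numpy as np

def subtract_quadrant_background(ccd, line_free_cols):
    """Estimate and subtract the CCD background (pedestal + dark current)
    quadrant by quadrant, using the median DN in a 46-pixel-wide region
    judged to be free of emission lines."""
    out = ccd.astype(float).copy()
    half = ccd.shape[1] // 2                      # two quadrants per CCD
    for q in (0, 1):
        cols = line_free_cols[q]                  # columns within the quadrant
        quad = slice(q * half, (q + 1) * half)
        background = np.median(ccd[:, quad][:, cols])
        out[:, quad] -= background
    return out

# Hypothetical usage: a synthetic 1024 x 2048 frame and made-up 46-column
# line-free ranges for each quadrant.
frame = np.random.poisson(40.0, size=(1024, 2048)).astype(float)
cleaned = subtract_quadrant_background(frame, {0: slice(900, 946), 1: slice(50, 96)})
```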
As the intrinsic EUV spectrum background is very low in the EIS wavelength bands, a consequence of this method of background subtraction for full CCD data is that a large number of pixels (up to 50%) can end up with a zero or negative DN value. By default EIS PREP treats such data points as 'missing' data since it is not possible to assign a photon statistics error to them. However, by specifying the keyword /RETAIN the software will assign an error to these points that is simply the estimated dark current error and treat the photon statistics error as zero. This is the option that has been used for the present work. The temperature of maximum abundance, from the CHIANTI 6 ionization equilibrium (Dere et al. 2009), is reported as log T for each line. Following the calibration by EIS PREP, the next step is to average the spectra from the 30 pixel spatial region of interest to yield a single spectrum for analysis. This procedure is complicated by both the CCD spatial offset (Young et al. 2007a) and the EIS spectrum tilt identified by , whereby a given spatial feature appears at different CCD Y-positions depending on the wavelength. The routine EIS CCD OFFSET in the EIS software tree yields the spatial offset relative to He ii λ256.32 as a function of wavelength. This routine uses the spectrum tilt gradient derived by for the EIS short wavelength (SW) band and assumes that the same tilt applies to the long wavelength (LW) band. In addition the offset between the SW and LW bands was derived by co-aligning images in Fe viii λ185.21 and Si vii λ275.35, which are observed to be very similar to each other. The spatial offsets can be as large as 21 pixels for the shortest and longest wavelength lines observed by EIS. By specifying a spatial mask in the Fe viii λ194.66 line, the routine EIS MASK SPECTRUM takes the level-1 FITS file output by EIS PREP together with this mask and derives a single spectrum averaged over the specified spatial region. The routine goes through each wavelength in the spectrum and computes an adjusted pixel mask based on the offset relative to λ194.66 specified by EIS CCD OFFSET. The pixels identified by this adjusted pixel mask are then averaged to yield an intensity value for that wavelength. The resulting spectrum is calibrated in erg cm −2 s −1 sr −1 Å −1 units and has an associated error array. The complete spectrum is displayed in Figs. 2-5. As EIS does not have an internal calibration lamp, it is not possible to derive an absolute wavelength scale for spectra obtained from the instrument without some physical assumption about the observed plasma. The emission line wavelengths given in the present atlas are simply those measured from the spectrum output by EIS MASK SPECTRUM and no attempt has been made to adjust them onto a reference scale. Sect. 5 and the individual ion sections of Sect. 6 discuss the velocities derived from particular emission lines. Relative wavelength comparisons for EIS emission lines are estimated to be accurate to a level of ±0.002 Å (or to within 2.1-3.5 km s −1 , depending on wavelength) based on the work of Brown et al. (2008). Following creation of a single 1D spectrum for each of the EIS wavelength bands by EIS MASK SPECTRUM, the Gaussian fitting routine SPEC GAUSS EIS was used to manually derive line fit parameters for each emission line in the spectrum. SPEC GAUSS EIS makes use of the MPFIT procedures of C. Markwardt 1 . 
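The offset-corrected averaging performed by EIS MASK SPECTRUM can be illustrated with the rough sketch below: for each wavelength, the spatial mask defined on the reference line is shifted by the wavelength-dependent Y-offset before the masked pixels are averaged. The array shapes, the offset values, and the function name are schematic placeholders, not the actual EIS software interface.

```python
import numpy as np

def mask_average_spectrum(cube, mask_rows, offset_pixels):
    """Average a spatial region into a single spectrum, correcting the
    wavelength-dependent spatial offset of the EIS CCDs.

    cube          : intensity array of shape (n_y, n_wavelength), schematic.
    mask_rows     : spatial (Y) pixel indices selected on the reference line
                    (e.g., Fe VIII 194.66 A).
    offset_pixels : integer Y-offset at each wavelength relative to the
                    reference line (as EIS CCD OFFSET would provide).
    """
    n_y, n_wave = cube.shape
    spectrum = np.zeros(n_wave)
    for k in range(n_wave):
        rows = np.clip(mask_rows + offset_pixels[k], 0, n_y - 1)
        spectrum[k] = cube[rows, k].mean()
    return spectrum

# Hypothetical usage: a 256-row, 500-wavelength cube and a 30-pixel mask.
cube = np.random.rand(256, 500)
mask_rows = np.arange(100, 130)
offsets = np.round(np.linspace(-10, 11, 500)).astype(int)
avg_spec = mask_average_spectrum(cube, mask_rows, offsets)
```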
Depending on the density of lines in the spectrum, either lines were fitted individually with single Gaussians, or multiple Gaussians were fitted simultaneously to a group of lines. In cases where good fits were not obtained, it was sometimes necessary to force the widths of the Gaussians to be the same. For two particularly complicated spectral features - the Fe ix feature, and the region around the Si viii lines at 276-278 Å (Sect. 6.14) - a customized fit function was necessary to yield accurate line parameters. The complete list of line fit parameters is given in Table 4. Method of analysis The present spectrum displays a large number of lines that have either not been identified before or not been studied in any detail. The aim of the present work is thus threefold: (i) to use the lines to determine the density, temperature and DEM of the plasma, (ii) to confirm line identifications and check atomic physics properties, and (iii) to build the spectral atlas. A variety of techniques are used in the present work and are summarised below. The most basic method for identifying new lines and line blends is to study an intensity map formed from the line and compare it with a map from an unblended line with a known temperature of formation. The intensity maps that we have taken as a reference are displayed in Figure 1, and the comparison allows us to associate each line with a temperature class. The temperature classes are listed in Table 1, together with the ions most representative of each class, for which we mostly chose iron ions. Note that the temperatures given in Table 1 do not necessarily agree with the temperatures of maximum ionization derived from theoretical calculations (e.g., Bryans et al. 2009) as we believe these calculations are not accurate for some ions. The partition of the ions into so many classes was made possible by the excellent signal-to-noise ratio provided by EIS for some of the lines of each representative ion. In the case of weaker lines, sometimes the small details that discriminate two adjacent classes were lost in the noise: in these cases, both classes are listed in the atlas separated by a dash: for example, "C-D" means that the line could belong either to class C or to class D. Intensity map classes also helped us discriminate the cases where lines emitted by ions formed at very different temperatures were blended together. These cases could be easily identified, for example, when intensity maps showed features of both a cold line and a hot line. Such cases have been marked by listing both classes in the atlas, separated by a comma: for example, "C,L" marks a blend between a C line and an L line. For a number of lines in the atlas it was not possible to assign a temperature class, as the intensity maps were too noisy. For some weak lines the bright knot of emission seen in the cool lines of Fig. 1 could be clearly discerned, but it was not possible to clearly assign the line to a definite class. Such lines are indicated by a spectral class of "A-D" since the knot of emission is a strong feature at each of these temperatures. Once lines have been identified, or provisionally identified, a first determination of the DEM of the plasma is made using the iterative technique of Landi & Landini (1997). The resulting curve is used in combination with the L-function method of Landi & Landini (1997) to simultaneously compare all the lines from a given ion, yielding density estimates when density sensitive lines are available, and highlighting lines discrepant with theory. 
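The intensity-map comparison used to assign temperature classes lends itself to a simple quantitative check; the sketch below ranks the reference classes by how well their maps correlate with a candidate line's map. This is only one possible way to automate the comparison described above and is not the procedure used in this work, where the maps were compared against the reference maps of Fig. 1 directly; the maps and class labels in the example are random stand-ins.

```python
import numpy as np

def rank_temperature_classes(line_map, reference_maps):
    """Rank temperature classes by the Pearson correlation between a line's
    intensity map and each reference map (same shape)."""
    scores = {}
    x = (line_map - line_map.mean()).ravel()
    for label, ref in reference_maps.items():
        y = (ref - ref.mean()).ravel()
        scores[label] = float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage with random stand-in maps for two classes.
refs = {"C": np.random.rand(40, 40), "D": np.random.rand(40, 40)}
print(rank_temperature_classes(np.random.rand(40, 40), refs))
```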
The method is described in more detail in Sect. 3.2. Once a set of emission lines free of blending or atomic data problems has been identified, these are used to derive a final, more accurate DEM curve. This in turn is used to derive complete CHIANTI synthetic spectra for the EIS wavelength bands, allowing a final check of line blending and identification, and to carry out a detailed comparison between CHIANTI emissivities and observed intensities for ions in the log T = 5.0-5.9 temperature range. For calculating line intensities and computing the DEM curve, all atomic data were taken from CHIANTI version 6.0 (Dere et al. 2009). The DEM curve has been derived adopting the ion abundances in Dere et al. (2009) and the Feldman et al. (1992) element abundances. DEM measurements The DEM diagnostic technique we use is described in Landi & Landini (1997) and is briefly summarized here. The line flux emitted by an optically thin plasma observed at distance d is given by

F_{ij} = \frac{1}{4 \pi d^2} \int G_{ij}(T, N_e) \, \varphi(T) \, dT

where the Contribution Function is defined as

G_{ij}(T, N_e) = \frac{hc}{\lambda_{ij}} \, A_{ji} \, \frac{n_j(X^{+m})}{n(X^{+m})} \, \frac{n(X^{+m})}{n(X)} \, \frac{n(X)}{n(\mathrm{H})} \, \frac{1}{N_e}

and the volume Differential Emission Measure (DEM) is defined as

\varphi(T) = N_e N_{\mathrm{H}} \, \frac{dV}{dT} .

An initial, arbitrary DEM ϕ_o(T) is first adopted; using a correction function ω(T), the true DEM curve is given by

\varphi(T) = \omega(T) \, \varphi_o(T) .

If we define the effective temperature T_eff as

\log T_{\mathrm{eff}} = \frac{\int G_{ij}(T, N_e) \, \varphi_o(T) \, \log T \, dT}{\int G_{ij}(T, N_e) \, \varphi_o(T) \, dT} ,

it can be easily shown that, as long as the correction function is slowly varying,

F_{\mathrm{obs}} \simeq \omega(T_{\mathrm{eff}}) \, F_{\mathrm{pred}} ,

where F_pred is the flux predicted using ϕ_o(T). From this last relation, each observed line flux can be used to determine the correction function at temperature T_eff; if lines from many ions are available the ω(T) curve can be sampled at many temperatures, interpolated, and used to calculate ϕ(T). The resulting ϕ(T) curve is taken as the new trial DEM. The procedure is then repeated until either the ω(T_eff) are all equal to 1 within the errors, or the best χ2 is reached. L-function method of analysis We used the temperature and density diagnostic procedure that was first introduced by Landi & Landini (1997). This technique relies on the fact that the G_ij(T, N_e) curve can be expressed as

G_{ij}(T, N_e) = g(T) \, f_{ij}(N_e, T) ,

where g(T) is the ion abundance: it is a function of temperature alone and is identical for all the lines of the same ion, while f_ij(N_e, T) is proportional to the population of the upper level and can be approximated with a linear function of log T in the temperature range where the line is formed. Landi & Landini (1997) showed that an effective temperature T_eff can be defined as above and used to calculate the effective emission measure L_ij(N_e) (L-function) as

L_{ij}(N_e) = \frac{F_{ij}}{g(T_{\mathrm{eff}}) \, f_{ij}(N_e, T_{\mathrm{eff}})} .

If we plot all the L-functions measured for the same ion versus the electron density, all the curves should meet at a common point (N⋆_e, L(N⋆_e)); the L-functions of density independent lines overlap and also cross the same point as the others. An example is shown in Figure 6. Landi & Landini (1997) showed that the abscissa N⋆_e of the common point is the density of the emitting plasma. L-function/DEM results The DEM diagnostic technique described in Sect. 3.1 was applied first to a set of bright emission lines selected from a wide range of ions. The resulting DEM curve is displayed in Figure 7, where it is seen that the emitting region is characterized by a very large and rather broad maximum at around log T = 5.7 that dominates the DEM at all temperatures larger than log T = 5.0. The colder regions of the DEM are similar to the DEM curves available in the literature (and shown in Figure 7, taken from the quiet Sun DEM curve available in the CHIANTI database), but they are much larger. 
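Before turning to the rest of the results, the iterative correction scheme summarized in Sect. 3.1 can be written as a short numerical sketch, given below. The contribution functions, fluxes, grid, and convergence tolerance are toy placeholders, and the interpolation of ω(T_eff) onto the temperature grid uses simple linear interpolation rather than the original implementation.

```python
import numpy as np

def iterate_dem(logT, dem0, G, flux_obs, n_iter=20, tol=0.05):
    """Iteratively correct a trial DEM, in the spirit of Landi & Landini (1997).

    logT     : grid of log10 temperatures.
    dem0     : first-guess DEM, phi_o(T), on that grid.
    G        : array (n_lines, n_T) of line contribution functions G_ij(T).
    flux_obs : observed fluxes of the n_lines lines (same units as predicted).
    """
    dT = np.gradient(10.0 ** logT)          # temperature bin widths
    dem = dem0.copy()
    for _ in range(n_iter):
        weights = G * dem * dT              # integrand of each predicted flux
        flux_pred = weights.sum(axis=1)
        logT_eff = (weights * logT).sum(axis=1) / flux_pred
        omega = flux_obs / flux_pred        # correction sampled at T_eff
        order = np.argsort(logT_eff)
        omega_grid = np.interp(logT, logT_eff[order], omega[order])
        dem = dem * omega_grid              # updated trial DEM
        if np.all(np.abs(omega - 1.0) < tol):
            break
    return dem

# Toy usage: two fabricated lines with Gaussian contribution functions.
logT = np.linspace(5.0, 6.5, 31)
dem_guess = np.full_like(logT, 1.0e21)
G = 1.0e-24 * np.exp(-((logT[None, :] - np.array([[5.6], [6.0]])) / 0.15) ** 2)
flux = (G * dem_guess * np.gradient(10.0 ** logT)).sum(axis=1) * np.array([3.0, 0.5])
corrected_dem = iterate_dem(logT, dem_guess, G, flux)
```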
Only in the corona does the present DEM show the same values as the CHIANTI quiet Sun DEM. With the first-guess DEM defined, the L-functions could be calculated for each line (Sect. 3.2). For each of the ions O iv-vi, Mg vi-vii, Al v,viii,ix, Si vi-x, S viii,x, and Fe viii-xiv more than one emission line is available in the spectrum and so the L-function method was applied to check for discrepancies and identify density diagnostics. A summary of derived densities is given in Table 2. Details of the results of the L-function technique are given in Table 3. Filtering out the emission lines that are clearly discrepant according to the L-function method, a new DEM is calculated and the results are shown in Fig. 8. For this DEM we have also added the Fe xv 284.16 Å and Fe xvi 262.98 Å lines to constrain the high temperature part of the DEM curve. Although these latter lines could not be tested using the L-function method, as they are the only lines detected from those two ions in the present spectrum, they have been shown in the past to be free of problems (Young et al. 1998) and so their use does not introduce any additional uncertainty. In computing the second DEM curve, a constant density of log N e = 9.15 was assumed. This value was selected as the best value based on the density measurements from each individual ion listed in Table 2. The final DEM is quite different from standard solar DEM curves, exhibiting a large maximum at around log T = 5.65 that is responsible for the strongly enhanced lines from the upper transition region ions. A secondary maximum, corresponding to the temperature of the maximum of standard quiet Sun DEMs (also shown in the figure), is located just above log T = 6.0. Some plasma at active region temperatures is also present along the line of sight, but its importance is limited as the DEM curve decreases very rapidly beyond the maximum at log T = 6.0. At nearly all temperatures, the final DEM is larger than the standard quiet Sun DEM from the CHIANTI database. Atlas The DEM curve shown in Figure 8 was used to calculate a synthetic spectrum that was used to help in the identification of the lines measured in the observed spectrum. Table 4 presents the Gaussian line fit parameters for every emission line in the spectrum, together with line identifications and predicted line intensities for the identified transitions. The coolest line in the spectrum is He ii Lyman-β, formed at around 80,000 K, and the hottest line is Fe xvi λ262.98, formed at around 2.5 million K. Many of the line identifications were given in the spectral atlas of Brown et al. (2008), but some typographical errors in that work have been corrected and, additionally, the DEM analysis has demonstrated that some of the proposed lines of Brown et al. (2008) do not make any significant contribution to the present spectrum. In total there are 277 emission lines listed in Table 4, 103 of which are unidentified. The EIS dispersion relation was derived by Brown et al. (2007) using mainly strong emission lines from the iron ions Fe ix-xvi and, in particular, none of the cool oxygen, magnesium and silicon species discussed in the following sections were used. Since these lines are very strong in the present spectrum, we can use the measured wavelengths to investigate the accuracy of the reference wavelengths for these ions. Table 6 compares velocities derived using the reference wavelengths given in Table 4 and reference wavelengths given in version 3 of the online NIST Atomic Database. 
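The velocities compared in Table 6 follow directly from the measured and reference wavelengths; a minimal sketch of the conversion is given below. The wavelengths in the example are made up for illustration and are not entries from Table 6.

```python
C_KM_S = 299792.458  # speed of light in km/s

def doppler_velocity(lambda_obs, lambda_ref):
    """Line-of-sight velocity implied by a measured wavelength relative to a
    reference wavelength; positive values correspond to a redshift."""
    return C_KM_S * (lambda_obs - lambda_ref) / lambda_ref

# Example with made-up wavelengths (in Angstroms):
print(round(doppler_velocity(275.390, 275.352), 1), "km/s")  # about +41 km/s
```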
The Table 4 wavelengths are mainly from the work of B. Edlén (Edlén 1979, 1983, 1984, 1985). Individual ions are discussed in Section 6, but we note that the Edlén wavelengths generally give a more consistent set of velocities than the wavelengths obtained from the NIST database. In particular, the strong lines of Mg vii and Si vii, which are formed at a very similar temperature, show much better agreement with the Edlén wavelengths. Table 6 reveals that lines formed between log T = 5.2 and 5.8 generally have a consistent velocity of around +40 km s−1, while the slightly hotter lines formed between log T = 5.8 and 6.0 are around +20 km s−1. Inspection of the images in Fig. 1 shows that the bright knot of emission from which the spectrum is obtained is most clearly visible from O v to Fe ix, corresponding to temperatures log T = 5.3 to 5.8. The change in line velocities is thus likely due to an increasing contamination of the bright point spectrum by other active region emission in the line of sight which is at a different velocity.

Individual ion details

In the sections below we discuss identifications and diagnostics, and compare CHIANTI emissivities and observed line intensities for species found in the spectrum. The discussion is focussed towards ions with log T_eff ≤ 6.0, as these lines are more intense in this spectrum than in other types of solar spectra. For ions outside this temperature range, particularly the iron ions Fe xi-xiv, a detailed study of line ratios and identifications is deferred to a future paper. The results of the L-function diagnostics are given in Table 3, where we provide the L-function values of all the lines and their ratios to the lowest one, calculated at the crossing point. If all lines are density insensitive relative to each other, or the L-functions do not provide a clear indication of log N_e, the ratios were calculated assuming log N_e = 9.15. Each line intensity has been assigned an uncertainty of 20%, to account for atomic data uncertainties.

[Figure captions: in the spectral atlas plots, each line listed in Table 4 is represented by a vertical blue line, the length of which corresponds to the peak of the fitted Gaussian; for identified lines the emitting ion is shown, and the Y-scale varies for each panel according to the strengths of the lines in that spectral section. In the final DEM figure, the CHIANTI quiet Sun DEM is superimposed for comparison purposes; the lines used for generating this DEM curve had been previously selected with the L-function method and the first-cut DEM curve in Figure 7.]

In Table 3, if a ratio is less than 1 it indicates the measured line intensity is weaker than predicted by theory, while a ratio greater than 1 indicates the measured line is stronger than predicted by theory. Usually the latter case is due to a blending line.

O iv

A large number of n = 2 to n = 3 transitions lie in the wavelength range 170-292Å, with the brightest two lines falling between the two EIS wavebands at around 238.5Å. The next strongest is the 2s 2 2p 2 P 3/2 -2s 2 3s 2 S 1/2 transition at 279.933Å, which is seen in the EIS spectrum. Another decay from this upper level to 2s 2 2p 2 P 1/2 falls nearby at 279.631Å and this is also seen in the spectrum. While the observed separation of the two lines is in excellent agreement with the wavelengths of Edlén (1934), the branching ratio of the lines shows a significant discrepancy with the prediction from CHIANTI: the predicted ratio being 0.50 compared to the observed ratio of 0.34 ± 0.05.
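The significance of a discrepancy like the O iv λ279.63/λ279.93 one above can be judged by propagating the intensity uncertainties into the ratio and comparing with the theoretical branching ratio. The intensities in this sketch are hypothetical; only the resulting ratio (0.34 ± 0.05) and the CHIANTI prediction (0.50) echo the values quoted in the text.

```python
import math

def ratio_with_error(i1, e1, i2, e2):
    # ratio of two measured intensities with standard error propagation
    r = i1 / i2
    return r, r * math.sqrt((e1 / i1) ** 2 + (e2 / i2) ** 2)

r, dr = ratio_with_error(34.0, 4.0, 100.0, 8.0)   # hypothetical intensities
predicted = 0.50                                  # CHIANTI branching ratio
print(f"observed {r:.2f} ± {dr:.2f} vs predicted {predicted:.2f} "
      f"({abs(r - predicted) / dr:.1f} sigma)")
```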
The widths of the two O iv lines show no indication of blends (Table 4), and the L-function method suggests that λ279.93 yields the best agreement between theory and observation (Tables 3 and 4). A number of transitions between excited configurations are predicted in the EIS wavelength range, the strongest of which is potentially observable. The 2s2p 2 2 D 5/2 -2s2p( 3 P )3d 2 F 7/2 transition was found at 260.389Å by Edlén (1934). A possible candidate is the line measured at 260.292Å, for which the intensity is in excellent agreement with λ279.93 (Table 3); however, the wavelength shows a significant discrepancy: the measured wavelength implies a velocity of −112 km s−1, compared to +43 km s−1 for λ279.93. The width of the line is also significantly broader than that of λ279.93. The ratios of the observed 260.29Å line relative to either λ279.63 or λ279.93 are excellent temperature diagnostics with little sensitivity to density, and the derived values are log T = 5.18 ± 0.05 and 5.30 ± 0.07, the latter for the ratio relative to λ279.93. Additional weak lines are expected to fall in the EIS wave bands, but they are less intense than those reported in the atlas, and they are not found in the spectrum. A line with rest wavelength 203.044Å is a good wavelength match for the line observed at 203.064Å, but the observed intensity is a factor 5 larger than predicted, so O iv only provides a minor contribution to the observed feature.

O v

There are several O v lines in the EIS wavelength range, but most are affected by blending. A few of these lines are density sensitive relative to each other, but they only allow an upper limit to the plasma electron density to be determined, log N_e < 10.5. The strongest line by intensity is the 2s2p 1 P 1 -2s3s 1 S 0 transition at 248.46Å, which is blended with an Al viii line. Table 3 shows that the contribution of Al viii to the line is ≃ 15%, in agreement with the L-function results for Al viii. A group of six transitions from the 2s2p 3 P -2s3d 3 D multiplet are found between 192.75 and 192.91Å and have been discussed by Young et al. (2007b). They are partly blended with Fe xi and Ca xvii lines, and a method to extract the intensities of the individual component lines has been described by Ko et al. (2009). In the present case a slightly modified treatment is used, since there is very little high temperature emission in the spectrum and so Ca xvii can be safely ignored. In addition, a nearby line at 192.64Å, which we believe is due to Fe ix, is quite strong and needs to be accounted for in the fit. We include six Gaussians for the six O v lines, with the separations being fixed to the separations of the CHIANTI wavelengths and the widths forced to be the same. Although there is some density sensitivity amongst the lines, it is small, and we force the lines to have the relative strengths predicted by CHIANTI at a density of 10^10 cm−3. In summary, then, the only free parameters for the O v lines are taken to be the wavelength, width and amplitude of the λ192.904 line (the strongest of the group), with the parameters for the other lines all fixed relative to this. For Fe xi λ192.83 and the line at 192.64Å we fit two independent Gaussians. Note that the additional Fe xi λ192.90 line discussed by Ko et al. (2009) is not included as it is very weak. The resulting fit parameters for the Gaussians are given in Table 4. Note that the fit parameters for each of the O v lines except λ192.904 are derived from the λ192.904 fit parameters as described above.
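A fit with tied parameters like the O v blend just described can be set up so that only the strongest component has a free wavelength, width and amplitude, with the other components fixed at given offsets and relative strengths. The sketch below shows the idea; the offsets, ratios and synthetic data are placeholders, not the CHIANTI values or the fitting code actually used here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder wavelength offsets (Å, relative to the strongest component) and relative
# amplitudes for the tied components; in the real fit these would come from CHIANTI.
OFFSETS = np.array([0.0, -0.047, -0.090, -0.121, -0.154, -0.185])
RATIOS  = np.array([1.0,  0.55,  0.33,  0.20,  0.12,  0.07])

def gauss(x, amp, cen, wid):
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def blend_model(x, amp0, cen0, wid, background):
    # only amp0, cen0, wid and a flat background are free; the other five Gaussians
    # are slaved to the strongest component through OFFSETS and RATIOS
    y = np.full_like(x, background)
    for off, ratio in zip(OFFSETS, RATIOS):
        y += gauss(x, amp0 * ratio, cen0 + off, wid)
    return y

# Synthetic data standing in for the observed spectrum around the blend.
wave = np.linspace(192.6, 193.1, 120)
spec = blend_model(wave, 500.0, 192.93, 0.03, 20.0) + np.random.normal(0.0, 5.0, wave.size)

popt, pcov = curve_fit(blend_model, wave, spec, p0=[400.0, 192.90, 0.03, 10.0])
```

Independent Gaussians for nearby unrelated lines (such as the Fe xi component) would simply be added to the model with their own free parameters.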
Confidence in the derived fit parameters is obtained by comparing the velocity shifts of λ192.83 and λ192.904 (−36 km s−1 and +42 km s−1, respectively) with Fe xi λ188.23 and O v λ248.46 (−35 km s−1 and +35 km s−1, respectively). Three further O v lines are predicted in the EIS wavebands. λ172.17 is comparable in strength to λ192.90, but the instrument effective area is much lower in this part of the spectrum. A couple of lines are indeed barely visible at around λ172.0-172.3, but they are too weak to provide a reliable measurement of their parameters. The observed line at 185.780Å is a good wavelength match for O v λ185.745; however, the intensity map for the line indicates that it is emitted by an unidentified hotter ion formed at temperatures closer to Mg v and Fe vii. Indeed, Table 3 indicates that O v provides only ≃ 30% of the observed intensity. The other O v line identified in the long wavelength section of the atlas is λ271.068, which sits in a rather broad spectral feature to which Fe vii λ271.074 also contributes. We estimate O v accounts for ≃ 43% of the measured intensity.

O vi

The 2p 2 P 1/2,3/2 -3s 2 S 1/2 transitions, λλ183.94, 184.12, are seen in the present spectrum and their intensities are reproduced reasonably well by the DEM (Table 4). However, their L-functions show a small, but significant, discrepancy with theory (Table 3), with λ183.94 observed to be too strong compared to λ184.12 by a factor 1.35. This is surprising for such a simple ion, and the obvious solution is that λ183.94 is blended. However, images formed in both lines are very similar and show no evidence of a contribution from a line formed at a different temperature. In terms of line widths, λ183.94 is actually found to be a little narrower than λ184.12, and so any blending line must lie at almost exactly the same wavelength as λ183.94. Comparing the measured wavelengths with the rest wavelengths of Edlén (1979) and converting to velocity units gives +19.6 ± 3.3 and +35.8 ± 3.3 km s−1 for λ183.94 and λ184.12, respectively. The separation of the lines is thus not consistent with their rest wavelengths. The velocity of λ184.12 is more consistent with other ions formed at a similar temperature (Table 6), suggesting problems with the λ183.94 line. A study of λ183.94 and λ184.12 in a range of different conditions would be valuable for further investigating these problems. The only other O vi lines expected in the EIS spectrum are the three transitions of the 2p 2 P J -3d 2 D J ′ multiplet. The strongest line (3/2-5/2) has a rest wavelength of 173.080Å and is partly blended with the 3/2-3/2 transition at 173.095Å, although the latter is predicted to be a factor 0.17 smaller. The 1/2-3/2 transition is at 172.936Å. The EIS effective area is very low at these wavelengths, but two lines can be seen close to these wavelengths (Table 4). Converting the measured wavelengths to velocity units gives −17 ± 25 and 69 ± 29 km s−1 for λ172.94 and λ173.08, respectively, and so only the latter line is consistent with the velocities of λ183.94 and λ184.12 presented above. The intensities of these lines are a little larger than expected relative to λ184.12 (Table 3), but the uncertainties are larger than the difference.
Ne v

A number of weak Ne v transitions are predicted throughout the EIS wavebands; the strongest in terms of counts expected on the detector is λ184.735, which is a possible wavelength match for an observed line at 184.777Å, although the implied velocity of +68 km s−1 is larger than found for O v (Table 6), which is formed at a similar temperature. The intensity predicted from the DEM shows that the Ne v line cannot fully account for the 184.777Å line's intensity, and other contributions come from Fe vii and Fe xi (Table 4). A Ne v line at 173.932Å is predicted to be stronger than λ184.735 by a factor three, but the EIS sensitivity is low at this wavelength and the line can not be seen. No trace is found of the other Ne v lines predicted by CHIANTI, with the only exception of λ274.090, which lies close to an observed and unidentified line at 274.119Å, whose intensity image is consistent with a cool line. However, we hesitate to identify this line as Ne v since the predicted intensity is a factor 4 lower than observed, and other Ne v lines predicted to be brighter are not observed.

Ne vi

The only Ne vi lines predicted to be observable in the EIS wavelength range belong to the 2s2p 2 2 S 1/2 -2s 2 2p 2 P 1/2,3/2 doublet at 185.056 and 184.945Å, respectively. The stronger of the two lines is close to the observed line at 184.922Å; however, the velocity of −37 km s−1 is discrepant with the O vi λ184.12 velocity of +39 km s−1. The intensity prediction from the DEM analysis (Table 4) shows that Ne vi can only account for ≃ 50% of the observed intensity, and the remaining contribution is due to Fe vii λ184.886. The Ne vi λ185.056 line is predicted to be around half the strength of λ184.945, but it is not found in the spectrum.

Mg v

Mg v provides one strong line in the EIS spectrum, given by the allowed 2s 2 2p 4 1 D -2s2p 5 1 P transition observed at 276.625Å. This line is very prominent in the present dataset, as Mg v is formed at temperatures close to the peak temperature of the DEM curve, and the intensity predicted from the DEM is very close to the measured intensity (Table 4). Using the rest wavelength of Edlén (1983) gives a velocity of 49.9 ± 0.05 km s−1, which is around 10 km s−1 larger than the average velocity of the lines formed below log T = 5.8, suggesting the rest wavelength may be slightly in error. No other Mg v line is identified in the atlas, because the few other lines available in the EIS range are much weaker than λ276.625. Only two lines are potentially observable: they are weak, but are predicted to be at ≃197Å, where the EIS coating reflectivity is high. However, their wavelengths are based on calculated rather than laboratory level energies, so that it is difficult to associate them with any spectral line with certainty. Table 1 lists Mg v as class B, but the few unidentified lines listed as class B or even C are brighter by an order of magnitude than the predicted Mg v lines.
Table 3 demonstrates that the two observed lines are in good agreement with each other, however the DEM over-predicts the strength of both lines by around 20 % (Table 4). Table 6 shows that the velocities derived using the reference wavelengths of Edlén (1984) are consistent with the other cool species in the spectrum. Note that the slightly larger velocity for the 270.426Å line is likely due to the weaker blending line which has a slightly longer rest wavelength. The CHIANTI Mg vi model predicts many n = 2 to n = 3 transitions in the EIS wavebands, but all are very weak and can not be observed in the present spectrum. Mg vii Only four lines of significant strength are expected in the EIS wavebands, and each is bright in the current spectrum. The three members of the 2s 2 2p 2 3 P J -2s2p 3 3 S 1 multiplet are expected at 276.14, 276.99 and 278.39Å, but only the weakest line, λ276.14, is unblended. λ276.99 is blended with Si viii and a method for extracting the line intensities in this difficult part of the spectrum is described in Sect. 6.15. Although λ276.99 is listed in Table 4 we note that the parameters were completely determined from the λ276.14 parameters and so this is not an independent measurement. λ278.39 is blended with Si vii λ278.45 but by fitting two Gaussians forced to have the same width, the two lines' intensities can be extracted. The 2s 2 2p 2 1 D 2 -2s2p 3 1 P 1 transition is found at 280.72Å, and forms an excellent density diagnostic with any of the 2s 2 2p 2 3 P J -2s2p 3 3 S 1 multiplet. Table 3 shows that λ276.14, λ278.39 and λ280.72 are in good agreement with each other, however the DEM underpredicts the lines' intensities by around 40 %. Using the reference wavelengths of Edlén (1985) yields velocities for λ276.14, λ278.39 and λ280.72 that are in good agreement with the other ion species formed below log T = 5.8. However, we note that using the reference wavelengths from the NIST database gives significantly lower velocities. The best agreement among Mg vii lines is found at log N e = 9.05 ± 0.30. Al v The 2s 2 2p 5 2 P 3/2,1/2 -2s2p 6 2 S 1/2 transitions at 278.69 and 281.39Å, respectively, are the only Al v lines visible in the EIS wavelength band. They are emitted by the same upper level so they can not be used for temperature or density diagnostics. These two lines disagree with each other, because the 278.73 line is affected by a relatively weak blend due to a previously unidentified Ni xi line, expected to account for ≃ 25% of the observed intensity. The predicted Ni xi line intensity is a bit lower than needed, but the combined intensity of the two lines is reasonably close to the observed value. The λ281.438 line is very close to a S xi feature prominent in active region plasmas, but in the present dataset the contribution of this line is negligible. The velocities of the two Al v lines are consistent with other species (Table 6) when using the reference wavelengths of Artru & Brillet (1974), and the DEM predictions for the two lines are in good agreement with measurements. Al vii Four transitions of the 2s 2 2p 3 2 P J -2s2p 4 2 P J ′ multiplet are predicted to lie between 259 and 262Å. The strongest is λ261.208 which is found in the spectrum; the velocity is consistent with Mg vii which is formed at the same temperature (Table 6) and the DEM predicts the strength of the line to be in excellent agreement with observations. 
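Density-sensitive pairs such as Mg vii λ280.72 relative to the 3P-3S multiplet lines discussed above are normally inverted by interpolating the observed ratio onto a theoretical ratio-versus-density curve. The curve in this sketch is a hypothetical, monotonic placeholder; in practice it would be computed from CHIANTI emissivities (or the density obtained through the L-function analysis of Sect. 3.2).

```python
import numpy as np

# Hypothetical, monotonic ratio-vs-density curve standing in for a CHIANTI calculation.
LOG_NE_GRID = np.linspace(7.0, 11.0, 41)
THEORY_RATIO = 0.2 + 1.6 / (1.0 + 10.0 ** (-(LOG_NE_GRID - 9.0)))

def log_density_from_ratio(observed_ratio):
    # valid only while the theoretical curve is monotonically increasing on the grid
    return float(np.interp(observed_ratio, THEORY_RATIO, LOG_NE_GRID))

print(log_density_from_ratio(1.0))   # ≈ 9.0 for this placeholder curve
```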
Another Al vii line from the same upper level, λ261.030, is blended with Si x λ261.05, but we estimate that it makes less than a 3% contribution to this line. The 2 P 1/2 level gives rise to two lines at 259.020 and 259.196Å, the latter of which is a good wavelength match for the observed line at 259.226Å. However, Table 3 shows that the observed line is much stronger than expected. There is a known blend with a Cr vii line, but this accounts for only ≃ 20% of the total observed intensity, with Al vii accounting for ≃ 15%. Al vii λ259.020 is predicted to be 83% of the strength of λ259.196, but no line can be measured at this wavelength, consistent with λ259.196 providing only a small contribution to the measured line at 259.226Å. The 2s 2 2p 3 2 P 1/2,3/2 -2s2p 4 2 S 1/2 transitions at 278.960 and 279.164Å are density sensitive relative to the lines discussed above, but can not be found in the present spectrum. This is consistent with a density of ≤ 10^10 cm−3.

Al viii

Al viii is isoelectronic with Mg vii, and the four strong Mg vii transitions between 276 and 281Å discussed earlier are found between 247 and 252Å for Al viii, although they are much weaker due to the lower element abundance. The strongest of the 2s 2 2p 2 3 P J -2s2p 3 3 S 1 multiplet is present in the EIS spectrum at 250.155Å: the line velocity is in good agreement with the Si viii lines, which have a similar T_eff value (Table 6), and the intensity is well reproduced by the DEM. The next strongest line is blended with O v λ248.46, and Table 3 shows that Al viii contributes ≃ 15% of the total intensity. The third and weakest line of the multiplet is blended with an unknown line at 247.426Å, which is expected to provide ≃ 85% of the total intensity. The density sensitive 2s 2 2p 2 1 D 2 -2s2p 3 1 P 1 transition is expected at 251.36Å but can not be found in the spectrum. Another density sensitive line is 2s 2 2p 2 1 D 2 -2s2p 3 1 D 2 at 285.46Å, which is around a factor two stronger than λ251.36, but it also can not be found in the spectrum. This is consistent with an electron density of log N_e < 10.

Al ix

The only Al ix transitions which provide observable lines in the EIS ranges are those from the 2s 2 2p 2 P -2s2p 2 2 P multiplet, observed in the 280-287Å wavelength range. The strongest of these lines ( 2 P 3/2 -2 P 3/2 ) lies close to the very strong Fe xv line at λ284.15, and in active region conditions it is lost under the profile of the latter line. The 2 P-2 S transitions are found just outside the EIS wavelength range at around 300Å. These lines provide density sensitive intensity ratios in the 7.0 < log N_e < 8.5 range. Three out of four Al ix transitions are identified in the present spectrum; the weakest line of the multiplet has a predicted intensity of only 4.4 erg cm−2 s−1 sr−1 and is not observed. Table 3 shows that the other three lines are only in partial agreement with each other. There is a factor ≃1.7 difference between the L(T_eff) values of the λ284.06 and the λ286.38 lines which can not be accounted for by the uncertainties. The third line falls in between these two, so that it is not easy to understand whether the strongest line of the multiplet is blended, or some problem affects the λ286.38 line. Density sensitivity is not the cause of the problem, as the L(T_eff) values of the three lines are closest to each other for log N_e > 8.2, and diverge at lower densities.
The Fe xv line is moderately weak at the locations we have selected for the present atlas, so it should be well resolved from the Al ix transition. Si vi The atomic model for Si vi in CHIANTI predicts only two bright emission lines in the EUV, both of which are observed by EIS. A large number of additional lines are predicted to be three or more orders of magnitude less intense and are too weak to be observed by EIS. The strongest line is at 246.01Å and we perform a simultaneous three Gaussian fit here in order to pick out two weak lines in the wings of the Si vi line. The longer wavelength line is due to Fe xiii while the short wavelength line is unknown. Note that the widths of each of the three Gaussians were forced to be the same in the fitting process, and thus the Si vi width will dominate, making the line fit parameters for the two weak lines uncertain. Si vi λ249.12 is close to the hot Ni xvii λ249.18 line which is very strong in the cores of active regions, but can be neglected in the present spectrum. Another nearby line we believe is due to Fe vii but is clearly separated from the Si vi line. The two Si vi lines form a branching ratio and the predicted intensities are in excellent agreement with the measured values (Table 3). Si vii The only significant lines predicted by CHIANTI in the EIS wavebands belong to the 2s 2 2p 4 3 P J -2s2p 5 3 P J ′ multiplet, which yields six lines between 272 and 279Å. Three of these lines are blended but in the case of λ278.45 a two Gaussian fit can be used to separate the line from Mg vii λ278.39 if both lines are forced to have the same width (Table 4). There are three lines that are emitted by the 2s2p 5 3 P 1 level (λλ272.65, 275.68, 276.85): λ276.85 is blended with two Si viii transitions and Sect. 6.15 demonstrates how Si vii λ275.68 was used to estimate the Si vii contribution to this blend. The two unblended lines, λ272.65 and 275.68, are in good agreement with theory (Table 3). The three lines from the 3 P 1 upper level are weakly density sensitive relative to those from the 2s2p 5 3 P 2 level (λλ275.35, 278.44). Agreement is found for any density log N e ≥ 7.5. The remaining Si vii line at 274.18Å is emitted from the 2s2p 5 3 P 0 level and shows greater density sensitivity than the other lines. It is blended with Fe xiv λ274.20 which is generally much stronger in active regions, but in the present spectrum Si vii dominates, and provides ≃ 57% of the observed intensity (Table 3) if we assume a density of log N e = 9.15. Two more lines are predicted to be bright enough to be observed, and they are both emitted from the 2s2p 5 1 P 1 level. These two lines provide excellent density diagnostic ratios when compared to the 3 P-3 P lines discussed above. However, one of them falls in the wavelength gap between the two EIS bands, while the other is expected at 246.12Å and is lost under two stronger blending lines of Si vi and Fe xiii. Si viii Two groups of Si viii lines are expected in the EIS wavelength ranges: the four 2s 2 2p 3 2 D J -2s2p 4 2 D J ′ transitions between 276.8 and 277.1Å, and the two 2s 2 2p 3 2 P J -2s2p 4 2 S 1/2 transitions between 250 and 251Å. The latter two lines are very weak, but can be measured in the present spectrum and will be discussed towards the end of this section. The four 2 D-2 D transitions consist of a pair of strong transitions (3/2-3/2, 5/2-5/2 at 276.85 and 277.06Å, respectively), and a pair of weak transitions (3/2-5/2, 5/2-3/2 at 276.87 and 277.04Å, respectively). 
These lines are blended with lines of Mg vii and Si vii making it difficult to extract the Si viii line intensities, and we describe in detail below the method used here. Fig. 9 shows the EIS spectrum in the vicinity of the Si viii 2 D-2 D transitions, with the different ion species and lines indicated. Attempts to fit the lines simultaneously with multiple Gaussians each with three free parameters (line peak, centroid and width) fail due to the number of lines (7) between 276.8 and 277.3Å. The fitting process can be simplified significantly by making use of the nearby Si vii λ275.68 and Mg vii λ276.14 lines, which have fixed separations and ratios relative to the Si vii and Mg vii lines blending with the Si viii lines. The Si vii λ276.85/λ275.68 branching ratio is 1.3. Also the separation of the two lines is accurately known from measurements of the 2s 2 2p 4 3 P 1 -3 P 0 transition at infrared wavelengths (Feuchtgruber et al. 1997). We thus allow the isolated λ275.68 line to be freely fit, and then force λ276.85 to have the same width (since the lines arise from the same ion), a peak 1.31 times that of λ275.68, and a separation of 1.176Å. Similarly, Mg vii λ276.99/λ276.14 has a branching ratio of 2.99, and the wavelength separation is accurately known from infrared measurements of the 2s 2 2p 2 3 P 0 -3 P 1 wavelength (Kelly & Lacy 1995). The isolated λ276.14 line is then used to determine the λ276.99 parameters. For Si viii, each of the four emission lines is forced to have the same width, but this width is free to vary. The peaks and centroids of the two strong lines, λ276.85 and λ277.06, are free to vary, but those of the two weak lines, λ276.87 and λ277.04, are fixed relative to these lines. λ276.85 and λ277.04 share a common upper level and they have a branching ratio of 0.087 (using the atomic data of Zhang & Sampson 1999), while their wavelength separation is accurately determined to be 0.193Å from the separation of the Si viii, λλ1440, 1445 transitions in the far ultraviolet. Similarly λ277.06 and λ276.87 share a common upper level and have a branching ratio of 0.040, while their wavelength separation is also 0.193Å. Two additional Gaussians are added to fit Mg v λ276.58 and Si x λ277.26, each having completely free parameters. In summary, the spectral region between 275.55 and 277.50Å is fit with 10 Gaussians together with a straight line for the background. There are 21 free parameters in all, and the reduced χ 2 value for the fit is 2.3. The complete fit function is displayed in Fig. 9 and Table 4 gives the line fit parameters for each of the 10 emission lines. The wavelength separation of the two strong Si viii lines is 0.201 ± 0.003Å which is close to the value from Edlén (1984) of 0.207Å, while Table 6 shows that the velocities of the two lines are in good agreement with Al viii. Note that there appears to be a jump in the spectrum redshift between the ions formed at log T < 5.8 and those formed at log T > 5.8 of about 20 km s −1 . The two 2s 2 2p 3 2 P J -2s2p 4 2 S 1/2 transitions have rest wavelengths of 250.47 and 250.81Å and two weak lines are found close to these wavelengths in the spectrum. The velocity of λ250.81 is consistent with λ276.86 and λ277.06 (Table 6), giving confidence in the identification. λ250.47 is predicted to be a factor 0.65 of the strength of λ250.81, in good agreement with observations. However, the line is very narrow and the velocity is significantly discrepant with the other Si viii lines (Table 6). 
There is considerable density sensitivity in the Si viii lines, and the L-function method shows best agreement among all lines for log N_e = 9.05 ± 0.30. Si viii emits several other strong spectral lines in the 214-236Å wavelength range that are useful for plasma diagnostics. Unfortunately, they all fall in the wavelength gap between the two EIS bands and can not be used.

Si ix

The only lines expected to be found in the EIS wavebands based on the CHIANTI atomic model are at 258.08 and 290.69Å, and both are measured here. λ258.08 is unblended, but λ290.69 lies in the wing of a stronger Fe vii line, and the combined feature was fit with two Gaussians forced to have the same width. These lines are strongly density sensitive relative to each other, and the CHIANTI Si ix model yields a value of log N_e = 9.20 ± 0.30. Other strong lines are predicted to fall in the λλ220-230 wavelength range, which can not be observed with EIS.

S viii

The 2s 2 2p 5 2 P 3/2,1/2 -2s2p 6 2 S 1/2 doublet lines, λλ198.55, 202.61, are found in the EIS short wavelength band and both are blended with Fe xi transitions. Thus, the results in Table 3 can not be put on an absolute scale. However, the derived DEM demonstrates that S viii provides the dominant contribution in both cases (Table 4). Brown et al. (2008) also listed an Fe xii line as blending with S viii λ198.55, but we find the predicted intensity for this line is negligible when using the DEM. If we use the reference wavelengths of Robinson (1937) for S viii, then the derived velocities are +24.1 km s−1 and +22.2 km s−1 for λ198.55 and λ202.61, respectively, which are consistent with the Si viii and Al viii velocities (Table 6) and also confirm that S viii provides a dominant contribution to both lines. Note that S viii has an effective temperature of log T_eff = 5.83, placing it between Si vii and Si viii. Since the S viii lines may be valuable for abundance studies (e.g., Feldman et al. 2009), we note that the Fe xi contributions can more generally be estimated through branching ratios. Using the atomic data from CHIANTI, λ198.55/λ189.14 has a theoretical ratio of 0.80, while λ202.63/λ188.23 has a ratio of 0.016. λ188.23 is the strongest Fe xi line observed by EIS, and λ189.14 appears to be unblended.

Cr vii

Cr vii lines have been identified in laboratory spectra (Ekberg 1976), but never previously seen in solar spectra. The ion is isoelectronic with Fe ix, and the analogous transition to the strong λ171.07 line of Fe ix is found at 202.83Å, which matches a strong line at this wavelength. An image formed from the line looks similar to Fe vii and Fe viii images, which is consistent with the predicted temperature of maximum ionization of Cr vii, log T_max = 5.7. In addition, using the laboratory wavelength of Ekberg (1976) yields a velocity of +42.9 km s−1, in good agreement with Si vii, which is formed at a similar temperature (Table 6). Brown et al. (2008) identify the O iv λ202.885 line in their spectra, which is close in wavelength to the Cr vii line; however, the CHIANTI model for O iv predicts this transition to be around a factor 40 weaker than O iv λ279.933, and so in the present spectrum it can safely be ignored. New atomic data have been computed for Cr vii, and with these the emission measure distribution yields a line intensity in reasonable agreement with the observations, considering the uncertainties in the atomic calculations. We are thus confident that the 202.83Å emission line is due to Cr vii.
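Returning briefly to the S viii doublet discussed above: the Fe xi contributions to the two blends can be removed using the quoted branching ratios and the measured intensities of the unblended Fe xi lines. Only the ratios (0.80 and 0.016) come from the text; the intensities in the sketch are hypothetical.

```python
BR_198 = 0.80    # Fe xi λ198.55 relative to Fe xi λ189.14 (CHIANTI)
BR_202 = 0.016   # Fe xi λ202.63 relative to Fe xi λ188.23 (CHIANTI)

def deblend(blended_intensity, fe11_reference, branching_ratio):
    # estimate the Fe xi component from an unblended reference line, then subtract it
    fe11 = branching_ratio * fe11_reference
    return blended_intensity - fe11, fe11

s8_198, fe11_198 = deblend(120.0, 25.0, BR_198)    # hypothetical intensities
s8_202, fe11_202 = deblend(60.0, 400.0, BR_202)
print(s8_198, s8_202)   # S viii intensities after removing the Fe xi contributions
```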
By comparing Cr vii with the Fe ix atomic model, we also expect to find additional Cr vii lines at 258.65, 259.18, 261.31 and 288.90Å that will be around 5-20% of the λ202.83 line. There are indeed lines at these wavelengths except at λ288.90; Table 3 shows that all these weaker lines are brighter than predicted. Two of these lines are predicted to be strongly density sensitive, but no common crossing point can be identified: the Table 3 results have been obtained assuming that log N_e = 9.15. No clear candidates for blending are available for any of the lines except λ259.18, which is blended by an Al vii line (Section 6.10). Also, they are too weak to provide an intensity map that can be matched with a temperature class, so the identification of these lines remains to be confirmed.

Cr viii

With the identification of Cr vii λ202.83, we also expect to find Cr viii λ205.01, which is the analogous transition to the strong λ174.53 line of Fe x. A line is found at 205.05Å which is around a factor 2 weaker than λ202.83 and, based on image inspection, is formed close to Fe vii and Fe viii. We thus identify this with Cr viii. Converting the measured wavelength to a velocity gives +62.9 km s−1, significantly different from Si viii and Al viii, which are formed at a similar temperature (Table 6). However, the reference wavelengths for Cr viii come from laboratory spectra and are only accurate to ±0.05Å (see also Gabriel et al. 1966). Other Cr viii lines are found in the EIS range, and the comparison with λ205.01 is reported in Table 3. Good agreement is found; the lines are also density sensitive relative to each other, and the best agreement is found for log N_e = 9.45, a bit higher than the values found for other ions. The λ211.48 line is also reported as Ni xi: while Ni xi is predicted to provide ≃ 50% of the total intensity, Cr viii accounts for all of it. The reason for this discrepancy is not clear, but it is likely to be found in the atomic data.

Mn viii

Mn viii is isoelectronic with Fe ix, and its spectrum is also dominated by the strong singlet transition 3s 2 3p 6 1 S -3s 2 3p 5 3d 1 P, which is identified as λ185.462. With the new atomic data, the predicted intensity of this line agrees well with observations; however, the velocity derived using the reference wavelength of Smitt & Svensson (1983) is discrepant with other ions of the same temperature, such as Si vii and Mg vii, by around 20 km s−1. The velocity is instead more consistent with the hotter ions Si viii and Al viii, perhaps suggesting the ion balance for Mn viii is yielding too low a temperature. Other, weaker Mn viii lines are predicted to fall in the EIS wavelength ranges, the strongest of which is identified with a weak line at 263.200Å whose intensity map is consistent with a cool line. Table 3 reports the comparison between the two lines. They are strongly density sensitive for log N_e > 9.5, but at lower densities their ratio is more constant; λ263.20 is brighter than predicted by the comparison with the strong singlet line, so that some unidentified blend has to provide almost 60% of the total observed intensity.

Mn ix

Using the newly computed atomic data, the DEM yields three Mn ix lines that are potentially observable in the present spectrum, and wavelength matches are found for each. λ188.48 is the strongest line and is the analogous transition to Fe x λ174.53.
It lies in a crowded part of the spectrum and could provide a contribution to either Fe vii λ188.40 or Fe ix λ188.50 (observed at 188.425 and 188.507Å, respectively). The reference wavelength is only accurate to ±0.05Å, and so it is not possible to clearly identify Mn ix with either of the observed lines. We note, however, that Fe vii does not fully account for the strength of the observed line at 188.425Å, and so in Table 4 we identify Mn ix with this line. The two other potential Mn ix identifications are λ191.60 and λ199.32, which are the analogous transitions to Fe x λ177.24 and λ184.54, respectively. Using the reference wavelengths yields velocities of +18.8 and +7.5 km s−1, respectively, which give confidence in the identifications, although we note again the low precision of the laboratory wavelengths. The intensities predicted from the DEM are significantly below the observed intensities; however, images formed in both lines are consistent with the expected formation temperature of Mn ix (log T_eff = 5.86). Given the uncertainties in the identifications and blends, the results in Table 3 can not be put on an absolute scale. New, more precise laboratory wavelength measurements would be valuable for confirming the Mn ix identifications.

Ni xi

Ni xi has the same atomic structure as Fe ix, and its spectrum is dominated by the strong singlet line, analogous to Fe ix λ171.07, which lies outside the EIS wavelength range. Three Ni xi lines are identified in this spectrum, one of which is identified for the first time as a blend with the Al v transition at λ278.73. Ni xi is expected to provide ≃ 13% of the total intensity, bringing the combined predicted intensity of the blended feature much closer to observations. Due to the Al v blend, the identification of this line does not allow us to determine an accurate wavelength for this transition and calculate from it the energy of the upper level. The other two lines are identified near the edge of the EIS short wavelength band. These two lines are analogous to Fe ix λλ241.74, 244.91, which have been observed in the past and used for density diagnostics due to the strong density sensitivity of their intensity ratio. In the present spectrum, λ211.48 is blended with a Cr viii transition. Some problem is found here, as the Cr viii line accounts for all the observed intensity, while Table 3 predicts Ni xi to provide ≃ 50% of the total intensity.

Conclusions

In the present work we have analyzed a full EIS spectral scan of a portion of an active region where the plasma emission was enhanced at transition region temperatures. We first measured the DEM and electron density of the plasma, and used the results to develop a complete atlas of the emitted spectrum and to compare observed line intensities with predicted values from the CHIANTI database. The Fe vii-ix lines measured in the present spectrum have been analyzed in a separate paper. While most of the lines identified in the spectrum were observed on other occasions by EIS, the strong enhancement of the emission at temperatures in the log T = 5.5-5.9 range has allowed us to identify several lines never before observed in solar spectra. These lines, sometimes identified in laboratory spectra, are usually too faint to be detected, but in special plasmas like the one studied here they can provide valuable diagnostic tools to measure the physical properties of the emitting plasma.
The observed spectrum was also used to carry out a systematic assessment of the accuracy of CHIANTI emissivities, as well as of the diagnostic application, for transition region ions. The brightness of the transition region emission has allowed us to carry out such a comparison including more lines and ions and with better accuracy than possible with standard active region or quiet Sun spectra. We find that CHIANTI emissivities are almost always in excellent agreement with observations. We identified blends for several lines, and discussed the diagnostic application of many of the lines reported in the present atlas.
Glucose point-of-care meter operators competency: An assessment checklist Background and objectives Glucose point-of-care testing meters are essential technology ubiquitous in hospitals. They are operated by non-specialized staff who are assessed through an auto-recertification process that is dependent on operators successfully producing expected outcomes. Alternatively, we suggest that operator practices be directly observed using a competency assessment checklist. Method We designed a checklist based on literature and manufacturers’ instructions and tested it by observing 30 operators at two sites (three hospitals) over two months in 2018. Results Despite all operators being auto-recertified, the checklist revealed that only 20% met the 80% threshold of compliance to standards. Moreover, the site with a POCT coordinator had a compliance rate of 82% versus 67% for the site that did not. Discussion The checklist is more reliable than auto-recertification in assessing operators’ competence. It also highlights areas for process improvement and provides an opportunity to give personalized feedback to operators. However, one challenge of POCT is that results generated by meter operators are typically less accurate than the central laboratory method because of errors in the pre-analytical and analytical phases of testing, which corresponds to POCT maintenance, Quality Control (QC) testing and the patient testing process [6]. Consequently, the accuracy and precision of a result is sensitive to the variability in skill level of a POCT operator [7][8][9][10]. To address these issues, accreditation programs require operators to complete training before being authorized to perform POCT; this commonly involves hands-on training and learning the relevant principles of the test, the importance of quality assurance, how to properly interpret results, and the limitations of the test [10]. Following this initial certification, user competency must be re-assessed periodically to ensure satisfactory levels of competence; with the most common period for recertification being annually [11]. Some accreditation standards also require a quality assurance program to monitor the performance and compliance of operators with policies and procedures [12], but they do not provide specific guidelines on how to meet this standard. For that purpose, many institutions rely on the auto-recertification feature available in most POCT data management systems. To be auto-recertified, meter operators must successfully perform a specified number of QC runs and/or patient tests within a defined period of time [8]. As such, certification does not expire for operators who regularly perform POCT as they can easily meet the minimum criteria for auto-recertification [7]. It remains unclear, however, whether these are adequate measures of compliance with standards of operations. As an answer to the limitations of auto-recertification, some accreditation programs suggest complementing them by using direct observation of routine patient testing to assess operator competency [13]. The methods for doing so, however, remain fledgling. A study by Tongtoyai, Tientadakul, & Chinswangwatanakul in Bangkok, Thailand described the development and use of two forms, one for on-site inspection and one for staff competency assessment to directly evaluate the implementation of ISO 22870:2006 standards [14]. The observations helped to identify common non-compliance issues, knowledge gaps and areas for improvement. 
However, no comparable attempt in a North American setting has been published to our knowledge, and it remains unclear how this approach compares to other reassessment approaches such as auto-recertification. Another intervention to ensure effective compliance is to assign a senior-level laboratory technologist as the POCT coordinator. The POCT coordinator is responsible for resolving commonly encountered problems associated with instrument maintenance, QC, procedural issues, reporting results and training, and also provides guidance to POCT operators [4]. In one US study, 59% of institutions had a full-time POCT program coordinator [8], suggesting popularity but not unanimity over the value of such a role. It remains unclear whether the guidance provided by a dedicated POCT coordinator has a significant impact on compliance with recognized standard practices versus the alternative of spreading the responsibility among several laboratory technologists trained in POCT. In line with these uncertainties, this study has two objectives. First, to develop a Competency Assessment Checklist (CAC) to assess compliance of POCT operating procedures with recognized standards, pilot it in a North American context and assess its results in comparison to auto-recertification. Second, to identify organizational factors that may contribute to compliance rates, such as the presence of a dedicated POCT coordinator.

Material and methods

This study was carried out at two hospital organizations using the Abbott FreeStyle Precision Pro® glucose meter. Site A has 318 beds and adopted the glucose meter in October 2017, three months before the study. Site B is composed of two hospitals, forming a single organization with a combined total of 510 beds, and has been using the same meter for several years. Although the same equipment and auto-certification feature are used at both sites, the practices, policies and standard operating procedures vary between the two sites. Site A has a dedicated Senior POCT technologist acting as POCT coordinator responsible for the daily operations of the POCT program. She conducts informal mini-audits of POCT operators as a complement to the auto-recertification feature of the POCT data management system, develops POCT educational materials and provides training at nursing orientations. Site A also has dedicated nursing staff members called Clinical Practice Leaders (CPLs) in charge of education and training for their respective nursing units. The nursing staff at Site A also appeared to have an existing positive relationship with the POCT technologist that developed from interactions during informal mini-audits in the past. In contrast to Site A, laboratory charge technologists from Site B are collectively responsible for the daily operations of the POCT program in addition to the daily operations of the clinical laboratory. The POCT program at Site B did not include random auditing of nursing staff and relied solely on auto-recertification to evaluate operators' competency. Site B also had dedicated nursing staff in charge of education and training, called Clinical Resource Leaders (CRLs). Table 1 summarizes the features of each site as they relate to POCT procedures.
Glucose POCT results are less accurate than those of the central laboratory method, which can be attributed to the use of different test methods and/or to interfering substances when sampling whole blood rather than plasma; however, the literature suggests that accuracy and precision errors can also be attributed to meter operators' skills [7][8][9][10]. To measure these, the Competency Assessment Checklist (CAC) was modelled after the one designed and used by Tongtoyai, Tientadakul, & Chinswangwatanakul [14]. The checklist items developed were reviewed by a subject matter expert for content validity and pre-tested on three glucose meter operators. The subject matter expert used in this study was the POCT coordinator from Site A. She has over 10 years of experience working in POCT, is involved in nurse orientation and training and in the development of educational materials, and has an in-depth understanding of the accreditation standards and requirements associated with POCT. The subject matter expert also acted as the assessor conducting the observations. A pre-test was conducted, involving the evaluation of three glucose meter operators' competency with the first draft of the CAC and observation of how the data collection process unfolded. This led to modifications to the CAC. Some items were never observed during the pre-test and were thus replaced by standardized questions assessing the operators' knowledge. The pre-test also helped refine the CAC delivery process: the same assessor asked all of the questions, any prompting from the assessor that might steer the meter operator toward the correct answer was eliminated, and the assessment time was streamlined from 40-60 min to 20-30 min. After these revisions, the final CAC contained 61 assessment items, 44 of which involved direct observation and 17 of which assessed the operator's knowledge. See Appendix A for the complete Competency Assessment Checklist. To select the sample of meter operators for assessment, glucose test volumes from December 2017 were retrieved at the two sites and used to identify nursing units with high-volume POCT activity. For an equal comparison across sites, nursing units with similar patient acuity levels were targeted. All glucose meter operators encountered on the nursing units at the time of the assessor's visit were included in the sample for assessment; participation was voluntary. In total, 10 certified glucose meter operators (all nurses) were observed at Site A and 20 were observed at Site B (10 from each of the two hospitals) from January to March 2018. One assessor, the subject matter expert, conducted the assessments of meter operators at all sites, with the first author of the paper present during these observations. All glucose meter operators provided verbal consent to be observed and were assessed using the CAC on a one-on-one basis as they performed glucose POCT on a patient. Patients were informed of the study, but consent was not required, as patient information was not collected. Actual behaviors and responses were not documented, only whether or not the meter operators complied with standards or knew the correct response. The achievement of a competency assessment item was represented as a "yes" answer percentage in a table format in the final analysis, where "yes" indicated staff compliance. To determine the average staff compliance level by site, the "yes" answer percentages from the assessments performed at each site were averaged.
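The scoring just described — per-item "yes" percentages, per-operator scores and site averages — is straightforward to compute from the observation records; a minimal sketch follows. The record structure, field names and example values are hypothetical, and the 80% threshold reflects the criterion used later in the analysis.

```python
from collections import defaultdict

THRESHOLD = 0.80   # compliance threshold used in the analysis

# One record per observed operator: site plus item_id -> True ("yes") / False ("no").
assessments = [
    {"site": "A", "items": {1: True, 2: True, 6: False}},
    {"site": "B", "items": {1: True, 2: False, 6: False}},
]

def item_achievement(records):
    # percentage of "yes" answers for each checklist item across all operators
    yes, total = defaultdict(int), defaultdict(int)
    for rec in records:
        for item, ok in rec["items"].items():
            total[item] += 1
            yes[item] += int(ok)
    return {item: yes[item] / total[item] for item in total}

def operator_score(rec):
    # fraction of assessed items the operator achieved
    return sum(rec["items"].values()) / len(rec["items"])

def site_compliance(records):
    scores = defaultdict(list)
    for rec in records:
        scores[rec["site"]].append(operator_score(rec))
    return {site: sum(vals) / len(vals) for site, vals in scores.items()}

deficient = [i for i, p in item_achievement(assessments).items() if p < THRESHOLD]
print(site_compliance(assessments), deficient)
```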
Results During the pre-test, it was noted that meter operators could not by-pass or perform certain steps out of sequence to what the meter manufacturer recommends, thereby producing mandatory sequential responses from meter operators. For example, the test strips are designed to allow blood to be applied in a top-fill or end-fill manner and will not allow the meter operator to begin testing until sufficient blood is applied (assessment item 39). Another example of this inability to by-pass steps is linked to scanning or entering the meter operator ID number (assessment item 9 and 25); a failure to do this step would lock-out the meter operator. Such steps were necessary in the process of glucose meter testing for the Abbott FreeStyle Precision Pro® glucose meter and make the checklist easier for the assessor to follow. In total, 17 CAC items were mandatory and have been codified with an asterisk (*). Table 2 displays the competency assessment items and achievement levels by hospital site. An assessment item was considered nonconformant or deficient in practice if less than 80% of operators achieved that assessment item and are marked in red in the table [14]. Overall, operators met the 80% compliance threshold on 28 items (including the 17 mandatory items marked with an *) but failed to meet the threshold on 33 items. When we look at the differences between sites, glucose meter operators from Site A achieved an average score of 82% on all items, while operators from Site B achieved 67%. On an individual basis, only 6 out of 30 (20%) glucose meter operators met the 80% competency threshold. All six meter operators were from Site A (See Table 3). The process of observing and assessing POCT operators also led to the identification of process issues in the organization. First, item 2 helped identify that many operators had difficulty scanning the barcoded armbands and resorted to using the admission labels from the patient's chart. The organization's policy allows this practice provided that the meter operator uses the admission label to confirm the information on the armband. In the end, operators checked for two patient identifiers ( Table 2, Assessment Item 2) only in 75% of cases. The difficulty in scanning barcoded armbands identified during the observations prompted further investigation as to why the meters cannot scan the barcoded armbands. Second, items 59, 60 and 61 showed issues of double charting. Glucose meters at Site A are not connected with the Electronic Medical Record (EMR) system and results are not automatically uploaded to the patient's EMR. Meter operators maintained paper charts but also manually added the result to patients' EMR. Despite not having a direct electronic connection to the patient's EMR, meter operators knew to dock the glucose meters to transmit QC results to the POCT system (100%), to check the patient's EMR for the manually entered result (86%) and who to contact in the event a result discrepancy was found (100%) ( Table 2, Assessment Items 59, 60 and 61, respectively). In contrast, at Site B, POCT results are automatically uploaded into the patient's EMR but meter operators do not regularly check the EMR for the proper documentation of POCT results (75%) and do not know who to contact in the event that a discrepancy is encountered (67%) ( Table 2, Assessment Items 60 and 61, respectively). 
Upon further investigation and discussion with the laboratory director, it was found that many nursing units at Site B still performed paper charting, so the practice of checking a patient's EMR for proper documentation is not common or mandatory. Third, users were found to be non-compliant with maintenance and quality control tasks. Basic maintenance of the glucose meter, where the meter operator is expected to clean the surface of the meter with an approved disinfectant was performed poorly at both sites; 60% and 30% at sites A and B, respectively ( Table 2, Assessment Item 6). Meter operators at both sites failed to perform the majority of the manual steps involved in QC testing such as checking the QC expiry dates, mixing the solutions, inverting and tapping the bottle to expel air bubbles, and discarding the first drop of QC material before use ( Table 2, Assessment Items 10, 16, 17, 18, respectively). Finally, results in the Critical Value (33%) and Performance (35%) knowledge questions were particularly low, as many nurses failed to know the critical values for glucose and the performance range of the meter. It was found that nurses considered critical ranges to be different according to which doctor he/she reported to or what the patient's medical condition was, rather than referring to standard practices. Discussion Our first objective was to develop and pilot a Competency Assessment Checklist (CAC) to assess compliance with POCT operating procedures using an observational approach and compare the results to auto-recertification. The results support the CAC as an appropriate measure of compliance among POCT operators revealing that POCT operators failed to meet the competency assessment threshold on 33 items. On the contrary, the findings from this study using the CAC suggest that the standard auto-recertification approach to competency assessment does not necessarily confirm a meter operator's actual competency skills. All meter operators in this study were certified operators according to the auto-recertification feature of the POCT system, but only 20% met the 80% threshold upon observation using the CAC. This points to several limitations associated with using the standard auto-recertification process as a measure of POCT competency. Auto-recertification does not assess the process of POCT, it only assesses the outcome, whether or not a meter operator is able to generate a patient or a QC result. In contrast, a standardized approach to compliance auditing can provide direct feedback about the actual processes and procedures being practiced by meter operators. Auditing using a standardized site-specific assessment checklist can help identify inadequate processes in an organization, knowledge gaps amongst operators and specific opportunities for improvement in a POCT program, in addition to non-conformances/deficiencies, that auto-recertification fails to reveal. There is also a formative potential in direct observational assessment of meter operators; this study found that auditing meter operators in real-time provided an opportunity for direct one-on-one feedback. Nurses in general were receptive to the assessments, and this approach generated informal training and education sessions at the nursing stations. To sustain these benefits, it may be worthwhile for healthcare institutions to consider augmenting their current POCT training strategies with small scale direct observational auditing at a frequency that suits their assessment needs. 
Our second objective was to identify organizational factors that may contribute to compliance rates. Results show significant differences in compliance rates between sites and between items. Site A out-performed Site B overall, with a compliance rate of 82% compared to 67% (Table 3). Several reasons may account for the better compliance at Site A. First, the glucose meter had been adopted within the year, and staff had received training approximately three months before the observations took place at Site A. With the training being more recent at Site A, operators may have been more likely to remember standard procedures than at Site B, where the meter had been in place for years. Second, Site A had a dedicated POCT coordinator to communicate with the POCT operators. The POCT coordinator also performed random auditing of nursing staff. The occasional face-to-face interaction between nursing staff members and the POCT coordinator may have contributed to the development of a strong rapport and an understanding of the importance of adhering to the testing steps. During a site visit, the dedicated Senior POCT technologist was observed working with the CPLs and operators to create new e-learning materials, performing linearity studies to troubleshoot problematic meters, reviewing POCT data and sending personal emails to each meter operator who failed to properly document meter cleaning (Items #20 and #23). Compliance rates on these items were 90% and 80%, versus 70% and 50% at Site B (Table 2, Assessment Items 20 and 23). In contrast, the laboratory charge technologists from Site B were responsible for the daily operations of the POCT program in addition to their clinical laboratory duties, which likely meant that they did not have enough time to dedicate to the POCT program. For instance, they relied solely on the auto-recertification feature of the POCT management system to evaluate operators' competency on an annual basis. The presence of laboratory charge technologists at Site B does not seem to have acted as an equally effective substitute for the POCT coordinator. The long-term success of a glucose POCT monitoring program requires continued communication and cooperation between physicians and nurses at the patient's bedside and the clinical biochemists and technologists behind the scenes, in the laboratory [4]. The results of this study suggest that a dedicated POCT technologist acting in the role of a POCT coordinator may be an effective way to support this communication. They may also point to the necessity of high-level support for POCT coordinators, since more than half of the respondents in a survey conducted by the POCT interest group of the Canadian Society of Clinical Chemists (CSCC) cited a lack of staff to support their POCT program [8]. Beyond the contribution of a dedicated Senior POCT technologist, using a standardized checklist helped identify several areas for improvement that contributed to some low compliance rates. As observed in other settings [7], POCT operators may fail to confirm two patient identifiers (Item #2), leading to a risk of testing the wrong patient. Issues with the barcodes on the armbands are the likely cause. Double charting and the use of paper charts also emerged as problems (Items #59, #60 and #61).
While it is not uncommon for healthcare institutions to use a combination of paper and electronic documentation methods, accreditation standards require all results to be documented in the patient's medical record if they are to be used for patient management decisions [15]. Paper-based charting can result in 30% of POCT glucose results being transcribed incorrectly [16]. The CAC may help identify problematic and informal practices such as paper charts being maintained instead of electronic charts. Maintenance and Quality Control issues may generate delays and inefficiencies. First, improperly trained POCT operators may send out devices for repair that do not actually require a repair [9]. Second, devices may automatically lock out operators if scheduled maintenance and QC is not performed (Abbott, 2012), which can lead to a delay in patient testing. Failure to clean the device is a key maintenance issue and may be widespread: 25% of meter operators did not follow manufacturers' instructions with appropriate sanitization prior to testing [8], and a study on blood gas POCT devices suggests that cleaning was not performed in 75% of POCT blood gas tests and that operators were unwilling to perform maintenance [17]. Items 6, 10, 16, 17 and 18 in the CAC may help proactively identify such operators and fix these problems ahead of time. The Critical Value and Performance knowledge questions revealed knowledge gaps. Accreditation standards do not specify that meter operators need to know the critical values of a POCT test because such values are often unreliable and should be retested for confirmation [18]. Items 4 and 5 under the Preparation category identified another non-conformance: failure to perform hand hygiene and don disposable gloves prior to patient testing. Basic hand hygiene compliance among healthcare workers is a universal problem within many healthcare facilities [19]. Direct observation studies found that hand hygiene compliance in inpatient units and emergency departments prior to patient contact occurs slightly over 80% of the time [20], which is comparable to the findings of this study, where meter operators performed hand hygiene 70% and 79% of the time at Sites A and B, respectively. Similar results for hand hygiene were observed in the Safety category (Table 1, Assessment Item 44), 80% and 68% for Sites A and B, respectively. Direct observation studies found that hand hygiene compliance after patient contact occurs approximately 80% of the time, and after removing gloves, slightly over 95% of the time [20]. An opportunity exists for the organization to improve infection control at the point of care and perhaps during other opportunities for patient interaction. Regular hand hygiene requires "behavioural change, culture change, and training that may take many years and require ongoing actions across all organization levels" [21] (p262). A possible limitation of this pilot study was the inability to assess the competency skills of meter operators on the night shift. QC runs at the sites were typically performed by night shift nursing staff. It was unclear if nursing staff assessed in this pilot study participated in regular shift rotations and had the opportunity to perform QC testing. Despite this possible limitation, it is still imperative that all meter operators be competent in QC testing, as 20% of QC tests are performed by day shift nurses when testing is missed by the night shift nursing staff [9]. Another limitation is the use of the 80% threshold.
This threshold is a useful heuristic, but it appears to have been chosen arbitrarily in the study by Tongtoyai et al. It is recommended that healthcare institutions set their own pass criterion based on performance targets, industry benchmarks, research and manufacturer recommendations for optimal meter performance. The Angoff method is commonly used in competency skill testing to set a pass rate, in which the pass rate of each assessment differs according to the difficulty level of the test [22]. Given that multi-site healthcare institutions may use a variety of glucose meters, subject matter experts such as clinical biochemists from each institution should review their site-specific assessment checklist and identify mandatory items for competency. Items that are not mandatory for competency may be included to help identify inadequate processes or knowledge gaps. In addition, the results of this study are based on only two sites. Nevertheless, results such as those in the "Interfering Substances and Precautions" category are similar to those reported by Tongtoyai et al. (2012), suggesting that the CAC is reliable across organizations. The checklist should be adapted and tested in other settings to further confirm the reliability of the CAC and to assess whether the compliance issues identified in the current study generalize to other sites. Future studies should consider the validity of each item. For instance, items that cannot be bypassed could be removed to keep the assessment tool lean. This partly depends on the technology used and suggests that compliance could be improved by making more steps required by the POCT system. Moreover, emphasis could be put on items with high performance variability between users, such as disinfecting the meter, mixing QC solutions, or wiping away the first drop of blood. Conclusion The glucose meter Competency Assessment Checklist developed in this study proved to be a useful tool in the assessment of POCT operators' competencies. It revealed that auto-recertification using a POCT data management system failed to reliably assess meter operator competency. It also helped identify areas for improvement in organizational POCT practices and made it possible to provide useful personalized feedback to POCT operators. The findings also suggest that a dedicated POCT coordinator, together with random auditing of users, may lead to better compliance among operators. Practical application of direct observation using a CAC is challenging, especially in a large facility. However, this tool can be used to target compliance issues at both the micro and macro levels. POCT programs can use this tool to audit meter operators on a schedule that suits their daily operation and organizational needs. Audits can be performed once a week or as resources permit. Organizations may choose to target specific wards or units with suspected performance issues or, more generally, randomly select operators from all areas of their institution. On a micro level, the individual assessments allow meter operators to receive direct one-on-one feedback. Organizations auditing specific units may wish to address common compliance issues collectively with in-person re-training sessions or with online re-training modules. It is recommended that user-specific issues be addressed on a one-on-one basis at the discretion of the POCT program administrator.
In this study, there was one instance in which we encountered a meter operator who used a coworker's access code to perform glucose POCT on patients under his care. The POCT coordinator assessed the meter operator and then followed up with the user by email. The user was provided his own access code after the POCT coordinator was able to locate records of his initial training and confirm that he had obtained a passing score on his initial competency test. On a macro level, the cumulative assessment data can provide an overall sense of how well a unit or ward is performing and which areas of the POCT process need to be modified or studied further. For example, barcode scanning and double charting emerged as problems specific to these two sites, which will require further examination. In addition, comparing cumulative data from year to year can be used to evaluate a POCT program before and after a major intervention or to measure year-to-year performance or improvement of a POCT program. Declaration of competing interest There are no known conflicts of interest associated with this publication, and there has been no financial support for this work that could have influenced its outcome.
2020-03-12T10:58:19.601Z
2020-03-05T00:00:00.000
{ "year": 2020, "sha1": "e7f6d071a4d6ff2b5f115252ad9c03e1c58ecb9a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.plabm.2020.e00157", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b22850871ee3233dcc7295c1d006075e9afff862", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
7634415
pes2o/s2orc
v3-fos-license
Energy-delay bounds analysis in wireless multi-hop networks with unreliable radio links Energy efficiency and transmission delay are very important parameters for wireless multi-hop networks. Previous works that study energy efficiency and delay are based on the assumption of reliable links. However, the unreliability of the channel is inevitable in wireless multi-hop networks. This paper investigates the trade-off between the energy consumption and the end-to-end delay of multi-hop communications in a wireless network using an unreliable link model. It provides a closed form expression of the lower bound on the energy-delay trade-off for different channel models (AWGN, Rayleigh flat fading and Nakagami block fading) in a linear network. These analytical results are also verified in 2-dimensional Poisson networks using simulations. The main contribution of this work is the use of a probabilistic link model to define the energy efficiency of the system and capture the energy-delay trade-offs. Hence, it provides a more realistic lower bound on both the energy efficiency and the energy-delay trade-off since it does not restrict the study to the set of perfect links as proposed in earlier works. Introduction Energy is a scarce resource for nodes in multi-hop networks such as Wireless Sensor Networks (WSNs) and Ad-Hoc networks [1]. Therefore, energy efficiency is of paramount importance in most of their applications. Regarding energy efficiency, there are numerous original works addressing the problem at the routing layer, MAC layer, physical layer or from a cross-layer point of view, e.g. [2,3,4,5,6]. Routing strategies in multi-hop environments have a major impact on the energy consumption of networks. Long-hop routes demand substantial transmission power but minimize the energy cost for reception, computation, etc. Conversely, routes made of shorter hops use less transmission power but increase the energy cost for reception since there are more hops. M. Haenggi points out several advantages of using long-hop routing in his articles, e.g. [7,2], among which high energy efficiency is one of the most important factors. These works reveal the importance of the transmission range and its impact on energy conservation but do not provide a theoretical analysis of the optimal hop length for various networking scenarios. In [3], P. Chen et al. define the optimal one-hop length for multi-hop communications that minimizes the total energy consumption. They also analyze the influence of channel parameters on this optimal transmission range. The same issue is studied in [4] with a Bit-Meter-per-Joule metric, where the authors study the effects of the network topology, the node density and the transceiver characteristics on the overall energy expenditure. This work is improved by J. Deng et al. in [5]. Since the data transmitted is often of a timely nature, the end-to-end transmission delay becomes an important performance metric. Hence, minimum energy paths and the trade-off between energy and delay have been widely studied, e.g. [6,8,9]. However, unreliable links are not considered in the aforementioned works. In fact, experiments in different environments and theoretical analyses in [10,11,12,13,14] have proved that unreliable links have a strong impact on the performance of upper layers such as the MAC and routing layers. In our previous work [14], we have shown how unreliable links improve the connectivity of WSNs. In [15], S. Banerjee et al.
take unreliable links into account in their energy efficiency analysis by introducing a link probability and the effect of the link error rate. The authors derive the minimum energy paths for a given pair of source and destination nodes and propose the corresponding routing algorithm. However, the energy model used in this paper includes the transmission power only and does not consider circuitry energy consumption at the transmitter and receiver side. In fact, such a model leads to an unrealistic conclusion which states that the smaller the hop distance, the higher the energy efficiency. As we show in this paper, considering a constant circuitry power according to [16] results in completely different conclusions. Furthermore, we propose to evaluate the effect of fading channels on the energy efficiency. In this work, we do not consider any specific protocol and assume the corresponding overhead to be negligible. Depending on the application, the energy efficiency has a different significance [16]. A periodic monitoring application is assumed here, where the energy spent per correctly received bit is a crucial energy metric. Moreover, in wireless communications, the energy cost increases with the transmission distance. Hence, we also adopt the mean Energy Distance Ratio per bit (EDRb) metric in J/m/bit proposed in [4]. A realistic unreliable link model [14] is introduced into the energy model. The purpose of this work is to provide a lower bound on the energy efficiency of both single and multi-hop transmissions and derive the corresponding average transmission delay. As such, we are able to show the theoretical trade-off between the energy efficiency and the delay for single-hop and multi-hop transmissions. The multi-hop case is analyzed in a homogeneous linear network. Both studies are performed over three different channels (i.e. AWGN, Rayleigh flat fading and Nakagami block fading channels). Theoretical results are then validated in 2-dimensional Poisson distributed networks using simulations. The contributions of this paper are: (i) closed form expressions for the optimal transmission range and the corresponding optimal transmission power are derived for the AWGN, Rayleigh flat fading and Nakagami block fading channels, employing both a comprehensive energy model and an unreliable link model; (ii) a closed form expression for the lower bound on the energy efficiency of a multi-hop communication is obtained in a linear network over the three types of channel and is validated by simulation in 2-dimensional Poisson networks; (iii) a lower bound for the energy-delay trade-off is defined for a linear and a Poisson network over the three aforementioned channel types. This paper is organized as follows: Section 2 concentrates on presenting the models and metrics used in the paper. Section 3 derives a closed form expression of the optimal transmission range and optimal transmission power for one-hop transmission. In Section 4, the minimum energy reliable path for linear networks and its delay are deduced. In Section 5, we focus on the optimal trade-off between the energy consumption and the delay in linear networks. Simulations are given and analyzed in Section 6 for a 2-dimensional network. Finally, Section 7 concludes our work. Models and Metric In this section, the energy model, the realistic unreliable link model, the delay model and the metric EDRb used in this work are introduced. Energy consumption model We consider energy-efficient nodes, i.e.
nodes that only listen to the transmissions intended to themselves and that send an acknowledgment packet (ACK) to the source node after a correct packet reception. As such, the energy consumption for transmission of one packet E p is composed of three parts 1 : the energy consumed by the transmitter E T x , by the receiver E Rx and by the acknowledgement packet exchange E ACK : The transmission energy model [16] is given by: where P t is transmission power, the other parameters are described in Table 1 and P txElec is considered as constant. Similarly, the energy model on the receiver side includes two parts: the startup energy consumption, which is considered identical to the one of the transmitter, and the circuitry cost [16]: where P rxElec is the circuity power of the receiver which is considered as constant. In the acknowledgment process, it is assumed that the ACK packet can be successfully transmitted in a single attempt which is based on the following facts: firstly, since ACK packets are much smaller data packets, their link probability is greater than that of data packet. For instance, for respectively ACK and Data packets of 80 and 320 bytes each, if the successful transmission probability of the data packet is 80%, the link probability for the ACK packet is 95%. Secondly, assuming a symmetric channel, if the data packet experienced a good channel, the return path experiences the same beneficial channel conditions. Hence, we can assume that only one ACK packet is sent with high probability of success to the source of the message. Since the energy consumed by the transmission power P t for the ACK packet has small proportion in its total energy, P t is neglected in the energy expenditure model given by: where T ACK is the average time during which the transmitter waits for an ACK packet. The analysis of E p shows that the energy consumption can be classified into two parts: the first part is constant, including T start · P start , P txElec , α amp , P rxElec and E ACK , which are independent of the transmission range; the second part is variable and depends on the transmission energy P t which is tightly related to the transmission range. Accordingly, the energy model for each bit follows: where E b , E c and K 1 ·P t are respectively the total, the constant and the variable energy consumption per bit. Substituting (1) (2) (3) (4) into (5) yields: For a given transmitting/receiving technology, E c and K 1 are constant because all parameters in (6) and (7) are fixed. Then E b becomes a function of P t , i.e., E b (P t ). Realistic unreliable link model The unreliable radio link model is defined using the packet error rate (PER) [14]: where P ER(γ) is the PER obtained for a signal to noise ratio (SNR) of γ. The PER depends on the transmission chain technology (modulation, coding, diversity ... ). And γ x,x ′ is calculated by [16]: with where d x,x ′ is the transmission distance between node x and x ′ , α ≥ 2 is the path loss exponent, P t is the transmission power, G T ant and G Rant are the antenna gains for the transmitter and receiver respectively, B is the bandwidth of the channel and is set to B = R, λ is the wavelength and L ≥ 1 summarizes losses through the transmitter and receiver circuitry. Similar to E b , for a given technology, K 2 becomes a constant. And p l (γ(x, x ′ )) can be rewritten as a function of d and P t , i.e., p l (d, P t ). 
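To make the notation concrete, the short sketch below encodes the per-bit energy model E_b(P_t) = E_c + K_1·P_t and a plausible reading of the SNR model of Eq. (9), γ(d, P_t) = K_2·P_t/d^α. The constants E_c, K_1 and K_2 are the technology-dependent quantities defined above; the numerical values used here are placeholders, not the Table 1 settings, so only the shape of the resulting curves is meaningful.

```python
# Placeholder constants standing in for the technology-dependent quantities of
# Eqs. (6)-(9); they are NOT the Table 1 values.
E_C   = 1.0e-7   # constant energy per bit (J/bit), assumed
K_1   = 4.0e-9   # coefficient of the power-dependent energy per bit, assumed
K_2   = 1.0e-2   # lumps antenna gains, wavelength, bandwidth, noise and losses, assumed
ALPHA = 3.0      # path-loss exponent

def energy_per_bit(p_t):
    """E_b(P_t) = E_c + K_1 * P_t (Eq. (5)); p_t in watts."""
    return E_C + K_1 * p_t

def snr(d, p_t):
    """gamma(d, P_t) = K_2 * P_t / d**alpha, a reading of Eq. (9); d in metres."""
    return K_2 * p_t / d ** ALPHA
```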
Mean energy distance ratio per bit (EDRb) The mean Energy Distance Ratio per bit (EDRb) [4] in J/bit/m is defined as the energy consumption for transmitting one bit over one meter. The mean INRIA energy consumption per bit for the successful transmission over one hop E 1hop including the energy needed for retransmissions is given by: where n is the number of retransmissions. According to its definition, EDRb is given by: Delay model The average delay for a packet to be transmitted over one hop, D onehop , is defined as the sum of three delay components. The first component is the queuing delay during which a packet waits for being transmitted. The second component is the transmission delay that is equal to N b /R. The third component is T ACK . Note that we neglect the propagation delay because the transmission distance between two nodes is usually short in multi-hop networks. Without loss of generality, D onehop is set to be 1 unit. However, one-hop transmission may suffer from the delay caused by retransmissions. According to (11), the mean delay of a reliable one-hop transmission is: One-hop Transmission: Energy Efficiency and Delay The one-hop transmission is the building block of a multi-hop path. In this section, we derive the optimal transmission range and power that minimizes the energy expenditure of the one-hop transmission by introducing three different channel models. Optimal transmission range d 0 and optimal transmission power P 0 are calculated according to: The optimal transmission power P 0 and range d 0 exist because for smaller values of d, the transmission power P t is low in terms of a certain link probability and the constant energy component E c is dominating in EDRb consequently; for higher values of d, the variable energy consumption K 1 · P t is dominating EDRb since P t increases proportionally to d α in order to reach the destination. 3.1 Energy-optimal transmission power P 0 Substituting (8) into (14) and (15) and simplifying, according to the derivation in Appendix A, we obtain: . In (17), it should be noted that P 0 is independent from p l (γ) and consequently independent from modulation and fading. In general, N b /R ≫ T start . Following, the first part of (17) can be neglected. On the opposite, the characteristics of the amplifier have a strong impact on P 0 . When the efficiency of the amplifier is high, i.e. β amp → 1, P 0 reaches its maximum value resulting in a longer optimal transmission range d 0 . It tallies with the result of [17]. It is clear that when the environment of transmission deteriorates, namely, α increases, P 0 decreases correspondingly. Energy-optimal transmission range d 0 and its delay According to Appendix A, d 0 follows: where p ′ l (γ) is the first derivative of p l (γ). Equation (18) indicates that d 0 depends on p l (γ) and hence has to be analyzed according to the type of channel and modulation as proposed next. This expression is meaningful since it can be used to estimate the optimal node density in a wireless netework depending on pl(γ). AWGN channel The optimal transmission range in AWGN channel, which is derived in Appendix B, is obtained by: where W −1 [·] is the branch satisfying W(x) < −1 of the Lambert W function [18]. INRIA Substituting (16) and (19) into (9), the optimal SNR γ 0g is given by: Meanwhile, the optimal BER is obtained by (20) and (48): Depending on γ 0g and BER 0g , the receiver can decide whether it is in the optimal communication range or not by measuring its channel state. 
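Continuing the sketch above (and reusing its energy_per_bit and snr helpers), the block below assembles the link probability, EDRb and mean one-hop delay for an AWGN channel with uncoded BPSK. The textbook BER = Q(√(2γ)) is used here instead of the paper's exponential approximation (Eq. (48)), so the numbers will not match Fig. 1 exactly; the point is the structure of Eqs. (8) and (11)–(13).

```python
import math

N_B = 2560  # packet size in bits, as used in the simulations of Section 6

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def link_probability(d, p_t):
    """p_l(d, P_t) = (1 - BER)^N_b, i.e. 1 - PER of Eq. (8), for uncoded BPSK over AWGN."""
    ber = q_function(math.sqrt(2.0 * snr(d, p_t)))
    return (1.0 - ber) ** N_B

def edrb(d, p_t):
    """Mean energy-distance ratio per bit, Eq. (12): E_b / (p_l * d), in J/bit/m."""
    return energy_per_bit(p_t) / (link_probability(d, p_t) * d)

def mean_delay(d, p_t, d_onehop=1.0):
    """Mean delay of a reliable one-hop transmission, Eq. (13): D_onehop / p_l."""
    return d_onehop / link_probability(d, p_t)
```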
The delay and the energy efficiency of the one-hop communication can be analyzed by expressing respectively the delay D g and the energy metric EDRb as a function of the transmission range d as detailed in Appendix B. Hence, substituting (48) into (13), the delay of the reliable one-hop transmission in AWGN channel as a function of d is given by: Substituting (46) into (14), the optimal transmission power P 0g as a function of the transmission distance d achieving energy efficiency in AWGN channel follows: Substituting (23) and (48) into (12), EDRb as a function of d in AWGN channel is expressed by: Fig. 1 shows the variation of EDRb with the transmission range d in AWGN channel as an example according to (24), where BPSK modulation is adopted. The related parameters are listed in Table 1. It should be noted that the value of p l is close to 1, which shows that energy optimal links in AWGN channel are reliable. Rayleigh flat fading channel [19] The optimal transmission range d 0f in Rayleigh flat fading channel, which is derived in Appendix C, is obtained by: The expression of d 0f shows that it decreases with the increase of α or N b . Figure 1: EDRb of the one-hop transmission as a function of the range d in AWGN channel where d 0g = 172.31m, P 0 = 180.51mW , γ 0g = 9.34dB, BER 0g = 1.37e − 5, p l = 96.55% and D g = 1.04 unit. The exact EDRb is obtained with (47) and the approximation of EDRb is obtained with (48). Therefore, the approximation is feasible. . Substituting (16) and (25) into (9) provides the optimal SNR in Rayleigh flat fading channel: Then substituting (26) into (50), the optimal BER in Rayleigh flat fading channel is: From a cross layer point of view, the routing layer can identify if a node is at the optimal communication range according to the values ofγ 0f or BER 0f . Similarly to the study in AWGN channel, we derive here the expression of the delay D f and the energy metric EDRb as a function of the transmission range d which is detailed in Appendix C. Substituting (50) into (13), the delay of the reliable one-hop transmission in a Rayleigh flat fading channel as a function of d is given by: Substituting (51) into (14), the optimal transmission power P 0f as a function of the transmission distance d achieving energy efficiency in a Rayleigh channel follows: Hence, for a given transmission distance, the optimal transmission power can be derived according to P 0f (d) in an adaptive power configuration. Finally, EDRb as a function of d is computed by substituting (29) into (12): shows that an energy optimal link in Rayleigh channel is far less reliable than the link in AWGN channel. This result claims for using unreliable links in the real deployment of wireless network. Nakagami block fading channel The link model in Nakagami block fading channel, as shown in (53), is too complex to obtain the closed form expression of the energy optimal transmission distance d 0b . Therefore, two scenarios are taken into consideration in the following. 
Firstly, when m = 1 and α m = 1 (e.g., for BPSK, BFSK and QPSK), according to the derivation in Appendix D, the optimal transmission range d 0b in Nakagami block fading channel is: Substituting (16) and (31) into (9) yields the optimal signal to noise ratio in Nakagami block fading channel: For a given transmission range, we can obtain the optimal transmission power P 0b in Nakagami block fading channel using (14) and (55): Finally, EDRb as a function of d is obtained by substituting (33) into (12): Substituting (53) into (13), we get the delay of a reliable one-hop D b in Nakagami block fading channel: For the other scenarios, the sequential quadratic programming (SQP) algorithm in [20] is adopted to solve the optimization problem related to the computation of the optimal EDRb. Fig. 3 shows how EDRb varies with d according to Eq. (34) in Nakagami block fading channel using BPSK modulation. The related parameters are presented in Table 1. The value p l = 0.72 reveals that energy optimal links in Nakagami block fading channel are even more unreliable than those in Rayleigh flat fading channel. From Fig. 1, Fig. 2 and Fig. 3, it can be concluded that: firstly, the optimal transmission power P 0 corresponding to the optimal transmission range is the same for all channels, which is consistent with the result of Eq. (16); secondly, the optimal transmission range decreases when fading becomes stronger, namely, from AWGN, through Nakagami block fading, to Rayleigh flat fading channel; thirdly, EDRb increases as fading becomes stronger, i.e., more energy has to be consumed to counteract the effect of fading. Impact of some physical parameters This section studies the impact of some physical parameters such as the path loss exponent α, the strength of fading, the circuitry power, N b , the transmission rate R and the modulation technique. For all the results provided hereafter, the values of physical parameters that are not analyzed are given in Table 1. Impact of fading The sequential quadratic programming (SQP) algorithm described in [20] is implemented to analyze the impact of the strength of fading on the optimal EDRb and corresponding optimal transmission range in Nakagami block fading channel. The results are shown in Fig. 4. Similarly to our previous analysis, increasing the strength of fading increases the optimal EDRb and shortens the optimal transmission range. In that case, more energy is consumed to overcome the destructive effect of fading. Impact of the path loss exponent Fig. 5 shows that EDRb greatly increases with the strength of the path loss, i.e., more energy is consumed to make up for the path loss. Meanwhile, path loss shortens the optimal transmission range, which induces more hops and higher delay for a given transmission distance. Impact of the circuitry power Fig. 6 shows the effect of circuitry power on EDRb and d 0 , where all the circuitry powers P txElec , P rxElec , α amp and P start are scaled by the coefficients 0.5 and 0.1. The reduction of circuitry powers results in a decrease of P 0 , which shortens d 0 . When the circuitry powers are set to 0, the shortest hop distance yields the highest energy efficiency [15]. Meanwhile, the energy efficiency is improved with the reduction of circuitry power. Hence, the effect of circuitry energy consumption should be considered in the design of WSNs. Impact of the modulation The effect of modulation on the optimal EDRb for the three kinds of channel is shown in Fig. 7.
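The SQP-based numerical route mentioned above can be sketched with an off-the-shelf solver. The snippet below reuses the edrb() function from the earlier sketch and jointly minimises it over the hop distance and transmit power with SciPy's SLSQP method (a sequential quadratic programming implementation, standing in for the solver of [20]); the bounds and starting point are illustrative.

```python
from scipy.optimize import minimize

# Jointly minimise EDRb over hop distance d (m) and transmit power P_t (W).
result = minimize(lambda x: edrb(x[0], x[1]),
                  x0=[100.0, 0.1],                       # initial guess: 100 m, 100 mW
                  method="SLSQP",
                  bounds=[(1.0, 1000.0), (1e-3, 1.0)])
d_opt, p_opt = result.x
print(f"d0 ~ {d_opt:.1f} m, P0 ~ {p_opt * 1e3:.1f} mW, EDRb ~ {result.fun:.3e} J/bit/m")
```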
It should be noted that the optimal EDRb monotonously decreases while the optimal transmission range monotonously increases with the decrease of the order of the modulation for the three different channel types. 4QAM or BPSK are the most energy efficient among the MQAM modulations which can be explained by BER. BER increases with the order of the modulation for an identical SNR, which leads to a reduced optimal transmission range. Due to the reduction of the transmission range and duration, E c has a bigger proportion in the total energy consumption, which results in the increase of EDRb. Impact of the packet size Fig. 8 shows how the optimal EDRb varies with N b and the corresponding optimal transmission range for the three kinds of channel. In AWGN channel and Nakagami block fading channel, the optimal EDRb and the optimal transmission range decrease with the increase of N b . In contrast, for Rayleigh flat fading channel, there is an optimal N b that originates from the trade-off between the variable transmission energy (K 1 · P t ) and E c . The proportion of K 1 ·P t rises in the total energy consumption with the increase of N b , which trades off E c . The increase of N b results in the decrease of the link probability, which leads to the decrease of the optimal transmission range. It can be deduced from Fig. 8 that larger packets need less energy but more hops and higher delays. Impact of the rate In Fig. 9, the increase of transmission rate leads to the decrease of the link probability according to Eq. (9) which brings forth the INRIA reduction of the optimal transmission range. Meanwhile, the reduction of the total energy consumption results in the decrease of the optimal EDRb. Multi-hop Transmission: Energy Efficiency and Delay In this section, a multi-hop transmission along a homogeneous linear network is considered. Nodes are aligned because a transmission using properly aligned relays is more energy efficient than a transmission where the same relays do not belong to the straight line defined by the source and the destination. In this section, we first prove that the transmission along equidistant hops is the best way for saving energy in a homogeneous linear network. Next, the optimal number of hops over a homogeneous linear network is derived for a given transmission distance according to the optimal one-hop transmission distance. Finally, a lower bound on the energy efficiency and its delay is obtained for the considered multi-hop transmission. Minimum mean total energy consumption Proof The mean energy consumption for each hop of index m is set to E m = N b · EDRb(d m ) · d m , m = 1, 2, . . . , n. Since each hop is independent from the other hops, the mean total energy consumption is Hence, the problem of finding the minimum mean total energy consumption can be rewritten as: minimize Etot where λ = 0 is the Lagrange multiplier. According to the method of the Lagrange multipliers, we obtain Eq. (37) shows that the minimum value of F is obtained in the case ∂E1 ∂d1 = ∂E2 ∂d2 = . . . = ∂En ∂dn = −λ. Moreover, in a homogeneous linear network, the properties of each node are identical. Therefore, where m = 1, 2, . . . , n. Because ∂E ∂d is a monotonic increasing function of d when the path-loss exponent follows α ≥ 2, the unique solution of Eq. (37) is Finally, we obtain: Optimal number of hops Based on Theorem 1 and the analysis in Section 3, the optimal hop number can be calculated from the transmission distance d and the optimal one-hop transmission distance d 0 . 
When d/d 0 is an integer, d/d 0 is the optimal hop number N hop0 as each hop has the minimum EDRb according to Theorem 1. When d/d 0 is not an integer, setting ⌊d/d 0 ⌋ = n, the optimal hop number is N hop0 = n or n + 1, which can be decided by: where ⌊x⌋ provides the largest integer value smaller than or equal to x. The transmission range of each hop is now d/N hop0 . Lower bound on EDRb and its delay Substituting the formulas for P 0 and d 0 for the three kinds of channel into (12) yields: Equation (39) provides the exact lower bound of EDRb on the basis of Theorem 1 and the analyses of Section 3 for a multi-hop transmission using n hops. Its corresponding end-to-end delay is computed as: where D ch is the one-hop transmission delay and ch stands for Eq. (22) with respect to the AWGN channel, Eq. (28) for the Rayleigh flat fading channel and Eq. (35) for the Nakagami block fading channel. Fig. 10 represents the theoretical lower bound on EDRb and its corresponding mean delay over AWGN, Rayleigh flat fading and Nakagami block fading channels. The corresponding mean delay is obtained by Eq. (40). It can be noticed that the minimum value of EDRb can be reached by following for each hop the optimum one-hop distance. It is shown in Section 6 that this lower bound is also valid for 2-dimensional Poisson distributed networks using simulations. Energy-Delay Trade-off A trade-off between energy and delay exists. For instance, when considering long-range transmissions, a direct single-hop transmission needs a lot of energy but yields a shorter delay, while a multi-hop transmission uses less energy but suffers from an extended delay, as shown in Fig. 10. This section concentrates on the analyses of the energy-delay trade-off for both the one-hop and the multi-hop transmissions. In Fig. 11, the mean delay is computed with Eqs. (22), (28) and (35) and the mean energy consumption is calculated by Eq. (12) over each kind of channel. The lowest points on the three curves represent the minimum energy consumptions possible for each type of channel. They correspond respectively to the energy-optimum power values P 0g = 535.87mW , P 0f = 734.12mW , P 0b = 535.87mW, which are the same as the ones obtained with Eqs. (23), (29) and (33) in Section 3. Energy-delay trade-off for one-hop transmissions In Fig. 11, each curve can be analyzed according to the transmission power used to obtain the energy-delay value. On each curve, the points on the left of the minimum energy point are obtained with transmission powers higher than the energy-optimum power value P 0 . The points on the right (i.e. experiencing higher delays) are obtained for transmission powers smaller than the energy-optimum power value P 0 . When P t is increasing and P t > P 0 , the energy consumption increases drastically while the mean delay decreases as the link gets more and more reliable. Conversely, when P t is decreasing and P t < P 0 , the energy consumption increases at a slower pace while the mean delay increases as the link gets more and more unreliable. More and more retransmissions are then performed, using more energy and increasing the one-hop transmission delay. Energy-delay trade-off for multi-hop transmissions In Section 4.2, the lower bound on the energy efficiency for a given transmission distance and its corresponding delay are analyzed, determining the point of minimum energy and largest delay for a multi-hop transmission. However, in some applications subject to delay constraints, the energy consumption can be raised to diminish the transmission delay.
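The hop-count rule and the path-level lower bound of this section can be written compactly as below. The tie-break between ⌊d/d0⌋ and ⌊d/d0⌋+1 hops is expressed as an explicit comparison of the mean path energies, which is one natural reading of the criterion whose expression is omitted above; path_energy, per_hop_edrb and per_hop_delay can be built from the one-hop sketches given earlier.

```python
import math

def optimal_hop_count(d, d0, path_energy):
    """Hop count minimising the mean path energy for end-to-end distance d, given the
    energy-optimal one-hop distance d0. path_energy(n) must return the mean energy of
    an n-hop path with equidistant hops of length d / n."""
    ratio = d / d0
    n = max(1, math.floor(ratio))
    if ratio <= 1.0 or ratio == n:
        return n
    return n if path_energy(n) <= path_energy(n + 1) else n + 1

def path_energy_and_delay(d, n_hops, per_hop_edrb, per_hop_delay):
    """Eq. (39)/(40) form: per-hop EDRb-based energy and per-hop delay summed over
    n equidistant hops of length d / n_hops."""
    d_hop = d / n_hops
    return (n_hops * per_hop_edrb(d_hop) * d_hop,   # mean path energy per bit (J/bit)
            n_hops * per_hop_delay(d_hop))          # mean path delay (one-hop units)
```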
Therefore, the energy-delay trade-off for multi-hop transmissions is analyzed in the following. To determine the energy-delay trade-off for multi-hop transmissions, we still consider a linear homogeneous network and show in Theorem 2 that the minimum mean delay is also obtained for equidistant hops. Theorem 2 In a homogeneous linear network, a source node x sends a packet of N b bits to a destination node x ′ using n hops. The distance between x and x ′ is d. The lengths of the hops are d 1 , d 2 , . . . , d n , respectively, and the mean end-to-end delay is referred to as D(d). The minimum mean end-to-end delay Dtot min is given by: if and only if d 1 = d 2 = . . . = d n . Proof The mean delay of each hop is defined by D m , m = 1, 2, . . . , n. Since each hop is independent of the other hops, the mean end-to-end delay is obtained by: Hence, the problem can be rewritten as: minimize Dtot where λ = 0 is the Lagrange multiplier. According to the method of the Lagrange multipliers, we obtain: Eq. (42) shows that the minimum value of F is obtained in the case ∂D 1 /∂d 1 = ∂D 2 /∂d 2 = . . . = ∂D n /∂d n = −λ which, following the same argument as in Theorem 1, yields d 1 = d 2 = . . . = d n . Based on Theorem 1 and Theorem 2, we conclude that, for a pair of source and destination nodes with a given number of hops, the only scenario that minimizes both the mean energy consumption and the mean transmission delay is the one in which all hops have the same length along the linear path. Fig. 12 shows the relationship between the mean energy consumption and the mean delay for a certain transmission distance in AWGN, Rayleigh and Nakagami block fading channels. The mean delay is computed with Eq. (40) and the mean energy consumption is calculated with Eqs. (24), (30) and (34). According to Theorems 1 and 2, each relay of the multi-hop transmission adopts the same transmission power according to the optimal hop distance. No maximum limit for the transmission power is considered in the computation. However, it has to be taken into account in practice. As shown in Fig. 12, we use d = 380m for the AWGN and the Nakagami block fading channels and d = 50m for the Rayleigh fading channel. The corresponding optimal numbers of hops are 2, 2 and 3, respectively, which corroborates the results of Fig. 12. The bold black line gives the mean energy-delay trade-off. Knowing this particular trade-off, the routing layer can decide how many hops are needed to reach the destination under a specific transmission delay constraint. The trade-off curve reveals the relationship between the transmission power, the transmission delay and the total energy consumption: 1. For smaller delays (fewer hops), more energy is needed due to the high transmission power needed to reach nodes located far away. 2. Increased energy consumption is not only triggered by communications with few hops; it also arises for communications with several hops when the use of a reduced transmission power leads to too many retransmissions, which also wastes energy. Hence, decreasing the transmission power does not always guarantee a reduction of the total energy consumption. 3. For a given delay constraint, there is an optimal transmission power that minimizes the total energy consumption. Though the lower bound on the energy-delay trade-off is derived for linear networks, it is shown by simulations in Section 6 that this bound also holds for 2-dimensional Poisson distributed networks.
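Under these assumptions, Fig. 12-style trade-off curves can be traced numerically by sweeping, for each hop count, a common per-hop transmit power and recording the resulting (delay, energy) pairs. The sketch below reuses edrb(), mean_delay() and path_energy_and_delay() from the earlier sketches; the 1–40 dBm grid mirrors the power sweep used in the simulations of Section 6, and the fixed 380 m distance matches the example above (evaluated here with the placeholder AWGN model, so only the shape is indicative).

```python
D_TOTAL = 380.0   # end-to-end distance in metres, as in the trade-off example

trade_off_curves = {}
for n in range(1, 6):                                # 1 to 5 hops, as in the simulations
    points = []
    for p_dbm in range(1, 41):                       # per-hop power swept from 1 to 40 dBm
        p_t = 10 ** ((p_dbm - 30) / 10.0)            # dBm -> watts
        energy, delay = path_energy_and_delay(
            D_TOTAL, n,
            per_hop_edrb=lambda d_hop: edrb(d_hop, p_t),
            per_hop_delay=lambda d_hop: mean_delay(d_hop, p_t))
        points.append((delay, energy))
    trade_off_curves[n] = points
# The lower-left envelope of these curves is the mean energy-delay trade-off
# (the bold line in Fig. 12).
```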
Simulations in Poisson distributed networks The purpose of this section is to determine the lower bound on the energy efficiency and on the energy-delay trade-off in a 2-dimensional Poisson distributed network using simulations. The goal is to show that the theoretical results obtained for a linear network still hold in such a more realistic scenario. We introduce this section by defining the characteristic transmission range. Characteristic transmission range The characteristic transmission range is defined as the range d c where EDRb 1hop(d c ) = EDRb 2hop(d c ), i.e., the total energy consumption of a two-hop transmission is equal to that of a one-hop transmission [21], as shown in Fig. 13. In a geographically aware network, the knowledge of d c at the routing layer is very useful to decide whether the optimal transmission can be done in one or two hops. Hence, when the transmission distance d is greater than d c , the use of a relay node is beneficial; otherwise, a direct transmission is more energy efficient. Simulation setup In the simulations, the lower bound on EDRb and on the energy-delay trade-off are evaluated in a square area A of surface S A = 900 × 900m 2 . The nodes are randomly deployed according to a Poisson distribution: where ρ is the node density. All the other simulation parameters concerning a node are listed in Table 1. We set the node density at ρ = 0.001/m 2 to ensure a full connectivity of the network [14]. The decode-and-forward transmission mode is adopted in the simulations. The network model used in the simulations relies on the following assumptions: the network is geographically aware, i.e. each node knows the positions of all the nodes of the network; a node can adjust its transmission power according to a given transmission range, which is determined by the routing layer using Eq. (29), (23) or (33) for the AWGN, Rayleigh or Nakagami block fading channel, respectively; and a Time Division Multiple Access (TDMA) policy is assumed. Simulations of the lower bound on EDRb In these simulations, a very simple routing strategy is adopted as follows (a code sketch of this greedy selection rule is given at the end of this section): Step 1: The source node estimates if the distance between the source and the destination node is smaller than d c ; if so, the packet is transmitted directly, if not, go to Step 2. Step 2: Select the nodes whose distance from the source is in the range d/N hop0 ± (d c − d 0 ). If no node is chosen, expand the range step by step (size d 0 ) until reaching the destination node. Step 3: Choose the node closest to the destination node among the nodes chosen in Step 2. Step 4: Repeat Step 1 to Step 3 until the destination node is reached. In the simulations, we test all pairwise source-destination nodes. The packet size is 2560 bits. Then, for each pair of nodes, we calculate the end-to-end energy consumption and the Euclidean distance. The simulation is repeated 1500, 600 and 100 times for the node densities 0.0001 node/m 2 , 0.0002 node/m 2 and 0.001 node/m 2 , respectively. Fig. 14 shows the simulation results for the energy efficiency EDRb for different node densities, a Nakagami block fading channel and BPSK modulation. We have d 0b = 134.16m and d c = 187m in this case. These results show that: 1. The theoretical lower bound on EDRb is adequate for a 2-D Poisson network although its derivation is based on a linear network. When the node density is 0.001/m 2 , the theoretical lower bound and the one obtained by simulations coincide.
For this density, a full connectivity of the network exists. Hence, we can conclude that our theoretical lower bound for the average energy efficiency is suitable for Poisson networks. 2. When the node density is reduced, theoretical and simulation-based curves for the mean EDRb diverge as the end-to-end transmission distance d increases. In that case, the source node cannot find a relay node in the optimal transmission range and has to search for a more distant relay node, which increases the energy consumption. 3. Unreliable links play an important role in energy savings. In the simulations, the transmission power is adapted according to the transmission distance on the basis of the analysis of Section 3. Hence, unreliable links also contribute to attaining the lower bound on EDRb (as presented in Figs. 2 and 3, the optimal link probability is about 0.72). Adaptive transmission power is not available in many cheap sensor nodes. Therefore, we consider a fixed transmission power for each node in the simulation, which is set to the energy-optimal transmission power of Eq. (16). Simulation results for a fully connected network are shown in Fig. 15. Compared to the adaptive transmission power mode, nodes with fixed transmission power show a slightly higher EDRb, i.e., lower energy efficiency. Nevertheless, the advantage in terms of simplicity due to the use of fixed transmission powers makes the small increase in energy consumption worthwhile. Simulations of the energy-delay trade-off The simulations regarding the energy-delay trade-off are also implemented for a Nakagami block fading channel and for a fixed end-to-end transmission distance of 380m. For each pair of nodes, the source node tries to use 1 to 5 hops in turn. The following relay selection strategy is adopted once the number of hops is known: Step 1: Calculate the hop range according to the hop number, i.e., 380m/hop number. Step 2: Select the set of relay nodes that belong to the 1-hop transmission range (1-hop length). If the set of relay nodes is empty, extend the range by 1-hop length until reaching the destination node. Step 3: Choose the node closest to the destination node among the nodes chosen in Step 2. Step 4: Repeat Step 2 and Step 3 until the destination node is reached, then return the selected relay nodes. The source node and the selected relay node(s) transmit the packet with the same transmission power, and the transmission power starts from 1 dBm and increases in 1 dBm steps up to 40 dBm. Each simulation is repeated 50 times. Then, we compute the delay and the energy consumption for each route. Finally, we obtain the mean delay and mean energy consumption for the same hop number. In this way, we obtain the lower bound of the energy-delay trade-off for three different node densities. In Fig. 16, simulation results are given for different node densities. For a node density of 0.01/m 2 , the lower bound on the energy-delay trade-off is reached since there are enough nodes to find a suitable relay given the delay constraint. This result indicates that the theoretical lower bound on the energy-delay trade-off is valid for a Poisson network though its derivation is based on a linear network. For smaller node densities, the energy-delay trade-off obtained by simulations diverges from the lower bound since non-energy-optimal relays have to be used, which increases both the energy consumption and the transmission delay.
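The greedy relay-selection rule of the EDRb simulations (Steps 1–4, referenced earlier) can be sketched as follows on a randomly deployed square field. The values of d0 and dc are placeholders taken from the Nakagami example (134 m and 187 m); the node set, seed and the rounding used for the hop count are illustrative simplifications, not the exact simulation code.

```python
import math, random

random.seed(1)
SIDE, RHO = 900.0, 0.001
NODES = [(random.uniform(0, SIDE), random.uniform(0, SIDE))
         for _ in range(int(RHO * SIDE * SIDE))]      # ~810 nodes over the 900 m x 900 m area

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(src, dst, d0, dc):
    path, cur = [src], src
    while dist(cur, dst) > dc:                        # Step 1: direct hop once closer than dc
        n_opt = max(1, round(dist(cur, dst) / d0))    # simplified stand-in for N_hop0
        target = dist(cur, dst) / n_opt               # Step 2: aim for the optimal hop length
        band, candidates = dc - d0, []
        while not candidates and band <= dist(cur, dst):
            candidates = [n for n in NODES
                          if abs(dist(cur, n) - target) <= band
                          and dist(n, dst) < dist(cur, dst)]
            band += d0                                # widen the search ring if empty
        if not candidates:
            break                                     # no usable relay: finish with a direct hop
        cur = min(candidates, key=lambda n: dist(n, dst))   # Step 3: closest to destination
        path.append(cur)                              # Step 4: repeat until within dc of dst
    path.append(dst)
    return path

route = greedy_route((10.0, 10.0), (850.0, 870.0), d0=134.0, dc=187.0)
```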
Conclusions This paper, using realistic unreliable link model, explores the low bound of energy-delay trade-off in AWGN channel, Rayleigh flat fading channel and Nakagami block fading channel. Firstly, we propose a metric for energy efficiency, EDRb, which is combined with the unreliable link model. It reveals the relation between the energy consumption of a node and the transmission distance which may contribute to determine optimal route at the routing layer. By optimizing EDRb, a closed form expression of the energy-optimal transmission range is obtained for AWGN, Rayleigh flat fading and Nakagami block fading channel. Based on this optimal transmission range, the lower bound on EDRb for a multi-hop transmission using a linear network is derived for the three different kinds of channel. In addition, the lower bound on the energy-delay trade-off is studied for the same multi-hop transmission over a linear network. Results are then validated using simulations of a 2-D Poisson distributed network. Theoretical analyses and simulations show that accounting for unreliable links in the transmission contributes to improve the energy efficiency of the system under delay constraints, especially for Rayleigh flat fading and Nakagami block fading channel. A Derivation of P 0 and d 0 Substituting (8) and (9) into (14) and (15), we obtain: where p ′ l (γ) is the derivative of the function p l (γ). Because E c + K 1 P t is greater than 0, simplifying the equation set (44) yields: Solving the equation set (45) and substituting γ with (9), we have: and . B Derivation of the optimal transmission range in AWGN channel According to (8), the link model in AWGN channel is given by: where BER(γ) is the Bit Error Rate (BER). A closed form of BER is described in [22] for coherent detection in AWGN channel: The closed form expression of d 0 can not be obtained using the exact BER(γ). A simplified tight approximation of BER(γ) is obtained when β m · γ b ≥ 2 by using the method proposed in [23]: where exp(·) represents the exponential function. Fig. 17 shows the relation between the approximation and the exact values of the BER. Therefore, the optimal transmission range d 0g is obtained by substituting (48) and (16) into (15): where W −1 [·] is the branch satisfying W(x) < −1 of the Lambert W function [18]. C Derivation of the optimal transmission range in Rayleigh flat fading channel There is a general expression for the BER in Rayleigh flat fading channel in case ofγ ≥ 5 in [22]: where α m and β m are the same as those in (48). Substituting (50) and (9) into (12), we have: Substituting (51) and (16) into (14), the optimal transmission range d 0f in Rayleigh flat fading channel is obtained: D Derivation of the optimal transmission range in Nakagami block fading channel The exact link model in Nakagami block fading channel is [14]: where BER(γ) refers to (48). When m = 1 (Rayleigh block fading) and α m = 1, the approximation of (53) is found: Substituting (55) and (16) into (14) yields the optimal transmission range d 0b :
2008-07-29T06:32:50.000Z
2008-07-29T00:00:00.000
{ "year": 2008, "sha1": "071e888d136153493f15cedc52d87d1e1be26535", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b1826a8f0b8b03c5797a94049ff5df271a0e4855", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
235313853
pes2o/s2orc
v3-fos-license
Timing Configurations Affect the Macro-Properties of Multi-Scale Feedback Systems Multi-scale feedback systems, where information cycles through micro- and macro-scales leading to adaptation, are ubiquitous across domains, from animal societies and human organisations to electric grids and neural networks. Studies on the effects of timing on system properties are often domain specific. The Multi-Scale Abstraction Feedbacks (MSAF) design pattern aims to generalise the description and understanding of multi-scale systems where feedback occurs across scales. We expand on MSAF to include timing considerations. We then apply these considerations to two models: a hierarchical oscillator (HO) and a hierarchical cellular automata (HCA). Results show how (i) different timing configurations significantly affect system macro-properties and (ii) different regions of time configurations can lead to the same macro-properties. These results contribute to theory, while also providing useful insights for designing and controlling such systems. I. INTRODUCTION Multi-scale systems are those systems where different scales of time, space, or information granularity interrelate via information flows. If information cycles through the system leading to the adaptation of system entities, they become multi-scale feedback systems [42]. For example, workers in an organisation send information about their state to their managers, who then send back commands leading to changes in their behaviours. Similarly, foraging ants lay pheromones, forming a trail that affects their behaviour. In autonomic systems, managed resources are monitored and control commands issued for self-adaptation [28]. In these examples, information from the micro-scale (workers, ants, resources) is abstracted onto a macro-scale, and some adaptation at the micro-scale occurs based on information flowing back down. Such feedback cycles can be repeated at recursively higher scales, with increasing abstraction tied to ever larger system parts, e.g., multi-level management organisations, autonomic systems [32], [47], or plants 'controlling' foraging ants to disperse their seeds, by attaching food packages to their grains [40]. In this paper, we use both level and scale to express the idea of multiple amounts of time, space, or information granularity in a system. While multi-scale feedback systems can be found across all domains, their generic properties remain under-explored. In previous work, we introduced the Multi-Scale Abstraction Feedbacks (MSAF) design pattern, as a means to generalise feedback cycles of information flows operating at multiple abstraction levels, in systems with different types of entities, structures, and functions [10] [9]. In this pattern, scales are identified in relation to information abstraction and are orthogonal to how such abstractions are implemented. A macroproperty at a higher scale can be tied to an exogenous macroentity (e.g., a manager in an organisation, different from the workers) but can also be micro-distributed among microentities at a lower scale (e.g., knowledge of power relations distributed across members of an animal society [21]), or composed from the collective structure of micro-entities (e.g., forest patch shapes affecting tree growth [18]). Such multiscale design allows coordinating increasingly large-scale systems, via a divide-and-conquer approach. Each scale may process similar amounts of information by making a different trade-off between information accuracy and control scope. 
Another important trade-off is between a system's reaction time and the control scope considered, at different scales. Such timing aspects depend on inherent communication and processing delays, process execution frequencies, and adaptation lags (i.e., how long before adaptation takes effect). The question of how such timing configurations affect the behaviour of multi-scale systems, in particular, has been approached primarily in domain-specific ways. General insights that would facilitate cross-domain transfer remain vague and untested. E.g., it is often said that higher levels must operate slower than lower ones, to ensure system stability [48], [32], [47]. Yet, excessive communication delays between macro-and microscales can cause dysfunction depending on system goals [20]. This paper aims to reduce the gap between highly generic remarks and application-specific practices in matters of time concerns. First, we expand the MSAF pattern with domainindependent time-related aspects (sec. III). We then illustrate the impact of chosen timing aspects on system macroproperties via two generic case studies with multiple potential applications (secs. IV and V). The studies expand previous multi-scale oscillator models: a biochemical model of hierarchical oscillators (HO) [29] and a hierarchical cellular automata (HCA) [11]. We focus on these examples as many real-world systems are characterised by oscillating patterns, from collective behaviour in animal groups [15] [37] and circadian cycles in the brain [27] to opinion dynamics in social networks [38], clock synchronisation in distributed computing systems, and the coupled motion of pendulum clocks [13]. The contribution of our results is two-fold. First, we show how different timing configurations can significantly affect system macro-properties. This allows using time delays as configuration parameters for changing system behaviours. Second, we show how different regions of time configurations lead to the same macro-property. This can be used to improve system robustness to time disturbances. These general principles are relevant to domain practitioners, as key factors to be considered when modelling, designing, configuring, or managing multi-scale feedback systems within each specific domain. A. Timing in Multi-Scale Systems The role of timing in multi-scale systems has been explored mostly in either domain-specific ways (e.g., hierarchical smart grids [43], houses [3] and vehicles [1]) or in generic terms (e.g., multi-level design patterns in self-adaptive systems [47], autonomic systems [32], organic computing (OC) [41], selfaware systems [12], and multi-agent systems [34]). In both cases, results are difficult to reuse and transfer across domains. It is generally considered that lower levels should execute faster than higher ones. While this applies to most systems, the underlying constraints and variants are rarely discussed. Exceptions may also exist depending on desired behaviour (e.g., stock markets may not aim to reach steady state). Similar examples can be found in natural system studies. In the field of ecology, multi-scale systems are usually nested. As macro-properties at higher levels arise from the composition of levels below, it is often taken for granted that higher levels operate slower than lower ones, and that this is necessary for system stability [2]. 
Similarly, research in biology and paleontology that has focused on nested hierarchies assumes that different timescales (with higher levels operating slower than lower ones) are inherent to such systems [44]. Institutional and policy studies, on the other hand, tend to focus on multiscale systems with exogenous macro-entities (e.g., higher-level bureaucracies send commands to lower ones). Here, delays are often described as dysfunctional, as they can lead to policy ineffectiveness (as upper levels send out-dated commands to controlled resources) [6] [33] [46]. Building on the examples of coral reef formation and power dynamics in Macaque societies, [20] argue that slow variables (at the macro-scale) lead to the adaptation of micro-entities by reducing environmental uncertainty, but that if these variables are too slow, they cannot be detected by micro-entities, leading to a slow variable lock-in. The fact that the slowness of macro-entities allows for them to be perceived as constant by micro-entities is also highlighted by [17] in the context of adaptive neural code, showing how in the vision system of flies adaptation occurs at different timescales, with longer ones providing a separate information transmission channel. [5] links organism motor functions to primitive language, indicating that macroproperties (or 'symbols') allow to delay immediate reactions to external changes, so as to take into account previous experiences and generate more complex behaviours. The control systems community has studied timing in multiscale systems using different terminology, e.g., hierarchical, singularly perturbed, multiloop, nested, and cascade control systems. Hierarchical control systems are addressed in [19], where hierarchies are defined by functions or time horizons of the multilayer configuration. The highest layer necessarily has the longest time horizon to achieve optimal control for the system. Applications of singular perturbation theory to control systems were reviewed in [31], where systems are decomposed into parts with fast and slow dynamics. Multiloop [8], nested [4], and cascade [22] systems generally refer to systems in which multiple feedback loops control variables of importance at different scales. For example, in aerospace applications the different loops address (from micro-to macro-scale): attitude, attitude rate, and guidance. In these applications it is generally assumed that higher levels (corresponding to outer loops in the nested system) operate at a slower rate, or that it can be shown in specific situations what the relative rates should be for stability and optimal performance. B. Coupled Oscillators Our two case studies build on existing models of coupled oscillators. Kim et al.'s model [29] explores the synchronisation of coupled biochemical oscillations in cellular systems. Synchronisation is affected by the coupling strength, with two thresholds defining three different behaviours: oscillation without synchronisation, oscillation with synchronisation, and no oscillation. We expand on this model by connecting oscillators in multi-scale configurations, and adding micro-macro communication delays that affect synchronisation (sec. IV). For the HCA case, we expand on the model in [11], where Cellular Automata (CAs) were organised in a multiscale configuration, generating macro-structures from uniform micro-scale conditions. 
We expand on this model to explore how different combinations of execution frequencies at various scales affect micro and macro oscillating behaviours (sec. V). As highlighted in [35], natural oscillatory processes tend to follow a multi-scale organisation, with macro-scale frequencies affecting micro-scale behaviour. In most oscillators, communication and adaptation are not instantaneous [39] [16]. E.g., biological systems require a minimum interval to transmit information [7]. The same applies to most artificial systems. Hence, time-related questions are relevant both in the context of oscillator behaviour, within a wide range of applications, and for multi-scale feedback systems, more generally. Several studies discussed the impact of time delays on the behaviour of coupled oscillators (e.g., [7] [26] [30]). These studies suggest that time delays can significantly affect system dynamics [14]. A. Overview of the MSAF Design Pattern The MSAF design pattern [10] [9] models feedback loops in multi-scale systems in terms of information flows that merge, split, and cycle through different abstraction levels ( Fig. 1). Information flows are streams of changes (attached to a material substrate) which can be observed, interpreted, and used for adaptation in line with semantic definitions of information [25]. Such information flows merge and aggregate information at increasingly higher abstraction levels (bottom-up), then split and reify information again at more detailed levels (top-down), forming multi-scale feedback cycles. A single feedback loop consists of the following steps: 1) collection and abstraction of state information; 2) information processing (e.g., decision); 3) information reification (control command); and 4) adaptation. These steps match existing feedback designs in autonomic (MAPE-K) [28] and organic computing [41], or feedback control systems [23]. Extending this design to multiple scales implies adding further feedback loops on top of each other. This involves two extra steps for connecting feedback loops between levels (in green): a) sending state abstraction of L k to upper level L k+1 ; and b) receiving control information from L k+1 , to be used as control input, or goal, in L k 's processing step (2). From L k 's perspective, all upper-level feedbacks can be modelled as a single one (dotted green arrow), at L k+1 ; and all lower levels as one adaptation process (dotted blue arrow), at L k−1 . Hence, a managed resource (at L 0 ) receives feedback controls that merge information from several scales, covering increasingly larger system scopes. Such multi-scale feedback design helps to control large-scale systems by limiting the amount of processed information at each level and by mixing quick local reactions with slower coordinated responses. B. Time Considerations in Multi-Scale Feedback Loops Generalising from feedback systems [23] and control theory [36], we distill several key timing considerations impacting system behaviour: communication delay; processing time; adaptation lag; sample time (for digital systems). To simplify, we merge these into two main timing aspects, applicable to all MSAF steps: i) execution delay (τ ), the step execution duration (including communication and processing); and ii) execution interval (∆t), how often the step executes. 
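To fix ideas, the following minimal Python sketch (our own illustration, not part of the MSAF pattern's published artefacts) represents a single feedback level with the four loop steps and the two timing aspects just introduced. All class, method, and attribute names are assumptions made for illustration only.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class FeedbackLevel:
    # One MSAF level L_k: it abstracts state upward (a), decides using the goal
    # received from L_k+1 (b), and reifies a control command for L_k-1.
    abstract: Callable           # step (1) + (a): micro-state -> abstract state for L_k+1
    decide: Callable             # step (2): (abstract state, goal from above) -> decision
    reify: Callable              # step (3): decision -> control command for L_k-1
    tau_mng: float = 0.0         # execution delay of the management flow (steps 1-3, a, b)
    dt_mng: float = 1.0          # execution interval of the management flow
    tau_adpt: float = 0.0        # adaptation lag of step (4), performed by the level below
    dt_adpt: float = 1.0         # execution interval of the adaptation step
    goal: Optional[object] = None  # last control information received from L_k+1

    def run_management(self, micro_state):
        # Executes steps (1)-(3); returns the state sent up and the control sent down.
        abstract_state = self.abstract(micro_state)
        decision = self.decide(abstract_state, self.goal)
        return abstract_state, self.reify(decision)

Stacking several such levels, feeding each level's abstract state into the level above and storing each control command as the level below's goal, reproduces the multi-scale cycle of Fig. 1; the timing fields are what the following paragraphs reason about.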
We group MSAF steps (1-3) (abstraction, processing, and control) into a single 'management flow' (including inter-level abstraction (a) and control (b) for higher levels), featuring an execution delay τ mng and interval ∆t mng . The adaptation step (4) also features an execution delay τ adpt and interval ∆t adpt . All timing considerations from 'classic' feedback control systems apply here. We highlight some of these below without aiming for a comprehensive review. Delay in the management flow τ mng implies the risk of providing a control command (output) based on an outdated monitored state (input). It may lead to oscillations, longer settling times, or instability [23]; and decrease reactivity to state disturbances. Yet, if τ mng << τ adpt there is a risk of overreaction from the management flow, i.e., repeating or exacerbating a control command as it fails to perceive the effects of a previous command. This risk is removed when controls are not 'cumulative' (e.g., goaloriented commands can be repeated with the same effect). With respect to execution intervals, the smaller the ∆t mng (i.e., the management flow executes more often), the more reactive it can be to state changes, while again, risking to overreact if it executes before previous controls take effect. Overreaction is avoided if ∆ adpt < ∆ mng (also considering delays); or when controls are merely repeated (without increased amplitudes) and the adaptation flow only executes the last one (if ∆ adpt > ∆ mng ). Ideally, the management flow would be fast to execute (τ mng → 0) but only execute at intervals large enough to allow for the effects of its commands to take effect in the adaptation flow (∆ mng ∼ τ adpt ). Other combinations of execution delays and intervals are also viable (domain and application-specific). The above considerations become more complex when feedback cycles extend across multiple scales, incurring further cross-scale delays and combinations of their relative values. When management flows at different scales execute in parallel (common case), each flow at L k gets abstract state information from L k−1 and control information from L k+1 , to issue commands back to L k−1 . For L k , information from L k−1 is more recent than from L k+1 , as the latter would have crossed at least an extra scale. Yet, information from L k+1 includes abstractions about broader system scopes (under control levels from L k+1 to L M −1 ). This allows L k managers to coordinate their local actions based on wider system views. Hence, control information for L 0 entities merges information flows from all system scales, with lower-scales information being narrower but more recent (or accurate) and higher-scales information being broader but more outdated. Hence, higher-level management flows would have larger delays than lower ones, as it takes more time for their input and output flows to travel to and from L 0 . This situation is due to system implementation constraints (i.e., inherent communication and processing delays, at each level), rather than being a desirable system design property. Still, in case of rapid management relative to adaptation delay (τ mng <<τ adpt ) it makes sense to execute higher managers less often than lower ones (∆ mng,k >∆ mng,k−1 ), to avoid overreactions or instability. However, increasing ∆ mng,k may decrease the system's coordinated responses. 
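The overreaction risk discussed above can be made concrete with a small, self-contained toy simulation (ours, not taken from any of the cited systems): a single management flow issues cumulative corrections towards a target, and each correction only takes effect after an adaptation lag. All names and numeric values below are assumptions chosen for illustration.

import collections

def simulate(dt_mng, tau_adpt, target=10.0, steps=60):
    # Toy single-level loop: corrections are cumulative and are applied
    # tau_adpt steps after being issued by the management flow.
    x = 0.0
    pending = collections.deque()            # (apply_time, correction) pairs
    trace = []
    for t in range(steps):
        # adaptation: apply corrections whose lag has elapsed
        while pending and pending[0][0] <= t:
            x += pending.popleft()[1]
        # management flow: observes x and issues a full correction every dt_mng steps
        if t % dt_mng == 0:
            pending.append((t + tau_adpt, target - x))
        trace.append(x)
    return trace

# With dt_mng=1 and tau_adpt=5 the manager re-issues corrections before earlier ones
# take effect, so x overshoots the target; with dt_mng=6 (> tau_adpt) it settles cleanly.
print(max(simulate(dt_mng=1, tau_adpt=5)))   # overshoots well above 10
print(max(simulate(dt_mng=6, tau_adpt=5)))   # stays at the target of 10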
Typical solutions combine fast, accurate, localised reactions from lower-scales, for avoiding disaster (e.g., reflexes in organisms, obstacle avoidance in autonomous cars) with slower, more context-aware responses, for coordinated behaviour (e.g., strategic planing in organisms, rerouting autonomous cars). Various combinations of crosslevel execution delays and intervals lead to different system behaviours (macro-properties). We focus on a few examples illustrated via our two applications (HO and HCA). A. HO Overview We use the coupled biochemical oscillator model from [29] which is extended to: i) a flat network of more than two coupled oscillators; and ii) a hierarchy of oscillators (HO). In coupled biochemical oscillators, each oscillator consists of two interacting components X and Y , with coupling between the X components as shown in Fig. 2. To accommodate multiple oscillators, we generalise this model to include more oscillators arranged in a "flat" configuration (i.e., where all the oscillators are peers and there is no hierarchy present in the network). Fig. 3(a) shows this system with four oscillators, which can have either P or N type coupling. Each X i promotes or inhibits X i+1 depending respectively on P or N type coupling. The HO uses a repetitive design, with oscillators being the entities repeating at different levels. When integrated within a multi-scale structure, these entities differ from their standalone forms by taking into account (detailed) state information from the lower levels and (abstract) control information from the upper levels. The macro-entities are oscillators whose models are modified from [29]. Fig. 3(b) shows the structure of the hierarchy of oscillators for three levels with two children per oscillator. The coupling occurs between the X components of the oscillators between levels. There is no direct communication between oscillators at a given level. B. Flat and HO Models The flat network of oscillators is modeled by the following differential equations; for i = 1..N , where N is the number of oscillators and i − 1 is within the range 1..N (so if i = 1, then i − 1 = N ). It is assumed that all oscillators are coupled with the same strength F , type P or N, and time delay τ . For the HO model, the differential equations for X depend on their level. At the highest level, oscillators only act as aggregators of information from the lower level. At the lowest level, they only receive feedback from their corresponding macro-entity. Oscillators at middle levels receive information from above and below. While the equation modelling the Y component of all oscillators is analogous to (2), those for the X components of the bottom, top, and middle levels respectively are: where In the HO model, the abstracted information is the average concentration of a substance X in micro-entities given bȳ X m,i (·). The feedback information that is sent down from macro to micro is the concentration of X in the macrooscillator, which impacts the concentration of X in the micro-oscillators through (3) and (5). Function Γ m,i (t − τ m ) in (6) communicates both the abstracted information from the lower level and feedback information from the higher level, with W controlling the relative importance of the feedback signal. C. Simulation Time and Sequence The oscillators are continuous time systems described by differential equations, digitized using a numerical solver. 
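As a rough illustration of what such a numerical solution involves, the Python sketch below integrates a small ring of delay-coupled two-component oscillators with a fixed-step Euler scheme. It is our own simplification, not the study's code: the van der Pol dynamics, the diffusive coupling form, and all parameter values are assumed stand-ins for the biochemical model, whose exact equations are given in [29].

import numpy as np

def simulate_delay_coupled(n_osc=4, F=2.0, tau=1.0, mu=1.0, dt=0.01, t_end=50.0, seed=0):
    # Fixed-step Euler integration of a ring of delay-coupled oscillators.
    # Van der Pol dynamics are an assumed stand-in for the biochemical model;
    # the delayed coupling of strength F acts on the X components.
    rng = np.random.default_rng(seed)
    steps = int(t_end / dt)
    lag = int(tau / dt)                    # delay expressed in integration steps
    X = np.zeros((steps + 1, n_osc))
    Y = np.ones((steps + 1, n_osc))
    X[0] = rng.uniform(0.0, 1.0, n_osc)    # random initial X in (0, 1), Y set to 1
    for k in range(steps):
        x_delayed = X[max(k - lag, 0)]     # history clamped to the initial state
        neighbour = np.roll(x_delayed, 1)  # X_{i-1}(t - tau) on a ring
        dX = mu * (X[k] - X[k] ** 3 / 3.0 - Y[k]) + F * (neighbour - X[k])
        dY = X[k] / mu
        X[k + 1] = X[k] + dt * dX
        Y[k + 1] = Y[k] + dt * dY
    return X, Y

X, _ = simulate_delay_coupled()
print(np.std(X[-1]))   # a small spread across oscillators suggests the ring has synchronised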
The coupled oscillator systems were simulated in MATLAB using the dde23 function to numerically solve the delay differential equations [29]. The differential equations were solved simultaneously over a specified time interval [0, T end ], resulting in a discrete time series of X and Y concentration values. With respect to the timing aspects described in Section III-B, there is a micro-to-macro abstraction delay and a macro-to-micro feedback delay, both characterised by τ m , which reflects the time it takes for the concentration of X to be transmitted (i.e., the transmission delay, in the literature). This means that the abstracted state and the feedback are based on old micro-state information. These semantics simulate communication delays τ mng for abstracted states, with negligible adaptation delay (τ adpt = 0) and continuously executing feedback cycles (∆ mng and ∆ adpt ). Delays at higher levels are always higher than at lower levels due to delay accumulation.
D. Experimental Settings
Configurations varied in coupling strength F , time delay, number of levels, and coupling type (PP and NN). Each configuration was simulated for 10 runs, with the initial X values of an oscillator randomly chosen from the interval (0,1) and the initial Y values set to 1. Values of oscillation frequency, amplitude, and synchronisation time were averaged over the 10 runs.
E. Overall Behaviour
The coupled oscillator systems exhibit three basic types of emergent behaviour: unsynchronised oscillation, no oscillation, and synchronised oscillation with all levels in phase. In the case of the HO, there is an additional type: synchronised oscillation with levels out of phase. These behaviours are shown in Fig. 4 (for M = 3 levels and C = 2 children). The three stacked time series plots show the oscillator X concentrations at each level plotted together, with the top level having one oscillator, the middle level having two oscillators, and the bottom level having four oscillators.
F. Experimental Results
In flat networks, it was found that synchronisation occurred consistently in PP coupled systems having no more than 5 oscillators and in NN coupled systems having no more than 10 oscillators in the network. Furthermore, the region of the parameter space that achieved synchronisation was a relatively small subset. To achieve consistent synchronisation in systems with more than 10 oscillators and for a larger range of F and τ values, it is necessary to have a hierarchical structure. For HO systems, we present two kinds of results relevant to our contribution (with analogs in the HCA model): the effect of system parameters on (1) generated macro patterns and (2) oscillation periods. Other results on oscillation amplitudes and synchronisation settling time are also briefly discussed.
1) Impact of time on generated macro patterns: In HO systems, Fig. 5 shows the emergent behaviour of the system for all configurations tested. In all cases, the bottom level synchronised for the middle range of F values (yellow region). For low F values, there was oscillation, but no synchronisation (light blue region). For high F values, there were no oscillations (dark blue region). The smaller hierarchy (M = 2) achieved synchronisation in a larger part of the parameter space. There are two distinct transition regions: from unsynchronised to synchronised oscillations and from synchronised oscillations to no oscillations. The transition from synchronised to no oscillations was deterministic. For both PP systems, oscillations only occurred for 0 ≤ F ≤ 3.5. This transition was unaffected by time delay.
In contrast, for both NN systems, the transition from oscillations to no oscillations occurred for F between 3 and 6, depending on the value of time delay. The transition from unsynchronised to synchronised oscillations was stochastic, as indicated by the color transition from light blue to yellow: yellow, synchronisation happened in each run; light blue, it did not happen in any run; in-between colors, synchronisation occurred only in some runs, according to the color scale to the right of the plot. 2) Impact of time on oscillation periods: Fig. 6 shows how the period of oscillation varies with the coupling strength and time delay. Time delay has a larger impact on the period for PP coupled systems while the effect is negligible for NN coupled systems. The number of levels had no effect on the period. Oscillation amplitude varies with time delay within the synchronised region, with a larger impact occurring in NN systems (in contrast to the effect on period). The number of levels (M = 2 and M = 4) had a negligible impact on amplitude; but fewer levels lead to faster synchronisation, for both PP and NN coupling. The effect of time delay on synchronisation time is more pronounced for NN coupling. Full results are omitted due to space constraints (Cf. https://gitlab.telecomparis.fr/ada.diaconescu/msaf (acsos21 directory)). G. Discussion These results show the effect of time delay on the system's macro patterns and their properties. For NN coupling τ affects the type of synchronisation, but not the amplitude. Conversely for PP coupling, τ affects the oscillation period, but not the synchronization type. For both types, the number of levels M affects unsynchronised to synchronised transition, due to the increased time delay caused by larger M . Further these results indicate that HO systems are advantageous compared to flat networks: (1) HO systems are able to synchronise more oscillators: 64 oscillators for HO, compared to a maximum of 5 and 10 for flat network (with PP and NN, respectively). (2) The desired synchronisation behaviour occurs in a larger region of the parameter space as noted by the large yellow regions in Fig. 4. In contrast, at their maximum size, flat networks achieved synchronisation for a single combination of coupling strength and time delay. (3) Due to their large synchronised region, HO systems are more robust to parameter variations. Moreover for NN coupling, a change in one parameter (time delay) can also be compensated by changing another parameter (coupling strength) to achieve synchronisation without affecting the amplitude. V. HIERARCHICAL CELLULAR AUTOMATA CASE STUDY A. HCA Overview Cellular Automata (CA) are discrete models where the state of each entity (cell) at t depends on the cell's previous state and on its neighbours' states, at t-1. Cells are usually arranged in a grid and their inter-dependency modelled via a rule set. CA, including coupled CA, have been employed to model a wide range of complex systems, including multiscales [24]. To analyse timing effects on such multi-scale systems, we reuse the Hierarchical Cellular Automata (HCA) simulator in [11]. It organises multiple CA into several scales (levels). Cross-level CA interactions follow the MSAF pattern: a) abstract state-information (bottom-up); and b) control commands, or goals (top-down). Each CA (except the top) has two rule sets: Expansive rules (R E ) increase the CA's number of live cells; Regressive rules (R R ) decrease them. 
The control goal from above dictates the CA's active rules (to execute). CAs at different levels have different R E -R R rule-pairs. Each CA at a lower level L k is mapped bidirectionally to a single cell of a CA at a higher level L k+1 . In the bottom-up mapping (a), the entire state of a lower CA is abstracted (based on the percentage of its live cells relative to a threshold T h k ) and sets the binary state of its mapped cell in a higher CA. In the top-down mapping (b), the state of each cell in a higher CA controls the rule activation of its mapped lower CA (i.e., sets R E or R R ). These bidirectional interactions form inter-level feedbacks, replicated at successive levels, up to the top (which only executes static rules). Simulations are deterministic. Table I summarises the main HCA concepts and notations (details in [11]).
B. HCA Notation & Inter-level Mapping
HCA consists of several levels (L k ), each with one or several CA (CA k,i ) (Fig. 7). Each CA k,i at a micro-level is mapped bidirectionally to one cell C k+1,j,s of a CA k+1,j at the macro-level: (a) the state abstraction (AS k,i ) of CA k,i (micro) is set as the state (CS k+1,j,s ) of its mapped cell C k+1,j,s (macro) (Eq. 7 and Eq. 8); and (b) the control goal (G k,i ) from the cell state CS k+1,j,s (macro) sets the active rule of its mapped CA k,i (micro) (Eq. 9 and Eq. 10).
Table I (main HCA concepts and notations):
- L k : Level k, with k = 0..M-1, M the number of HCA levels
- CA k,i : Cellular Automaton i at level L k , i = 0..N k -1, N k the number of CA at L k
- CA k,i ⇒ <state> : CA k,i converges to steady state <state>: either O P (oscillates with period P) or S X (stuck with X live cells)
- R k,i : Active rules (executing) of automaton CA k,i
- map(C k,i,s ; CA k−1,j ) : Mapping between cell C k,i,s and automaton CA k−1,j ; implies transfer of abstract state (up) and control goal (down)
- F q k : Activation frequency of level L k , i.e., the number of activations of L k after which CA k,i actually execute R k,i . F q = ...
C. Simulation Time & Sequence
An HCA simulation proceeds in discrete cycles, each one executing all levels successively, from bottom L 0 to top L M −1 . A cycle consists of M discrete steps t k (k=0..M-1), each one executing all CAs at the corresponding level L k . Each CA k,i in an active level L k : i) exchanges information with its macro-CA (sends AS k,i , Eq. 8; gets G k,i , Eq. 9); ii) sets its active rules (R k,i ) depending on its goal (G k,i , Eq. 10); and iii) steps (executes its active rules). As exceptions, CA 0,i (bottom) do not get abstracted states from below, using their previous state instead; and CA M −1 (top) does not get goals from above, using a static rule. During a step t k , all CAs at L k execute in parallel; the step ends when all CA k,i have finished executing. With respect to the timing aspects in subsec. III-B, HCA considers state abstraction delays as negligible, while controls incur a delay of one cycle between each two levels (i.e., τ mng,k = 1 cycle). Hence, abstract state is always up to date (i.e., it travels across all levels in one cycle), yet controls take M steps to arrive from top to bottom. Adaptation delay is also negligible (τ adpt = 0). Each level has an activation frequency (F q k ) (i.e., execution interval ∆ mng,k ): F q k = d means that L k only activates every d cycles. Finally, control commands G k are cumulative (repeating them exacerbates the effect), yet they do not increase their values if inactive micro-CAs ignore them.
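The cycle just described can be summarised in the following Python skeleton. It is our own reconstruction for illustration: the CA rule sets are hidden behind a placeholder step() method, the object interface (cells, live_fraction, set_rule) is assumed, and a single CA is assumed per level above the bottom, matching the three-level set-up used in the experiments.

def run_hca(levels, fq, th, n_cycles):
    # levels[k] is a list of CA objects exposing: cells (list of 0/1 states),
    # live_fraction(), set_rule(expansive) and step(). fq[k] is the activation
    # frequency of level L_k; th[k] is the abstraction threshold used when L_k
    # reports upward. CA (k, i) maps to cell i of levels[k+1][0].
    for cycle in range(n_cycles):
        for k, cas in enumerate(levels):          # steps t_0 .. t_{M-1}, bottom-up
            if cycle % fq[k] != 0:                # L_k only activates every fq[k] cycles
                continue
            for i, ca in enumerate(cas):
                if k + 1 < len(levels):
                    macro = levels[k + 1][0]
                    # (b) control goal down: the mapped macro cell, as left by the
                    #     previous macro activation, selects R_E (1) or R_R (0)
                    ca.set_rule(expansive=bool(macro.cells[i]))
                    # (a) abstract state up: live-cell fraction against the threshold
                    macro.cells[i] = int(ca.live_fraction() >= th[k])
                # the top level keeps its static rule; every active CA then steps once
                ca.step()
    return levels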
D. Experimental Settings
We set up a three-level HCA: L 0 (bottom), L 1 (middle) and L 2 (top) (Fig. 7). L 0 has 32 CAs (a 4x8 matrix), of 441 (21x21) cells each. This maps to a 32-cell (4x8) CA at L 1 , which maps to a one-cell CA at L 2 . To simplify HCA behaviour and analysis, we only experiment here with inversible rule-pairs: from any CA state, executing R E and then R R (or R R and then R E ) leads back to the same state. Non-inversible rules were exemplified in [11]; the results presented here do not apply to these. All experiments start with all CA 0,i in the same initial state, executing R 0,E ; and with CA 1 and CA 2 in the dead state (sending G=1 control goals until first changing to live states). Experiments vary in their configurations for: the two thresholds (T h 0 and T h 1 ) used to calculate the abstract states for L 1 and L 2 (Eq. 8); and the three activation frequencies (F q 0 , F q 1 and F q 2 ), setting the delay between subsequent level activations. To show the rule-independence of our results, we tested two inversible rule-pairs at L 0 : 1) Diamond and 2) Line. The T h 1 values tested lie in-between the four CA 1 states; all other values are redundant (i.e., they give the same results). Within each set, we ran tests with varying activation frequencies: F q 0 =1..2, F q 1 =1..5, F q 2 =1..5. A test with, e.g., F q=1-3-5 means that F q 0 =1, F q 1 =3, F q 2 =5. This means about 300 tests (2 rules x 3 T h 1 vals. x 50 F q vals.).
E. Overall Behaviour
A finite CA can only converge (⇒) to three behaviours: i) dead (S 0 ), all cells set to 0; ii) live-stuck (S X ), blocked in a state with X live cells (set to 1); iii) oscillating (O P ), cycling through a set of states, with the state sequence repeating every P steps. At L 0 , a CA 0,i 's behaviour depends on the goal pattern received from L 1 (i.e., the 1 & 0 sequence activating R E & R R ). If a goal pattern has more 0s than 1s, activating R R more than R E , then CA 0,i ⇒ S 0 . If R E activates more than R R , then CA 0,i ⇒ S 441 . For 'balanced' R E & R R patterns, CA 0,i ⇒ O P . Superposing the goal patterns from CA 1 's four states differentiates the CA 0,i into a maximum of three groups (Fig. 7): 1) Core CA 0,Co , the 12 CA 0,i at the core of L 0 's 4x8 matrix, mapped to the 12 live cells in CA 1 's Core state; 2) Corner CA 0,Cr , the 4 CA 0,i at the corners of L 0 's matrix, mapped to the 4 dead cells in CA 1 's Border state; and 3) Border CA 0,Bo , the 16 remaining CA 0,i on the borders of L 0 's matrix (no corners). In brief, CA 0,i have an expanding or regressing tendency (i.e., a growing or shrinking number of live cells) depending on the active rule set, R E or R R , respectively. When crossing T h 0 , this tendency is propagated (and accentuated) upwards through CA 1 . When crossing T h 1 , it reaches CA 2 , which inverts it. The inverse tendency is propagated downwards, back to the CA 0,i , which cross T h 0 the other way. The propagation process is repeated upwards with the opposite tendency, then inverted again at CA 2 . This creates an expansion-regression oscillation across levels. Because CA 1 executes its own rule-pair (R 1,E -R 1,R ), the CA 0,i differentiate, following different behaviours and converging to different states (e.g., 3 CA 0,i states in Fig. 7).
F. Experimental Results
We present two main kinds of results, relevant to our contribution. Firstly, we show how different activation frequencies constrain the possible oscillation periods P that may occur at HCA levels. We also note that many frequency combinations generate the same oscillation period P (macro-property), though not necessarily through the same state set.
Secondly, we show how different activation frequencies lead to different macro-patterns amongst CA 0,i , i.e., whether CA 0,Co , CA 0,Bo , CA 0,Cr converge to O P , S 0 or S X . The full results set is available from https://gitlab.telecomparis.fr/ada.diaconescu/msaf (acsos21 directory). 1) Impact of time on oscillation periods: Fig. 11 summarises results for Diamond rules, with T h 0 = 0.1, T h 1 = 0.7, F q 0 = 1; and F q 1 & F q 2 varying between 1 and 5. At L 0 , some CA 0,i oscillate (same O P ) and some end in a static state (S 0 or S 441 ). To simplify, we only show here the O P value for CA 0,i s that do oscillate; and discuss differentiated CA 0,i s (i.e., macro-patterns) in the next subsection. Equivalent results were obtained for F q 0 =2, in terms of obtained O P types. Similar results were obtained when increasing T h 1 to 0.9. The main difference was that for F q=1-3-4, we obtained O 12 at all levels, hence a=b=1; rather than O 24 -O 24 -O 8 as when T h 1 =0.7. Results for T h 1 =0.3 were also similar, the only difference occurring for F q ∈ {1-4-5, 1-2-3}, where the HCA ⇒ S 0 . Using Line rules produced equivalent O k,P types, when testing the same configuration ranges. A difference here is that the toroidal configuration means that a CA 0,i that grows (R E ) to all live cells can no longer regress (R R ), hence staying in S 441 . E.g., for F q 0 =1, F q 1 =5, F q 2 =1..5, we have CA 0,i ⇒S 441 , for all threshold combinations tested. G. Discussion Results on the impact of activation frequencies on the resulting behaviour show how oscillation periods P can be controlled via timing adjustments. Interestingly, P k only depends on cross-level activation frequencies (Eqs. 11 & 12), while the actual (inversible) rule-pairs and threshold configurations only impact the state set that such oscillations cycle through; and the CA 0,i macro-pattern. Also notably, a wide range of frequency combinations lead to similar oscillation behaviour, e.g., all combinations with a maximum frequency of 2 lead to O 4 forms; maximum F q of 3 lead to O 6 forms (and sometimes multiples, e.g., O 12 ); maximum of F q=4 to O 8 (and multiples, e.g., O 12 , O 24 ); and maximum of F q=5 to O 10 (and multiples, e.g., O 20 , O 30 ). These are important properties for obtaining generic oscillations (O P ), while being robust to certain disturbances (e.g., in thresholds or frequencies). Results on CA 0,i macro-pattern formation show how varying frequencies can lead to equivalent oscillation periods at L 0 (as above) yet occurring via different CA 0,i groups. Hence, activation frequencies become configuration parameters for shifting overall system behaviour (i.e., macro-pattern productions). As above, different frequency configurations can lead to the same macro-pattern, which can enhance robustness. VI. DISCUSSION, CONCLUSIONS & PERSPECTIVES This paper aimed to narrow the gap between highly general statements and domain-specific theory about timing in multiscale feedback systems. It highlighted cross-domain timing aspects, e.g., time delays and execution intervals, and their (combined) impacts on resulting system behaviour (macroproperties). Some of these phenomena were illustrated via two multi-scale oscillator simulators (hierarchical biochemical oscillators (HO) and cellular automata (HCA)) which are both generic and applicable to various domains. Experimental results from both examples show how timing confers a configuration parameter just as powerful as any other variable. 
Changing delays or execution intervals, in various cross-scale combinations, generates different outcomes, e.g., synchronisation type in HO; oscillation period and differentiation pattern in HCA. Several time configuration regions produce equivalent macro-behaviours, possibly improving system robustness to time disturbances. This may benefit applications that require rapid behavioural plasticity, without re-learning or reconfiguring other parameters (e.g., neuro-modulated artificial neural networks [45]). This contribution sets a basis for developing a comprehensive theory of timing in multi-scale feedback systems, helping practitioners to transfer and apply key insights across specific domains.
2021-06-04T01:15:46.502Z
2021-06-03T00:00:00.000
{ "year": 2021, "sha1": "f36d13de2c541ba9611b8db3330ed8ce3efeff95", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f36d13de2c541ba9611b8db3330ed8ce3efeff95", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science", "Physics" ] }
52313024
pes2o/s2orc
v3-fos-license
Assessment of Prostaglandin-Endoperoxide Synthase 2 and Versican gene expression profile from the cumulus cells: association with better in vitro fertilization outcomes Background Current methods for determining superior embryo quality (morphological assessment) are unable to compensate for poor pregnancy outcomes. Due to the importance of the cumulus-oocyte complex and the value of cumulus cells (CCs) as markers of embryo health, we determined the association between the CCs gene expression of the Prostaglandin-Endoperoxide Synthase 2 (PTGS2) and Versican (VCAN) with pregnancy. Methods One hundred forty-nine women, suffering from infertility and undergoing IVF, were included in this study (age: 29–46 years; BMI = 25.5 ± 5.0 kg/m2). Patients underwent a standard IVF protocol. CCs were isolated during oocyte retrieval, and their RNA was isolated using Trizol. The mRNA expression of PTGS2, VCAN, and L19 was measured by qPCR. The PVL index, (PTGS2 + VCAN)*L19normalized, was determined for each oocyte. Clinical pregnancy was confirmed by β-hCG and the presence of a fetal heartbeat. Associations were determined by ROC curves or logistic regression. Results There was no correlation between the PVL index and morphological scores. Using only single embryo transfers (SETs), we determined that the PVL index was associated with pregnancy (β-hCG: AUC = 0.87, 95%CI: 0.74–1.00) with an optimal cutoff value of 58.2. Using the complete cohort (consisting of SETs, and patients with 2, 3, or 4 embryos transferred), the presence of at least one embryo with a PVL index score ≥ 58.2 was associated with a greater probability of achieving pregnancy (β-hCG: odds ratio = 17.15, 95%CI: 6.82–43.18, p < 0.001). Conclusion Transferring at least one embryo with a PVL index score ≥ 58.2, generates a higher chance of achieving pregnancy. Background Selection of embryos with a higher implantation potential is a significant challenge in Assisted Reproductive Technology. Currently, embryo selection is based mainly on morphological criteria such as growth rate, early cleavage on Day 2, the degree of fragmentation, and blastocyst for-mation, to name a few [1]; however, the predictive power of this approach remains limited. With the emergence of "Omics" technologies, new biomarkers can be diagnostic tools and utilized with in vitro fertilization (IVF) to improve oocyte and embryo selection [2]. A key factor in oocyte maturation is the cumulus-oocyte complex (COC), typically found in higher mammals. This complex results from the association between an oocyte and surrounding cumulus cells (CCs) through gap junctions [3]. Moreover, the development of competent oocytes highly depends on the bi-directional communication and interactions between the oocyte and CCs [4]. Several studies have investigated the association between CCs' gene expression profiles with oocyte competence, embryo quality, and pregnancy outcomes using microarray, reverse transcriptase, and quantitative Real-Time PCR [5][6][7][8][9][10]. An indirect approach, using CCs RNA transcriptional data, was able to predict embryo quality and pregnancy outcome using gene expression signatures [11]. This raises the question of how can these technologies be used in determining embryo implantation potential. Measuring the expression levels of candidate biomarkers from the CCs can serve as a high-throughput, non-invasive approach to determine oocyte quality and successful pregnancy outcomes [8,11]. 
However, there remains a need to determine the optimal usage and practical application of a particular set of genes to monitor and the evaluation of patient factors, such as age, etiology, insulin resistance, etc., that can affect these genes. Previous studies have shown that Prostaglandin-Endoperoxide Synthase 2 (PTGS2) expression is associated with biological events, such as lesions, inflammation, and proliferation. A recent study demonstrated that up-regulation of PTGS2 in CCs of mice is associated with germinal vesicle to metaphase II (MII) stage transitions and oocyte competency [12]. In treated pigs, increased expression of PTGS2 resulted in improved oocyte competency [13]. In humans, PTGS2 expression in CCs is associated with the development of higher quality embryos [14]. Furthermore, PTGS2 expression levels are associated with good embryo morphology [15]. Many reports have convincingly established PTGS2 in oocyte maturation, nuclear maturation, and cumulus expansion, all predictors of clinical pregnancy [7,16,17]. However, even with the numerous studies showing PTGS2 importance, there are no studies to assess the predictability and practical use of PTGS2 levels of the CCs to estimate high-quality embryos that may achieve pregnancy. Versican (VCAN) is a major component of the COC, located in the extracellular matrix, and its CCs gene expression has been reported as one of the most promising oocyte quality marker [6,10,18]. In mice, VCAN augmented a crucial step in embryonic development, cumulus expansion, and promoted the expression of PTGS [19]. In pigs, FSH stimulation increased the expression of VCAN in the CCs, promoting oocyte maturity [20]. In humans, oocyte quality has been positively associated with augmented VCAN expression [19,21]. Lastly, the increased expression of VCAN at the oocyte stage resulted in a higher probability of pregnancy [10] and live births [18]. However, VCAN expression has yet to assess for its predictability and practical use as a marker of implantation and pregnancy. Ranking of MII oocytes based on CC-expressed genes can serve as a promising new method for the selection of good quality MII oocytes derived from a pool of oocytes collected from hormone-stimulated IVF treatments in humans [22]. The CC-candidates genes, as potential oocyte quality markers for this study, were selected based on results from previous studies performed on human CCs [5,14,15,18,23] and were shown to be involved in the process of cumulus expansion, prediction of embryo development, and pregnancy. Therefore, the purpose of this study was to assess the VCAN and PTGS2 gene expression in CCs from individual COC as markers of oocyte quality and predictors of clinical pregnancy, in a manner that could be useful to the IVF laboratory as an extra tool to choose the best combination in number and quality at transfer time. Participants and study characteristics One hundred and ninety-eight women that suffer from infertility undergoing IVF in Mexico City, Mexico were selected for this study. Some subjects were lost due to not returning for follow-ups appointments, failure to produce viable oocytes/embryos, failure to collect sufficient RNA from the CCs for analysis, or chose not to be included. CCs RNA was isolated from the individual oocytes for each IVF cycle; however, 2 IVF cycles failed to produce a signal for PTGS2, VCAN, and L19, whereas 9 IVF cycles failed to produce a signal for PTGS2, VCAN, or both, while L19 gene was amplified. 
Therefore, the study consisted of 31 patients who had a single embryo transferred (SETs), 41 patients with two embryos transferred, 68 patients who had three embryos transferred, and nine patients who had four embryos transferred. Clinical and IVF characteristics are presented in Table 1.
There is no correlation between the PVL index and the embryo morphological assessment scores
A subset of 42 patients agreed to have their complete embryo cohort, consisting of both high- and low-quality embryos as determined by morphological assessment, analyzed. Of the 384 embryos analyzed, the morphological assessment scores ranged between 0 and 12, whereas the PVL index scores ranged between 35.4 and 80.9. There was no association between the PVL index and the embryo morphological assessment scores (ρ = − 0.013, p = 0.831, Fig. 1).
PTGS2 and VCAN gene expression levels in CCs are associated with clinical pregnancy
Based on their highest morphological score, only high-quality embryos were selected for implantation. Between one and four embryos were transferred per patient (Table 1); however, the effectiveness of the PVL index was assessed using the 31 SETs. ROC analysis determined that the PVL index was highly predictive of implantation (AUC = 0.87, 95% CI: 0.74-1.00, p = 0.010, Fig. 2). Using the highest Youden's index, we determined that a PVL score ≥ 58.2 was associated with clinical pregnancy (Youden index = 0.769, sensitivity = 100%, and specificity = 76.9%) and was highly accurate (test accuracy = 80.65%, positive predictive value = 45.5%, and negative predictive value = 100%). The effectiveness of the PVL index was also examined in patients with multiple embryos implanted, using the 58.2 cutoff value. All IVF cycles (n = 149), which consisted of SETs, two embryos transferred, three embryos transferred, and four embryos transferred, were evaluated. To correct for embryo cohorts lacking a completely positive or negative PVL group, we used a modified equation proposed by Ekart et al. and calculated the probability of pregnancy for each IVF cycle. Using logistic regression, a strong association was determined between the probability of pregnancy based on the PVL index and implantation (β-hCG: odds ratio = 11.59, 95%CI: 4.27-31.48, p < 0.001), as well as the ultrasound-confirmed presence of a fetal sac with a heartbeat (odds ratio = 8.40, 95%CI: 3.26-21.63, p < 0.001, Table 2). Interestingly, the implantation of at least one embryo with a PVL index score ≥ 58.2, independent of the total number of embryos implanted, was associated with a greater chance of achieving clinical pregnancy, as determined by β-hCG (Odds Ratio = 17.15, 95%CI: 6.82-43.18, Table 2) and the ultrasound-confirmed presence of a fetal sac with a heartbeat (Odds Ratio = 16.81, 95%CI: 6.43-43.92, p < 0.001, Table 2). Using all 149 IVF cycles, the PVL index was highly accurate (test accuracy = 78.52%, positive predictive value = 76.0%, and negative predictive value = 84.4%). When the group was stratified by age and considering the transfer of at least 1 PVL positive embryo, there was an increased association for women ≥38 years of age for β-hCG (0.6-fold change) and for ultrasound confirmation (2.3-fold change, Table 2). For many of the embryos, PGT was used to confirm that the transferred embryos were euploid.
Fig. 1 Correlation between embryo morphological score and the PVL index. For 42 patients, CCs were isolated from an oocyte during a standardized IVF protocol. CCs RNA was isolated using Trizol.
Gene expression profile of PTGS2, VCAN, and L19 was determined by qPCR and the PVL index was calculated. Afterwards, oocytes were fertilized and morphological parameters were evaluated by a specially trained Embryologist for 384 embryos. The level of association was determined by calculating Spearman's correlation coefficient (ρ).
When stratified by the PGT-confirmed absence of aneuploid embryos, there was an increased association for β-hCG (6.5-fold change) and for ultrasound confirmation (2.4-fold change) using the probability's raw score. When considering the transfer of at least 1 PVL positive embryo, the fold change could not be determined due to the lack of false negatives for the PGT-confirmed euploid embryos. Lastly, 18 subjects were identified with severe etiologies (endometriosis or low response during stimulation) other than infertility. When these patients were stratified, the association between the PVL index and pregnancy was only present in the subjects without these etiologies.
Discussion
Gene expression analysis of the CCs can be a valuable tool that allows an estimation of oocyte quality and embryo capabilities, especially with respect to embryo implantation. Different groups have analyzed the CCs transcriptional profile, resulting in the assembly of a group of candidate genes, of which only a few have been suggested as genes that could predict oocyte quality and pregnancy success [5-8, 10, 18, 23-25]. For this study, two genes expressed in the CCs were analyzed: PTGS2 and VCAN. Even though both genes have been widely reported in the literature for their association with oocyte quality, to date, neither gene has been included in a single panel profile used to evaluate the clinical pregnancy potential of embryos. Here, we show that a high PVL index, which evaluates the expression of these two genes, was associated with increased clinical pregnancy. Many groups have assessed the gene profile of the CCs, all indicating that both genes, PTGS2 and VCAN, play an important role in oocyte maturation and are relevant indicators of competent oocytes [6,14,15,18]. Gebhardt et al. reported positive correlations of CC gene expression with embryonic development and live births. Although Gebhardt et al. proposed PTGS2 and VCAN as candidate genes to measure oocyte quality and embryonic development [18], there is no evidence in the literature to support their practical use in oocyte and embryo selection. In this study, these genes were therefore proposed as the first group of genes that can be used in conjunction with morphological data produced by the IVF laboratory for the selection of embryos to be transferred. A transcriptional profile was generated for the CCs using qPCR data, which allowed us to generate an expression indicator, the PVL index, for the evaluation of oocyte quality and embryo implantation capability. Here, only high-quality embryos were selected and transferred. Afterward, their respective PVL index scores were assessed. We determined that the PVL index scores were independent of the morphological assessment. This demonstrates that cellular processes vary significantly between similarly scored embryos and posits that alternative tests are required when selecting embryos. It is imperative to emphasize that, to date, no reports have established a gene-expression-based cutoff value separating competent from non-competent oocytes/embryos.
This study proposes the application of an index that relates the expression of VCAN and PTGS2, as a new tool for pregnancy prediction. Using SETs, which consisted only of high-quality embryos, the PVL index and measurements of clinical pregnancy showed a good correlation. This led to establishing a cutoff value for the PVL index of 58.2 (Fig. 2). Afterward, IVF cycles with multiple embryos transferred were assessed, and it was determined that the cutoff value for the PVL index was highly predictive. Unfortunately, there were minimal IVF cycles with completely positive (≥58.2) or completely negative (< 58.2) PVL index score embryos. Therefore, it is difficult to determine if the embryos with the higher PVL index scores are the ones producing the pregnancy when mixed cohorts and several embryos are transferred. Even so, we demonstrated that the implantation of at least one embryo with a PVL score ≥ 58.2 was associated with clinical pregnancy. Ekart et al. were one of the first groups to propose a new classification and selection system for oocytes, based on the genetic expression shown in the CCs, specifically using molecules involved in the COC interaction that are activated during the second to last phase of folliculogenesis. In addition to developing a mathematical tool that can be applied for oocyte selection, their system allows an evaluation of the expression, followed by an expression-level classification, of four genes from the CCs: hyaluronan synthase 2 (HAS2), follicle-stimulating hormone receptor (FSHR), VCAN, and progesterone receptor. The combination of the HAS2 and FSHR genes resulted in a predictive value of 80% when applied to the selection of three embryos. However, using this system for single embryo selection, the predictive value decreased significantly to 48%. Ekart et al. did not include PTGS2 in their gene panel to predict oocyte quality and embryonic development [22]. Even though the PVL index was used to score each embryonic cohort, showing a strong correlation between this index and clinical pregnancy, an additional mathematical analysis was performed to support our findings. The mathematical formula created by Ekart was applied to each embryonic cohort. In theory, this would determine the probability of each embryo producing a clinical pregnancy, only if this embryo came from an oocyte with a CCs quality index ≥58.2. Undeniably, the probability of pregnancy of a transferred embryo displayed a high correlation with the PVL index and therefore aids us in predicting pregnancy in patients. Older women have a decreased probability of achieving pregnancy and lower IVF success rates; therefore, exploiting alternative methods to improve IVF outcomes remains a key factor. When the cohort was stratified by age, the association between the PVL index and clinical pregnancy was stronger in older women. This posits that using the PVL index could improve the probability of successful implantation for older women. The implantation of aneuploid embryos is associated with lower IVF success rates, and the level of aneuploidy in embryo cohorts increases with age. In Mexico, older women are suggested to complement IVF with PGT, to assess for aneuploidy; however, the benefits and pitfalls of using PGT remain under debate. Here, only 30% of the patients opted to have PGT; therefore, it is possible that some of the embryos were genetically compromised, as shown by the decreased diagnostic odds ratio when we examined embryos without confirmed euploidy.
Unfortunately, with the embryos that were determined to be euploid, we were unable to determine the diagnostic odds ratio when at least 1 PVL positive embryo was implanted. This was due to the absence of any false negative results. In other words, the presence of a PVL positive embryo was not associated with failed implantation. This posits that using both the PVL index and PGT would improve IVF outcomes. Our study has a few limitations. First, we focused on a random set of females with some level of primary and secondary female infertility factor; male factor was not considered. We can only speculate that male factor infertility will not affect the results demonstrated here, as the examined genes are from the CCs and only associated with oocyte health and competence. Second, some of the patients had endometriosis of varying degree, which was probably affecting the implantation results. Endometriosis and its location could affect implantation and explain why some patients did not present with a clinical pregnancy even with good embryos. However, this is outside the scope of the current research and is currently being considered for future studies. Lastly, we cannot be entirely confident that the high-PVL-scored embryos were the embryos that achieved clinical pregnancy, but the SET results support our confidence in this possibility. This is a preliminary study, and the selection of embryos based on the PVL index is the focus of current and future studies.
Conclusions
The development of new tools, which allow us to obtain an approximation of the state of the oocyte and the embryo, as well as determine the clinical pregnancy potential, is of great importance for IVF treatments. Here, a valuable evaluation system was generated to measure two key genes from the CCs, PTGS2 and VCAN, to relate the ovular state to clinical pregnancy. The PVL index can indicate good quality oocytes that, after fertilization, will have the highest probability of achieving clinical pregnancy. This research will allow embryologists and other IVF personnel involved in the selection process to have an alternative test to determine the best embryo to transfer, beyond the current method of embryo morphological assessment.
(Table 2 notes: crude odds ratios (OR) and 95% confidence intervals (95% CI) were determined using logistic regression; N/D = not able to be determined; * indicates a significant result, p < 0.05, two-tailed; b Probability = 1-(x neg /x tot ) n , where x neg = number of embryos with a PVL index score < 58.2, x tot = total number of transferred embryos, and n = number of embryos with a PVL index score ≥ 58.2; c a positive cohort has a probability ≠ 0.00.)
Study patients and ethical approval
Women suffering from infertility, undergoing IVF in Mexico City, Mexico, were asked to participate in this retrospective study (from October 2011 to May 2017). The protocol was approved by the Ethics Committee of the Ingenes Institute (number I/13/2013). Written informed consent was obtained from all patients, and the study was conducted in accordance with the Declaration of Helsinki. Patients were clinically evaluated according to a standardized protocol including personal and family clinical history. The patients' height (m) and weight (kg) were measured, and the BMI was calculated as weight divided by the height squared (kg/m 2 ).
IVF, CC isolation, and pregnancy evaluation
All patients were subjected to controlled ovarian stimulation for ten days with Gonadotropin-releasing hormone agonists and antagonists.
Ovarian response was assessed measuring serum estradiol levels, and follicular development was evaluated by ultrasound examination. Oocyte retrieval was conducted 20 h after human chorionic gonadotropin (hCG) administration (10,000 IU Choragon or 6500 IU Ovidrel) with ultrasound guidance. Follicular puncture for oocyte collection was performed under general anesthesia at the end of hormonal stimulation (10-14 days). Transvaginal ultrasound was used to locate mature follicles, and ovulation was induced with hCG. 3-5 ml of follicular fluid containing the oocytes were extracted using a specialized suction system. Follicles aspirated from the patients ranged between 8 and 30. Samples were analyzed using a stereoscopic microscope in order to locate the oocytes, which were kept at 37.5°C in an atmosphere of 8.3% CO 2 until fertilization. Number and quality of retrieved oocytes were assessed using morphological parameters [granulosa expansion, oocyte maturity (MI, MII, and VG), quality of the cytoplasm, zona pellucida, and polar body]. Oocytes were located, numbered, and separated into drops of HTF-HEPES (Human tubal fluid/(4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) supplemented with 10% HSA. CCs were removed using a 1-mm needle and isolated using mechanical dispersion. The isolated CCs were preserved in 20 μl of Global Total for Fertilization media (LifeGlobal) and placed into Eppendorf tubes containing 150 μl of Trizol (Ambion, Life Technologies, Carlsbad CA, USA) and stored at − 70°C until processed. To note, the patient's cumula cells were not pooled; the oocyte and its CCs were analyzed as a corresponding pair. An Embryologist monitored and recorded information about fertilization, embryo development, embryo morphology, transfer, and pregnancy for each oocyte. Morphological parameters evaluated were weighed into a matrix to rate each oocyte-embryo, with the sum of values obtained on a scale of 0 (low quality) to 12 (high quality). Selection and embryo transfer were done on Day 3 or Day 5 of development according to the embryo morphological assessment, using the criteria established by Istanbul consensus Workshop on Embryo Assessment [26]. The highest quality embryos (morphology) were transferred, and pregnancy was confirmed by β-hCG values > 10 mUI/ml (Day 14) and the presence of a fetal heartbeat, confirmed by ultrasound at 6-8 weeks. The number of embryos transferred (1, 2, 3, or 4) was determined by the number of high-quality embryos achieving full development, patient results from previous attempts, and the opinion of the clinician. RNA extraction CCs RNA extraction was carried out using the Trizol® reagent, according to the manufacturer's recommendations. Briefly, CCs samples were processed with 70 μl of chloroform for 5 min at room temperature, followed by centrifugation at 12,500 g for 15 min at 4°C. The supernatant was transferred to a new tube containing 150 μl isopropanol. Samples were incubated for 10 min at room temperature and centrifuged at 15,000 g for 15 min at 4°C. The pellet was washed with 100 μl of 75% ethanol and then centrifuged at 12,500 g for 5 min. Pellet was air-dry for 10 min. RNA was re-suspended in 0.1% of DEPC water and quantified by spectrophotometry (Epoch/Biotek, Winooski, VT, USA). Quantitative reverse transcription-polymerase chain reaction (RT-qPCR) Primers for PTGS2, VCAN, and Ribosomal Protein L-19 (L19) were designed using the Primer 3 plus v2.0 software. All primer sequences are shown in Table 3. 
All qPCR reactions were performed using the StepOne Plus apparatus (Applied Biosystems) with the One Step Kappa Syberfast system (KAPA Biosystems, Woburn, MA, USA). PTGS2, VCAN, and L19 genes were quantified in duplicate. The reaction mix was prepared as follows: 5 μl 2X KAPA SYBR® FAST qRT-PCR Master Mix, 0.2 μl ROX, 0.2 μl dUTP (10 mM), 0.2 μl forward and reverse primers (20 pmol), 0.2 μl KAPA RT, 100 ng of RNA sample, and DEPC water for a total volume of 10 μl. qPCR conditions were one cycle of reverse-transcription at 42°C for 5 min, one cycle of reverse-transcriptase inactivation at 95°C for 5 min, and 40 cycles of amplification at 95°C for 15 s, 56°C for 30 s, then 72°C for 30 s. SYBR Green was used during amplification to construct melting curves that were analyzed to verify that the peaks corresponded with the theoretical melting temperatures for each amplicon. All the PCR products were resolved through capillary electrophoresis using the BioAnalyzer Labchip GX (Caliper). The products showed a single band corresponding to the predicted base pair length and a band purity of 95% or higher. Moreover, the bands were cloned and analyzed via sequencing to verify their identity. Sequence identification of the PCR products was confirmed by direct cloning with the CloneJet system (Fermentas, ThermoFisher, Waltham, MA, USA) and sequencing using the BigDye system. Briefly, the amplicon fragments were purified using the GeneJet Gel Extraction kit and ligated into a pJET1.2/blunt vector following the manufacturer's protocol. Ligations were transformed into TOP10 competent bacteria and grown in LB medium (Ampicillin, Pisa SA Laboratorios, Mexico; 100 mg/ml) for 16 h at 37°C. Plasmid DNA was extracted from the bacteria using the mini-prep technique. The amplicons' identity was verified by sequencing using BigDye Terminator v3.1 reagent and the RV primer 3 (3'-CTAGCAAAATAGGCTGTCCC-5′) (Applied Biosystems, Foster City, CA, USA). Samples were sequenced with the ABI PRISM 3700 analyzer (Applied Biosystems) and sequences corroborated using BLAST software.
PTGS/VCAN/L19 (PVL) index and probability
The PVL index was calculated from the data obtained after processing the isolated CCs of each individual oocyte. L19 was determined to be the optimal housekeeping gene for our system, as the variations associated with L19 in a cohort were no larger than 1 C T in more than 90% of cases. For each patient, the L19 expression was used for normalizing purposes (to the lowest C T for the patient's oocyte cohort). The index is the sum of the expression levels of PTGS2 and VCAN, normalized by L19. The probability of pregnancy for each transfer cycle using the PVL index was obtained by the modified formula for random selection described by Ekart et al. [22]: P = 1-(x neg /x tot ) n , where P = probability, x neg = number of embryos with a PVL index score < 58.2, x tot = total number of transferred embryos, and n = number of embryos with a PVL index score ≥ 58.2.
Embryo biopsy (day 3 and day 5)
Embryos were assessed for the number of cells, symmetry, and fragmentation. For high-morphological-quality embryos, the chromosomal composition was determined using Array Comparative Genomic Hybridization (aCGH). The S-biopsy method was utilized to isolate a blastomere from Day 3 embryos [27]. Briefly, a Hamilton Thorne ZILOS-tk laser (1460 nm, 300 mW) was used to create a funnel in the zona pellucida adjacent to a blastomere.
Next, the blastomere was extracted by aspirating the whole embryo with a 140-μm stripper capillary micropipette, leading to the ejection of the blastomere. The blastomere was then placed into a 0.

Whole genome amplification and pre-implantation genetic testing (PGT)
The material obtained from each biopsy was amplified using the SurePlex amplification system (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. PGT was carried out by aCGH with the 24 Sure V3 microarray (Illumina, San Diego, CA, USA), following the protocol described by Fragouli [28, 29]. The amplified DNA was fluorescently labeled (Fluorescence Labelling System, Illumina). The samples were co-precipitated, denatured, and analyzed by array hybridization for 16 h. A laser scanner (InnoScan 710, Innopsys, Carbonne, France) was used to excite the fluorophores and read the hybridization images. Hybridization images were stored in TIFF format and analyzed with the BlueFuse Multi-Analysis software (Illumina), using the criteria and algorithms recommended by the manufacturer. With this approach, it was possible to determine the chromosome constitution of each embryo.

Statistical analysis
The association between the PVL index and the embryo morphological assessment scores was determined by calculating Spearman's rho (ρ). Receiver operating characteristic (ROC) analysis was performed to determine the specificity and sensitivity of the PVL index by calculating the area under the ROC curve (AUC). The cutoff value was determined by calculating the highest Youden index score (sensitivity + specificity − 1). Logistic regression was used to determine the association (odds ratio and 95% confidence interval) between the PVL index and clinical pregnancy. P-values < 0.05 (two-tailed) were considered significant. All analyses were carried out using either the Statistical Package for the Social Sciences, version 22 (SPSS, Chicago, IL, USA) or SigmaPlot software (v. 12.0, San Jose, CA, USA).
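As a rough illustration of the PVL index and cutoff analysis described above, the sketch below computes a per-oocyte index from raw Ct values and then derives a Youden-optimal cutoff from labeled outcomes. The 2^(-ΔCt) relative-expression form, the variable names, and the example data are assumptions made here for illustration; the text only specifies that the index is the L19-normalized sum of PTGS2 and VCAN expression and that the cutoff maximizes sensitivity + specificity − 1, so the resulting scale will not match the paper's 58.2 cutoff.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def pvl_index(ct_ptgs2, ct_vcan, ct_l19):
    """PVL index for one oocyte's cumulus cells: sum of PTGS2 and VCAN
    expression, each normalized to L19. The 2^(-dCt) form is an assumption;
    the text states only that the index is the L19-normalized sum."""
    return 2.0 ** (-(ct_ptgs2 - ct_l19)) + 2.0 ** (-(ct_vcan - ct_l19))

# Hypothetical Ct values for four oocytes and the pregnancy outcome of the
# corresponding embryos (1 = clinical pregnancy); purely illustrative numbers.
ct_values = np.array([
    # PTGS2, VCAN,  L19
    [26.1, 24.8, 21.0],
    [28.4, 27.9, 21.3],
    [25.2, 24.1, 20.8],
    [29.0, 28.2, 21.1],
])
outcome = np.array([1, 0, 1, 0])

scores = np.array([pvl_index(*row) for row in ct_values])

# ROC analysis and Youden-optimal cutoff (sensitivity + specificity - 1),
# mirroring the statistical procedure described above.
fpr, tpr, thresholds = roc_curve(outcome, scores)
cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {roc_auc_score(outcome, scores):.2f}, Youden cutoff = {cutoff:.3f}")
```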
Lithospheric density models reveal evidence for Cenozoic uplift of the Colorado Plateau and Great Plains by lower-crustal hydration

Subduction at plate boundaries can have thermal, chemical, and physical impacts on broad regions of the continental interior, but these interactions are not as readily obvious as deformation near the continental margin. Such cryptic alteration has produced surface uplift in the Colorado Plateau and western Great Plains of North America, which have risen, largely undeformed, 1.6 and 1.3 km, respectively, relative to the eastern Great Plains during the Cenozoic. Accumulation of Cretaceous-Cenozoic sediments accounts for only 300 m of uplift of the Colorado Plateau and 400 m of the western Great Plains, leaving 1.3 km and 0.9 km, respectively, unexplained. To determine the physical causes of this enigmatic epeirogeny, we derived three-dimensional (3-D) lithospheric density models from seismic velocity, gravity, topography, and heat-flow data. Lower-crustal density decreases systematically westward across the Great Plains, accounting nearly perfectly for the remaining 900 m of uplift of the western Great Plains and the modern east-west topographic gradient. Lower-crustal dedensification beneath the Colorado Plateau accounts for a similar 900 m of uplift. Lower-crustal xenoliths in both regions show progressive hydration-induced retrogression of garnet-bearing assemblages with increasing modern elevation, and Th-Pb dating of the Colorado Plateau retrogression gives end-Cretaceous dates (xenoliths from the Great Plains have not yet been dated). We hypothesize that lower-crustal density variations, and much of the surface relief, in North America's Proterozoic interior terranes reflect varying degrees of metasomatic retrogression, such as by fluids exsolved from the Farallon slab. The remaining 400 m of Colorado Plateau uplift is most plausibly due to elevated mantle temperature. We present thermal models that suggest that 25-70 km of Cenozoic lithospheric thinning can explain the modern elevation and density structure.

INTRODUCTION

Paleozoic and Mesozoic marine sediments blanket the Colorado Plateau and Great Plains, requiring broadly uniform elevations near sea level through much of the Phanerozoic. Since the Late Cretaceous, however, these units have risen, generally undeformed, to average elevations of 1.9 km in the Colorado Plateau and 1.6 km in the western Great Plains (Fig. 1A). By contrast, on the eastern Great Plains, they remain only 300 m above modern sea level. Modern topographic relief therefore reflects differential uplift, so understanding what supports this relief will provide a window into the processes responsible for the widespread but cryptic Cenozoic modification of intraplate North America. Although the Sevier and Laramide orogenies caused substantial horizontal contraction in the modern Basin and Range and Southern Rockies, respectively, the Colorado Plateau and Great Plains experienced <5% shortening (Davis, 1978; Tikoff and Maxson, 2001). Instead, the major impact of Farallon subduction was a period of subsidence and shallow marine sedimentation (Sloss, 1963). Sediments increased crustal buoyancy beyond presubsidence values, and Tertiary removal of the Farallon slab allowed the crust to rebound to correspondingly higher surface elevation (Mitrovica et al., 1989).
Digitizing the isopach maps of Cook and Bally (1975), we find that an average of 900 m of post-Jurassic sedimentary rock is preserved on the Colorado Plateau, and across the Great Plains, their average thickness grades from 1.3 km near the Rocky Mountain front to minor net erosion in eastern Kansas, Oklahoma, and Nebraska. The isostatic contribution to topography, H, of a layer with density ρ and thickness z, compensated by asthenosphere of density ρa, is

H = z (ρa − ρ) / ρa. (1)

Assuming a density of 2000-2200 kg/m3 for these sedimentary rocks (e.g., Spencer, 1996) and an asthenospheric density of 3200 kg/m3 (e.g., Lachenbruch and Morgan, 1990), they account for ~300-350 m of uplift of the Colorado Plateau relative to the eastern Great Plains and 400-500 m in the western Great Plains (Fig. 1B). In the Colorado Plateau, 1.3 km of Cenozoic uplift remain unexplained, and on the Great Plains, there is a roughly linear gradient in remaining uplift from 900 m near the Rocky Mountain front to 0 m on the eastern Great Plains. Because the Colorado Plateau and Great Plains share similar Mesozoic histories and even overlie the same Proterozoic terranes (Whitmeyer and Karlstrom, 2007), systematic differences in modern density that support modern topographic relief most logically result from Cenozoic modification/deformation. The EarthScope Transportable Array (TA) has recently offered unprecedented seismic coverage of the central United States; here, we used TA-based seismic velocity models (Shen et al., 2013) along with heat-flow, gravity, and topographic data to develop three-dimensional (3-D) lithospheric density models. These estimates map and quantify the contributors to modern elevation differences, and by inference Cenozoic surface uplift, across the Great Plains and Colorado Plateau.

[Figure caption fragment (apparently from Fig. 1): "... is shown for reference in gray, with vertical exaggeration tantamount to an ~400 kg/m3 density difference between crust and mantle. Note the lack of any correlation between crustal thickness and surface elevation. The blank region from 108°W to 105°W is the Southern Rockies, which are not the subject of this paper."]

HYPOTHESIZED CAUSES OF UPLIFT

Surface uplift may occur in response to changes in asthenospheric flow (i.e., dynamic topography) or in response to decreases in the density of the lithosphere, which depends on both temperature and composition. Therefore, the possible causes of uplift relative to the Paleozoic-Mesozoic can be enumerated (following McGetchin et al., 1980; Morgan and Swanberg, 1985): crustal thickening, crustal heating, crustal phase changes, advective or conductive heating of the mantle, chemical or phase changes in the mantle lithosphere, or dynamic topography (here defined as a non-isostatic contribution from mantle flow, not simply mantle buoyancy). Indeed, most of these mechanisms have been proposed for the Colorado Plateau and/or Great Plains. Nevertheless, crustal thickening (Bird, 1984; McQuarrie and Chase, 2000) seems unlikely because of the minimal shortening observed at the surface (Davis, 1978; Tikoff and Maxson, 2001). Moreover, crustal thickness (gray line in Fig. 1B) does not vary systematically across the Great Plains and is not correlated with surface elevation (r2 = 0.19). While crustal heating decreases density and increases elevation, modern heat flow is a relatively uniform 50-60 mW/m2 across the Great Plains (e.g., Blackwell and Richards, 2004).
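To make Equation 1 concrete, the short sketch below evaluates the sediment contribution with the densities quoted above. The thickness and density values are the ones given in the text; the helper function and its defaults are purely illustrative.

```python
def isostatic_uplift_km(thickness_km, rho_layer, rho_asth=3200.0):
    """Equation 1: H = z * (rho_a - rho) / rho_a, returned in km."""
    return thickness_km * (rho_asth - rho_layer) / rho_asth

# Post-Jurassic sediments: ~0.9 km preserved on the Colorado Plateau and
# ~1.3 km near the Rocky Mountain front, with densities of 2000-2200 kg/m3.
for rho in (2000.0, 2200.0):
    cp = isostatic_uplift_km(0.9, rho)
    gp = isostatic_uplift_km(1.3, rho)
    print(f"rho = {rho:.0f} kg/m3: Colorado Plateau ~{cp * 1e3:.0f} m, "
          f"western Great Plains ~{gp * 1e3:.0f} m")
# Consistent with the ~300-350 m and 400-500 m sediment contributions quoted above.
```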
Lower-crustal and upper-mantle seismicity in the Colorado Plateau interior (Wong and Humphrey, 1989) also suggests low thermal gradients, and joint analysis of heat-flow data and Pn velocities estimates a Moho temperature of <700 °C (Schutt et al., 2018). Around the margins of the Colorado Plateau, heat flow is modestly higher, up to ~75 mW/m2 (e.g., Blackwell and Richards, 2004), and Moho temperatures average 900 °C (Schutt et al., 2018), meaning that the average temperature of the Colorado Plateau Moho is ~800 °C. By contrast, temperatures of ~550 °C typify the Great Plains (Schutt et al., 2018). Assuming a roughly linear geotherm from the Moho to the surface, our crude calculation suggests an average difference of 125 °C. For a coefficient of thermal expansion of 3.0 × 10−5/°C and a reference density of 2800 kg/m3, this difference accounts for a 10.5 kg/m3 density difference. By Equation 1, the 40-km-thick crust of the Colorado Plateau thus supports only 130 m of additional elevation. Each of the remaining hypotheses (dynamic topography, mantle or crustal composition/phase changes, and mantle heating) makes testable predictions about modern lithospheric seismic velocity and/or density structure, but these predictions have yet to be investigated systematically.

DENSITY MODELING

To discriminate among these remaining possibilities, we derived 3-D density models of the crust and upper mantle following the approach of Levandowski et al., which jointly analyzed shear-wave velocity (Shen et al., 2013), gravity (Fig. 2B), topography (Fig. 1A), and heat flow. Velocity is scaled to density to create a 3-D starting density model (Fig. 3). Gravity and flexurally smoothed topography are then forward modeled (Figs. 2C-2D) and compared to observations. Finally, a random-walk Monte Carlo algorithm iteratively refines the density model until free-air gravity and flexurally smoothed topography are reproduced at all points in the study area to within 10 mGal and 100 m, respectively (Figs. 2G-2H). (L1/L2 norms, where L1 is the mean absolute value and L2 is the root mean square, are generally ~2.5/4 mGal and 30/50 m.) Shen et al. (2013) provided hundreds of velocity models at each of ~1000 TA stations. These models comprise one-dimensional (1-D) velocity profiles and include a sedimentary thickness and crustal thickness, both of which also vary among the individual 1-D models at any given station. They were derived by joint Bayesian inversion of short-period Rayleigh wave dispersion from ambient noise, long-period dispersion from ballistic Rayleigh waves (a total range of ~8-80 s), and receiver functions. Nevertheless, there is not a unique velocity model that is capable of reproducing these data (which themselves are subject to uncertainty). In an effort to minimize the bias imposed on our density models by the vagaries of a given S-velocity model, we created 1000 simulations of the 3-D density structure, and each simulation began by randomly selecting one of the acceptable Vs models at each station and scaling these to density.

Initial Density Model

The velocity models of Shen et al. (2013) explicitly included a crust-mantle boundary. Therefore, it is easy to separate the crust from the mantle and use different velocity-density relationships in each.
In the crust, the initial velocity-density scaling uses temperature-at-depth estimates (Blackwell et al., 2011) to separate the minor influence of thermal variations on velocity and accounts for the attendant density variations (Levandowski et al., 2013). The isothermal (i.e., composition-dependent) velocity estimate is then scaled to density using a polynomial regression based on empirical data (Equation 2; Christensen, 1996; Brocher, 2005). This polynomial regression, or any reasonable regression, cannot faithfully reproduce the density of all lithologies because of the natural spread in density of rocks that have similar velocity; the range in density about a given seismic velocity is approximately ±150 kg/m3. In particular, this regression systematically underestimates the density of mafic rocks and overestimates the density of felsic units. In the mantle, the initial scaling assumes that all velocity variations are thermal in origin, and we subsequently relax this assumption. We use a velocity-temperature-density scaling (Equation 3) that accounts for anelasticity (using scripts provided by U. Faul; Jackson and Faul, 2010) and the presence of melt for a uniform, peridotitic composition. Here, z is depth below the surface in km, and Δυs is the perturbation (in %) relative to some reference, υ0, which we assume to be 4.5 km/s. Since this reference is meant to be quite near the solidus (i.e., assuming that the adiabatic temperature in the asthenosphere is quite near the solidus), velocities below υ0 may reflect increasing melt content, which does not significantly affect density. Therefore, there is a final implicit segment of the velocity-density relationship in which density is held essentially constant for velocities below υ0. We explored the effect of different choices of υ0 in the Supplemental Material (see footnote 1), but this choice can be viewed as a hypothesis to be tested. If, for example, the predicted densities of regions such as the Basin and Range or Snake River Plain with lower velocities in the mantle are too high, as manifest in topography and/or gravity, we would conclude that a lower υ0 would be indicated. Equation 3 was derived for a composition of 30% Fo90, 30% Fo92, 25% orthopyroxene (opx), 10% clinopyroxene (cpx), 2.5% garnet (Gt), 2.5% spinel (Sp), and 1 mm grains, where Fo indicates the proportion of forsterite in olivine. Because the anelasticity data (Jackson and Faul, 2010) are for single-crystal olivine only, we assumed that anelastic effects were similar for other minerals. The effects of grain size and solidus velocity are addressed in the Supplemental Material (see footnote 1), but we found that our results were robust with respect to changes in these two factors. For each mineral, we accounted for the pressure and temperature dependence of the shear and bulk moduli (Bouhifd et al., 1996; Hugh-Jones, 1997; Jackson et al., 2003; Afonso et al., 2005) and for the temperature dependence of thermal expansivity (Afonso et al., 2005). To account for anelasticity, we also accounted for the temperature, pressure, and seismic period dependence of the dynamic compliance, or the Laplace transform of the creep function (using scripts provided by Jackson and Faul, 2010). Finally, the representative period of surface waves increases with depth (from ~30 s at 50 km to 80 s, the longest period used by Shen et al. [2013], at 150 km), so these velocity-density relationships are depth dependent, both because of increasing pressure and increasing seismic period. In other words, the thermal velocity-density relation is depth dependent because of pressure- and period-dependent anelasticity.
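Because the coefficients of Equations 2 and 3 are not given above, the sketch below uses stand-ins to illustrate the two-branch workflow: the Brocher (2005) Nafe-Drake Vp-density polynomial (with Brocher's Vs-to-Vp regression) for the crust, and a linearized thermal scaling for the mantle using the ~6 kg/m3 per 1% Vs figure quoted later in this paper. Function names, the 3200 kg/m3 mantle reference, and the example velocities are assumptions made for illustration, not the authors' calibration.

```python
def crustal_density_standin(vs_km_s):
    """Stand-in for Equation 2 (the paper's coefficients are not given above):
    Brocher's (2005) Vs-to-Vp regression followed by the Nafe-Drake
    Vp-density polynomial; returns density in kg/m3."""
    vp = (0.9409 + 2.0947 * vs_km_s - 0.8206 * vs_km_s**2
          + 0.2683 * vs_km_s**3 - 0.0251 * vs_km_s**4)
    rho_g_cc = (1.6612 * vp - 0.4721 * vp**2 + 0.0671 * vp**3
                - 0.0043 * vp**4 + 0.000106 * vp**5)
    return 1000.0 * rho_g_cc

def mantle_density_standin(vs_km_s, v0=4.5, rho_ref=3200.0):
    """Linearized stand-in for the thermal scaling of Equation 3:
    ~6 kg/m3 per 1% Vs perturbation above v0 (a ratio quoted later in the
    paper); below v0, melt is assumed and density is held constant."""
    dv_percent = 100.0 * (vs_km_s - v0) / v0
    return rho_ref + 6.0 * max(dv_percent, 0.0)

print(round(crustal_density_standin(3.8)))  # mid-crustal Vs ~3.8 km/s -> ~2840 kg/m3
print(round(mantle_density_standin(4.6)))   # fast (cold) mantle -> denser than reference
print(round(mantle_density_standin(4.4)))   # at/below v0 -> held at the reference value
```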
UNCERTAINTY IN DENSITY MODELS

A more complete discussion of uncertainties in density models derived similarly to ours, and the sources thereof, is given by Levandowski et al. (2017). Here, we present a brief discussion of the variability of densities in our accepted 3-D models and then turn our attention to systematic biases that may have been introduced by our assumptions and modeling procedure. Uncertainty is readily quantified from our modeling: we have 1000 estimated densities at each point on a 20 × 20 km grid and in 15 different layers. Considerable care should be taken in deciding what a meaningful measure of uncertainty is, however. The density in any one 5 × 20 × 20 km cell in the lower crust of our model is not important. The meaningful quantity is the density across multiple layers in our model, say the 20-40 km range. Also, because our primary goal is to investigate the long-wavelength topographic gradient that has developed on the Plains in the Cenozoic, we are chiefly interested in average densities along a line of longitude. As such, we should consider that an individual 100-km-wide swath of lower crust subsumes a grid ~55 nodes from north to south, 5 nodes from east to west, and 4 layers thick. If the densities of these ~1100 cells are independent of one another (not a terrible approximation, as determined from statistical tests that are not shown here), then the uncertainty of that volume is roughly 1/33 the uncertainty of any given cell. Considering the differences in modeled density across the Plains (>100 kg/m3), one would require that the uncertainty in a typical cell be 3300 kg/m3 for those differences to be insignificant, which is non-physically large: it would require the assertion that our modeling cannot tell whether there is air or asthenosphere at any given point. We now discuss the uncertainty in the density over a given depth range at any of our 20 × 20 km columns. Beyond the upper few km, uncertainty is highest near the Moho, so we will discuss results from this depth range. The density in, say, the 40-50 km depth range varies in any given column across the 1000 simulations, mainly controlled by how uncertain the Moho depth is. For a typical point, 950 of the 1000 density models are within 30-40 kg/m3 of the mean value from across all of the 1000 models. If we consider the depth range from 30-50 km, uncertainty is typically ±25 kg/m3. The 20-50 km depth range primarily discussed in the text typically has uncertainties of 15-20 kg/m3. Again, these values are the 95% ranges.

1 Supplemental Material: additional discussion of the potential influences of systematic biases in the density modeling. Please visit http://doi.org/10.1130/GES01619.S1 or the full-text article on www.gsapubs.org to view the Supplemental Material.

The lithosphere was divided into 16 layers: surface to sea level, 12 layers of 5 km thickness from sea level to 60 km, and 30-km-thick layers from 60 to 150 km depth. The region was then divided into 20 × 20 km columns. Each cell (20 × 20 × 5 km above 60 km depth or 20 × 20 × 30 km below) was assigned a uniform density, interpolated from the initial estimate from the randomly selected seismic velocity model of Shen et al. (2013) and Equations 2-3.
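The "roughly 1/33" figure in the uncertainty discussion above is simply the 1/sqrt(N) scaling for the mean of ~1100 approximately independent cells (55 × 5 × 4). A quick numerical check, with an illustrative per-cell uncertainty chosen here for the example, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(0)

cell_sigma = 30.0        # illustrative per-cell density uncertainty, kg/m3
n_cells = 55 * 5 * 4     # ~1100 cells in a 100-km-wide swath of lower crust

# Uncertainty of the swath mean scales as 1/sqrt(N) if cells are independent.
print(f"1/sqrt(N) = {1.0 / np.sqrt(n_cells):.4f}")           # ~0.030, i.e., ~1/33

# Monte Carlo check of the same statement.
swath_means = rng.normal(0.0, cell_sigma, size=(5000, n_cells)).mean(axis=1)
print(f"std of swath mean = {swath_means.std():.2f} kg/m3")   # ~ cell_sigma / 33
```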
We then forward modeled gravity and flexurally smoothed topography (using the same procedure detailed by and compared the predictions of this initial model to observed gravity and flexurally smoothed surface elevation. To account for uncertainty in elastic thickness, each of the 1000 simulations randomly chose one of three two-dimensional elastic thickness models (Kirby and Swain, 2009;Lowry and Pérez-Gussinyé, 2011;Watts, 2012). We note, however, that the primary features that we discuss are suitably long-wavelength (>100 km) that the specific elastic thickness model has limited impact. The velocity-density scaling reproduces the broad patterns of gravity and topography across the western United States (L1/L2 norms of 12.2/22.3 mGal and 124/155 m). Nevertheless, some short-wavelength gravity anomalies are unexplained, and there are modest misfits to topography (Figs. 2E-2F). Shortwavelength residuals may simply reflect features below the ~100 km resolution of the TA-derived models; considering gravity as well can sharpen images of lithospheric structure (e.g., Maceira and Ammon, 2009). Broader residuals plausibly reveal compositional variations in the mantle or crustal lithologies that do not conform to our velocity-density relation. For example, mantle melt depletion lowers density substantially and slightly increases S velocity (Lee, 2003;Schutt and Lesher, 2010): Assuming uniform composition, we would underestimate the elevation in areas with depleted mantle. Similarly, hydration (e.g., serpentinization) causes a greater loss in density than estimated from our velocity-density scaling (cf. Eq. 3 with figure 1 of Christensen, 2004). Finally, the crustal velocity-density regression (Eq. 2) systematically underestimates the density of mafic units and overestimates that of many felsic rocks, with misfits as large as ~170 kg/m 3 possible for some lithologies . In order to reproduce gravity and topography, as well as to improve lateral resolution of lithospheric structure, we next allowed departures from the initial density estimate. These departures essentially relaxed the assumptions of homogeneous mantle composition and of a fixed crustal velocity-to-density scaling, and allowed imaging of shorter-wavelength features. Density Refinement Density cannot be known from seismic velocity alone, for at least three reasons. (1) Our models seek finer-scale resolution than the ~100 km horizontal resolution of the velocity models derived from TA surface wave data. (2) There is uncertainty in the velocity models, which can be quantified in terms of the range of velocity at any given depth across the hundreds of acceptable velocity models beneath any seismic station. (3) There simply is not a single-valued mapping of velocity to density that captures all lithologies or all chemical/ compositional trends (e.g., melt depletion). Additional factors, such as Vp/Vs variations, departures from the Q model chosen by Shen et al. (2013), and anisotropy would also influence the conversion of Rayleigh wave phase velocities and receiver functions to velocity profiles, and the subsequent conversion of these models to density. Put differently, there are factors that affect seismic velocity but not density, factors that affect density but not velocity, and features of the density of the crust and upper mantle-with or without a seismic signature-that are simply finer than the velocity models can resolve. 
Patterns and biases that are of particular importance to the present study include melt depletion, serpentinization, and the bias of Equation 2 to systematically overestimate the density of felsic material and underestimate the density of mafic material, as discussed already. The misfits between predictions of the initial density model and observed gravity and topography are generally small compared to the ~3 km of relief across the study area and the large variations in free-air gravity (Fig. 2). Nevertheless, to produce more robust density models, we employed the random walk Monte Carlo algorithm of to refine the initial density structures until the density model reproduced gravity and flexural topography to within 10 mGal and 100 m at all points in the study area. The Monte Carlo proceeds by selecting one of the nodes from the 20 × 20 km grid at random. A cell beneath that node and a density perturbation (limited to ±150 kg/m 3 in the crust and ±50 kg/m 3 in the mantle) are chosen at random, and the attendant variance reductions (or increases) of gravity and topography residuals are calculated. The algorithm is offered a number of cell/densityperturbation choices, initially two, but increasing in number through the inversion to aid convergence, and it ultimately selects the best-fitting one (even if that increases residual variance) and applies that change to the density of that cell, and updates residuals. When the model reproduces gravity and flexurally modulated topography to within 10 mGal and 100 m at all points in the study area, the 3-D density model is accepted as a member of a posterior distribution of acceptable density structures. This process-randomly selecting one of Shen's models at each TA station, converting velocity to density, selecting one of three elastic thickness models, forward modeling gravity and flexural topography, and iteratively refining the density model-is repeated 1000 times to embrace the non-uniqueness of gravity and topography. Uncertainties of density models are discussed in the Supplemental Material (see footnote 1), but a typical uncertainty of density in a region is ~15 kg/m 3 . The magnitude of the adjustments that are necessary to match gravity and topography (Fig. 4) average ~14 kg/m 3 in the crust and 5 kg/m 3 in the mantle, and adjustments are typically less than ~100 km in the lateral dimension. The crustal adjustments are small compared to the plausible range of ~170 kg/m 3 that a rock of known velocity may have. Moreover, these adjustments are well within the uncertainties of the velocity models. Because Shen et al. (2013) provided as many as thousands (generally hundreds) of velocity models at each TA station, the velocity-derived uncertainty in density at any point is readily quantified. The range on either side of the median that subsumes 95% of their models is ~0.2 km/s in the midcrust and upper mantle, ~0.3 km/s in the lower crust, and greater still in the uppermost crust. As such, the inherent range of density estimated from those velocity models exceeds ±100 kg/m 3 throughout the crust and is approximately ±30 kg/m 3 in the mantle. The two areas with large-magnitude and laterally extensive adjustments are an ~300 × 300 km portion of the upper mantle beneath the Wyoming craton that is some 25 kg/m 3 less dense than estimated (Figs. 4E-4F) and an ~100 × 200 km region near the Southern Oklahoma aulacogen in which the crust is 75-100 kg/m 3 denser than estimated (Figs. 4E-4F). 
These departures from the seismically derived density estimates are within the typical uncertainties associated with the velocity models themselves, but they could also represent systematic deviations from Equations 2-3 such as the compositional anomalies discussed earlier. In the Wyoming craton, we speculate that the low-density but high-velocity Archean mantle is melt-depleted residuum from the initial, high-temperature extraction of crustal material >2 Ga. In southern Oklahoma, three lines of evidence suggest that very dense crust is likely partially eclogitized mafic intrusions. First, the anomaly lies in or near the Oklahoma aulacogen, which underwent extension and associated emplacement of mafic sills and dikes in the Eocambrian. The aulacogen was subsequently the focus of early-stage NE-SW Ouachita contraction, which would have thickened the suite of mafic units under horizontal compression. Second, it is the densest material in the study area but is above the receiver function-defined Moho. Third, it is denser than expected from Equation 2, which-as argued by for the northern portion of the Midcontinent Rift (Fig. 4A)-is a hallmark of mafic crustal lithologies. HYPOTHESIS TESTS Earlier herein, we enumerated the possible causes of Cenozoic uplift of the Colorado Plateau and western Great Plains relative to the eastern Great Plains. We now discuss the testable predictions that each hypothesis makes about seismic velocity and density structure. The simplest prediction that each hypothesis makes is where in the lithosphere modern topographic relief is supported. For example, arguments in favor of mantle heating would require that support for modern relief is derived from mantle depths. In addition to the depths of density variations, each hypothesis makes certain predictions about the relationships between velocity and density (i.e., the validity of Equations 2-3 and of our underlying assumptions, manifest as whether adjustments to our initial, seismically derived models are necessary). Mantle Heating Increases in average mantle temperature could either be conductive (e.g., Roy et al., 2009) or advective, such as lithospheric thinning and replacement by warmer asthenosphere. One version of the former hypothesis-conductive reheating of unthinned lithosphere following a period of insulation by the Farallon slab-can be dismissed because such reheating does not account for any uplift relative to the Paleozoic-Mesozoic (i.e., before cooling). If Cenozoic lithospheric thinning caused differential uplift, however, the density of the lower lithosphere should decrease from east to west across the Great Plains and/or be less beneath the Colorado Plateau than the eastern Great Plains. Additionally, since our initial assumption is that mantle velocities reflect temperature variations, this systematic trend in mantle density should exist in our initial, seismically derived model. Mantle Chemical or Phase Changes In a region of substantially melt-depleted lithosphere, a velocity-temperaturedensity scaling should also lead to underpredicted elevations. Although xenoliths from the Colorado Plateau record upper mantle that is ~1% enriched in magnesium (Alibert, 1994;Lee et al., 2001), the predicted elevation in the Colorado Plateau and western Great Plains is generally well explained by our initial density models (Fig. 2), which consider only thermal variations. 
Moreover, melt depletion would have to have occurred during the Cenozoic, which is at odds with the paucity of volcanism in the Colorado Plateau and Great Plains. Mantle hydration (e.g., serpentinization) is also recorded in some Colorado Plateau xenoliths (Usui et al., 2003; Smith and Griffin, 2005), but the effects of fluid flux on velocity and density depend on the specifics of its chemical and/or compositional impact. As noted above, serpentinization causes a proportionally greater decrease in density for a unit velocity decrease than predicted by Equation 3 (compare figure 1 of Christensen, 2004, with our Equations 3a and 3b); our initial density estimate would therefore be too great if substantial serpentinized material is present in the upper mantle. By contrast, incorporation of hydroxyl groups into nominally anhydrous olivine can have a strong impact on velocity, but unit cell volumes increase only slightly (Smyth and Jacobsen, 2006); in this case, our density estimate would be lower than the true density. Finally, fluid flux can deliver silica to the mantle lithosphere and possibly increase orthopyroxene at the expense of olivine, but this compositional trend has little effect on shear velocity or density (Schutt and Lesher, 2010). We argue against the first two possibilities because the mantle density beneath the Colorado Plateau and Great Plains accords so well with an estimate based on the assumption that velocity reflects temperature alone. Our models would be insensitive to the latter trend, but orthopyroxene enrichment is not correlated with density decrease anyway (Schutt and Lesher, 2010), so it would not explain uplift. Although we argue against the three hydration-induced changes discussed here, many other effects of fluid flux are possible. It is crucial to note that any phenomenon that affects both velocity and density, and that does so in similar proportion to that derived in Equation 3 (i.e., with ~6 kg/m3 density change corresponding to 1% shear velocity change), would be incorrectly ascribed to temperature variations. In other words, our null hypothesis is that all velocity variations reflect temperature variations. We test this hypothesis by scaling velocity to density using a thermal relationship, and then comparing predicted and observed gravity and topography. Because observations are reproduced rather well, we do not question the null hypothesis further, but we do acknowledge that if another factor controls velocity and density and produces a similar relationship to that described by Equation 3, our simple test would cause us to wrongly accept the hypothesis that mantle velocity and density variations are primarily functions of temperature variations.

Crustal Phase Changes

Midcrustal xenoliths from the Navajo volcanic field on the Colorado Plateau record hydration-induced retrogression of garnet-bearing crustal assemblages to less dense mineralogies. Th-Pb dating of secondary monazite associated with these assemblages suggests that the majority of the retrogression occurred in the latest Cretaceous (Butcher, 2013; Butcher et al., 2017), contemporaneous with the arrival of the Farallon slab.
A similar but undated retrograde reaction is documented in crustal xenoliths near the northern Great Plains: southward from near the Canadian border to southern Wyoming, three xenolith localities increase in elevation from 1 to 2.5 km as xenolith density decreases by ~500 kg/m3 (Barnhart et al., 2012; Farmer et al., 2012). Noting a collocated southward decrease in seismic velocity in an ~10-km-thick layer of lower crust, Jones et al. (2015) hypothesized (following Eq. 1) that hydration-induced retrogression produced much of the modern relief on the Great Plains. If so, then it follows that the modern density of the lower crust should decrease systematically from east to west. Additionally, the xenoliths discussed by Jones et al. (2015) and Butcher et al. (2017) have P-velocity-density trends that are broadly concordant with those of the lithologies used to develop Equation 2. Therefore, it is possible that such an east-west gradient in density would be manifest in velocities as well, though shear-velocity measurements are not available for those xenoliths. We discuss lower-crustal hydration specifically, but we note that any other mechanism of broad, dedensifying phase changes might produce similar patterns.

Dynamic Topography

If surface elevation in a region were sustained by asthenospheric flow rather than lithospheric buoyancy (Moucha et al., 2008; Liu and Gurnis, 2010), seismic velocity would generally reflect the density of the crust and upper mantle: the predicted gravity would match observations, but such a region would stand at greater elevation than predicted. This pattern is not revealed by the residuals shown in Figure 2 in either the Colorado Plateau or the western Great Plains, where the initial model generally reproduces modern elevation. Therefore, we suggest that the impact of asthenospheric flow is minimal, perhaps <100 m. There is an important distinction between flow and density, however. We did not explicitly separate lithosphere and asthenosphere, and we included sublithospheric loads into our flexural-isostatic buoyant heights. We did indeed find modest variations in density at depths that likely correspond to asthenosphere, but we refer to any potential dynamic component as non-isostatic forces related to flow itself. Recently, Afonso et al. (2016) reached a similar conclusion, that dynamic topography is of secondary importance, even though their modeling explicitly calculated the vertical normal stress imparted on the base of the lithosphere by both the buoyancy/antibuoyancy of sublithospheric loads and by buoyancy-induced flow.

CAUSES OF CENOZOIC DIFFERENTIAL UPLIFT

Great Plains: Lower-Crustal Hydration

After accounting for the east-to-west increase in the thickness of Cretaceous sediments, a nearly linear increase in elevation across the Great Plains, more than 800 m at the Rocky Mountain front, remains unexplained. Our modeling reveals that the density of the lower crust, averaged longitudinally, decreases systematically from east to west. Little systematic difference is seen at other depths, casting doubt on the role of midcrustal alteration or mantle processes in supporting the modern topographic slope. The average lower-crustal density (20-45 km) is 112 kg/m3 less in the western Great Plains than in the eastern Great Plains (Fig. 5), supporting ~900 m of modern surface relief (Fig. 1B). Thus, density variations in the lower crust combine with Cenozoic sedimentation to account almost exactly for Cenozoic differential uplift across the Great Plains.
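The 112 kg/m3 figure can be checked against Equation 1 directly; the snippet below (values from the text, function name illustrative) confirms that this deficit over the 20-45 km depth range supports roughly 0.9 km of relief.

```python
def uplift_km_from_deficit(thickness_km, delta_rho_kg_m3, rho_asth=3200.0):
    """Equation 1 applied to a density deficit over a layer of given thickness."""
    return thickness_km * delta_rho_kg_m3 / rho_asth

# 112 kg/m3 lighter lower crust over 20-45 km depth (a 25-km-thick layer):
print(uplift_km_from_deficit(25.0, 112.0))   # ~0.875 km, i.e., the ~900 m quoted above
```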
Of the potential causes of uplift, crustal hydration is most consistent with this pattern, speculatively reflecting progressive dewatering of the Farallon slab with distance eastward under North America. Finding, not surprisingly, a similar pattern in lower-crustal density, Levandowski et al. (2017) suggested that the lowest densities may coincide with Proterozoic continental sutures (the Cheyenne belt-the suture between the Wyoming craton and Yavapai terranes-in SE Wyoming and the Yavapai-Mazatzal suture in SE Colorado). If so, it is reasonable to expect that fluids may preferentially exploit these preexisting, lithospheric-scale fracture/suture zones, and lower-crustal hydration may consequently be greatest there. In fact, studies of hydrated mantle xenoliths in the Colorado Plateau indicate that fluid flow is controlled by the presence of fractures (Nielson et al., 1993). Although this line of reasoning presents an internally consistent explanation for the heterogeneous lower crust of the western Great Plains, there is little agreement on the location and nature of Proterozoic sutures on the Great Plains. The Cheyenne belt is the topic of comparatively less debate, but the Yavapai-Mazatzal suture as sketched by Levandowski et al. (2017)-based on Carlson (2007)-is near the northern edge of the suture zone depicted by Whitmeyer and Karlstrom (2007) and is substantially north of the location given by Magnani et al. (2004). Thus, the details of lower-crustal density structure provide-at bestspeculative additional evidence in favor of lower-crustal hydration, specifically via the possibility that preexisting suture zones served as comparatively more efficient conduits for mantle-derived fluids than did intervening terranes. Our purpose in this work was to investigate the broad-scale uplift of the Great Plains and Colorado Plateau. We argue that sedimentation and lower-crustal phase changes explain nearly all of the longitudinally averaged uplift. Smaller-scale topographic features may reflect other processes. For example, low-density mantle near the Jemez Lineament in northeastern New Mexico and southeastern Colorado supports ~500 m of uplift relative to the eastern Great Plains (Fig. 3). It is compelling to note that this area also stands 200-600 m higher (up to 2200 m; Fig. 1A) than most of the rest of the western Great Plains (Fig. 1), perhaps providing evidence in favor of mantle-derived uplift (Nereson et al., 2013). We conclude that second-order topographic features may reflect other mechanisms of support, but the broad difference between the western and eastern Great Plains arises from sedimentation and lower-crustal density. Colorado Plateau: Lower-Crustal Hydration Between 20 and 40 km depth, the Colorado Plateau averages 106 kg/m 3 less dense than the eastern Great Plains (Fig. 5), supporting 660 m of relief (Fig. 1B). As discussed earlier, there is likely some contribution from higher temperatures in the Colorado Plateau; following the same line of logic, we would estimate a 250 °C temperature anomaly at the Moho (~21 kg/m 3 ) and a ~125 °C temperature anomaly at 20 km depth (~10.5 kg/m 3 ). Thus, the 20-40 km depth range may average 15 kg/m 3 less dense beneath the Colorado Plateau than the eastern Great Plains because of its temperature, but the remaining 91 kg/m 3 difference is better explained by compositional changes. 
In addition, the 40-50 km depth range essentially subsumes the crustmantle transition (Gilbert, 2012;Shen et al., 2013), and receiver functions generally image a gradient or low contrast in impedance across the Moho in the Colorado Plateau. The density of this transition zone is 55 kg/m 3 lower than at the same depths beneath the eastern Great Plains, and this explains an additional 170 m of differential Cenozoic uplift. After accounting for sedimentation and the effects of lower-crustal density loss, only 400 m of Colorado Plateau uplift remain. Colorado Plateau: Lithospheric Thinning The Colorado Plateau mantle is similar in density to that of the southern Rockies and Basin and Range and accommodates ~400 m of topographic relief relative to the eastern Great Plains (Fig. 1B). We infer that this density structure is thermal in origin (i.e., the Rockies, Colorado Plateau, and Basin and Range have warmer mantle than the Great Plains) because of the close correspondence between the elevation and gravity predicted by our thermal relation between velocity and density (Eq. 3) and observation. More obvious evidence comes from the elevated Moho temperatures in the Colorado Plateau relative to the Great Plains (Afonso et al., 2016;Schutt et al., 2018). We do not explicitly distinguish between lithospheric mantle and asthenosphere, so higher temperatures could reflect thinner lithosphere or warmer lithosphere; if the lithosphere and asthenosphere are in thermal equilibrium, the two are likely intertwined, because convectively thinned lithosphere will subsequently warm. A Cenozoic increase in mantle temperature is therefore a possible source of Colorado Plateau uplift, but we have already discounted purely conductive heating, so we now focus on advective heating. Advective heating may occur by removal of mantle lithosphere and its replacement with asthenosphere or by intrusion. Limited volcanic activity and intrusive activity, as well as crustal thickness estimates, are at odds with large-magnitude igneous intrusion, so we suggest lithospheric thinning. A sinking Rayleigh-Taylor instability (Levander et al., 2011), delamination (Bird, 1979), or ablation by the Farallon slab (Bird, 1988) could thin the lithosphere; we do not explicitly discriminate among these. Additionally, we did find some heterogeneity in density of the Colorado Plateau mantle, with comparatively higher density in the north-central portion, lower density around the margins, and the lowest densities in the south (Figs. 5E-5F). Heating/thinning of the Colorado Plateau lithosphere is likely heterogeneous, as also evidenced by encroachment of volcanism from the edges (Roy et al., 2009), but we will illustrate the effects of lithospheric thinning with 1-D thermal models meant to be reflective of the average across the Colorado Plateau. Because buoyancy changes are convolved with the flexural response of the lithosphere (and the Colorado Plateau lithosphere is comparatively strong, with elastic thickness ~30 km: Kirby and Swain, 2009;Lowry and Pérez-Gussinyé, 2011;Watts, 2012), it is indeed the average change in mantle buoyancy that is of interest to our present aim. If the Mesozoic Colorado Plateau did indeed resemble the modern eastern Great Plains, the lithosphere would originally have been 150-200 km thick (van der Lee and Nolet, 1997;Yuan et al., 2014). In addition, if only the lower-mantle lithosphere was removed, then remaining melt-depleted material could later be sampled in xenolith suites (as argued by Spencer, 1996). 
To estimate how much lithospheric thinning would be required in order to produce 400 m of modern uplift, we solved non-steady-state heat-transfer equations similar to those presented by Bird (1979). In that conception, a portion of lithosphere is removed at time 0 and replaced with uniform-temperature asthenosphere. The replacing material then remains fixed in space and cools conductively, essentially becoming thermal lithosphere. Here, we revisited this modeling and also solved similar non-steady-state heat-flow equations for the end-member situation in which, after lithospheric thinning, the replacing material does not cool off but rather is maintained convectively at a constant temperature, such that the lithosphere is permanently thinned. The former setup ignores convection, but the latter requires a long-term change in heat flux; neither is a perfect formulation. We view the two conceptions as bounding the reality that lies somewhere between these two idealizations. Whatever the boundary conditions, the lithosphere (with thickness z_old) begins in equilibrium with asthenosphere of temperature Ta = 1350 °C, and the surface temperature is a constant Ts = 20 °C. At time t = 0, the lithosphere is thinned to a thickness z_new. If we also assume an initially linear geotherm, the temperature distribution immediately after removal is piecewise continuous:

T(z,0) = Ts + (Ta − Ts) z/z_old for 0 ≤ z ≤ z_new; T(z,0) = Ta for z_new < z ≤ z_old. (4)

If the asthenosphere cools as a semi-infinite conductive half-space (essentially becoming thermal lithosphere), the temperature profile returns to a linear geotherm from the surface (T = Ts) to z_old (T = Ta). If the lithosphere thins permanently, temperatures approach a steeper geotherm from the surface (T = Ts) to z_new (T = Ta). For either case, the temperature as a function of depth and time can be calculated by separation of variables, wherein the temperature profile is the sum of the steady-state temperature, v(z), and a transient (decaying) perturbation, w(z,t), to that temperature:

T(z,t) = v(z) + w(z,t). (5)

The Fourier expansion of Equation 5 is (Boyce and DiPrima, 2003):

w(z,t) = Σ_{n=1}^{∞} b_n sin(nπz/L) exp(−n²π²κt/L²), where b_n = (2/L) ∫_0^L w(z,0) sin(nπz/L) dz. (6)

Here, L is the depth of the future conductive boundary (z_new or z_old), and w(z,0) = T(z,0) − v(z). The thermal diffusivity, κ, is 1 mm2/s. Following Equation 1, the uplift as a function of time is proportional to the integrated change in temperature from the initial state (at t < 0):

U_total(t, L, z_old) = α ∫_0^{z_old} [T(z,t) − T(z, t < 0)] dz. (7)

The final unknowns are the initial lithospheric thickness (we used plausible values of 150 and 200 km, based on estimates of the modern eastern Great Plains; van der Lee and Nolet, 1997; Yuan et al., 2014) and when thinning occurred. Figures 6A and 6B show a number of uplift versus time curves. (We used a constant coefficient of thermal expansion, α, of 3.2 × 10−5/°C.) If the lithosphere thins permanently, initial uplift is followed by a protracted period of asymptotic, ongoing surface uplift. If the replacing material cools, the elevation gain will decay away. Since there is no discernible heat-flow anomaly in the modern Colorado Plateau, we also checked the predicted change in surface heat flow from each model (quantified as k[T5km − Ts]/5, where k = 3 W m−1 °C−1), shown in Figures 6C and 6D. The change for all models that produced 400 m of modern uplift relative to the initial state is a few milliwatts per square meter, which is allowed by observations.
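A compact numerical sketch of Equations 4-7 for the transient (conductively cooling) end member is given below. The grid resolution, the 200-term truncation, and the example geometry (a 200-km lithosphere thinned by 60 km) are choices made here for illustration; this is not the authors' code and will not reproduce Figures 6-7 exactly.

```python
import numpy as np

# Values from the text; discretization and truncation choices are illustrative.
T_S, T_A = 20.0, 1350.0      # surface and asthenosphere temperatures, deg C
ALPHA = 3.2e-5               # coefficient of thermal expansion, 1/degC
KAPPA = 31.6                 # thermal diffusivity: 1 mm^2/s expressed in km^2/m.y.

def _trapz(y, x):
    """Simple trapezoidal integration (keeps the sketch self-contained)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def uplift_m(t_myr, z_old_km, z_new_km, n_terms=200, n_z=2001):
    """Equations 4-7, transient end member: a lithosphere of thickness z_old
    with a linear geotherm is thinned to z_new at t = 0, and the replacing
    asthenosphere then cools conductively (conductive boundary L = z_old)."""
    z = np.linspace(0.0, z_old_km, n_z)
    geotherm = T_S + (T_A - T_S) * z / z_old_km          # t < 0 state, also v(z)
    T0 = np.where(z <= z_new_km, geotherm, T_A)          # Equation 4
    w0 = T0 - geotherm                                   # w(z, 0)
    L = z_old_km
    w = np.zeros_like(z)
    for n in range(1, n_terms + 1):
        b_n = 2.0 / L * _trapz(w0 * np.sin(n * np.pi * z / L), z)
        w += b_n * np.sin(n * np.pi * z / L) * np.exp(-(n * np.pi / L) ** 2 * KAPPA * t_myr)
    T = geotherm + w                                     # Equations 5-6
    return 1e3 * ALPHA * _trapz(T - geotherm, z)         # Equation 7, in meters

# Example: remove the lower 60 km of a 200-km lithosphere and track the decay.
for t in (0.0, 35.0, 70.0):
    print(f"{t:5.1f} m.y. after thinning: ~{uplift_m(t, 200.0, 140.0):.0f} m of uplift")
```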
We next calculated the depth of the lithosphere-asthenosphere boundary immediately after thinning and the amount of lithospheric thinning needed to produce ~400 m of modern uplift for the various scenarios (transient vs. permanent thinning and initial lithospheric thickness of 150 vs. 200 km) for thinning at 70 Ma (the arrival of the Farallon slab), 30 Ma (slab rollback/the ignimbrite flare-up), and 0 Ma (Figs. 7A-7B). All acceptable scenarios left most of the mantle lithosphere intact to later be sampled as magnesian xenoliths (Alibert, 1994; Lee et al., 2001). Transient thinning required 45-70 km of lithosphere to be removed; permanent thinning required 25-50 km (Fig. 7B). Thus, we suggest that the Colorado Plateau and the western Great Plains experienced similar amounts of lower-crustal hydration by fluids exsolved from the Farallon slab, but the lithosphere beneath the Colorado Plateau also thinned by a few tens of kilometers during the Cenozoic, causing an additional 400 m of uplift. Because the removal of mantle lithosphere by Rayleigh-Taylor instability, delamination, or similar mechanisms should create a number of small-scale convection cells, a more appropriate treatment of the problem might hold the lithosphere-asthenosphere boundary at a constant temperature for some time before allowing the replacing asthenosphere to begin to cool. The differential equations required to solve such a problem are the same as Equations 4-7. We conducted many trials with varying durations of what is effectively forced convection, and even a comparatively short duration of the constant-temperature-asthenosphere phase greatly protracted uplift and decreased the amount of thinning necessary. As such, the true amount of lithospheric thinning and the thickness of remaining lithosphere are likely closer to the permanent thinning case, <50 km. An additional set of calculations was performed (not shown) to determine the effect of a period of refrigeration of the overlying lithosphere. Specifically (e.g., as posited by Roy et al., 2009), a shallowly subducting slab would isolate the base of the lithosphere from asthenospheric convection and cause the lithosphere to cool, densify, and subside. We find that even 20 m.y. of refrigeration (a 150-km-deep lithosphere-asthenosphere boundary cooled from 1350 °C to 900 °C and held fixed for 20 m.y.) would only produce ~250 m of thermal subsidence. Since we also posit that the cooled lower portion of the lithosphere was removed, if thinning occurred after or as the slab was removed, the effect on surface topography would be inconsequential.

CONCLUSION

Cenozoic uplift of the largely undeformed Colorado Plateau and Great Plains was mainly achieved by hydration-induced retrogression of the lower crust by Farallon slab-derived fluids and accumulation of sediments during the Cretaceous and Early Tertiary. The 1.3 km of differential Cenozoic uplift across the Great Plains is due in part to an east-west gradient in the thickness of post-Jurassic sediments, which supports 0.4 km. A separate decrease in lower-crustal density, which can be explained by hydration due to mantle-derived fluids, caused the remaining 900 m. Of the 1.6 km of relative uplift of the Colorado Plateau, 250 and 950 m can be ascribed to sedimentation and lower-crustal hydration, respectively, and the final 400 m most plausibly resulted from Cenozoic removal of the lower 25-70 km of the mantle lithosphere.
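As a final arithmetic check on the uplift budget summarized in the conclusion, the snippet below simply sums the stated contributions; the numbers are the ones quoted in the text, and the dictionary layout is purely illustrative.

```python
# Cenozoic uplift budget, in km, as quoted in the conclusion above.
budget = {
    "Colorado Plateau": {"sediments": 0.25, "lower-crustal hydration": 0.95,
                         "lithospheric thinning": 0.40},
    "western Great Plains": {"sediments": 0.40, "lower-crustal hydration": 0.90},
}
for region, parts in budget.items():
    detail = ", ".join(f"{name} {value:.2f} km" for name, value in parts.items())
    print(f"{region}: {sum(parts.values()):.1f} km total ({detail})")
# Prints 1.6 km and 1.3 km, matching the relative uplift figures in the text.
```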
Presentations of Positive and Negative Images in A. M. Kollontai's Vasilisa Maligina: A Search for an Ideal Woman

This article is a contribution to the question of the possible relation between fiction and society, approached through a study of A. M. Kollontai's story Vasilisa Maligina. The aim of the study is to examine how far Kollontai's background, her purpose in writing, and the condition of the society in which she lived and wrote contribute to her concept of an ideal woman. By analyzing her story, it can be concluded that "Kollontai creates the new woman - her literary work (Vasilisa Maligina) reflects the creation."

Nina flaunts her physical beauty and femininity to attract people's attention, especially that of men. However, Nina's personality is not as impressive as her figure. Kollontai portrays her as a boastful, ignorant, and selfish person. She is also described as a weak and dependent person, unlike Vasya, who is strong and independent. Although Kollontai offers, through her stories, multiple examples of women from varying age groups and social strata, positive characterisations outweigh negative characterisations. Indeed, the two arrays of female characters, despite their apparent variety, merely offer the readers a simple black and white opposition. In creating these images, positive or negative, of her characters, there is no doubt that Kollontai bases her creations on three basic concepts of her own. It can be said that the first influential concept is communist ideology. Because of her efforts to emancipate women through communist values, Kollontai is well known as a Bolshevik feminist. Her efforts can be seen from the way she divides her characters into negatively characterised women and positively characterised women. It is obvious that Kollontai portrays communist women, a group of characters who revere the morality of the proletariat, as the representatives of positive women. In contrast, she portrays bourgeois women, a group of characters who highly praise bourgeois values, as the representatives of negative women. Since her positive women are communists, Kollontai also portrays her heroines as persons who put collective interests far above individual interests, while her negative women, or antiheroines, are shown to have a contrasting image. Giving her heroines this image comes from Kollontai's rejection of bourgeois morality, which demands all for the loved one instead of all for the collective. 4 Furthermore, in her stories some of Kollontai's positive and negative characters are shown as having jobs. What makes them negative or positive is the way they value work. Negative women value work as a means to get a brief moment of luxurious life and illusory happiness; positive women, in contrast, value their jobs as tools to change their fate and to shape their own destiny. Kollontai demonstrates the difference between Nina and Vasya in valuing a job. Vasya values work as her existence and a realisation of her independence. Because of that, she is totally absorbed in her work and gives it her first priority. In contrast, Nina values work just as a tool to sustain her life. That is why she happily quits her job after getting an offer from Vladimir to provide her with a luxurious life as long as she becomes his mistress. Giving her positive women this image is influenced by Kollontai's view that change in the economic role of a woman and her independent involvement in production are the only ways to make a woman valued as a person with individual qualities.
5 Kollontai's giving an image to her characters is also based on the type of job performed by the character. This can be seen in comparing the job of Mrs Feodoseev and Vasya. Mrs Feodoseev spends her time from morning to evening only in the interests of her family, such as providing proper food, while Vasya dedicates her time to the collective interest. Kollontai gives Mrs Feodoseev, as a representative of housewives, a negative image, while Vasya is given a positive image. Kollontai's decision in doing this is based on her view which impartially judges all types of job as long as it has value to the state and the national economy. 6 Relating to women's roles in society and private life, it is evident that Kollontai wants to change society's perceptions and expectations of women. Through her positive characters, Kollontai expresses the idea of seeing a woman as an individual character who has equal rights and responsibilities in social and private life. To support her idea, Kollontai shows women workers who are equal to their male contemporaries in selfconfidence and determination. This can be seen in the efficiency and preoccupation in doing their jobs of the communist girls Vasya meets in the local party office. Kollontai also shows that domestic work, which is assumed to be a woman's main responsibility by old society's values, is not productive work since it does not serve the community, as is shown by Mrs Feodoseev who is just busy with her domestic work. Moreover, doing only domestic work is too trivial for a woman who has equal ability with a man, as shown by Vasya herself and Dora, who is working in the Higher Education Commission. Because of that, Kollontai also gives her heroines the image of persons who try to eliminate love and passion so they can fulfil their main purpose in this life; performing their work and party duties. An extreme example can be seen in Vasya's decision to leave Vladimir. Interestingly, Vasya, who is the most perfect type of Kollontai's new woman, has the same point of view as Kollontai's. Kollontai's view can be seen from her confession: Now I had the opportunity to devote myself completely to my aims: to the Russian revolutionary movement and to the working-class movement of the whole word. Love, marriage, family, all were secondary, transient matters. 7 Communist values also influence Kollontai in giving different appearances of her positively characterised and negatively characterised women. It seems clear that Kollontai tries to give an unfeminine image to her positive characters as can be seen in the way she portrays Vasilisa. Vasilisa, as a representative of communist women, is described as a thin, undernourished-looking girl with a flat chest that makes her look like a boy. To strengthen the unfeminine image of her appearance Kollontai gives the masculine nickname Vasya to her heroine, a similar name to an errand boy met by Vasya when she visited her husband in the countryside. According to Damousi, iconography has long been viewed as a site in the 'struggle for meaning' and using the human body as the presentation of ideas, as Kollontai did, is not unusual in communist propaganda. 8 Waters has observed that "the Bolsheviks understood the power of symbol to convey and reinforce political messages". 
9 Giving her positive woman this image is based on her desire that a woman should have a place in human society not because of her specific femininity, but because of her personality as a human being and citizen, and because of the worth of the useful mission she accomplishes. 10 Because of that, instead of physical beauty, Kollontai stresses the personal worth of her positive characters. In contrast, Kollontai gives a feminine image, like Nina's, to her negative characters. These women are presented as persons who are interested in fashions, in new hair-styles, or in cosmetics. They usually appear in beautiful dresses. Unlike the positive women, who have to rely on their natural beauty instead of social charm, these negative women know how to express their sensuality. In other words, Kollontai presumes that modesty, severity, and simplicity constitute ideal beauty and suit communist values, while over-dressing and using make-up are the values of bourgeois femininity and symbols of women's subordination to men. Because of that, Kollontai portrays their beauty as a symbol of forbidden fruit, and their sensuality is shown as a destructive thing. The negative women are shown as women who use their femininity as a tool to seduce men. An example can be seen in the action of the young prostitute who seduces the 'party man' and 'commissar' with her boudoir-beauty. Furthermore, their beautiful appearances go with negative personalities. These women are presented as selfish, possessive, and weak. They are also characterised as persons who have no solidarity with their peers. For example, Nina Konstantinovna's personality is shown as being not as impressive as her figure, since she does not care about other people's problems. She also feels that she deserves to be the focus of attention. Because of her communist principles, Kollontai also gives her positive women an image of valuing friendship between man and woman as long as the friendship is based on equality, mutual respect, and complete freedom. Kollontai's positive women believe that love and passion are not the only reasons for building a relationship between a man and a woman. An illustration can be seen in the long friendship between Vasya and Mikhailo Pavlovich, which is based on the similarity of their ways of life. The friendship between Vasya and Mikhailo is a reflection of Kollontai's view that proletarian women consider men their comrades in fighting for a better future, since both are enslaved by the same social condition, capitalism. 11 This attitude differs from that of bourgeois women, who see men as the enemy and the oppressor. 12 To strengthen her idea, Kollontai contrasts the friendship of Vasya and Mikhailo with the friendship between Nina and her male friends. Kollontai shows this relationship as a kind of business transaction: supply and demand. The bourgeois men, who see a woman as a sex object, "buy" women's attention with their power and money; in return, bourgeois women "sell" their freedom and dignity to gain moral and material support from men. That is why Nina, a bourgeois woman, is so unhappy with her friendships with her male friends. The second basic concept which influences Kollontai in giving particular images to the characters in her stories is her concern about the new type of woman, the man-woman relationship and family matters. Who, then, are these new women?
They are not the pure, "nice" girl whose romance culminates in a highly successful marriage, they are not wives who suffer from the infidelities of their husbands, or who themselves have committed adultery. Nor are they old maids who bemoan the unhappy love of their youth, just as little as they are "priestesses of love", the victims of wretched living conditions or of their own depraved natures. No, it is a wholly new "fifth"type of heroine, hitherto unknown, heroines with independent demands on life, heroines who assert their personality, heroines who protest against the universal servitude of woman in the State, the family, society, who fight for their rights as representatives of their sex. 13 For Kollontai, the new type of woman is a woman who is pure and virgin mentally. The pure and virgin woman is a woman who fight for her rights in this life. That is why Kollontai gives Nina and Vasya a different concept of virginity. Vasilia, who does not keep herself virgin for her husband, believes that a pure heart is far more worthy than a pure body. It can be seen from her praise of Zinka. Zinka was a whore, but when the revolution broke out she started working for the political police with passionate enthusiasm, taking on the most perilous and punishing assignments. Even when she was eventually shot by the Whites, she managed to shout: 'Long live the Soviet Government! Long live the revolution!'. Because of that, in Vasilia's opinion, a person like Zinka is far better than society ladies who overvalue their physical virginity. On the contrary, Nina values physical virginity more highly than a pure heart, as seen from her behaviour using her beauty to attract many men in order to get a luxurious life. It seems clear that for Nina, it is not wrong playing games with many men as long as she can keep her virginity for the man she loves. Because of that she feels that she deserves to get Vladimir's love since she can give him her virginity. Since her stories can be assumed to be propaganda for her ideas, Kollontai portrays her protagonist, who also acts as her heroine, with the images which are suited to her ideal woman, while her antagonist, who acts as her antiheroine, is portrayed with the images which are contrary to those of her ideal woman. In her article, Tezisy o Kommunisticheskoi Morali v Oblasti Brachnykh Otnoshenii (Theses on Communist Morality in the Sphere of Marital Relations) in Kommunistka, nos 12-13, Kollontai proposes the new concept of the relation between man and woman. According to Kollontai, a couple, who are liberated individuals, should be sexually active and living in a comradely marital union which is not based on economic calculations. The private family's values, which are based on bourgeois, jealousy, possessiveness, and narrow and exclusive concern for one's own child, should be replaced by the higher values of collective love. Moreover, the couple ,who now are equal workers, can stay together as long as their mutual love remains. 14 This concept becomes the basis of her heroines' attitudes towards their partners. In their personal relationships with men, Kollontai's positively characterises women tend to maintain a kind of relatioship based on equality, mutual recognition of the fact that one does not own the heart and soul of the other, mutual respect for the rights of others, and mutual caring. Moreover, they refuse the old values which give women subordinate roles as shadows of their husbands, the supplement, the "sweeteners of their homes". 
In fact, to some extent, instead of making them subordinates, Kollontai gives her heroines an image of having more character and higher status than their partners of life, as can be seen in Vasya and Vladimir's relationship. Moreover, for positively characterised women, to be treated as free individuals is more important than anything else. The positive women can bear their husbands' inability to provide them with material needs, and demonstrate a lack of attention of an external kind since they are also working. They even can forgive their husband's infidelity as shown by Vasya's forgiveness. However, they will never tolerate if their husbands cease to respect their existence as free and equal partners. An example of it can be seen from her reason for leaving Vladimir. Vasya decides to leave Vladimir not because of his betrayal, but because of the breaking of their equal friendship. In contrast, Kollontai gives her negatively characterised women an image of valuing the ideas of possessiveness and unequality in man-woman relationships in every way and every sphere, including the sexual sphere. This can be seen in Nina's case. Because of the need of love and security, Nina lets her freedom be tied, and proudly chooses her role as Vladimir's passive, subordinated woman. Furthermore, it can be observed that Kollontai tries to expose the ambiguity of men's characters. Through her story Kollontai shows that, partly, men need independent women whom they can rely on, as it can be seen from the attitudes of Vladimir, Mr. Feodoseev to his partner. However, partly, men also enjoy their privileges, which are given by patriarchal culture, to be dominant and superior. Because of that, they still insist that their partners serve their needs, especially in private matters. It can be assumed that Kollontai exposes the ambiguity of men's characters in her stories in order to strengthen her idea that it is necessary to change men's behaviour to "achieve relationships based on the unfamiliar ideas of complete freedom, equality, and genuine friendship". 15 Through her story Kollontai tries to show that a traditional family unit which consists of husband, wife and children is no longer appropriate to society since it has failed in fulfilling its main function as a good living environment in which to raise a strong, free, and independent person. This is shown in the way Kollontai describes Vasya, before and after her marriage. Obviously, Kollontai gives, in the unmarried Vasya, the images of a person who is healthy, active, firm, and happy. These images are in contrast with the images given to Vasya after she marries. Vasya is then shown as an unhappy woman who cannot be active very much in her work because of her sickness and marriage problems. She is also shown as not as firm and self-confident as she was before her marriage as indicated by her doubts about the idea of an equal and free relationship between man and woman. The married Vasya is even shown trying to follow Maria Semenovna's advice on how to be the good, traditional wife whose functions are to be cheerful and to brighten up the home. Fortunately, because she is a representative of Kollontai's own positive character, Kollontai does not let her be crushed. When, by accident, Vasya finds Nina's letter she becomes herself again. A second fact can be seen from Vasya's married life. Kollontai tries to demonstrate that Vasya's marriage is working well as long as she is not living with her husband. 
Crises occur whenever they are living together, because Vasya has to face conflicts between work and love. These conflicts make Vasya unable to fulfil her functions as a woman and a Party member. This makes her life miserable, since she values her work as her existence, her dignity, and the source of her happiness. Kollontai's rejection of the traditional family relationship can also be seen from the way Kollontai portrays other women. In her story Kollontai presents married women as characters who cannot express themselves freely since they depend on their husbands emotionally and economically. This can be seen in Nina's case, which shows how an intelligent woman worker loses her existence as a free individual just because she maintains a relationship with a man. In contrast, Kollontai portrays unmarried women as characters who have the right and responsibility to choose their own fate. Moreover, unlike the married woman, who is shown as a passive, suffering person, Kollontai describes the unmarried woman as a person who is actively enjoying life. Kollontai even demonstrates how an unhappy, weak woman can become a strong, active, and happy woman after she breaks her relationship with a man. This can be seen from the experience of Lisa Sorokina, the local women's union organizer. Relating to family life, Kollontai presumes that the traditional family, in which the man is superior and has everything while the woman is subordinate and has nothing, is ceasing to be necessary either to its members or to the nation as a whole. She adds that the reason for this is that the domestic economy is no longer profitable. The family also distracts the worker from more useful and productive labour. 16 This presumption becomes the basis for describing family life in her stories. Obviously, Kollontai shows the failure of the traditional family, as can be seen in the failure of the Feodoseev family. Through the Feodoseevs' case Kollontai tries to illustrate her view that an indissoluble marriage based on a church wedding and the servitude of women, as symbolised by the relationship between Mr and Mrs Feodoseev, should be replaced by a free union of two equal members of the workers' state united by love and mutual respect, as symbolised by the relationship between Mr Feodoseev and Dora Abramovna, one of Kollontai's positive women. Furthermore, Kollontai also shows the severance of the relationship among family members, as can be seen from Vasya's decision to leave her family. By showing a positive character with this image, it can be assumed that Kollontai wants to express the idea that a woman can achieve full freedom and independence by breaking her relationship with a family to which she is tied only by blood, not by a similar purpose in life. To strengthen this suggestion, Kollontai gives her negative women the image of persons who are dependent on their blood family, as can be seen in Nina's case. Like Vasya's family, Nina's family is also bourgeois, but because she is more concerned with and loyal to the blood relationship she remains a bourgeois, unlike Vasya, who becomes a communist. By comparing these two characters' relationships with their own families, Kollontai wants to point out why a person like Nina, who is intellectual enough to adjust to a new idea (communism), cannot emancipate herself.
Kollontai also gives her heroines the image of persons who believe that communal life and strong solidarity among women can replace an unharmonious family relationship, as Vasya finds her happiness and support among her communal friends. The third concept which becomes a basis for creating images for the characters in her stories is Kollontai's argumentation, in the name of the Workers' Opposition, against NEP (the New Economic Policy). NEP was introduced by Lenin at the Tenth Party Congress. Moreover, Kollontai presents NEP women, the wives or daughters of NEPmen, as the other representatives of negatively characterised women. These NEPwomen, called the bourgeois ladies during the period of NEP (roughly from 1921 to 1928), are portrayed as unemancipated women who enjoy their subordination and take it as a "privilege card" for getting a life of luxury without working at any job. Letting themselves be subordinated and allowing their fragile femininity to be trampled on is the price that they must pay. Kollontai also shows the NEPwomen as the kind of persons who use their beauty to smooth the path of their husbands' shady transactions. In contrast, Kollontai gives her positive women the image of a group of characters who are never tempted by NEP people's blandishments. An example can be seen in Vasya's rejection when she realises that her husband is becoming a NEPman. Conclusion In analysing A. M. Kollontai's work, Vasilisa Maligina, there are three major points to consider: Firstly, Kollontai represents women in recognisable settings and as believable characters. This representation is based on the fact that she portrays women of various ages, social backgrounds, and occupations, which counteracts stereotypes that present women as one-dimensional creations. Despite this rejection of stereotypes, the author chooses a strong working communist woman as a representative of the ideal woman. Her protagonist, the manifestation of the "Soviet superwoman", is capable of surviving the hardship of work's demands. From the way Kollontai describes her two characters, it can be assumed that Vasilisa Maligina is the main heroine of this story, while Nina Konstantinovna is the main antiheroine, whose character is in contrast with Vasya's. It seems clear that Kollontai creates Vasilisa Maligina as a symbol of ideal Soviet women. Secondly, her communist ideology, her concerns about women's roles and family matters, and her hostility to NEP's ideas are the three concepts which have become her reasons for assigning certain images to the women characters in her story. Thirdly, as a feminist Bolshevik theorist, Kollontai makes serious attempts to incorporate feminist ideas into her fiction. It seems clear that Kollontai, who is interested in the "woman question", tries to depict women's problems in general. Kollontai concentrates on the theme of emancipation in her story. It seems clear that Kollontai tries to diminish the image of domesticity and motherhood in her positive female characters in order to give a new concept of women's position in society. In the Autobiography of a Sexually Emancipated Communist Woman Kollontai writes that "life created the new woman - literature reflects them"; but in her case, based on the way she gives such images to her female characters, it is more appropriate to say "Kollontai creates the new woman - her literary work reflects them".
2017-09-07T05:43:47.118Z
2001-06-01T00:00:00.000
{ "year": 2001, "sha1": "f6a39be45730d4f0e46758ced0f509d77a3445e2", "oa_license": "CCBY", "oa_url": "http://kata.petra.ac.id/index.php/ing/article/download/15468/15460", "oa_status": "GOLD", "pdf_src": "Neliti", "pdf_hash": "899ad4a9ea52034dcaa1367368701315fca33fb8", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Art" ] }
42844614
pes2o/s2orc
v3-fos-license
Relations of Clay Fraction Mineralogy, Structure and Water Retention in Oxidic Latosols (Oxisols) from the Brazilian Cerrado Biome Introduction In Brazil, Latosols are by far the main class of soils, mainly when one considers the soils potentially used for agricultural purposes. They cover approximately 50% of the Cerrado Biome, totaling about 200 million hectares, in [46,52]. The clay mineralogy of these soils is very simple, basically composed of 1:1 clay minerals, mainly kaolinite, and varying proportions of iron- and aluminum-oxides (in this chapter this general term includes oxides, hydroxides and oxi-hydroxides). As the oxide content increases, it tends to be associated with the formation of granular structure, composed of very small and resistant micro-aggregates, occurring in both superficial and sub-superficial soil horizons. To explain the formation of micro-aggregates in these highly weathered Latosols, in [30,31] it is highlighted that iron- and, mainly, aluminum-oxides act as aggregating agents of mineral particles by changing the arrangement of their components in relation to the plasma, resulting in granular aggregates with a diameter < 300 µm, in an agglutinated pattern, having a high pore volume, which is in turn organized into interconnected cavities, in [100]. Consequently, in these soils, the pore distribution by size is predominantly characterized by two distinct classes of pores: the first is related to very large or structural pores (among micro-aggregates), which promote rapid internal drainage of the soil but are very susceptible to alteration; and the second is related to very small pores or textural pores (inside micro-aggregates), in which water is retained with very high energy, in [10,21,63]. This segregation of contrasting pores is typical of oxidic Latosols from this region, in which the increase in clay content is associated with higher total porosity and lower bulk density, in [31,37,46,72,88,89]. Thus, the content and nature of the clay fraction are very important in the hydro-physical behavior of these highly weathered soils. Under natural conditions, these soils have a high total pore volume, one part being composed of drainable pores (approximately 2/3 of the total pore volume; those with diameters > 145 µm), which are of fundamental importance for the soil's high permeability. However, these soils also have a high volume of pores with very small diameters (< 2.9 µm; approximately 1/3 of the total pore volume). Therefore, in order to remove the residual water content, a considerable amount of energy is required, in [63]. Small amounts of water have been observed to be adsorbed on the soil matrix under 300.000 kPa pressure, in [10].
In the current stage of evolution of the agricultural systems in Brazil, in which yield increments are sought without increasing the productive area, it is necessary to understand in detail the hydro-physical behavior of these Latosols, taking into consideration environmental sustainability, in which underground water recharge is fundamental for the maintenance of the most varied types of edaphic life. It should be mentioned that irrigated agriculture in this region is undergoing accelerated growth and it is not yet clear whether the existing water resources are sufficient to support this expansion, in [78]. These soils are usually very deep, providing large reservoirs of water for crops, since there are no chemical constraints for the expansion of the crop root system. Even in the sub-superficial soil horizons, the residual water plays an important role in the maintenance of adequate thermal and physical conditions, which minimizes root death during the pronounced dry season that is typical of this region. Cerrado biome, mineralogy and structure of Oxidic Latosols The Brazilian Cerrado, which is the second largest biome in the country, is located in the central part of South America, occupying large portions between parallels 3° and 24° south and between meridians 41° and 63° west. In Brazil, the biome occupies approximately 23.92% of the territory, covering several states (Mato Grosso, Mato Grosso do Sul, Rondônia, Tocantins, Minas Gerais, Bahia, Maranhão, Piauí, São Paulo, and particularly Goiás and the Federal District, where this vegetation covers the landscape in a relatively more continuous way; Figure 1), but there are still remaining "islands" of this biome in the Pará, Roraima and Amapá states, in [5]. The soils that support this biome are hydrologically important since the major basins (Amazônica, Platina and Sanfranciscana) have many of their springs in this region, in [29,47]. The Cerrado can be defined as a formation composed of tropical vegetation, represented mainly by grasses, with sparse trees and shrubs; in other words, considering the floristic and physiognomic aspects of the vegetation, it constitutes a unique biome, in [29], also called the neotropical savanna. The soils are represented by Latosols (Oxisols) (50%), Argisols (Ultisols) (15%), Quartzarenic Neosols (Entisols) (15%), Cambisols (Inceptisols) (10%), Plinthosols (Oxisols having drainage restrictions) (6%) and other soils (4%), in [78]. The Latosols and Quartzarenic Neosols are located in predominantly gentle relief associated with a very sparse hydrography. The Latosols are considered the oldest soils on earth. They range from deep to very deep, are non-hydromorphic, and show great textural variation, with clay content ranging from 150 g kg-1 to more than 800 g kg-1 [88,89]; they exhibit low natural fertility due to the strong weathering-leaching, which contrasts with their excellent physical conditions, favored by the strong and very small granular structure. Latosols also tend to present high acidity (pH 4.0 to 5.5), low cation exchange capacity, high anion adsorption capacity (especially for phosphate and heavy metals) and low levels of P (phosphorus) available to plants [27,46,77]. The beginning of the weathering of Latosols in this region dates from the Cretaceous and Tertiary, in [54]. They were formed under conditions of significant weathering-leaching, which contributed to their advanced degree of pedogenic development, resulting in a very simple mineralogy, in [76].
Their clay mineralogy consists basically of 1:1 clay minerals, mainly kaolinite (Si2Al2O5 (OH)4), iron oxides (hematite (Fe2O3) and goethite (FeOOH)) and aluminum oxides (gibbsite (Al(OH)3)) in different proportions, as well as quartz and other resistant minerals, in [16,38,39,44,46,54,65,73,74,75,76,77,83]. There are also registers in the clay fraction of some Latosols formed from rocks richer in iron, of maghemite (Fe2O3) as well as magnetite (Fe3O4) and ilmenite (FeTiO3) in the coarse fraction [93,97]. The identification of hydroxi-interlayered vermiculite in the clay fraction of A and B horizons of some Latosols has been also registered, in [71]. Knowledge of Latosol genesis facilitates the identification of their corresponding classes in international soil classification systems: the Oxisols in Soil Taxonomy, in [92] and the Ferralsols in World Reference Base [43]. As peculiar characteristics of Latosols can be cited: the presence of latossolic B horizon (Bw = intense weathering), minimal differentiation between A and B horizons, color varying from reddish to yellowish, depending on the parent material and the factors and processes of soil formation, in [15,54]. They exhibit weak macrostructure and strong microstructure [28,30,31], resulting in 50-300 µm size micro-aggregates, in [100]. These soils constitute the largest class in terms of territorial expression having high potential for agriculture, forestry and livestock purposes, in [46]. The pelitic rocks of the Bambuí Group which occur in Minas Gerais, Bahia and Distrito Federal states are important parent materials of many Latosols of the Cerrado Biome. These rocks are fine grained, resulting in clayey or very clayey soils. In these soils, the kaolinite is the mineral with higher expression in the clay fraction, in [66] and its presence in combination with low levels of iron and aluminum oxides favors the hard consistency when the soil dries, and higher bulk density, which is related to the blocky macrostructure, function of the face-to-face arrangement of the kaolinite plates in [30,31]. Ferreira et al.,in [30] relating the mineralogy and structure of Latosols in southeastern Brazil, stratified them into kaolinitic or gibbsitic soils: in kaolinitic Latosols the micromorphological evaluation showed that the distribution of quartz grains in relation to the plasma, is porphyric. In other words, the grains are enveloped in a dense and continuous plasma, with little tendency to develop the microstructure. This phenomenon is associated with the blocky structure, so that the soils are more compact, less permeable, with lower aggregate stability in water and have a greater susceptibility to sheet erosion. On the other hand, the gibbsitic Latosols show a more uniform distribution of the minerals in relation to the plasma, resulting in smaller granular and resistant aggregates (< 300 µm diameter), in an agglutinated pattern, influencing higher void ratio, which are in turn arranged into interconnected cavities, in [100], showing a greater susceptibility to gully erosion. 
Consequently, in these soils the pore distribution by size is characterized by presenting predominantly two distinct classes of pores: the first one is related to very large or structural pores (among micro-aggregates), which promote rapid internal drainage of the soil being, however, very susceptible to alteration; and the second one is related to the very small pores formed among the mineral particles (inside micro-aggregates), in which water is retained with very high energy, characterizing it as hygroscopic water, in [19,21,63]. This segregation of contrasting pores is typical of the oxidic Latosols from this region, in [73]. Usually, increasing the clay content of these oxidic Latosols results in increased total porosity and lower bulk density, in [30,31,89]. Based on this knowledge it can be understood that in very weathered tropical soils the micro-aggregates are very resistant and play a prominent role in the formulation of the soil aggregation hierarchy hypothesis. An indication of this resistance is the difficulty of evaluating the clay content in the field, requiring more time for reliable estimates, in [13]. This micro-aggregates resistance also manifests itself in the laboratory analysis of particle size distribution, mainly during the chemical and mechanical soil dispersion, in [38,39,63,99]. Bimodal pore distribution and water retention of oxidic Latosols The development of a specific type of soil structure is usually a consequence of the parent material and soil formation processes and factors, and these will condition many of the physical properties of soil. Marshall, in [55] stated the soil structure is defined as the arrangement of soil particles and the associated voids, including shape, size and arrangement of the aggregates formed by the primary particles (sand, silt and clay) which are grouped into units defined by limits. Marcos, in [53] cited that the morphological evaluation of the soil structure is qualitative, while the physical evaluation is functional. It is known that soil macrostructure is strongly affected by climatic changes, biological activity, as well as the land use and soil tillage, being vulnerable to mechanical and physicalchemical forces, by according to Hillel, in [41]. In another words, composite structural units or aggregates are formed by aggregation of primary mineral particles in association with organic particles, especially the humidified ones, in [91], originating the soil structure, which influences the porosity. Thus, the aggregates have their own genesis reflected in their size, shape, composition and stability, in [9,98]. According to with this soil structure model, there is a strong influence of the mineralogical components of the clay fraction upon formation of a particular structure type. It is reported, for instance, that oxides (mainly gibbsite, followed by iron oxides-goethite and hematite) jointly with organic matter, in this order of importance, tend to disorganize the particles at microscopic scale, in [30,77]. Therefore, the higher content of these components has a greater degree of disorganization and, consequently, the structure tends to become of the granular type. So, gibbsite, iron oxides and organic matter are precursors and maintainers of the granular structure, which is typical of oxidic Latosols in the Cerrado Biome, and it results in high permeability values, in [30,31]. 
In Latosols, the granular structure type is responsible for lower bulk density and higher porosity values compared to the blocky structure (kaolinitic Latosols), in [10,22]. The development of structural pores, or among-micro-aggregate pores (> 50 µm diameter), is more expressive in oxidic Latosols, followed by textural pores, or inside-micro-aggregate pores (< 50 µm diameter), in [14,50,63]. In oxidic Latosols, the structural pores exhibit a relationship with clay content which is reflected in their hydro-physical attributes such as water retention. This feature can be considered a special characteristic of oxidic Latosols, in [31,86-89]. Therefore, the presence of this type of structure formed by stable micro-aggregates, especially in the Bw horizons of oxidic Latosols, consequently determines the dominance of structural porosity over textural porosity, giving these soils excellent permeability and moderate to low water retention, in [14]. The voids of the soil are formed by various processes that result in different pore shapes and sizes that affect the soil functions. For instance, the transport of water and gases occurs through the interconnected pores. The soil structure is considered to have various hierarchical levels, namely: a) groups of primary particles which comprise micro-aggregates; b) groups of micro-aggregates comprising aggregates; and c) groups of aggregates comprising much larger aggregates or soil clumps, in [19]. The pore distribution by size affects the soil hydro-physical dynamics. In the literature there are several classification schemes for pore diameter, highlighting the most simplified ones that separate two classes of pores: macro-pores, when the pores have a diameter > 50 µm, and micro-pores, when the pores have a diameter < 50 µm, as proposed by Kiehl, in [49] and Richards, in [79]. Pores of intermediate size, meso-pores, have lower expression in the Latosols from the Cerrado Biome, in [10]. There are equations that aim to quantify the pore size. Bouma, in [6] proposed the following equation: D = 4σ cos θ/Ψm, where D = pore diameter (mm), σ = surface tension of water (73.43 mN m-1 at 20 °C), θ = contact angle between the meniscus and the wall of the capillary tube (assumed to be 0), and Ψm = matric potential (kPa). However, there are simple and straightforward methods to determine the pore size distribution, for example, using mathematical models to describe the water retention curve, because it is known that the shape and slope of the curve correspond to the homogeneity of the distribution of pore diameters, in [2,19,36]. Thus, the bimodal pore distribution of oxidic Latosols can be represented by means of the water retention curve, in [10]. When using the shape of the curve, the first inflection point occurs at low matric potentials (between 1 and 3 kPa, in absolute value), identifying structural pores, while the textural pores are represented by the second inflection point, which occurs at extremely high matric potentials (between 10.000 and 20.000 kPa). Between these maximum points it can be observed that the asymptotes, related to the presence of intermediate pores, have low expression in oxidic Latosols, in [10,21,22,75]. In soils of temperate regions, the bimodality has been observed within the range of the standard curve of water retention, i.e. in the range from 1 to 1.500 kPa, in [21,22], due to the more uniform pore distribution compared to soils of tropical regions.
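As a quick numerical check of the capillary relation just cited, the short sketch below (Python is used here purely for illustration; the function name and the rounded surface-tension value of 0.0728 N m-1 are our assumptions, not the chapter's) converts a matric potential in kPa into the equivalent pore diameter and approximately reproduces the structural and textural pore limits mentioned earlier for these Latosols.

```python
import math

def equivalent_pore_diameter_um(psi_kpa, sigma_n_per_m=0.0728, contact_angle_deg=0.0):
    """Equivalent cylindrical pore diameter (micrometres) drained at |psi| (kPa),
    from the capillary relation D = 4*sigma*cos(theta)/psi."""
    psi_pa = abs(psi_kpa) * 1000.0                                   # kPa -> Pa
    d_m = 4.0 * sigma_n_per_m * math.cos(math.radians(contact_angle_deg)) / psi_pa
    return d_m * 1.0e6                                               # m -> micrometres

# Roughly reproduces the pore limits cited for oxidic Latosols:
print(equivalent_pore_diameter_um(2))     # ~146 um at 2 kPa (structural pores)
print(equivalent_pore_diameter_um(100))   # ~2.9 um at 100 kPa (textural pores)
```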
It is noteworthy to remember that the soil water retention depends on pore distribution, and this is influenced by various factors such as structure, particle size distribution, organic matter, clay mineralogy, as well as biological activity. There are two possible reasons for the influence of mineralogy on the soil water retention: a) specific surface area; and b) presence of electrical charge of clay minerals. The larger the specific surface area and the higher the electrical charge is, the more water can be bound to the clay minerals, in [62,34]. Thus, there is a substantial process of water being adsorbed on the surface of clay minerals by electrostatic forces and, hence, the water retention. Gaiser et al. [35] observed significant differences in soil water retention with different mineralogy, noting that soils with low activity clays (1:1 clay minerals and iron-and aluminum-oxides) retain less water when compared to soils that have high activity clays (2:1 clay minerals), using pedotransfer functions. Several studies have indicated a strong influence of clay fraction on water retention in Latosols, in [1,4,10,11,70,73,89]. A few authors claim that clayey Latosols having oxidic mineralogy favor higher water content and more gradual decrease of soil water content with increasing matric potential (in absolute value). The study of water retention developed by van den Berg et al., in [96] in Latosols from different regions showed that the increased release of water occurs at low potentials (between 5 and 10 kPa) similar to what happens with very sandy soils. The spatial variability of water retention in clayey Latosols was studied by Cichota & van Lier, in [12]. These authors observed that the water retained at matric potentials ranging from 1 to 100 kPa is not strictly related to the content of clay, which confirms the theory of Raws et al., in [70] that at low matric potentials the retention curve is directly influenced by structure stability and consequently by the formation of pores in addition to the indirect effects of organic matter. Many advances have been made in order to better characterize soil water retention. More sophisticated devices as the WP4-T, in [18] should be highlighted, which allows the quantification of the residual water retained at high matric potentials. The residual water retained in the textural pores of oxidic Latosols, although considered unavailable to crops, in [50,80], may reflect significant water content (up to 0.25 g g -1 ) in more clayey soils, in [10]. So, it becomes of great interest in studies involving regulation of microbial and biochemical processes in the soil, in [60], re-induction of desiccation tolerance of germinating seeds and seedlings when subjected to high matric potentials (Ψm > 1.500 kPa), in [81] and it can act as a lubricant between aggregates, when the soil undergoes external pressure during mechanized operations in [23]. Modeling the water retention curve of oxidic Latosols Water retention curve has been used to describe the dynamics of the soil water, in [20,36]. This curve graphically represents the relationship between the energy of water retention (matric potential, in logarithmic scale) and water content, which is dependent on the intrinsic characteristics of each soil, the result of joint action of soil attributes such as texture, structure, mineralogy and organic carbon, in [4,19,37,40]. Several types of adjustments to the water retention curve have been used, in [25,36] for describing the soil hydro-physical performance. 
However, in order to identify the bimodal distribution of pores in oxidic Latosols the double van Genuchten model was recently proposed by Carducci et al., in [10]. Based upon the shape of the curve, the first inflection point usually occurs at low matric potentials, representing the structural pores, while the textural pores are represented by the second inflection point that occurs at higher matric potentials. For soils from temperate regions, the bimodality of pore distribution has been observed within the range of the standard curve of soil water retention, in other words, in the range from 1 to 1.500 kPa (in absolute value), in [22] because there is a more uniform distribution of pores, when compared to soils from tropical regions. This mathematical model allows to identify, with high predictive power, the bimodal density function for the pore size distribution of tropical soils in a more superior range than to the one of the standard curve: 1 < Ψm < 300.000 kPa). One purpose of science is to find, describe and predict the possible relationships between events occurring in the environment. A common practice is to develop models that relate these events. For this purpose, statistical modeling is widely used, mainly by the use of linear and nonlinear regression models, in [56]. The two classes of regression models differ mainly in aspects related to their application and the characteristics linked to the mathematical form. The choice of which model to consider in fitting a certain set of data can be made intuitively, or through a graphical which expresses the function of the variables or prior knowledge of the phenomenon in question. The linear models are widely used for presenting analytical solution for estimating parameters and statistical properties. The interpretation of these parameters is purely mathematical, based on rate of variation of the dependent variable in relation to the independent variables, in [94]. Furthermore, the use of a linear model for predicting values outside the range of observed values of independent variables is not advisable. Although the linear model is very flexible, since many models can be formulated by the combination of independent variables, in [26], there are several types of models which are based on theoretical considerations inherent to the phenomenon which one has interest in knowing, i.e., the called mechanistic models, in [56,84]. Generally in these models the parameters have practical interpretation and the prediction of values is allowed, since when considering the mechanistic model the restrictions which ensure the model utility are imposed, in [3]. A model is considered nonlinear when the mathematical expectation of a dependent variable "Y" cannot be written as a linear function of parameters in a regression model. Historically, nonlinear regression models date from early 1920 ' s, in [33]. However, the application and a detailed investigation of these models had to wait for the advancements allowed by computational calculations after 1970, in [24]. The rise up of nonlinear models often accompanies the forecasts involving physical and/or biological dynamics about the phenomenon under study in [102]. Such expectations are based upon models in which the parameters have practical significance in describing the phenomenon that is observed. The function of statistics in this scenario is to evaluate, select, and provide models and tools for better understanding of these phenomena. 
An overview of a nonlinear model considers a set of p columns of a matrix X and a vector of parameters θ = (θ1,...,θk)ᵀ such that the mean of a response Y is given by:
E(Y) = ƒ(X, θ)     (1)
where ƒ is the mean (expectation) function of Y. Unlike linear models, the number of columns of the matrix X does not necessarily need to be equal to the number of parameters in the vector θ. Many of the functions impose restrictions on the parameters (e.g., θi > 0, i = 1,...,k), owing both to their practical interpretation and to the compatibility of the mathematical relationships. The variance of Y, in turn, is given by:
Var(Y) = σ²     (2)
The equations above, together with the assumption of independence between observations, define the classic nonlinear model. The only difference between the two classes of models is the form of the expectation function. The function is nonlinear with respect to the parameters and, therefore, many parallels can be drawn regarding the procedures for parameter estimation and statistical inference. The fitting of nonlinear models can be obtained by minimizing the residual sum of squares, RSS(θ), where:
RSS(θ) = Σi [yi - ƒ(xi; θ)]²     (3)
over all values in the parameter space of θ ∈ Θ. For linear models there is an analytical solution for the estimate of θ that minimizes RSS(θ). For nonlinear models the search for the minimum point of equation (3) is usually a problem requiring a numerical solution. Such a solution uses a linear approximation of the nonlinear function that converges to the minimum point over successive iterations, in [48]. This procedure, as expected, also provides approximate estimates for standard errors and hypothesis tests, and the quality of this approximation depends on how strong the nonlinearity of the model is. The Taylor series approximation of the expectation function around a value θ*, expanded to the second-order term, can be written as:
ƒ(x, θ) ≈ ƒ(x, θ*) + F(θ*)(θ - θ*) + ½ (θ - θ*)ᵀ H(θ*)(θ - θ*)     (4)
where F(θ*) and H(θ*) are the score matrix and the Hessian array, respectively. The j-th column of the score matrix is given by ∂ƒ(x, θ)/∂θj and the jl-th element of the Hessian matrix is given by ∂²ƒ(x, θ)/∂θj∂θl, both evaluated at θ = θ*. Omitting the second-order term of the expansion (4), we can rewrite (3) as:
RSS(θ) ≈ Σi [êi* - Fi(θ*)(θ - θ*)]²     (5)
where êi* is the current residual, which depends on the current value of θ* in the iterative process. Re-writing in matrix form, the minimization step can be written as:
θ(new) = θ* + [F(θ*)ᵀ F(θ*)]⁻¹ F(θ*)ᵀ ê*     (6)
Equation (6) is applied in two forms: first, to support the algorithm for the estimation of θ; and second, as the basis for statistical inference on the parameter estimates, in [48]. The majority of statistical packages use the Gauss-Newton algorithm to find the parameter estimates in nonlinear models. Other packages also offer derivative forms or algorithms based on other optimization processes. In practice, the algorithms differ in execution time. However, the efficiency of any one of them is very dependent on the starting value θ(0) given at the beginning of the iterative process. Depending on the numerical distance between θ(0) and the final estimate, the algorithm can converge to a local minimum, or even fail to converge; therefore, suitable choices for θ(0) are, in this sense, more important than the iterative method. At each iteration the algorithm gets closer to the θ value which minimizes the sum of squared residuals, and hence ê* increasingly approaches the final residual.
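To make the iterative scheme of equations (3)-(6) concrete, the sketch below is a minimal Gauss-Newton implementation (in Python; the function names, the numerical approximation of the score matrix and the exponential test model are illustrative choices of ours, not part of the chapter).

```python
import numpy as np

def gauss_newton(f, x, y, theta0, tol=1e-8, max_iter=200):
    """Minimise RSS(theta) = sum_i (y_i - f(x, theta))^2 by Gauss-Newton iterations."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        resid = y - f(x, theta)                              # current residuals e*
        # numerical score matrix F: partial derivatives of f w.r.t. each parameter
        F = np.empty((y.size, theta.size))
        for j in range(theta.size):
            step = np.zeros_like(theta)
            step[j] = 1e-6 * max(1.0, abs(theta[j]))
            F[:, j] = (f(x, theta + step) - f(x, theta - step)) / (2.0 * step[j])
        delta, *_ = np.linalg.lstsq(F, resid, rcond=None)    # linearised least-squares step
        theta = theta + delta
        if np.linalg.norm(delta) < tol * (1.0 + np.linalg.norm(theta)):
            break
    return theta

# Example with a simple nonlinear (exponential decay) expectation function.
model = lambda x, th: th[0] * np.exp(-th[1] * x)
x = np.linspace(0.0, 10.0, 30)
rng = np.random.default_rng(1)
y = model(x, np.array([2.0, 0.4])) + rng.normal(0.0, 0.02, x.size)
print(gauss_newton(model, x, y, theta0=[1.0, 0.1]))          # close to [2.0, 0.4]
```

As the text notes, the quality of the starting value theta0 matters more than the particular iteration scheme; a poor start can send this loop to a local minimum or prevent convergence.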
In this process one can think of the estimate of θ as being equal to the parametric value plus a linear combination of random variables (e), so that, by the central limit theorem and under certain regularity conditions, it will present an approximately normal distribution, in [84]:
θ̂ ~ N(θ, σ² [F(θ)ᵀ F(θ)]⁻¹)     (7)
An estimate of the variance of θ̂ is obtained by replacing θ by θ̂ in equation (7),
Var(θ̂) = σ̂² [F(θ̂)ᵀ F(θ̂)]⁻¹     (8)
in which the second estimate, of σ², is:
σ̂² = RSS(θ̂) / (n - k)     (9)
where k is the number of estimated parameters of the expectation function and n is the sample size. These results are generalizations of those obtained for linear models and, hence, the usual inferential methods, such as the F test for the comparison of nested models and the t test for hypotheses about the parameters, can be applied to nonlinear models. These tests are simple extensions of those applied to linear models, relying on an appropriate linear approximation. Because of this, in contrast to the linear case, where the same hypothesis is examined similarly by different procedures with the same descriptive level, in nonlinear models equivalent tests may lead to differing conclusions. For instance, the Wald test for H0: Aθ = d may not produce the same result as the F test of model reduction, in [26]. The properties of these tests depend both on the sample size and on the intensity of the nonlinearity of the model. Once the parameter estimates are obtained, it is possible to establish the asymptotic standard error for the expectation E(Y) at a given point xi:
se(ƒ̂i) = σ̂ [Fi(θ̂) [F(θ̂)ᵀ F(θ̂)]⁻¹ Fi(θ̂)ᵀ]^(1/2)     (10)
where ƒ̂i is an abbreviated representation of ƒ(xi; θ̂). A confidence interval of (1 - α) covering E(Y) at a given point xi can be obtained by:
ƒ̂i ± t(α/2, n-k) se(ƒ̂i)     (11)
As discussed above, all inference procedures for nonlinear models assume an adequate linear approximation and make use of asymptotic arguments. With the method of generalized nonlinear least squares it is possible to model the heterogeneity of variance in a specification similar to that used to model the mean of the response variable. Davidian & Giltinan, in [17], presented the following expression for the general definition of the variance function:
Var(yi) = σ² g²(µi, xi, δ)     (12)
The variance of the response in equation (12) is a function g(•), which in turn may be a function of the response mean (µ), of the fixed effects of the independent variables (x) and of the parameter vector (δ) associated with the variance function g(•). The function g(•) does not necessarily have to be specified in terms of all of these arguments. The variance function can be represented by any continuous positive function, the most common being the exponential function and the power function:
g(µi, δ) = exp(δ µi)   or   g(µi, δ) = |µi|^δ     (13)
The process of estimating the variance function is based on generalized least squares. Following the parameter estimation and the choice of initial values for θ, an iterative process generates definitive values for the parameters by minimization of the pseudo-likelihood function with respect to θ:
PL(θ, σ, δ) = Σi { [yi - ƒ(xi; θ)]² / [σ² g²(µ̂i, xi, δ)] + log[σ² g²(µ̂i, xi, δ)] }     (14)
Technically, the above minimization amounts to likelihood maximization with respect to θ. For minimization of the above expression, by iteration, knowledge of θ is necessary. Regardless of the variance and the g(•) function, minimization of the equation implies minimization of the sum of squared errors ({yi - ƒ(xi; θ)}²). However, the more suitable the estimated variance values and the g(•) function, the smaller the sum of squared errors. Computationally, the algorithm employed provides the joint estimation of the θ, σ and δ parameters. The heterogeneity of variance is corrected by specifying the variance function and estimating the associated parameters.
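The generalized least squares idea above can be illustrated with a simplified, iteratively reweighted fit. The sketch below (Python/SciPy) is only a rough stand-in for the joint pseudo-likelihood estimation described by Davidian & Giltinan: it assumes the power variance function g(µ) = |µ|^δ and re-estimates δ from a residual regression at each round, which is a cruder update than the one described in the chapter.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_with_power_variance(f, x, y, p0, n_rounds=5):
    """Approximate generalised least squares under Var(y_i) = sigma^2 * |mu_i|^(2*delta),
    alternating a weighted fit of theta with a crude update of delta."""
    theta, _ = curve_fit(f, x, y, p0=p0)                     # start from the ordinary fit
    delta = 0.0
    for _ in range(n_rounds):
        mu = f(x, *theta)
        resid = y - mu
        ok = (np.abs(resid) > 1e-12) & (np.abs(mu) > 1e-12)
        # update delta: slope of log|residual| against log|fitted mean|
        delta = np.polyfit(np.log(np.abs(mu[ok])), np.log(np.abs(resid[ok])), 1)[0]
        weights = np.abs(mu) ** delta + 1e-12                # sigma_i proportional to |mu_i|^delta
        theta, _ = curve_fit(f, x, y, p0=theta, sigma=weights)
    return theta, delta

# Usage sketch: f must take the form f(x, a, b, ...), e.g.
# f = lambda x, a, b: a * np.exp(-b * x)
# theta_hat, delta_hat = fit_with_power_variance(f, x, y, p0=[1.0, 0.1])
```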
Therefore, those observations that have larger deviation have their influence on the estimation of parameters ponderated by its variance. The standard errors of the parameter estimatives at the end of the estimation procedure are considering only the variance due to residual error, free of the difference in dispersion observed for the response variable. Based on concepts of nonlinear models mentioned above, there are applications of these in various areas of Soil Science. As plausible examples it can be mentioned nonlinear regression models to predict soil nitrogen mineralization, in [67,68] models of potassium release from various sources of organic residue in Latosol, in [101] extraction of zinc from sewage sludge, in [95] as well as the nonlinear model of Genuchten, in [36] which is the most used worldwide to describe the soil water retention. The soil water retention curve is a nonlinear theoretical model which relates to water content with the matric potential. This feature is specific for each soil, in [4] being that the water content held in a given Ψm depends on the structure, the pore distribution and bulk density in which capillary phenomena are of greater importance. However, when the adsorption phenomenon governs, it is dependent on the texture and specific surface area of the mineral particles of clay fraction, in [1,4,41,70]. Its graphic representation is based on the survey of a certain number of points, usually selected arbitrarily, by plotting the abscissa axis the logarithm of matric potential (log Ψm) and on the ordinates axis the soil water content (U, g g -1 , θ, dm 3 dm -3 ). Based on these points, a curve is delineated to represent the soil water retention characteristics. The knowledge of the water retention curve has practical and scientific applications, including: determining the inflection point as being the field capacity, in [20,32,57,58] the slope of retention curve at the inflection point, in another words, obtaining the physical parameter "S", in [19] total water availability and drainable porosity, in [57], water content and pore size distribution, in [63,50] non saturated hydraulic conductivity, in [36,103] among others. Several nonlinear models are used to describe the relation between water content and matric potential, in [2,7,8,19,36,42,45,57,58,61,82]. These empirical models continue to be used in order to adjust the soil water retention curves because it has not been developed theoretical mathematical expressions capable of adequately represent this physical-hydrical relationship. In adjusting the water retention curve is expected that the greater the number of points, the better representation of the soil water retention in [90]. At low matric potentials, the retention curve is directly influenced by the stability of the structure and, consequently, by the formation of structural pores in addition to the indirect effects of organic matter, in [64,72]. In high matric potentials, the water retention is influenced by textural pores associated with particle size distribution and soil mineralogy, becoming the more important due to the available surface for water adsorption, in [51]. This relation between the factors mentioned above characterizes the non-increasing monotonic function, which is common to all mathematical models of the water retention curve. 
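Since the retention curve and the "S" parameter mentioned above recur in what follows, a small sketch is given below using the van Genuchten form with the m = 1 - 1/n restriction (the model itself is presented in the next paragraph); the closed-form S expression follows Dexter's definition as we read it, and all parameter values are purely illustrative, not fitted values from the chapter.

```python
import numpy as np

def van_genuchten(psi_kpa, theta_r, theta_s, alpha, n):
    """Unimodal van Genuchten water retention curve with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(psi_kpa)) ** n) ** m

def s_index(theta_r, theta_s, n):
    """Slope of water content versus ln(psi) at the inflection point (Dexter's S, negative sign)."""
    m = 1.0 - 1.0 / n
    return -n * (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-(1.0 + m))

# Illustrative parameters only.
print(van_genuchten(10.0, theta_r=0.22, theta_s=0.60, alpha=0.5, n=1.6))
print(s_index(theta_r=0.22, theta_s=0.60, n=1.6))
```

In practice these parameters would be fitted to measured (Ψ, U) pairs, for example with the least-squares machinery sketched earlier.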
For the soil physical-hydrical description, the theoretical model proposed by Genuchten, in [36] has been universally adopted and allows to relate, with high predictive power, the retention energy and the water availability, in [19]. This model is characterized by two asymptotes, related to soil water content corresponding to saturation and the residual content, and an inflection point between the plateaus, which is dependent on soil properties, being its shape and its slope regulated by empirical parameters of adjusting of the model ("α", "n" and "m"). The estimative of the water retention curve is given by fitting the tested model to the data from the undisturbed soil samples, submitted to the interval of the standard matric potential (1 at 1.500 kPa). Despite its extensive use in relation to other available models, in [25] it does not adequately fit to soils with bimodal distribution of pores, i.e., soils with two contrasting classes of pores, classified into structural and textural pores, in [22]. As a result, modelings have been proposed which employ equations capable of identifying this distribution, in which these pore classes are quantified by means of two maximum points, obtained by derivation of the water retention curve, in [2,19] and consequently, two inflection points. The double exponential model proposed by Dexter et al, in [19] allows identifying the bimodal pore distribution in soils from temperate region in the matric potential interval related to the saturation water content (Usat) up to the residual water content (Ures). On the other hand, the Alfaro Soto et al. model, in [2] identifies the bimodal pores distribution in tropical soils in a matric potential interval upper to the standard determination (1 < Ψm <100.000kPa) of the water retention curve. The application of theoretical models, both in unimodal-and bimodal-pore distribution soils, provides only the description of the water content average value as a function of the matric potential and does not consider the possible correlation attributable to observed measurements in the sample at different matric potentials. In addition, these models do not consider the heterogeneity of variance, which was studied by Moraes et al, in [59] which found reduction of dispersion of the water content by increasing the matric potential. A new model of adjustment for the water retention curve was proposed, in [10], denominated double van Genuchten ( Figure 2). So, as well as other models, in [2,19] the derivative of this model presents the bimodal density function for the pore size distribution of soil tropical, which stratifies the porosity of these soils into structural and textural pores, obtained by two inflection points which are evident from the nonlinear relation among the variables, expressed by this model, considering, however, the different matric potential interval for establishment of water retention curve (1< Ψm < 300.000 kPa). However, due to the higher number of parameters, the template double van Genuchten becomes more flexible. The equation below (Figure 2) shows m=1-1/n restriction, in [61] for both curve segments, structural (mstr) and textural (mtex). The gravimetric water content and matric potential are represented by U and Ψ, respectively. 
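Because the double van Genuchten equation itself appears only in the chapter's Figure 2, the sketch below encodes one plausible reading of it: an additive combination of a textural and a structural van Genuchten segment joined at the intermediate plateau Upwp (the parameter names match those described in the next paragraph), with purely illustrative values; the numerical derivative with respect to log10(Ψ) then approximates the bimodal pore-size density over the extended 1 to 300.000 kPa range. The chapter performs this fit in R; Python is used here only for illustration.

```python
import numpy as np

def double_van_genuchten(psi_kpa, Ures, Upwp, Usat, a_tex, n_tex, a_str, n_str):
    """Assumed bimodal retention curve: textural segment (high potentials) plus
    structural segment (low potentials), each with m = 1 - 1/n."""
    m_tex = 1.0 - 1.0 / n_tex
    m_str = 1.0 - 1.0 / n_str
    textural = (Upwp - Ures) / (1.0 + (a_tex * psi_kpa) ** n_tex) ** m_tex
    structural = (Usat - Upwp) / (1.0 + (a_str * psi_kpa) ** n_str) ** m_str
    return Ures + textural + structural

# Evaluate over the extended range 1 to 300,000 kPa used for these Latosols.
psi = np.logspace(0.0, np.log10(3.0e5), 500)
U = double_van_genuchten(psi, Ures=0.20, Upwp=0.27, Usat=0.45,
                         a_tex=1.0e-4, n_tex=1.8, a_str=0.8, n_str=2.2)
pore_density = -np.gradient(U, np.log10(psi))   # two peaks: structural and textural pores
```

With these illustrative values the structural inflection falls near 1-3 kPa and the textural inflection near 10.000 kPa, consistent with the behaviour described earlier for oxidic Latosols.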
The parameters Ures, Upwp and Usat represent, respectively, the lower asymptotic plateau (Ψ → ∞), i.e. the asymptotic residual water content; the intermediate plateau, i.e. the water content that remains roughly constant around the permanent wilting point; and the upper asymptotic plateau (Ψ → 0), which indicates the saturation water content. The α and n parameters are associated with the scale and shape of the curve between the top, middle and bottom asymptotes; αstr and nstr (structural) correspond to the first segment and αtex and ntex (textural) to the second segment of the curve. This procedure of fitting nonlinear models can be carried out using the R software, version 2.14.1, in [69]. On the other hand, the water retention curve represents a cumulative distribution; thus, its derivative is proportional to the probability density function, and this function represents the distribution density of pores by size. The slopes represent the class of pore diameter that occurs most frequently. This explains why a larger quantity of water is removed when a tension corresponding to that pore diameter is applied, and therefore there is a great loss of water around this matric potential. The double van Genuchten model generalizes this assumption to accommodate the bimodal pore distribution and, therefore, the function has two inflection points. Final remarks A higher content of iron- and aluminum-oxides in the clay fraction of clayey Latosols (Oxisols), the widely dominant soils in the plateaus of the Brazilian Cerrado Biome and currently the most demanded for sustainable grain production, is associated with the granular structure of these oxidic soils. This structure, when well expressed, as in the B horizon of these very old Latosols, favors the existence in the soil of two distinct populations of pores: the bigger pores or structural pores (among aggregates) and the smaller pores or textural pores formed between the mineral particles (inside aggregates). This means that in these soils there are practically no pores between these two limits. This condition is also valid for sandy soils. In this context, the model recently proposed by Carducci et al, in [10], and detailed at length in this chapter, adequately contemplates this bimodal distribution of pore size and functionality with respect to soil water retention in these peculiar soils of this important Brazilian Biome (one of the last agricultural frontiers in the world). This represents a conceptual and methodological advance and a modeling approach adapted to the mineralogical and structural characteristics of these soils, in a region characterized by well-defined wet and dry seasons, with direct consequences for the water dynamics in these soils and in the environment in general.
2017-09-17T12:24:39.104Z
2012-09-12T00:00:00.000
{ "year": 2012, "sha1": "a4934ef95ff2c4f67e26e17f467f822a8beed34b", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/38856", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "38d343afa1b450314066a1b92c3e71ae535433f9", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Geology" ] }
249354751
pes2o/s2orc
v3-fos-license
An Evaluation of Telepractice use During the Covid-19 Pandemic for the Treatment of Speech and Language Disorders in Belgium The aim of this article was to evaluate the sudden implementation of telepractice in Belgium during the COVID-19 pandemic. A 38-question survey was completed by 1,222 Dutch-speaking speech-language pathologists (SLPs) from Belgium. Most reported good or very good satisfaction with telepractice and that telepractice can be effectively used with clients of different ages and speech disorders with or without comorbidity. The SLPs reported when telepractice could be used most effectively. They also detailed their difficulties with both technology and client-related issues. Limitations when switching to telepractice included a lack of training and experience, and digital materials. Video-conferencing technology was used for the majority of clients (70.2%). The chief concerns of respondents were technological issues with hardware and software, the stability of the internet connection, and the lack of training. The most common reason (83.0%) for not using telepractice was the type and age of clients. Other reasons included: that telepractice was impersonal and prevented physical contact (52.2%), technological barriers (50.0%), a perception of being ineffective (40.9%), refusal by the care provider or client (33.0%), no apparent need to change the intervention system (25.0%), ethical aspects (13.6%), and higher costs (5.7%). Forty-seven respondents (51.1%) indicated that therapy via telepractice was less effective than in-person treatment, while 25.5% experienced telepractice as equally effective. No respondent found telepractice more effective than in-person therapy. Vrinda and Reni (2020) evaluated telepractice during the COVID-19 pandemic by surveying 105 SLPs from the Kerala region (Southwest India); 104 SLPs completed the questionnaire. The survey (presented via an online Google form) contained 25 questions, 22 of which had a closed response pattern. The SLPs' professional experience ranged from 6 months to 30 years. The work setting was academic for 33.7% and a clinical setting for 45.3%. Special education was the work setting for 2.9%, a rehabilitation center for 5.8%, and a few other settings for < 1.9% of the SLPs surveyed. While (74%) used telepractice during the COVID-19 period, 79.2% had no experience with telepractice before the pandemic. These SLPs reported to have prepared themselves by watching webinars (61%), reading articles (49.4%), acquiring skills through trial and error (71.4%), discussing telepractice among colleagues (71.4%), training parents (1.3%), consulting YouTube on telerehabilitation (1.3%), consulting the ethical code for speech and audiology, and verifying the protection of clients' privacy. Telepractice was used for counselling and guidance by most clinicians (both 87%), for follow-up (70%), diagnosis/evaluation (63.6%), and screening (32.5%). The ages of the clients ranged from younger than 6 months (3.9%) to older than 60 years (10.4%). The mode was between 3.5 and 11 years (70.1%). The sudden need to switch to telepractice raised several issues. It was difficult to treat children through telepractice (76.6%), there was a lack of online tools (46.8%), and some clients were not willing to receive online care (44.2%). Regarding technical issues, 94.9% of participants reported problems with internet connections. Advantages were perceived in the use of telepractice during the COVID-19 period. 
Many SLPs (54.5%), reported more regularity in client attendance, with client contact increasing (53.2%). Not having to travel with the risk of contracting the COVID-19 virus was mentioned by 92.2% as an advantage of telepractice. According to 61%, parents could be more involved in the therapy. In addition, 84.5% of the SLPs reported that with telepractice, clients felt more comfortable carrying-out activities from home. A further advantage was that follow-up and supervision with telepractice was perceived as easy by 46.8% of the SLPs. Client satisfaction with telepractice was perceived as "very good" by 9.1%, and as "good" by 50.6%, with 39% reportedly "satisfied" and only 1.3% "not satisfied." Tenforde et al. (2020) conducted a survey of the application of telepractice during the COVID-19 period in Massachusetts (USA). They assessed its feasibility and satisfaction in a multidisciplinary context (i.e., physical therapy, occupational therapy, and speech therapy). Of 211 clients surveyed, 205 completed a questionnaire. Of the 205 clients, 110 identified as female (53.7%), 92 as male (44.9%) and 3 as transgender (1.5%). A quarter (25.4%) were 7 years old or younger. The second largest group (32.7%) was aged between 35 and 64 years. About one fifth (19.5%) were 64 years of age or older. CLIENTS IN MASSACHUSETTS, USA The online survey contained 16 multiple-choice questions answered with a choice of: "weak," "sufficient," "good," "'very good," and "excellent." Client-related data included: gender, age (0 to ≥ 65 years), insurability, travelling time without telepractice, support in the environment, type of therapy, whether the treatment was new or continued, the duration of treatment, and the diagnosis or condition. The latter included neurogenic communication disorders (e.g., post stroke, concussion, post-traumatic brain injury, Parkinson, and pediatric neurologic conditions). Clients were asked about the quality aspects of telepractice, including how the communication with the therapist was experienced, how the therapy was perceived, and convenience. Since all clients were insured for the care they received, financial charges did not prevent them from participating in telepractice. The mode for time spent in a session was between 30 and 45 minutes for 59.5% of the clients. Clients were also asked about available help from friends or family. Of the respondents, 52.2% reported that they could not call on help with telepractice from a friend or family member, 39.0% indicated that they could rely on their help with telepractice, and 8.8% said they could call on remote assistance. The clients were asked about their experience with telepractice services. Overall the answer "excellent" varied between 70% and 90% depending on the aspect of telepractice queried. In addition, 10-20% reported "very good" and 3-10% answered "good." There was no correlation between the age of a client and the evaluation. Female clients reported a slightly higher level of satisfaction. METHODOLOGY STUDY APPROVAL The procedure of developing the survey in the current study, inviting SLP members of VVL, and analyzing the results was reviewed and approved by an independent scientific committee Scienti-L with members of the University of Antwerp, the University of Ghent, and the University of Louvain. 
DEVELOPING THE SURVEY
To develop the survey questions, the findings of a systematic review of studies with telepractice served as a source of information (Boey, 2014, 2015), as well as the three studies on pandemic-based telepractice described above. The survey construction was also informed by collaboration with a sociologist and a statistician with professional experience in survey development and in the online software used. We chose CheckMarket® as the online tool because of its ease of use for the respondents, the processing and display of the results, the security of data storage in datacenters, and the use of two-step verification. Before release, the survey was pilot tested by three SLPs. In total, 38 questions were asked. These are listed in the Appendix. Questions about diagnostics, screening, and testing were not included as they were not allowed by NIHDI. The questions explored a number of aspects of telepractice: (a) the reasons to use or not use telepractice (questions 1-5), (b) the frequency of use of telepractice versus in-person treatment (questions 6-10), (c) the treated disorders and setting (questions 11-16), (d) the use of technology and possible issues (questions 17-20), (e) the feasibility and accessibility (questions 29-34), (f) the potential future use of telepractice (questions 35-37), and (g) additional remarks and comments. The answer options included "yes-no," "yes, rather often," "yes, occasionally," "no," "very dissatisfied," "rather dissatisfied," "rather satisfied," and "very satisfied." Some multiple-choice answers allowed descriptive choices (e.g., about internet connection, sound and vision, digital display of material, interactions between SLPs and client, and privacy). A number of questions allowed for open answers.
TIMING
The survey was launched in September 2020, allowing the researchers to examine experiences after telepractice launched in Belgium (i.e., predominantly in April 2020). During that time, SLPs suddenly had to master telepractice through training, become familiar with video conference technology, provide interactive material digitally, and draw up recommendations for clients.
INVITATION TO PARTICIPATE IN THE SURVEY
The Vlaamse Vereniging voor Logopedisten (i.e. the Flemish Association of Speech-language therapists, acronym: VVL) has access to the largest group of SLPs in Belgium practicing in various settings. Moreover, each individual in this group could be reached by e-mail to invite them to participate in the survey. That is why these 2,699 SLPs were invited to participate in the survey. The invitation was sent to each member by means of an eNewsletter of VVL with a link to the survey. A reminder was sent ten days later to those who had not yet participated. The survey ran for 18 days. Participation was anonymous. The e-mail addresses used for the invitation were handled by Campaign Monitor® and kept separate from the CheckMarket online tool, so that no names or other identifying information were requested, registered, or used in the processing of results. Of the 2,699 invitations sent, only four bounced back (0.1%).
PROCESSING AND ANALYZING THE RESULTS
The CheckMarket® application allowed reports to be generated automatically, including percentage numbers of responses, graphs (bar charts, pie charts), and qualitative responses that could be classified by frequently used key words. The VVL Study Department processed the results in consultation with the sociologist and statistician.
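Purely as an illustration of this kind of descriptive processing (response rates, percentages and central tendencies), the short sketch below reproduces the reported response-rate arithmetic and summarizes a hypothetical set of 7-point satisfaction answers; the satisfaction values are invented placeholders, not survey data.

# Illustrative only: descriptive survey processing with the reported counts and
# placeholder satisfaction answers.
from statistics import mean, median

invited = 2699                      # invitations sent
responded = 1222
completed = 1028

print(f"response rate: {responded / invited:.1%}")
print(f"completed: {completed / responded:.1%}, partial: {(responded - completed) / responded:.1%}")

# Hypothetical 7-point satisfaction answers (1 = very dissatisfied ... 7 = very satisfied)
answers = [7, 6, 6, 5, 7, 4, 6, 5, 7, 6]
counts = {level: answers.count(level) for level in range(1, 8)}
percentages = {level: 100 * n / len(answers) for level, n in counts.items()}
print("distribution (%):", percentages)
print("mean:", mean(answers), "median:", median(answers))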
Counts, percentages and measures of central tendency were calculated.
SURVEY RESPONDENTS
A total of 1,222 members responded to the survey, a response rate of 45.3%. Of the respondents, 194 completed the survey partially (15.9%) and 1,028 (84.1%) completely. Only 171 (14%) indicated that they did not use telepractice. Most respondents were female (97.6%) and the average age was 33 years 8 months (SD = 10).
CLIENT AGES
The youngest person to be treated via telepractice was 3 years 10 months and the oldest was 72 years old. Almost every SLP (96%) reported having engaged in telepractice with children between the ages of 7 and 12 years. SLPs provided service via telepractice to adults (33%), to teenagers between the ages of 13 and 17 years (61%), and to toddlers between 2 and 6 years of age (43%). Parent counselling was provided to 15%. This wide age range reflects the accessibility of the health care system and reimbursement in Belgium.
INTENSITY OF THE USE OF TELEPRACTICE
The NIHDI regulations required a duration of 30 minutes for an online therapy session. The average number of 30-minute online sessions per week in the month before the survey was less than 10 for 41% of the SLPs and between 11 and 20 for 34% of the SLPs.
SPEECH AND LANGUAGE DISORDERS
Dyslexia, dyscalculia, language development disorders, articulation disorders and oro-myofunctional disorders were treated more by telepractice than by in-person therapy. This was reported by over half of the participants. About a quarter reported online therapy for aphasia, voice disorders, and dysarthria. Almost a fifth treated dysphagia and dyspraxia. Stuttering (12%), cluttering (6%), and hearing loss (5%) were less frequently treated online. Most SLPs (93.5%) indicated that when in-person therapy with COVID-protective measures became possible, they reduced the number of telepractice sessions as compared to the lock-down period. The three main reasons were: the choice of the client or parents (37.6%); the contact between therapist and client; and the greater flexibility of the interaction and the availability of more materials (26.9%).
CO-MORBIDITY
The registration of existing co-morbidity is part of the initial diagnosis of an SLP in cooperation with a physician. A total of 933 SLPs reported the extent to which in-person therapy or telepractice was used as a treatment modality when a co-morbid disorder such as Attention Deficit Hyperactivity Disorder (ADHD), Attention Deficit Disorder (ADD) or Autism Spectrum Disorder (ASD) was present. For all specified co-morbid disorders, in-person treatment was more frequently used than telepractice.
CONDITIONS FOR TELEPRACTICE USAGE
Three types of conditions influenced telepractice usage. The first were conditions related to a properly functioning computer (i.e., sound, camera, microphone, headset), software (i.e., video conferencing software, specific software for telepractice and digital materials) and the internet connection (i.e., stable and fast enough). These accounted for 50.0% of the responses. A second consideration accounted for 23.5% of the responses. This related to clinician effort and time investment. The SLPs reported that their time investment was much higher for telepractice than for in-person therapy.
A third consideration, accounting for 26.3% of the responses, related to client factors, such as being able to work with the computer at a distance, (which sometimes implied disorder-related exclusion), having the motivation or interest to do so, and being able to realize the tele-interaction with comfort and privacy. TECHNOLOGY HARDWARE AND SOFTWARE A small proportion of respondents indicated use of more than one system. The majority used a portable computer (71.5%), with less using a desktop computer (15.3%), a tablet (7.6%) or a smartphone (5.6%). For more than half, Zoom (55%) was used as video technology software. TECHNICAL ACCESSIBILITY For most SLPs (84%) the technical access to telepractice occurred with few or no problems. Eight reported problems with creating or using the link. Operating the software caused problems for a small proportion of respondents (6%). Technical accessibility was easy for 69% of the clients. Approximately one in five clients (21%) had problems operating the software and 21% had problems using the link. Difficulties logging in were reported for 18% of the clients; and 7% had difficulties turning on the camera or microphone. Overall, 44% of the SLPs reported a technical problem with telepractice, occurring occasionally (26%) or sometimes (27%). Problems with the internet connection were reported most often by the SLPs (86%). In addition, there were problems with sound (e.g., distortion, reverberation) for 50% of respondents. Problems with visual acuity (e.g., image, lighting) were reported by 23%. EFFORT, TIME, AND COST PREPARING FOR THERAPY WITH TELEPRACTICE Preparing for therapy with telepractice took more time than the in-person speech therapy, according to 85% of the respondents. For 13% there was no difference and for 2% preparation took less time. To improve their use of telepractice the SLPs put extra time and effort in education to learn how to work with the software (78%). Many SLPs informed themselves about the use of telepractice (62%). This occurred via a brief training course (41%), searching for digital materials for telepractice made available by VVL in a DIGICENTER (28%), and obtaining technological tools (6%). TELETHERAPY CLINICIAN EFFORT Prior to the COVID19 pandemic, speech therapy sessions were held "back-to-back" without the need for between session preparation delays. In contrast, during the COVID-19 pandemic fewer clients could be seen each day because of sanitation between sessions, lag-time between clients, and time connecting to tele-sessions. This was reported by 63% of the respondents. It is therefore not surprising that a majority of respondents (83%) indicated that more effort was required to provide telepractice compared with in-person therapy. For 15% of respondents, the effort was similar and for 2% it was less. COSTS While 52% of the SLPs reported that the cost of telepractice was the same as the cost of in-person therapy, 28% reported that the cost of telepractice was higher than for in-person therapy, and 20% replied it was less. SLP AND CLIENT SATISFACTION The SLP participants were asked to indicate the degree of overall satisfaction for both themselves and for their clients using a 7-point scale (i.e., "very dissatisfied," "dissatisfied," "rather dissatisfied," "neutral," "rather satisfied," "satisfied," and "very satisfied"). The distribution of the answers given to both questions is shown in Figure 1. 
It is clear that the distribution is skewed toward the satisfied end of the scale, with a much higher degree of satisfaction with telepractice reported.
ADVANTAGES AND DISADVANTAGES OF TELEPRACTICE FOR CLIENTS
The participating SLPs were asked to mention the advantages and disadvantages perceived by their clients. Eighty SLPs listed 247 advantages (Table 1) and 73 SLPs reported 156 disadvantages (Table 2). In order of importance, 40.1% reported the advantage of not needing to travel to the speech therapy practice, 23.5% reported that telepractice enabled continuity of treatment, and 12.1% reported an increase of motivation with telepractice. Other advantages were less frequently reported. Table 2 shows the disadvantages reported by SLPs as perceived by clients (Appendix, Question 4). Missing in-person interaction with the SLP was cited by 18.6%. The unavailability of a computer (i.e., due to no available computer, or that other children or parents needed to use the computer) and an unstable or insufficiently fast internet connection were each reported in 16% of the responses. Not being able to implement specific interventions via telepractice constituted 15.4% of the responses. Other disadvantages were less frequently reported.
ADVANTAGES AND DISADVANTAGES OF TELEPRACTICE FOR SLPS
The four most frequently mentioned advantages reported (Table 3, and Appendix, Question 25) were: (a) the safety of telepractice, with no risk of contamination by COVID-19, (b) telepractice was a good alternative to therapy in the lock-down situation, (c) telepractice led to more structured therapy, and (d) telepractice required no travel for the therapist. Other less frequently stated advantages are shown in Table 3. Disadvantages of telepractice were reported by 142 respondents for a total of 437 comments (Table 4, and Appendix, Question 26). The four most frequently reported disadvantages were: (a) the need for more time to prepare a therapy session, (b) less flexibility to use or switch material in the online therapy session, (c) more stress due to focusing on the computer screen, and (d) less possible interaction between therapist and client. Other disadvantages were related to technical issues, environmental factors, and specific therapy elements.
Description - Percentage (%) compared to total answers (437)
More preparation time needed - 15.6
Less interaction between client and therapist - 11.2
Less efficient/impossible with certain disorders/people/types of therapy - 9.8
Less flexible in use of material/in variation of therapy/adjustment of therapy - 11.9
SLP has less control over the child, less feeling with the child - 8.9
Negative effect on child concentration/distraction within home environment - 6.4
Instructions do not come across as well - 2.7
Field of vision on the child is more limited (e.g., breathing, body language) - 3.2
Very tiring/stressful due to constant focus on the screen - 11.7
Play therapy is more difficult - 2.
FUTURE APPLICATIONS
To the question of whether speech-language pathologists will work with telepractice in the future, almost half (48%) replied they certainly would. For this group, the application possibilities were the most developed. Only 9% indicated that they will definitely not use telepractice in the future.
COMMENTS, SUGGESTIONS, CONCERNS
As a final question (Appendix, Question 38), respondents were given the opportunity to provide additional comments, suggestions, and reservations.
They expressed the following: (a) The wish to keep telepractice possible in the future after the COVID-19 period as a supplement to in-person speech therapy and as an alternative, chiefly due to problems with travelling due to traffic or restricted mobility of a client (expressed by 43.5%). (b) That telepractice can be used for individual treatment if the client meets certain selection criteria; this should be considered for each individual client (expressed by 28.9%). (c) Support for further development of possibilities with telepractice is welcome (e.g., digital material, manuals, training, education) (expressed by 11.8%). (d) The effort required to prepare and provide telepractice is higher than for the in-person situation (expressed by 11.8%).
DISCUSSION
This study evaluated the sudden and first-time usage of SLP-provided telepractice in Belgium during the first wave of the COVID-19 pandemic. During that period, the NIHDI mandated that telepractice could only be used to continue, not initiate, speech therapy. Reported data on the use of telepractice were obtained from 1,028 SLPs who fully answered an online survey with 38 questions. The data indicated that telepractice has been widely used with clients of a range of ages (3 to 72 years old) and with different disorders. The settings from which telepractice was conducted were diverse: private office, rehabilitation center, special education, school, and hospital. Satisfaction with telepractice was high among both SLPs (81%) and clients (89%). Being able to continue the speech therapy in safe health conditions (during the COVID-19 pandemic) was an important motivation. Both for SLPs and their clients, satisfaction was related to the fact that telepractice was the only way to continue speech therapy during the period of lock-down. Not having to travel was mentioned as an important advantage. Therapy delivered via telepractice was reported by SLPs to be more structured, an advantage of telepractice. Clients reported that telepractice increased their motivation to participate in therapy sessions. Logically, these benefits only apply when the telepractice equipment is functioning properly. This requires the availability of a computer or tablet with a well-functioning camera and sound, and a stable and sufficiently fast internet connection. Technical access was generally easy for both the SLPs (84%) and the clients (69%). Among the problems reported, an insufficiently stable internet connection was the most frequently mentioned (86%), along with problems with sound (50%) or image (23%). For more than half of the respondents, problems occurred rarely or occasionally. The ability to avoid transient or consistent barriers to participation and respect for the client's privacy were cited as prerequisites for telepractice. Some aspects of telepractice were reported by clients as a disadvantage. Mainly, these were problems with equipment, software, and internet. Some clients reported they felt a lack of human contact in telepractice as compared to in-person therapy. The difficulty of specific interventions was also noted (e.g., manipulation of articulators, tongue strength training, role playing, etc.). Another important concern was the need for more concentration and attentional effort when using telepractice. Related to this dissatisfaction were the combined stressors of online schoolwork for the child (36%) and parental telework at home (26%). Some of the disadvantages reported by SLPs were similar to those reported by clients.
Both noted the limited interaction possibilities and the increased effort due to the focus on the screen. More specifically, SLPs mentioned they required more time and effort to prepare telepractice sessions as compared to in-person therapy. In addition, some SLPs noted reduced flexibility to use and switch materials during the telesession. It was difficult to make exact comparisons of the results of the current study with the earlier described results obtained by Fong et al. (2021), Vrinda and Reni (2020), and Tenforde et al. (2020). First, the survey questions differed among the studies. Second, in the current Belgium study, interventions such as consultation, screening, and diagnosis were not permitted; only the continuation of therapy was allowed. In general, some client characteristics who participated in telepractice were similar between the studies, such as their wide age distribution. Notable among these studies were the different disorders for which clients are treated by means of telepractice (e.g., articulation and language disorders, speech motor disorders, stuttering, voice disorders, aphasia, dyslexia, etc.). Tenforde et al. (2020) included neurogenic communication disorders. Telepractice was used in different settings and the proportion of each setting varied across studies. In the current study, private practice was represented more than in the Fong et al. (2021) and Vrinda and Reni (2020) studies. The frequency of use of telepractice could be compared between Fong et al. (2021) and the current study. Fong et al. (2021) reported a less frequent weekly use than was the case for the Belgian SLPs. It is noteworthy that in the study of Fong et al. (2021) and of Vrina and Reni (2020) about three quarters of the SLPs had up to three months experience with telepractice before they were surveyed. In the current study, the maximum was five months of experience before the survey. Problems with hardware and software were reported by half the participants in the study by Fong et al. (2021), by about one-fourth of those surveyed in the Vrinda and Reni (2020) study and in only 12% in the current study. This was a lesser problem in our study because internet connectivity was predominantly achieved through a cable network. CONCLUSIONS AND LIMITATIONS In the current study SLPs and clients were required to use telepractice to minimize the health risks from infection by COVID-19. The representation of certain age categories and client disorders reflected, among other things, the services offered and the policy of the NIHDI on speech therapy interventions and reimbursement. Despite the differences between the compared studies and the current study in terms of location, setting, disorders treated, client ages, numbers of respondents, and number of survey questions, common issues emerged when telepractice was suddenly used during the COVID-19 pandemic. Three major "take-aways" are as follows: 1. It is clear that telepractice can be useful for a wide range of disorders and for different age groups and types of clients. 2. There are common barriers that can make telepractice difficult or impossible. These are chiefly technological issues (i.e., computer with camera, image, sound, internet) or user issues (i.e., circumstances, another person helping). 3. There is the future expectation that telepractice can be used outside of a pandemic. However, this requires the provision of education, training, and materials, and recommendations from SLPs who used telepractice, and those who did not. 
As limitations, the survey studies considered herein do not allow for statements about measurable effectiveness in terms of achieved modification of a disorder. That requires research with comparative studies between in-person speech therapy and telepractice. There were problems with the computer and/or the internet connection. There was no interest in telepractice a priori. The therapy care which was not possible in-person was simply postponed or suspended. The combination with online schoolwork made telepractice impossible because that was too demanding. Parents working at home made telepractice impossible because it was too much of a burden. Client could not sit alone in a room to receive telepractice. Other, please specify. 25. Give advantages or benefits of telepractice for you as a speech-language pathologist. 26. Please give disadvantages or drawbacks of telepractice for you as a speech-language pathologist. 27. Give advantages or benefits of telepractice for your clients. 28. Please list disadvantages or drawbacks of telepractice for your clients. 29. What is your experience of time spent preparing with telepractice? The preparation takes more time for telepractice than for in-person therapy. The preparation is similar to in-person therapy. Preparation takes less time with telepractice than with in-person therapy. Other? Please specify. 30. What is your experience in terms of time spent in succession of sessions with telepractice? The telepractice sessions cannot be held consecutively. The telepractice sessions can be held consecutively. Otherwise? Please specify. 31. What is your experience of the effort involved in using telepractice? The experienced effort is greater with telepractice than with in-person therapy. The experienced effort with telepractice is equal to that with in-person therapy. The experienced effort with telepractice is less than that with in-person therapy. Otherwise? Please specify. 19 32. What were the costs of telepractice?(i.e., investment in hardware or software, materials, and time) The costs of telepractice and in-person therapy are almost the same. Relatively speaking telepractice costs more than in-person therapy. Relatively speaking telepractice costs less than in-person therapy. 33. How do you experience the accessibility of telepractice for you as a speech-language pathologist? Smooth, little, or no problems Problems creating and using the link of the software for telepractice Problems with logging in for the speech-language pathologist Problems with the operation of the software application by the speech-language pathologist Other, please specify 34. How do you experience the accessibility to telepractice for your clients in general? Generally smooth, few or no problems Problems with using the link of the telepractice software Problems with logging in for the client Problems with the operation of the software application by the client. Other, please specify 35. Will you continue to use telepractice in the future (even after the COVID period)? Possibly No, definitely not. 36. If you will use telepractice in the future, with which age groups of clients?
2022-06-05T15:20:16.986Z
2022-05-25T00:00:00.000
{ "year": 2022, "sha1": "73f6119a04b843057d28ea6685c89e6f2e0edcce", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "7a0f12b7211c8947d7278960ce84db88efdba391", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220920176
pes2o/s2orc
v3-fos-license
Transition pathways connecting crystals and quasicrystals

Due to structural incommensurability, the emergence of a quasicrystal from a crystalline phase represents a challenge to computational physics. Here the nucleation of quasicrystals is investigated by using an efficient computational method applied to a Landau free-energy functional. Specifically, transition pathways connecting different local minima of the Lifshitz-Petrich model are obtained by using the high-index saddle dynamics. Saddle points on these paths are identified as the critical nuclei of the 6-fold crystals and 12-fold quasicrystals. The results reveal that phase transitions between the crystalline and quasicrystalline phases could follow two possible pathways, corresponding to a one-stage phase transition and a two-stage phase transition involving a metastable lamellar quasicrystalline state, respectively.

In general, nucleation of a stable state from a metastable state could be examined by using three approaches, i.e. classical nucleation theory, atomistic theory, and density-functional theory (18, 19). Within the framework of the density-functional theory, the free-energy landscape of the system is described by a free-energy functional determined by the density of molecular species. Stable and metastable phases of the system correspond to local minima of the free-energy landscape, whereas the minimum energy paths (MEPs) on the free-energy landscape represent the most probable transition pathways between different phases. Transition states (i.e. index-1 saddle points) on the pathways could be identified as critical nuclei, representing critical states along the transition pathways. This theoretical framework has been applied successfully to various problems undergoing phase transitions (Fig. 1, which depicts liquid, crystal, and quasicrystal states connected by cooling, melting, rapid cooling, and nucleation)-for instance, the (rapid) cooling of liquids, the melting of a solid, or the nucleation of crystalline structures (20-25). However, the study of the phase transition between periodic structures and quasiperiodic structures remains a challenge due to incompatible lattice mismatch. Thus, a fundamental question in materials science is: How does a quasicrystalline structure emerge from a crystalline structure?

Significance Statement
Despite the fact that tremendous efforts have been made on the study of quasicrystals since their discovery in 1984, nucleation of quasicrystals-the emergence of a quasicrystal from a crystalline phase-still presents an unsolved problem. The difficulties lie in that quasicrystals and crystals are incommensurate structures in general, so there are no obvious epitaxial relations between them. We solved this problem by applying an efficient numerical method to Landau theory of phase transitions and obtained the accurate critical nuclei and transition pathways connecting crystalline and quasicrystalline phases. The proposed computational methodology not only reveals the mechanism of nucleation of quasicrystals, but also paves the way to investigate a wide range of physical problems undergoing first-order phase transitions.

In this article, we examine the transition pathways connecting quasicrystals and crystals within the framework of density-functional theory. Specifically, we apply an efficient numerical method based on the high-index saddle dynamics (HiSD) to a Landau free-energy functional, i.e.
the Lifshitz-Petrich (LP) model (26), with local minima corresponding to two-dimensional (2D) crystalline and quasicrystalline phases. MEPs connecting various local minima of the model are obtained and critical nuclei of the 6-fold crystalline and 12-fold quasicrystalline states are identified. In particular, two MEPs connecting the two ordered phases are obtained, revealing that the phase transitions between the crystalline and quasicrystalline phases could follow two possible pathways, corresponding to either a one-stage phase transition or a two-stage phase transition involving a metastable intermediate quasicrystalline state, respectively.

LP Model

Although our methodology applies to general free-energy functionals, we will focus on the LP model for simplicity. The LP model is a Landau theory designed to explore quasicrystalline structures with two characteristic wavelength scales (26). Despite its deceptively simple form, the LP model exhibits a rich phase behavior containing a number of equilibrium ordered phases with 2-, 6-, and 12-fold symmetries (26, 27). As such, this simple Landau free energy provides an ideal model system for the study of transition pathways connecting crystals and quasicrystals. The LP model assumes a scalar order parameter φ(r) corresponding to the density profile of the molecules in a volume V. The free-energy functional of the model is given by (26, 27)

F(φ) = (1/V) ∫ dr { (1/2) [(∇² + 1)(∇² + q²)φ]² − (ε/2) φ² − (α/3) φ³ + (1/4) φ⁴ },   [1]

where 1 and q are two characteristic wavelength scales. The thermodynamic behavior of this model is controlled by two parameters, ε and α, where ε is a temperature-like parameter and α is a parameter characterizing the asymmetry of the order parameter (28). The coefficients of the spatial derivative term and the φ⁴ term can be chosen as 1/2 and 1/4 by a rescaling of the model (SI Appendix) (27). Possible equilibrium phases of the model correspond to local minima of the free-energy functional under the mass conservation constraint ∫ dr φ = 0, which are solutions of the Euler-Lagrange equation of the system, DF(φ) = 0. The Euler-Lagrange equation has multiple solutions, corresponding to stable/metastable phases, transition states (critical nuclei, index-1 saddle points), and high-index saddle points of the model system. In this article, we will focus on the nucleation of two-dimensional 12-fold (dodecagonal) quasicrystals, so a two-dimensional LP model (Eq. 1) is adopted with q = 2cos(π/12). The first step of the study is to find accurate stable solutions, corresponding to crystals and quasicrystals, of the Euler-Lagrange equation for the LP free-energy functional Eq. 1. Because quasicrystals do not have periodic order, special numerical methods are needed to describe their structures accurately. In general, discretization methods for quasiperiodic structures include the crystalline approximant method (29) and the projection method (30). In this article, we adopt the crystalline approximant method to approximate quasiperiodic structures in the whole space with periodic structures in a large domain of proper size (Materials and Methods). Multiple stable solutions of the Euler-Lagrange equation can be obtained in the LP model with different parameters α and ε. The initial configurations are composed of plane waves with the appropriate reciprocal wave vectors (SI Appendix) (27). The simplest solution is φ(r) = 0, corresponding to a homogeneous state, i.e. the disordered liquid.
Beside this trivial solution, a number of spatially inhomogeneous solutions, including the 12-fold quasicrystalline state (QC), the 6-fold crystalline state (C6), and the Lamella state have been found for different model parameters. Interestingly, a lamellar quasicrystalline state (LQ) is identified as a stable state for some parameters as well. LQ is periodic in one dimension and quasiperiodic in the other dimension, which has been obtained previously in ref. (31) and (32). Furthermore, a transformed 6-fold crystalline state (T6) that is periodic with less symmetry can also be found in the phase diagram. The structures of these stable phases are shown in real and reciprocal spaces in Fig. 2A-E. It is important to note that the Hessian D 2 F(φ) of these states has different multiplicities of zero eigenvalues, corresponding to the numbers of Goldstone modes of these states (33, 34) (Models and Methods). The phase diagram of the LP model is constructed and plotted in the ε-α plane in Fig. 2F, showing stable regions of the QC, C6, LQ, T6, Lamella, and Liquid. Similar phase diagrams have been obtained by a number of researchers (26,27,31). It is noted that our current study focuses on 2D structures, and thus possible three-dimensional equilibrium phases, such as the body-centered-cubic and gyroid phases, are ignored in the phase diagram Fig. 2F. The phase diagram shown in Fig. 2F is the mean-field phase diagram of the LP model in 2D with q = 2 cos π 12 . Fluctuations could have important effects on the thermodynamics of the model system. However, we are not aware of any systematic examination of the fluctuation effects on the LP model, and a relevant study might be fluctuation effects on the Landau-Brasovskii model (35,36). We would leave the fluctuation effects on the LP model for possible future study. HiSD Method While a stable phase, corresponding to a local minimum of the free-energy functional Eq. 1, can be calculated by gradient descent algorithms with proper initial configurations, finding a transition state is much more difficult because it does not correspond to a local minimum. Moreover, multiple zero eigenvalues of its Hessian D 2 F(φ) at stationary points could lead to the degeneracy of transition states. The problem is further complicated by the fact that there is no a priori knowledge of the transition states. Most of the existing methods for solving nonlinear equations, such as homotopy methods (37)(38)(39) and deflation techniques (40,41), are inefficient for this degenerate problem because of superabundant solutions from arbitrary translation. Surface-walking methods for searching index-1 saddle points, such as the gentlest ascent dynamics (42) and dimer-type methods (43), are also not capable of computing the transition states from metastable ordered states in the current problem because the eigenvectors with zero eigenvalues of the metastable state would be mistaken as the ascent direction, leading to a failure of escaping the basin of attraction. On the other hand, the string method (44,45) with suitable initializations connecting initial and final states could relax to the MEPs. However, finding initial guesses for a string that is suitable for a particular MEP is not straightforward. In particular, it is quite difficult in the current problem due to the fact that there are no obvious epitaxial relations between crystals and quasicrystals. 
Furthermore, in order to obtain the accurate critical nucleus and MEP, the string method needs a sufficient number of nodes because the critical nucleus is close to the initial metastable state, which could lead to a large increase of the computational cost. The climbing string method (46) could overcome such difficulty and reduce the computational cost by calculating half of the MEP, from the initial state to the transition state. However, the climbing string method cannot easily climb out of the basin of attraction because of multiple zero eigenvalues of the initial state. The presence of zero eigenvalues is computationally a challenge. To deal with the repeated zero eigenvalues of equilibrium phases in computing the degenerate transition states, we applied a numerical method, HiSD, for high-index saddle points to find degenerate index-1 saddle points, with the inclusion of both negative eigenvalues and zero eigenvalues. The HiSD for finding index-k saddles (k-HiSD) is governed by the following dynamics (47):

φ̇ = −(I − 2 Σ_{i=1}^{k} v_i v_i^T) DF(φ),
v̇_i = −(I − v_i v_i^T − 2 Σ_{j=1}^{i−1} v_j v_j^T) D²F(φ) v_i,  i = 1, · · · , k,   [2]

where v_1, · · · , v_k represent the ascent directions, which approximate the eigenvectors corresponding to the smallest k eigenvalues of the Hessian D²F(φ). The LP functional Eq. 1 is highly ill-conditioned because of the eighth-order spatial derivatives, so the locally optimal block preconditioned conjugate gradient (LOBPCG) method (48) is applied to calculate the smallest k eigenvalues and the corresponding orthonormal eigenvectors. A preconditioner is also employed to accelerate these eigenvalue computations. For a metastable state φ* whose Hessian D²F(φ*) has m zero eigenvalues, we use the LOBPCG method to calculate {u*_1, · · · , u*_m} as an orthonormal basis of the nullspace of the Hessian D²F(φ*) and u*_{m+1} as a normalized eigenvector of the smallest positive eigenvalue. Since the smallest positive eigenvalue of the Hessian at each stable/metastable ordered state is repeated, there are different choices for u*_{m+1}, which can lead to multiple transition states and MEPs. Next, we apply the (m + 1)-HiSD by choosing φ(0) = φ* + u*_{m+1} (scaled by a small positive constant) as the initial search position and v_i(0) = u*_i (i = 1, · · · , m + 1) as the initial ascent directions for searching an index-(m + 1) saddle point. The small positive constant is used to push the system away from the minimum, which could be regarded as an upward search on a pathway map (49). By relaxing the (m + 1)-HiSD in a semi-implicit scheme for the time-dependent φ, with the ascent directions v_i (i = 1, · · · , m + 1) updated as the eigenvectors at the current position φ(t) using a one-step LOBPCG method, a stationary solution φ^new can be found, corresponding in most cases to a degenerate transition state with only one negative eigenvalue and m repeated zero eigenvalues. It should be noted that for non-equilibrium positions φ(t), the Hessians have no zero eigenvalues in general. If φ^new turns out to be a high-index saddle point-for instance, an index-k saddle (k ≤ m)-we then implement (k − 1)-HiSD to apply a downward search on the pathway map (49) to search for lower-index saddles. The initial search position for the downward search is chosen as φ(0) = φ^new + u^new_k, and the initial ascent directions are taken from {u^new_1, · · · , u^new_m}, the orthonormal eigenvectors of D²F(φ^new) calculated by LOBPCG. This procedure is repeated for new saddle points until the degenerate transition state is located. The MEP is then obtained by following the gradient flow dynamics along the positive and negative unstable directions of the transition state (SI Appendix).
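As a minimal illustration of the index-1 saddle search idea behind these dynamics (and not of the actual LP computation, which relies on the preconditioned LOBPCG eigensolver and semi-implicit updates described above), the sketch below runs a simplified HiSD-type iteration on a two-variable toy energy; the toy function, step size and iteration count are arbitrary choices made only for this example.

# Toy illustration of an index-1 HiSD-type search (not the LP computation):
# F(x, y) = (x**2 - 1)**2 + 5*y**2 has minima at (+/-1, 0) and an index-1 saddle at (0, 0).
import numpy as np

def grad(z):
    x, y = z
    return np.array([4.0 * x * (x**2 - 1.0), 10.0 * y])

def hess(z):
    x, _ = z
    return np.array([[12.0 * x**2 - 4.0, 0.0], [0.0, 10.0]])

z = np.array([0.8, 0.3])   # start near the minimum at (1, 0), displaced a little
v = np.array([1.0, 0.0])   # initial ascent direction
dt = 1e-3
for _ in range(20000):
    g, H = grad(z), hess(z)
    # reverse the force along v (ascent there), keep plain descent in the other directions
    z = z - dt * (g - 2.0 * np.dot(v, g) * v)
    # relax v toward the eigenvector of the smallest eigenvalue of the Hessian
    v = v - dt * (H @ v - np.dot(v, H @ v) * v)
    v = v / np.linalg.norm(v)

print("approximate saddle:", z)                                   # expected to approach (0, 0)
print("Hessian eigenvalues there:", np.linalg.eigvalsh(hess(z)))  # one negative, one positive

In the actual problem the same idea is applied to the discretized order parameter, with several ascent directions to absorb the zero modes, which is what the upward and downward searches on the pathway map automate.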
Nucleation from a Liquid to a Quasicrystal

First, we present the MEP connecting a disordered liquid to a quasicrystal. By choosing ε = −0.01 and α = 1, the QC has a lower free-energy density, f = F/V = −2.7 × 10⁻³, than the disordered liquid with f = 0. The critical nucleus of QC from the liquid is shown in Fig. 3 (the QC critical nucleus has a circular shape with a small amplitude; scale bars: 10π). The MEP in Fig. 3 represents the possible nucleation process starting from the liquid state toward the quasicrystal. If the patch of the quasicrystal is smaller than the critical nucleus, it will shrink back to the liquid. If the patch is larger than the critical nucleus, it will grow and eventually take over the whole system. The critical nucleus represents a small patch of QC surrounded by damped density waves. The density wave at the center of the nucleus has a much smaller amplitude than that of the corresponding QC state. Therefore, the critical nucleus obtained from solving the Euler-Lagrange equation of the system differs significantly from that of the classical nucleation theory. Along the transition pathway and beyond the critical nucleus, the nucleus grows isotropically with an increasing amplitude at the center, eventually reaching a full QC phase (see Movie S1 for the liquid → QC transition pathway). Our finding is consistent with previous results (25, 31, 50), indicating that there is only one critical nucleus from liquids to quasicrystals, and the growth of the quasicrystalline nucleus will fill the whole space. Moreover, our presented example is generic, not limited to special model parameters (see the bifurcation diagram in SI Appendix).

Nucleation from a Quasicrystal to a Crystal

Next we demonstrate how a quasicrystal would transform to a crystal. We choose ε = 0.05, α = 1 so that QC is a metastable state with f = −5.3 × 10⁻³ and C6 is a stable state with f = −6.3 × 10⁻³. For this case, we found two transition pathways connecting QC to C6. In the one-stage transition pathway, a circular critical nucleus of C6, shown in Fig. 4A, is observed. Interestingly, the growing C6 nucleus beyond the critical nucleus shows that a transient state along the transition pathway contains another interphase connecting the C6 and QC states (see Movie S2 for the QC → C6 transition pathway). This new interphase is the metastable LQ state, periodic in one dimension and quasiperiodic in the other dimension, which can be stabilized at other parameters (Fig. 2F). It is noted that a similar interface between crystals and quasicrystals was obtained with Dirichlet boundary conditions for the phase-field order parameter (51). This finding indicates that LQ could serve as an intermediate state connecting the QC and C6 phases. Indeed, a two-stage transition pathway from QC to C6 has been obtained from our calculations. This two-stage pathway reveals a first transition from QC → LQ and a second transition from LQ → C6, as shown in Fig. 4A. Nucleation at the first stage shows an ellipsoidal critical nucleus of LQ with the periodic direction as the major axis. The energy barrier of the LQ nucleus (∆f = 4.5 × 10⁻⁶) is lower than the energy barrier of the C6 nucleus (∆f = 5.3 × 10⁻⁶), indicating that the QC → LQ transition pathway is the more probable one.
After the QC → LQ transition, the second-stage transition follows the formation of another ellipsoidal critical nucleus of C6 with the quasiperiodic direction of LQ as the major axis and eventually to the C6 phase (see Movie S3 for the QC → LQ → C6 transition pathway). It is noted that two transition pathways have been observed for the gyroid to lamellar transitions of block copolymers (22). Furthermore, the appearance of a metastable intermediate state as a precursor of the stable phase is consistent with Ostwald's step rule (52). Nucleation from a Crystal to a Quasicrystal Finally, we present results on the emergence of a quasiperiodic structure from a periodic structure. By choosing ε = 5 × 10 −6 and α = √ 2/2, C6 becomes a metastable state with f = −6.8 × 10 −4 and QC is a stable state with f = −7.5 × 10 −4 . Again, two transition pathways are obtained in this case (Fig. 4B). It is noted that, along the one-stage transition pathway, a larger computation domain with L = 306 was used to avoid the effect of finite domain sizes on growth dynamics. Similar to the phase transition from QC to C6, a circular critical nucleus of QC is found on the one-stage transition pathway. After nucleation, the size of the QC nucleus increases with the appearance of the LQ interphase (see Movie S4 for the C6 → QC transition pathway). On the other hand, a two-stage transition pathway from C6 to QC via a metastable LQ, as shown in Fig. 4B, has been obtained. The critical nucleus of LQ at the first stage assumes an ellipsoidal shape with the quasiperiodic direction as the major axis. The energy barrier of LQ nucleus (∆f = 1.7 × 10 −6 ) is lower than that of C6 nucleus (∆f = 4.6 × 10 −6 ), indicating that the C6 → LQ transition would be more likely chosen than the direct C6 → QC transition pathway. Because LQ and QC have similar free energies in this case, a small driving force from LQ to QC transition is expected. Therefore, a much larger critical nucleus of QC is found. A full C6 → LQ → QC transition pathway is shown in Movie S5. Discussion In summary, we applied an efficient numerical method to accurately compute critical nuclei and transition pathways between crystals and quasicrystals. The computational challenge of the problem stems from the existence of multiple zero eigenvalues of the Hessian of different ordered phases. We solved this problem by applying the HiSD method to search for high-index saddle points, resulting in degenerate index-1 saddle points corresponding to the critical nuclei. The proposed methodology is applicable to a wide range of physical problems with degeneracy undergoing phase transitions. Application of the numerical method to the LP model reveals an interesting set of transition pathways connecting crystalline and quasicrystalline phases. For the transitions between the crystalline C6 and quasicrystalline QC phases, two transition pathways, corresponding to a one-stage direct transition and a two-stage indirect transition, have been obtained. We found that a one-dimensional quasicrystalline LQ phase, with periodicity in one direction and quasiperiodicity in the other direction, plays a crucial role to connect C6 with QC. This discovery is consistent with the phase diagram Fig. 2F, where the LQ phase can be stabilized between the stable regions of C6 and QC. The two possible pathways represent different underlying mechanisms of breaking periodicity. Along the one-stage transition pathway, the periodicity breaks in two dimensions. 
On the other hand, for each stage of the two-stage transition pathway, the periodicity is broken along one direction. Compared with the one-stage transition pathway between C6 and QC, the two-stage transition pathway C6 ↔ LQ ↔ QC is, consistent with Ostwald's step rule, more probable because the LQ nucleus has a lower energy barrier. Several studies with different models suggested the existence of multiple localized states composed of hexagon (21) or quasicrystal (53) patches surrounded by the liquid. This phenomenon generally corresponds to the special case of phase coexistence between the disordered state and the ordered state, which occurs in a narrow region near the phase boundary between these two phases. Thus, the parameter range for the existence of multiple localized states is very limited. Furthermore, these localized states would correspond to local minima of the free-energy landscape, whereas the critical nuclei correspond to saddle points on the free-energy surface. Using the LP model, we demonstrated that there is only one critical nucleus along the MEP from the liquid state to the quasicrystalline state. If multiple critical nuclei exist, our method can also find them, and the MEP from the initial state to the final state may pass through multiple transition states, corresponding to multiple energy barriers. When multiple MEPs exist, the nucleation of quasicrystals would most likely occur along the MEP whose critical state has the lowest energy barrier. It should be pointed out that the LP model is a phenomenological model, and the results obtained from such a model cannot be applied directly to physical experiments, unless a connection is made between the physical system and the model parameters. Nevertheless, these findings shed light on the nucleation and growth of quasicrystals. The accurate numerical results provide a comprehensive picture of critical nuclei and transition pathways between periodic and quasiperiodic structures.

Materials and Methods

Crystalline Approximant Method. For a given set of d base vectors {e*_1, · · · , e*_d}, a reciprocal lattice vector k of d-dimensional quasicrystals can be expressed as

k = Σ_{j=1}^{d} κ_j e*_j.   [4]

It is important to note that some of the coefficients κ_j might be irrational numbers. A quasiperiodic function φ(r) can be expanded as

φ(r) = Σ_k φ̂(k) exp(i k · r).   [5]

Since some reciprocal lattice vectors cannot be represented as linear combinations of e*_i with integer coefficients, proper rational numbers L are chosen such that Lκ_j of all the concerned reciprocal lattice vectors k could be approximated as integers. As a result, a quasiperiodic function could be approximated by a periodic function with a period 2πL, where the k are linear combinations of e*_j with integer coefficients. Within this approximation, the computational domain becomes [0, 2πL]^d with periodic boundary conditions for φ(r). For the 2D (d = 2) 12-fold quasicrystals, q is chosen as 2cos(π/12) in the LP model. For e*_1 = (1, 0) and e*_2 = (0, 1), the coefficients to be approximated are 1, 1/2, q cos(π/12), q cos(π/4), and q cos(5π/12), and proper values of L are 30, 82, 112, 306, etc. (SI Appendix). We have tested the accuracy of various L and found that L ≥ 112 gave results within the required accuracy. Therefore, we set L = 112 or 306 in our numerical calculations, and use the spectral methods for Eq. 6 with N = 1024 or 3072 points in each dimension to discretize the order parameter φ(r).
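The role of L can be seen numerically: for q = 2cos(π/12), the irrational parts of the coefficients listed above all reduce to √3/2, so a good approximant is a value of L for which L·κ is close to an integer for every coefficient. The short sketch below (an illustration, not the code used in the study) computes that rounding error for the listed candidate values of L and shows that it shrinks as L increases.

# Illustration of the crystalline approximant idea: how well can each coefficient
# kappa_j (for q = 2*cos(pi/12)) be approximated by an integer divided by L?
import numpy as np

q = 2 * np.cos(np.pi / 12)
kappas = [1.0, 0.5, q * np.cos(np.pi / 12), q * np.cos(np.pi / 4), q * np.cos(5 * np.pi / 12)]

for L in (30, 82, 112, 306):
    errors = [abs(L * kappa - np.rint(L * kappa)) for kappa in kappas]
    print(f"L = {L:4d}   max |L*kappa - nearest integer| = {max(errors):.4f}")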
The stable phases are calculated using the gradient flow

∂φ/∂t = −P DF(φ),   [7]

with a semi-implicit scheme (30), where Pϕ = ϕ − (1/V) ∫ dr ϕ is the projection operator enforcing the mass conservation constraint. The nonlinear terms in Eq. 7 are treated by using the pseudospectral method (29). The semi-implicit scheme and the pseudospectral method are also applied in the HiSD, Eq. 2.

Zero Eigenvalues of Hessians at Equilibrium States. The Hessians of ordered equilibrium states exhibit multiple zero eigenvalues. We dealt with this by treating the eigenvectors corresponding to repeated zero eigenvalues as unstable directions, and the degenerate transition states (index-1 saddle points) can be calculated from degenerate metastable states using HiSD for index-k saddle points (k ≥ 1). Here, the (Morse) index of a stationary point of a functional is defined as the number of negative eigenvalues of the Hessian (54), and the word "degenerate" specifies that its Hessian has zero eigenvalues. For instance, the stable/metastable phases have index 0, and the transition states are index-1 saddle points. The homogeneous state φ(r) = 0 is an isolated solution and its Hessian has no zero eigenvalues in general. For C6, the Hessian has two repeated zero eigenvalues, corresponding to translations along the x and y axes, while the rotation transformation cannot be realized because of the discretization method. On the other hand, numerical calculations show that the Hessian at LQ has zero eigenvalues of multiplicity three and the Hessian at QC has zero eigenvalues of multiplicity four. The various zero-eigenvalue multiplicities of the Hessians can be explained with a higher-dimensional description of quasicrystals-that is, a d-dimensional quasicrystalline structure can be represented by a projection from a higher-dimensional periodic structure (17). To calculate QC, a four-dimensional (4D) reciprocal space should be applied in the projection method (30), because the reciprocal lattice vectors can be represented by linear combinations of four primitive reciprocal vectors with integer coefficients, as shown by the arrows in Fig. 2A-E. Since the 2D projection of any 4D translation of QC remains an equilibrium state, QC is a degenerate solution with four repeated zero eigenvalues. Correspondingly, three primitive reciprocal vectors are enough to represent LQ with integer coefficients. In addition, Hessians at nonequilibrium points generally have no zero eigenvalues.

Supporting Information Text

Dimensionless form of the two-dimensional Lifshitz-Petrich model. The original two-dimensional Lifshitz-Petrich (LP) model may contain seven parameters c > 0, q2 > q1 > 0, β > 0 and ε, α, A ∈ R for a general physical system (1), but only three parameters remain after rescaling (2). Let q = q2/q1 and φ̃(r) = q1⁴β⁻¹c φ(q1 r) + A; we then obtain the rescaled free-energy functional, in which the rescaled parameters are expressed in terms of the original ones (V is the integration area in Eq. 1). Therefore, without loss of generality, we can set c = β = q1 = 1 and A = 0 in the LP model. The parameter q represents the ratio of the two characteristic wavelength scales in the system. To stabilize different quasicrystals, the LP model requires certain choices of q. In the current study, q is chosen as 2cos(π/12) to stabilize various ordered phases of interest, including the two-dimensional 12-fold (dodecagonal) quasicrystals. For the three-dimensional icosahedral and two-dimensional 10-fold (decagonal) quasicrystals, q is chosen as 2cos(π/5). For the two-dimensional 8-fold (octagonal) quasicrystals, q is chosen as 2cos(π/8) (2).
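For illustration, the semi-implicit pseudospectral relaxation of the gradient flow in Eq. 7 can be sketched as follows: the stiff linear operator is treated implicitly through its Fourier multiplier and the remaining terms explicitly, with the mean removed to respect mass conservation. The sketch assumes the rescaled variational derivative DF(φ) = [(∇² + 1)(∇² + q²)]²φ − εφ − αφ² + φ³ discussed above, and the grid size, box size, time step and parameter values are arbitrary illustrative choices rather than those of the study.

# Minimal sketch of a semi-implicit, pseudospectral relaxation loop for the LP
# gradient flow (illustrative parameters; not the study's production code).
# Assumed: DF(phi) = [(lap+1)(lap+q^2)]^2 phi - eps*phi - alpha*phi^2 + phi^3.
import numpy as np

N, Lbox = 256, 2 * np.pi * 30           # grid points and periodic box size (illustrative)
eps, alpha, q = 0.05, 1.0, 2 * np.cos(np.pi / 12)
dt = 0.1

k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
lin = ((1.0 - k2) * (q**2 - k2)) ** 2   # Fourier multiplier of [(lap+1)(lap+q^2)]^2

rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((N, N))
phi -= phi.mean()                        # mass conservation: zero mean

for _ in range(2000):
    nonlin = eps * phi + alpha * phi**2 - phi**3      # explicit part (pseudospectral)
    nonlin -= nonlin.mean()                           # project out the mean (operator P)
    phi_hat = (np.fft.fft2(phi) + dt * np.fft.fft2(nonlin)) / (1.0 + dt * lin)
    phi = np.real(np.fft.ifft2(phi_hat))
    phi -= phi.mean()

print("phi range:", phi.min(), phi.max())

Because the linear operator is diagonal in Fourier space, the implicit treatment removes the severe time-step restriction that the eighth-order derivatives would otherwise impose, which is the point of the semi-implicit splitting mentioned above.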
A similar phase field crystal model, slightly different from Eq. 1, was used in (3-5) to stabilize quasicrystals with various symmetries, and our methodology can be directly applied to this model as well. The actual free energy barrier is proportional to the values ΔF obtained in our calculations, which are in units of kT; that is, ΔF_physical = ΔF ξ kT, where ξ is a model-dependent parameter. In the derivation of the Landau theory from certain molecular models, all the parameters in the Landau theory are specified by the molecular parameters (6). Since we have taken the Landau theory as a generic model of phase transitions, the information about the exact value of ξ is not available within our current approach. If we take the Landau theory specified by Eq. 1 as the starting physical model, we could determine the value of ξ via the scaling analysis following Eq. 1. Specifically, the free energy given by Eq. 1 is in units of kT. Thus, the scaled free energy is in units of β^(-1) c^2 q1^14 kT, i.e., ξ = β^(-1) c^2 q1^14. The approximated reciprocal lattice vectors are k_{j,L} = L^(-1)[L k_j], where [k] rounds each entry of k to the nearest integer. The initial values of these phases are prepared as follows (8): φ(r) = Σ_{j∈J} a exp(i k_{j,L} · r). [6] In practical computation, a is taken as 0.058 for better convergence. The initial choice of the set J is based on the symmetry of the intended structure.
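A minimal sketch of this initial-condition construction is shown below. The values L = 112, N = 1024, and a = 0.058 follow the text above, but the six unit wavevectors used for the set J are a hypothetical hexagonal example chosen only to illustrate the rounding step k_{j,L} = L^(-1)[L k_j]; they are not the paper's actual choices of J for C6, LQ, or QC.

```python
import numpy as np

L, N, a = 112, 1024, 0.058                 # approximant parameter, grid size, amplitude
box = 2 * np.pi * L
x = np.linspace(0.0, box, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Hypothetical hexagonal set J: six unit wavevectors spaced 60 degrees apart.
angles = np.arange(6) * np.pi / 3
J = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Round each wavevector onto the approximant lattice, k_{j,L} = round(L*k_j)/L,
# so that exp(i k_{j,L} . r) is exactly periodic on the [0, 2*pi*L]^2 box.
J_L = np.round(L * J) / L

# Single-mode initial condition phi(r) = sum_j a * exp(i k_{j,L} . r), taken real.
phi0 = np.real(sum(a * np.exp(1j * (kxj * X + kyj * Y)) for kxj, kyj in J_L))
print(phi0.shape, phi0.mean())
```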
Multispecies blue justice and energy transition conflict: examining challenges and possibilities for synergy between low-carbon energy and justice for humans and nonhuman nature This paper explores deep insights into sustainability transition tensions and pathways in terms of place-based conflict and potential for synergies between offshore wind energy (OWE) development and justice for humans and nonhuman nature. Specifically, we build a capability and recognition-based multispecies blue justice framework that at once centers ecological reflexivity (i.e., environmental awareness-raising, proxy representation of nature, and institutional recognition and protection of rights of nature and human-nature relationality), decenters anthropocentric frames of justice, and sheds light on injustices, human and nonhuman that climate and energy transitions may create or reinforce. This framework then informs analysis of a sustainability transition conflict, specifically a longstanding OWE conflict on Hiiumaa island, Estonia. This analysis unravels justice concerns, human and nonhuman, raised by proxy representatives of nature (i.e., grassroots actors and environmental stewards), the knowledge contestations involved, and the resolution measures undertaken thus far. Next, we discuss the possible transformative role of the OWE conflict, including how a Supreme Court ruling invalidating the OWE plan has fostered reflexive planning and may have set a legal precedent that may have human and nonhuman justice implications for the handling of future planning cases. We then highlight remaining challenges for socially and ecologically responsive OWE deployment. These include the judicial non-recognition of nature’s right as well as environmental values and sociocultural ties to nature as rights worth protecting, and the likely effects that formalization of European Union ambitions to speed-up and ramp-up renewable energy could have locally. These include prospects for environmental stewards and ocean defenders to steer nature-positive, people-centered energy transitions. Last, we propose conditions for enhanced multispecies justice, including how formal interventions (e.g., law) and informal practices (e.g., negotiation, awareness-raising) can be harnessed to unlock productive conflict and align energy transitions with the norms of justice, human and nonhuman. Introduction Oceans and seas are fundamental to life.They cover around 70% of the Earth's surface, contain 80% of all life forms, produce at least 50% of the Earth's oxygen, absorb about 30% of human-induced CO 2 emissions, and provide food (about 20% of daily intake of animal protein), jobs, medicines, cosmetics and biofuel (from algae), and energy (from wind, waves, and tides) (Scholaert and Jacobs 2022).Coastal communities also depend on oceans for shelter, livelihoods, recreation, as well as spiritual, mental, and physical wellbeing (Gee et al. 2017;Tafon et al. 
2023a). Oceans are also vital to the world's economy, with estimates suggesting that the value added generated by ocean-based industry globally could reach USD 3 trillion in 2030, with ocean-related employment estimated to surge above 40 million during the same period (OECD 2016). Oceans thus have the potential to contribute to more than half of the sustainable development goals, including goals 1 (no poverty), 2 (zero hunger), 3 (health and wellbeing), 7 (affordable and clean energy), 8 (decent work and economic growth), 9 (industry, innovation and transformation), 13 (climate action), 15 (biodiversity), and 16 (peace and justice). However, ocean health and the wellbeing of organisms that depend on it are under threat from climate change (e.g., through heat- and emissions-related acidification and oxygen deficiency) and other human-related activities, including seabed mining, and pollution from shipping (e.g., oil spills and underwater noise), plastics, and agricultural, industry and sewage run-offs (Scholaert and Jacobs 2022; Erinosho et al. 2022). Unabated climate change and blue economic pursuits can also undermine the capabilities of the most vulnerable who depend on the seas and coasts for material and non-material wellbeing and for whom sea-level rise and natural disasters such as storm surges diminish responsive capacities (Nash et al. 2022). Threats to coastal community and ecological wellbeing are also likely to intensify as new maritime sectors such as blue biotechnology, offshore renewable energy, and marine aquaculture intensify (Tafon et al. 2022). Among new maritime sectors, offshore wind energy (OWE) has gained prominence in supranational energy and climate policy circles as an engine of growth and decarbonization, with the European Union (EU) seeking to increase its installed OWE capacity 20-fold (EC 2020a, b). However, OWE deployment across Europe has been slow. Investors, developers, and policymakers alike ascribe this to permit-related issues (EC (European Commission) 2022a), although threats to biodiversity, national defense, and community wellbeing (Tafon et al. 2019) as well as recent supply-chain bottlenecks resulting from economic sanctions on Russia are also stalling progress (EC (European Commission) 2022a). Nonetheless, newly proposed measures to accelerate progress, particularly those triggered by the recent ambition to end Europe's dependence on increasingly volatile and politically weaponized Russian energy well before 2030, have birthed an era of rapid renewable energy (RE) deployment that is setting off an unprecedented, massive-scale OWE rush. Indeed, despite having just recently (in 2021) raised the Union's RE target (from 32 to 40%), the need to phase out dependence on Russia's energy has resulted in a proposal to further raise the target to 45% (EC (European Commission) 2022a). Furthermore, in May 2022 alone the European Commission released a suite of policy proposals which, if formalized, will among other things simplify and fast-track permit-granting procedures for RE projects. Among them, the Proposal to amend the RE Directives (EC (European Commission) 2022b) articulates four lines of action, including designating so-called renewables-go-to areas (in which an environmental impact assessment (EIA) is not required), limiting permit-granting time to one year (for renewables-go-to areas) and two years (for projects outside renewables-go-to areas), speeding-up judicial appeals procedures, and institutionalizing RE as an "overriding public interest" (p. 23).
Given the political nature of the ocean (i.e., the multiple and conflicting stakes, worldviews, values, and power relations), there is urgent need to unravel what rapid energy transition could mean for environmental stewardship and democracy, power relations, and the wellbeing and capabilities of the most vulnerable, human and nonhuman.Responding to calls for a more just sustainability transition at sea (Bennett 2022;Crosman et al. 2022;Tafon et al. 2023b), this paper argues that ocean justice must be conceptualized and pursued beyond prevailing anthropocentric frames and practices (that favor elite humans) to embrace a more encompassing multispecies blue justice (MBJ) concept that extends capabilities and the community of justice to neglected others, human and nonhuman. In line with the above claim, this paper examines tensions and potential for interface between RE transition and place-based ocean justice concerns relating to wellbeing and capability, human and nonhuman.To do so, we first elaborate a MBJ framework that can be used to critically assess and reform ocean-based sustainability transitions in terms of their sociopolitical and ecological performance.We define MBJ as the wellbeing of all lives (i.e., humans, ecosystems and plants and animals) or their condition and ability to flourish.Our MBJ concept places emphasis on identifying and addressing structural forces (e.g., governance arrangements, norms, power relations) that undermine the rights, needs, and capabilities of marginalized identities, human and nonhuman (White 2013; Tafon et al. 2023a).Defined in those terms, MBJ calls for an ecological reflexivity that supports institutionalized recognition and representation of this silenced constituency, strengthens environmental stewardship and citizenship, and harnesses mutually beneficial relations between humans and nonhuman nature.Starting from the premise that prevailing injustices against humans and nonhuman nature result from structural inequalities, MBJ seeks recognition and better involvement of marginalized humans and nonhumans, through either direct or proxy 2 representation at the supranational, national, and local levels of policy and decision-making (Schlosberg 2007). 
Secondly, we examine, through our MBJ lens, the stakes involved in an ongoing OWE conflict in Hiiumaa, Estonia, unraveling the role of knowledge in sustaining the conflict and the MBJ concerns raised by various representational proxies of nature and coastal identity in pushing for a nature-positive, people-centered RE transition. More explicitly, we examine, on the one hand, how Hiiumaa islanders believe the OWE development would impinge on their capabilities to live a good life, including the importance of the environment and nature both as a key aspect of that life and as a right in and of itself. In examining this aspect of our study, we place emphasis on the relationalities between the wellbeing of place-based humans and nonhumans, i.e., we examine how humans in Hiiumaa see their wellbeing in relation to nonhuman nature as being affected by the OWE project (if it proceeds). Additionally, we examine how more formalized proxy representatives of nature (i.e., environmental bureaucrats and agencies) countenance the nonhuman justice effects of proceeding with the OWE project as presently proposed. We emphasize the role and agency of grassroots movements and "expert" proxy representatives of nature as a means to address the methodological challenge of giving direct political voice to nonhumans. By highlighting issues and struggles around multispecies recognition, capabilities, and representation in relation to OWE development, the paper makes a theoretical and empirical contribution to the field of environmental justice, especially more-than-human justice, which, while a conceptually burgeoning field, is still in need of empirical grounding, not least in a multistakeholder, power-ridden, and conflict-laden context such as marine-based sustainability transition planning and governance. The remainder of the paper is structured as follows. Section two sets out our methodological approach. It outlines the importance of examining MBJ in conflict settings and details the methods and materials that we draw on for the empirical analysis. Section three develops a recognition- and capability-informed MBJ framework for the analysis of injustices, human and nonhuman. The framework then informs the analysis, in section four, of a longstanding OWE conflict in Hiiumaa, Estonia, focusing on actor positionalities, knowledge contestations, and MBJ concerns raised by grassroots actors and other representational "proxies" of nature. Section five considers ways in which the OWE controversy may have positively transformed Estonia's marine and OWE planning, while also highlighting remaining challenges for socially just and ecologically sustainable OWE deployment. Section six considers ways in which the capabilities necessary for enhancing the wellbeing of Hiiumaa residents and nonhuman nature can be advanced in relation to planning for RE. Section seven concludes the paper.
Studying MBJ through ocean conflicts We situate our empirical examination of MBJ issues in a local environmental conflict setting as a way to gain insights into how the wellbeing of humans and nonhuman nature is affected by human activities and how this is countenanced by socioenvironmental stewards and defenders (Bennett 2022).Conflict sheds light on how nature and its use and management are framed and organized; how rules, policies, and cultural norms and practices condition this; what values, rights, needs, knowledge, and capabilities (human and nonhuman) are at stake; how and by whom institutionalized norms are resisted, and with what alternative truths, values, and sustainability visions (Smith and Patterson 2018;Temper et al. 2020;Tafon et al. 2022).Conflict portals are thus vital for analyzing and potentially redressing MBJ issues.This is because conflict brings to the fore the socioenvironmental struggles of diverse groups, e.g., small-scale fishers, environmental stewards and activists, indigenous communities, and others, as they raise concerns related to neglected rights, needs, sufferings, and beings and doings (Alexander 2019;Jentoft 2020;Tafon et al. 2023a).These struggles tend to challenge institutions and practices and seek to reverse or minimize injustices and secure recognition and protection of rights to flourish for the marginalized-nonhuman nature, the poor, ethnic and racial minorities, women, children, and future generations (White 2013;Pellow 2018;Scheidel et al. 2020).Conflict thus offers opportunities to spot and potentially institutionalize hitherto unrecognized or undervalued moral regimes, relational cosmologies and practices, and environmental knowledge and stewardship in support of wellbeing, human and nonhuman (Schlosberg 2007) as we highlight both conceptually and in the context of the ongoing OWE controversy in Estonia. Methods and materials Empirical material for this paper was obtained through three strategies.First, through online and face-to-face semi-structured interviews with diverse interest groups (see Appendix), including from EIA (N = 1), the Ministry of Environment (N = 1), OWE developers (N = 1), and residents of Hiiumaa (N = 3) who represent Hiiu Tuul, a grassroots movement.The reason for the limited number of interviews (N = 6) is mainly because we targeted only actors who are either directly involved in the OWE process (e.g., the developers, or the Ministry of Environment having jurisdiction over EIA matters) or were affected in human and nonhuman justice terms (e.g., Hiiu Tuul).Furthermore, the three Hiiumaa residents were selected based on prior knowledge (e.g., Tafon et al. 
2019) of them as key leaders of the Hiiu Tuul group that spearheaded the legal opposition of the OWE project.Other actor groups with stakes in the OWE project (e.g., Hiiu Municipality) did not respond to our requests for interviews.Second, in addition to interviews, we carried out thematic content analysis of written comments (N = 3) submitted in September 2019 by Hiiu Tuul, the Hiiumaa Environmental Board, and the Estonian Fund for Nature.These comments were submitted in response to an EIA that the OWE developer had produced in an attempt to revitalize its OWE plan, which the Supreme Court had invalidated in August 2018.These stakeholders were selected on the basis of either being directly affected by the OWE project in human justice terms (Hiiu Tuul) or actively representing nature (the Estonian Fund for Nature, the Hiiumaa Environmental Board, and Hiiu Tuul).While several other actors (e.g., the Port of Tallinn, the Ministry of Defense, the National Heritage Board) also submitted written comments in relation to the EIA, these were not considered in this study because they did not align with our primary focus on MBJ concerns.Our third research technique consisted of (1) a content analysis of the OWE developer's website and a signed Cooperation Agreement between the developer and a coastal municipality, and (2) an online participant observation of bilateral meetings between Hiiumaa municipality and the OWE developer. The aim of the combination of these research techniques was to broaden and deepen our understanding of how the OWE project is framed, the emerging human and nonhuman justice concerns and related knowledge claims, and the strategies adopted toward securing the project's acceptability.We also sought to trace the conflict trajectory and map out actor positionalities both historically and as the conflict is currently unfolding.Here, we place emphasis on potential alliances and disruptions, agreements and concessions, remaining conflict resolvability challenges, and necessary conditions to move beyond the current stalemate toward rendering the OWE project socially just and environmentally sustainable. Multispecies blue justice The past few years have witnessed the expansion of the distinct but interrelated fields of blue justice (Saunders et al. 2020;Parsons et al. 2021;Bennett 2022;Crosman et al. 2022;Tafon et al. 2023a) and climate justice (Schlosberg 2019;Shue 2019;UN 2015).However, both fields frame justice primarily in human terms, with blue justice emphasizing equitable distributions and empowerment of weaker actors, and climate justice focusing on climate disasters, causes, and differentiated responsibilities, vulnerabilities, impacts, and adaptive capacities.Proper understanding of procedural and distributive justice and differential climate change vulnerabilities and responsive abilities is undoubtedly crucial to addressing the "greenhouse gassed and fossil fueled desires" of human "weathermakers" (Neimanis 2019 p. 432).However, overemphasis on distribution and participation, especially within a narrow frame of climate change effects, may obscure and normalize human and nonhuman injustices that seemingly innocuous technological solutions to climate change (e.g., OWE) may cause or exacerbate (Kaldellis et al. 2016;Lloret et al. 
2022).The MBJ framework that we advance here is crucial for transforming RE transition conflict and enhancing wellbeing and capabilities, human and nonhuman.In developing the MBJ framework we focus on capabilities and recognitional justice, which while critical to realizing the other dimensions of justice (Honneth 1995), remain undertheorized, under-examined, and under-pursued in the ocean (Saunders et al. 2020;Tafon et al. 2023a).However, it is important to note that, while the MBJ framework sees capability and recognition as "foundational," all the constitutive elements of a theory of justice (including procedural and distributive justice) are interdependent and indivisible.Furthermore, while for analytical clarity we elaborate MBJ in terms of human and nonhuman justice, in practice they are interlinked and interdependent and should be treated as such. Recognitional justice starts from the premise that socioeconomic inequalities and insecurities, political exclusions, environmental harms, and inequitable distributions of rights and capacities across humans and nonhumans, time, space, and differentiated identities are rooted in structural arrangements (White 2013;Schlosberg 2007).From this premise, recognitional justice seeks remedy at the structural level of human institutions (e.g., regulation, policy, capitalism, norms etc.) where rules and discourses around environmental rights, needs, citizenship, stewardship, relationships, behavior, identity, vulnerability, participation, and distribution of goods and bads are constructed and organized (Pellow 2018;Tafon et al. 2023a).Recognition is centered on three key principles-love (loving care for the other's wellbeing in light of their needs), respect, (the organization of a system of political and civil rights that bestows on subjects a status of autonomous "personhood" and representation with equal rights as others), and esteem (by which every being should enjoy social esteem according to their achievement as productive beings) (Honneth 1995;Honneth in Fraser and Honneth 2003 p. 139-141).When imbued with a capability approach, these forms of recognition are crucial for advancing MBJ. 
In terms of human justice, MBJ considers the different needs and values of vulnerable social groups and the conditions for strengthening and actualizing their capacity to contribute to society and to flourish as autonomous individuals and communities.Here, MBJ addresses the relationship between human needs and the ocean, in the sense that humans depend on oceans for individual and group capabilities.A capabilities-informed account of MBJ thus enables unraveling of why certain things matter to people across time, space, and identities.These include factors that enable fulfillment of material and nonmaterial forms of wellbeing, from coastal identity to supportive personal relationships, rewarding employment, good psychological and physical health, strong community, financial and personal security, and a healthy and attractive coast and sea.An MBJ thus emphasizes the relationship between place and multidimensional wellbeing and sheds light on the role of political, environmental, economic, social, demographic, and technological processes in either hindering or advancing group capabilities (cf.Robeyns 2020).From this perspective, marginalized groups are understood as requiring recognition qua love to meet basic needs; respect in the sense of promoting policies, legislation, norms, and rules that foster their agency in processes that affect their lives; and esteem in the sense of recognition of their fundamental rights to a decent and full-functioning life.Misrecognition of these human rights and needs interrupts community capabilities and functioning, therefore resulting in socioenvironmental harms and injustices (Schlosberg 2007). The second way in which MBJ is vital to sustainability transitions is its broadening of the subject of justice beyond anthropocentric frames to encompass the wellbeing of nonhuman nature (White 2013; Pellow 2018), hence the "multispecies" in MBJ.Emphasizing the capabilities of well-functioning ecosystems, MBJ creates a link between ocean health, and the basic needs of humans and nonhumans, whereby ecosystems serve as life support systems for both categories of justice subjects (Celermajer et al. 2020).MBJ is concerned with resilience and responsive abilities linked to threats and conditions for the wellbeing of nature in and of itself, while valuing socio-natural ties, relationships to nature and how a "balance" might be achieved with this broader community of justice subjects.Intrinsically, this means recognizing nonhuman nature as subjects of justice whose wellbeing or flourishing as individuals or a community depends on better treatment of ecosystems and is vulnerable to their abuse (Schlosberg 2013).Relationally and instrumentally, it means avoiding or improving practices that undermine nature-people relationships, and nature's contributions to society, including oxygen and food production, CO 2 absorption, and more.MBJ thus calls for broadening the moral and legal community of justice to nonhumans (Tschakert et al. 
2021) in terms of valuing "all beings" in all their diversity and relationships and composing legal frameworks and relational ontologies of care and solidarity that support nature-positive, people-centered sustainability transitions. This integrative approach shifts the focus of blue justice from a mere "social" equity concern around distributions of material benefits and costs, to encompass a wide range of human (i.e., health, rights, identity, culture, food) and nonhuman nature issues (e.g., harm to ecosystems and knock-on effects on climate, and species growth, reproduction, and mortality). Being attentive to the differential wellbeing and responsive abilities of people and nature, MBJ broadens the scope of justice beyond a narrow focus on climate change (e.g., causes, disasters, and differentiated vulnerabilities, impacts, and adaptations) to encompass injustices, human and nonhuman, that innovative technological solutions to climate change themselves may also spawn or exacerbate. Two theoretico-methodological issues arise. The first concerns an ongoing debate about whether and how we can extend justice frameworks to nonhumans (e.g., between extended capabilities approaches or emphasis on distributional justice through ecological space), and whether it is possible to institutionalize this. Notions of liability for harm, and responsibility for care of the nonhuman (that cannot protect itself), provide moral grounds for extending the community of justice to nonhumans (Wienhues 2020) and are finding their way into institutionalized statements and legal frameworks (White 2013). For instance, recognizing ecosystem rights and nature rights, the 2008 Constitution of Ecuador explicitly states in Article 71: 'Nature… has the right to exist, persist, maintain and regenerate its vital cycles, structure, functions and its processes in evolution. Every person, people, community or nationality, will be able to demand the recognition of rights for nature before the public organisms' (cited in White 2013 p. 151). The second concern relates to the challenge of giving political voice to nonverbally communicating nonhuman nature, in the sense that while humans can verbally express their views and experiences of injustice and what diminished capabilities might entail in light of different stressors, nonhumans cannot (at least, not directly). However, this is a challenge only if, as most critiques of multispecies justice do, we think of nonhuman communication strictly in terms of actual and direct presence in democratic conversations and institutions. But as Schlosberg (2007 p. 192) notes, drawing on Dryzek's (1995, 2000) notion of ecological reflexivity, institutionalized recognition and representation of nonhumans in environmental governance entail widening our conceptions of communication to include the nonverbal "speech" of entities that, while (seemingly) lacking subjectivity and rationality, have physical integrity and "bodily" processes that should be listened to and respected. Importantly, Schlosberg's (2007) notion of integrity here, among other aspects, includes consideration of the health and functioning of ecosystems, i.e., including consideration of how human-forced changes to ecosystem conditions and qualities impact capabilities of nonhumans to flourish. Practically, it entails listening to the "signals" including species extinction, droughts, flu-ridden birds, climate change (e.g., insect eggs hatching earlier, ocean warming etc.)
that the "natural world" communicates through nonverbal speech.Here, Schlosberg's (2007) notion of proxy representation is useful to both extend the capabilities approach to nonhuman nature and represent this "speechless" Other in decision-making circles.Proxy representation refers to the use of a variety of actors to represent in environmental institutions the "remote" others who are inarticulate or cannot represent themselves but are (likely to be) impacted by environmental/climate change and decisions.Proxy representation can be a very effective means of ecological reflexivity as a local population and a diverse array of actors with varying degrees and types of connection to and expertise on nonhuman nature can inform environmental decisions on issues native to place (Schlosberg 2007 p. 193-194).Examples of proxies, as we show in our empirical analysis, might include relevant legal advocates, locals with strong socio-cultural and natural ties to the area, non-governmental advocacy organizations, as well as conservation scientists and locals with knowledge of animal and ecosystem wellbeing/suffering, such as local amateur bird surveyors (see Wilsey et al. 222).A key point is that proxy representatives speaking for nonhumans are assumed to have credible insights into conditions for their wellbeing through a range of means, including but also beyond language and science (Brown 2018).Indigenous and local communities that have adopted a rights of nature ontology and connect with nature convivially through practices such as Ubuntu (Mabele et al. 2022), Buen Vivir (Dancer 2021), or "two-eyed seeing" (Reid et al. 2021) are also useful proxies that can represent nature in environmental decision-making.Some of these relational ontologies (e.g., Buen Vivir) are already enshrined in some domestic legal frameworks, e.g., in Bolivia (Dancer 2021).These efforts reflect a growing realization of the need to confer moral considerability and therefore recognition that strives (at least) to account for the wellbeing of all living beings in decision-making processes (Wienhues 2020). Sociopolitical dynamics of Estonia's offshore wind energy conflict This section presents an account of the OWE conflict, with emphasis on the conflict trajectory and dynamics, key decisions made, and how these relate to the depoliticization and repoliticization of MBJ. 
Conflict dynamics In 2006, RE company Nelja Energia announced plans to build offshore wind farms (OWFs) with a production capacity of 700-1100 MW near the coast of Hiiumaa, Estonia's second largest island, whose economy mostly depends on tourism, livestock, farming, fishing, fish processing, and wrecking (marine salvage). The project application for a permit was put on hold pending formalization of a relevant legal framework to regulate OWF planning and multiuse of the sea. In the meantime, a series of informal meetings on OWF-related issues were held by the Governor of Hiiumaa, in which coastal residents, municipalities, and the defense sector expressed different concerns to be taken into consideration. In June 2012, formal OWF planning processes began as part of a county-wide marine spatial planning (MSP) process. The Hiiumaa MSP consultation process lasted four years, culminating in the adoption of Estonia's first marine plan in June 2016. Among others, the plan allocated three areas (Neupokojev Bank and the Vinkov and Apollo shoals) for development of the OWE project. However, the MSP plan was contested in court by a local environmental group (Hiiu Tuul) and Emmaste municipality, one of four municipalities of Hiiumaa county. In 2017, a legal decision was reached by a Tallinn court in favor of the OWF, which was later upheld by an appellate court. Developer's concessions to secure social acceptance To ensure the social acceptance of the project, the developer Nelja Energia established a Cooperation Agreement in 2017 with Hiiu Municipality, one of the four municipalities of Hiiumaa county. According to the agreement, in order to minimize the visual footprint of the project, the company agreed to build the OWF at least 12 km from the island and to use only submerged cables if it chose to connect the OWF to the transmission grid via Hiiumaa. Other benefits included the training of technicians and the setting up of a maintenance operations center for the OWF in Hiiumaa. Nelja Energia also agreed to exclude Neupokojev Bank, an important recreational site, from its planned development area. The developer would also support local not-for-profit initiatives and set up a nonprofit association to which it would donate at least 0.2% of its revenue from the sale of electricity, but not less than €0.32 per MW-hour of electricity produced. Finally, Nelja Energia would also create possibilities for Hiiu Municipality residents to invest in the OWF through buying bonds bearing a fixed 15% annual interest. However, recent events (described below) cast uncertainty over the implementation of the agreement. For instance, in 2018, Estonia underwent an administrative reform that resulted in the merging of the four Hiiumaa county municipalities into just one Hiiumaa Municipality, meaning that Hiiu Municipality, with which Nelja Energia had signed the agreement, no longer exists.
OWE plan invalidated and a new EIA In May 2018, Enefit Green, an RE subsidiary of Eesti Energia (Estonia's state-owned energy company), acquired Nelja Energia, thereby inheriting the Hiiumaa OWE project. The new developer also prides itself on putting community wellbeing and environmental care at the heart of its operations. However, these promises did not stop the grassroots group Hiiu Tuul from pursuing its opposition to the project. The group, which had gathered over 8000 signatures from residents and tourists, enlisted a charismatic spokesperson and the help of an experienced lawyer in its anti-wind campaign. Just three months after Enefit Green took over from Nelja Energia, Hiiu Tuul's legal campaign culminated in a decision by Estonia's Supreme Court in August 2018 invalidating the OWF project plan. The Court opined that the OWF project had not been subjected to sufficient analysis of potential impacts on the marine environment and land-based activities, including planned mitigation measures. This came shortly after (i.e., on 4 May 2018) the Ministry of Environment had dismissed the developer's EIA submitted in 2017, due to impact on nature. Nonetheless, while some actors, including Hiiu Tuul, read the Court's verdict as a total cancelation of the OWE project, the developer continued to push for its approval. In 2019, Enefit Green submitted an updated EIA (hereafter EIA2) with two separate options to the Ministry of Environment. In both options, the developer proposes to reduce the scale of the OWFs in the TP1 (in Apollo shoal) and TP2 (in Vinkov shoal) development areas (see Fig. 2). The difference is that Option 1 proposes a maximum 2.5-km distance between TP1 and a marine protected area (MPA) in Apollo shoal, while in Option 2 the maximum distance is increased to 4 km. However, to ensure the economic and technical viability of the OWE project, the developer has increased the height and production capacity of turbines to 260 m and 15-20 MW, respectively. Concerned stakeholders submitted written comments to the Environment Ministry. Below, we analyze the OWE conflict grievances and demands of key actors as expressed in interviews and in written submissions. Conflict mapping: emerging multispecies justice issues and actor positionalities In this section, we present different representations of the conflict across actor groups (Table 1). We focus on the concerns of the grassroots group and environmental "experts," given the focus of this paper on the human and nonhuman wellbeing elements of MBJ. Grassroots Group Hiiu Tuul First, in order to legally challenge the OWF in court after adoption of the MSP plan, differently concerned Hiiumaa coastal residents formed an environmental non-governmental organization, Hiiu Tuul. Their legal objection was consciously framed around nature protection concerns, judging that concerns around sociocultural wellbeing would not have legal sway. Of particular importance was the impact of the OWE project on bats and migratory birds in the Vinkov shoal (i.e., development areas TP2, TP3 and TP4) (see Figs. 1 and 2) and on fish habitat and other species in the MPA bordering development area TP1. Non-judiciary-based concerns raised related to sociocultural wellbeing, that is, impacts on coastal tourism, the built environment (property value), and human wellbeing (living environment, health, and esthetics). In terms of human wellbeing, respondents are concerned about disruptions to place attachment but also to emotional wellbeing.
The open sea on the horizon carries in itself a big feeling of relief.The sea frees us from everyday tensions and helps to carry on with life… The turning blades of turbines will pull attention to them, and the sea will lose emotional value and become a random video film [Hiiu Tuul 1]. I feel well knowing that my roots are implanted somewhere [Hiiumaa].If this is transformed in an unacceptable way, then I have lost an important place.We need a place to which we are attached as a home [Hiiu Tuul 2]. Hiiu Tuul also argued that Hiiumaa could achieve clean and self-sufficient energy through a mix of RE sources developed at small scale, including from biomass, solar, and wind energy.Relatedly, some members cast doubt on OWE as a sufficient and reliable alternative to energy produced from oil shale.They also questioned the greenness of OWE, considering the environmental footprints of OWFs from materials procurement to construction, production, and demolition phases.In their written response to EIA2, Hiiu Tuul reiterated the above concerns, adding that EIA2 lacks data on the different concerns raised in relation to the earlier EIA.These include impacts of dredging and sand extraction on seals and on coastal processes (e.g., erosion), as well as limited data on the depth of foundational dredging and the management of excavated sediment.Another key concern is what they see as the industrialization of a "pristine" Hiiumaa landscape that attracts tourists. Hiiumaa is a tourism island because of its relatively pure and diverse nature. If hundreds of turbines are seen above the sea, then one of our most valuable natural environments would have become an industrial landscape [Hiiu Tuul 1]. In addition, the group is applying for termination of the developer's permit for special use of water on the grounds that there is no legal basis for the OWE project following the Supreme Court cancelation decision of 2018.Acknowledging that RE transition is a priority, Hiiu Tuul nonetheless argues that this is no reason to sacrifice nature protection or the norm of democracy that underpins ocean planning.Based on these claims, Hiiu Tuul requests that the Ministry of Environment should not approve EIA2.However, if the latter were to be approved, Hiiu Tuul adds, this should be on the condition that a decision is made that restricts the use of windmills (e.g., during migration of birds and bats) and sets stricter limits on the turbine height and total project production capacity (than that currently proposed). Hiiumaa Environmental Board Hiiumaa Environmental Board is a decentralized, policy application and monitoring oriented organ of Estonia's Ministry of Environment based in Hiiumaa.While its role in the early planning phase of the OWE project was seen by some coastal residents as biased toward the developer (see Tafon et al. 
2019), its position vis-à-vis EIA2 is rather critical.For instance, the Board sees the 4 km distance away from the MPA in Option 2 as insufficient and not consistent with the 2018 recommendation by the Ministry of Environment to consider alternative locations outside of the MPA.The Board thus proposes complete exclusion of TP1 from OWE development.It also argues that EIA2 was supposed to also provide analysis of impacts on the ecosystem and organisms in the Vinkov shoal (TP2, TP3, and TP4) and mitigation measures.While not having a protection status like Apollo shoal (TP1), the Board argues that from an ecosystem perspective, the entire Vinkov shoal is important to migratory Arctic waterbirds (e.g., long-tailed duck, eider, greater scaup), with significant risk of collisions and disruption of feeding and breeding.The shoal is also home to seals, bats, and fish (e.g., eel, garfish).According to the Board, Enefit Green's reliance on bat studies conducted elsewhere (Kõpu peninsula) without actual site-specific studies where its OWE is planned, is insufficient to justify the decision not to propose mitigation measures for impacts on organisms in the Vinkov shoal.However, Enefit Green relied on an existing site-specific ornithological study in developing EIA2. As the study found that the number of waterbirds stopping in any one of the three development areas in Vinkov shoal did not exceed 20000 individuals at a given time, the developer interpreted this as not meeting the Ramsar protection requirements to justify its decision to maintain TP 2, TP3, and TP 4 development sites.However, the Board argues that together, the Vinkov shoal as a whole is a valuable ecosystem and should be considered as such, not in terms of individual development areas.It also believes that the OWE construction will alter and deteriorate the seabed habitat and biota on which numerous bottom-feeding waterbirds depend, and argues that the developer has failed to provide a clear account of feeding area that will be lost for bottom-feeding waterbirds. The Board also notes that the most important spawning grounds for pikeperch in Estonian coastal waters are located in Hiiumaa.However, it argues, the EIA2 does not report on impacts on pikeperch spawning grounds, as well as on fish (pike, eel) migratory routes, and operations-related noise on fish (especially for a project that will use gravity foundations).Finally, the Board also notes that submarine cables will affect the permanent habitat of gray seal and ringed seal under protection, and that underground cables connecting the OWE to the electricity grid on land will be detrimental to the integrity of Natura 2000 sites on land, as cables are planned to pass through several protected sites, including the Tahkuna, Kukka-Luhastu, and Väinameri nature reserves.The Board thus sees the planned activity as conflicting with guidelines for OWE development as stipulated in different policies, including Energy Development Plan until 2030, the Nature Conservation Plan until 2020 etc.According to the Board, proposals contained in EIA2 seem to be guided by cost efficiency without serious consideration of environmental impact. 
Estonian Fund for Nature The environmental NGO Estonian Fund for Nature (hereafter EFN) is concerned that the section on the effects of OWE projects on marine mammals and bats is largely based on studies done elsewhere and does not reflect the realities of the particular project area. In relation to bats, they argue that EIA2 is inconsistent with the EU 2014 EUROBATS "Guidelines for consideration of bats in wind projects" as well as the Estonian Nature Conservation Act, noting that the developer's assertion that the OWE project will not adversely affect the migratory corridor of bats is a mere assumption based on very limited data, and thus insufficient for an EIA. They also deem the assertion that "no significant death or injury will occur" (for bats) as insufficient and too superficial for an EIA, as it does not specify mortality rates. They also argue that the statement that developers will reduce the speed of turbines during avian migration periods is vague, as it does not specify the time period and speed limit. Furthermore, they also request minimal use of drilling or ramming for gravity foundations. They hope that undue consideration or "carelessness in carrying out the EIA will not be an obstacle to carrying out the [OWE] project necessary for the Estonian renewable energy transition". They also recommend that the developers reflect further on the Supreme Court's decision that annulled the project. Transformations flowing from the conflict and remaining challenges for MBJ In this section, we discuss favorable conditions for MBJ that the OWE conflict seems to have engendered, while highlighting remaining challenges. Transformations: legal precedent and more reflexive marine planning? Coastal residents and the local environmental group Hiiu Tuul have been at the forefront of the socioenvironmental conflict surrounding the Hiiumaa OWE project since it was formally announced in 2006. While opposition by the defense sector as well as the environmental concerns raised by the Estonian Fund for Nature may have also played a key role, it is largely to the over-a-decade-long campaign led by Hiiu Tuul that delays in implementing the OWE project can be attributed, at least judging from a key element on which the Supreme Court (case 3-16-1472) based its annulment ruling, viz., insufficient scientific analysis of the project's environmental impacts and mitigation measures. Importantly, experience from the Hiiumaa MSP and related OWE controversy (including the resistance of coastal residents and Hiiu Tuul, as well as the resultant invalidation of the OWE project) seems to have contributed toward the transformation of ocean planning in Estonia in two ways. First, it resulted in the Supreme Court OWE plan annulment ruling, effectively establishing a planning judicial precedent in Estonia. Indeed, the Court established that while MSP is only subject to strategic environmental assessment (SEA) requirements (meaning that it is expected to be less detailed than project-specific EIAs), SEA studies should not be limited to minimum statutory requirements in terms of depth and breadth. Beyond MSP, this may also have implications for the planning or development of major infrastructure projects on land or at sea.
Second, the Hiiumaa OWE controversy became a point of reference for stakeholders and planners engaged in the Estonian national MSP process that took place between May 2017 and May 2022.In an attempt to avoid the Hiiumaa "blunder" in which locals dragged the Hiiumaa marine plan in court for at least 2 years, the Estonian government decided to put on hold, until 2027, considerations of OWE development in marine areas that are important for fishing.This decision, which guarantees stability for fishers until 2027 when the situation will be re-evaluated, was reached after strong opposition from fishers about developing OWE in these areas (ERR 2021).It therefore seems that the Hiiumaa marine planning and OWE conflict may have contributed toward changing the attitude of the Estonian government and marine planners toward conflict, from a predominantly negative position to one in which they increasingly take the concerns of diverse sea users more seriously, and proactively taking steps toward addressing the environmental and distributive effects of marine spatial plans.Indeed, in order to avoid a repetition of the Hiiumaa OWE legal controversy, the recent Estonian MSP plan has laid down over 20 conditions for the development of OWE, including the location of wind farms at least 11 km from the coastline (to reduce visual impacts), avoidance of overlap with traditional fishing, respect of natural assets, mitigation of environmental impacts (on fish spawning and the migratory movement of birds and bats), development of a mechanism for the inclusion of locals in the construction and maintenance of turbines, and more (ERR 2021).Furthermore, marine planners also took concrete steps to map the sociocultural values of coastal communities during the national MSP process (Pikner et al. 2022). Remaining challenges: administrative reform, nonrecognition of sociocultural values and socio-natural ties, and MBJ implications of rapid energy transitions The above analysis supports our previous argument that conflict is relevant for MBJ analysis in terms of highlighting the (re)politicization of justice, which is in tune with the conflict literature (Bennett 2022;Temper et al. 2020;Scheidel et al. 2020) which foregrounds socioecological conflict as an opportunity to harness diversity and align transitions with the principles of ecological reflexivity, justice, and rights, human and nonhuman.However, despite the positive changes registered at the national MSP level in Estonia, there still remain important obstacles to MBJ and capabilities.First, the merging of all four Hiiumaa municipalities into one larger municipality (through the 2017 administrative reform) has distanced local decision-making, including on ocean and OWE planning issues further away from communities, with adverse implications for political voice, connections to place and nature, health, culture, environmental citizenship and stewardship, and proxy representation of nature. 
Second, the judiciary does not formally recognize sociocultural values and socio-natural relationships. This nonrecognition reflects the Estonian planning law, which largely reduces impact assessment to consideration of environmental effects in scientific cognitive terms. This increases the likelihood that existing threats to environmental values will materialize and intensify, especially those related to the sociocultural wellbeing of Hiiumaa islanders, including the ability to enjoy recreational activities, bodily health, sensory engagements with the open sea, and emotional bonds of affiliation with one another in relation to the sea. As we have discussed earlier, it was their understanding that coastal/marine sociocultural values are not formally recognized that led Hiiu Tuul to focus their legal opposition to the OWE project mainly on environmental arguments, and in scientific cognitive terms. But the issue of nonrecognition of sociocultural rights to the sea/coast is not exclusive to Hiiumaa. This is corroborated by a recent ruling made by a first-tier Tallinn administrative court in relation to a complaint filed by a Saaremaa rural municipality councilor, in opposition to the recently adopted Estonian MSP plan, particularly the section that designates areas for OWE development in Saaremaa coastal waters. As reported by ERR (2022), while the court found that the complainant's concerns about the inadequacies of the EIA could in themselves be well founded, it held that this did not provide sufficient grounds for a complaint for the protection of the "subjective rights" of a person, including the applicant's "property rights, and the right to health and a quality living environment". Finally, lingering uncertainties surrounding the broader geopolitical context of Russia's invasion of Ukraine and the changing EU policy context on RE deployment as a response to the recent energy crisis are likely to shape RE conflict relations both in Hiiumaa and elsewhere in Estonia. While formalization of the over 20 conditions for OWE development referenced earlier (ERR 2021) will give coastal communities and authorities a degree of political voice in OWE-related decisions, there is a risk that this will amount almost to nothing by way of recognition and protection of socioenvironmental rights, including the wellbeing and capabilities of affected communities and nonhumans. Rather, these capabilities are likely to be trampled by the EU top-down measures (e.g., institutionalizing RE as an overriding public interest, formalizing renewables-go-to areas, and easing and speeding-up judicial processes and permit-granting procedures, among others). Indeed, their institutionalization in Estonia is likely to shape the Hiiumaa OWE conflict, in terms of changing power relations and arguably countering at least some interpretations of the Supreme Court ruling, thereby weakening the position of those who are seeking to maintain a "natural" Hiiumaa and a nature-positive energy transition. Another potential risk is the reversal of the local successes registered thus far in halting the human and biodiversity impacts of the OWE project. When combined with huge financial incentives (to RE investors and developers), the EU measures discussed above are thus also likely to weaken the environmental activism and stewardship of grassroots actors and environmental "experts" (i.e., the Hiiumaa Environmental Board, the Estonian Fund for Nature) as the project proceeds. Diminishing the agency of these proxy representatives of nature poses severe risks for the wellbeing and
capabilities of vulnerable humans and nonhumans, while ironically relieving industry from the requirements of social accountability and environmental responsiveness.Put together, the EU measures risk setting in motion energy transitions that privilege speed and scale over principles of democracy and wellbeing, human and nonhuman.De-risking the human and nonhuman nature threats of rapid and massive scale RE would require acknowledgement of existing inequities in responsive abilities and resilience between and across differentiated identities, including species, class, gender, age. Toward nature-positive, people-centered RE transitions This section explores possibilities to enhance justice for nonhumans and the capabilities of marginalized Hiiumaa islanders in relation to their care for nature, socioecological values and relations, and processes to work toward transforming the existing conflict toward productive, just, and sustainable outcomes.That is, we are concerned with ecological injustices that affect the nonhuman life itself, but also undermine human capabilities that are regarded as valuable and worth protecting. Hiiu Tuul has expressed concerns that developing the OWE would effectively shift the Hiiumaa seascape (or islandscape) from a "natural place" to an "industrial place," in the sense that "Hiiumaa can no longer advertise itself as an island of untouched nature" [Hiiu Tuul 1].This view expresses both a social concern to perpetuate "an island way of life" but also implicitly contains within it a concern to preserve a place-based relationship to the nonhuman nature as it is currently experienced.In these two aspects, multiple capabilities are being touched on.The view that Hiiumaa (and related marine environs) would become an industrial seascape reflects a concern among islanders that nonhuman nature would be adversely affected by the development of OWE, an issue that is supported by concerns over possible bird and bat collision fatalities, and habitat degradation or loss for marine mammals and fish spawning.It also suggests that cultural continuity and socio-economic prospects (i.e., wellbeing and enhanced capabilities) for islanders are intertwined with maintaining and further developing existing interdependent relationships to nature (e.g. through nature-based tourism development and experiences), which is consistent with the literature (Gee et al. 2017;Lepoša and Knutsson 2022). While institutionalized processes, such as MSP commonly consider environmental implications more or less conventionally, this is cast purely in scientific cognitive and instrumental terms and linked to conceptions of nonhuman nature as a resource and/or biodiversity with no subjective interests, or arguably only concern for how the basic material conditions for life can be met.While we see instrumental representation of nonhuman nature's functionality as important to get insights into basic material conditions for wellbeing, it is insufficient as it does not adequately capture human and nonhuman capabilities (or means to live a good life) whether intrinsically or in relation to one another.This is not to say that recognizing and representing nonhuman nature's values are straightforward.As Kenter and O'Connor (2022) note: nature representations can sometimes cut across value justifications.As our reading of islanders' (especially Hiiu Tuul) concerns shows, the resilience of Hiiumaa ecosystems supports different values and capabilities-a "pristine" Hiiumaa conserves nature (e.g., bats, birds etc.) 
for its own sake, benefits islanders economically, and supports human-nature relations (Hiiumaa as a nature-based place). In terms of relational and instrumental values, one Hiiu Tuul member stated in an interview that there had been little to no consideration in the EIA of the "impact of the activity on nature tourism, which is one of the main types of tourism… People's wellbeing is not only related to their health, but also to their living environment, including the surroundings, views, or the beach. There is no assessment of property and living environment" [Hiiu Tuul 3]. Such relational and instrumental representations were also clearly stated in Hiiu Tuul's written submission. But so too were representations of nature's intrinsic values, as expressed in the following quote: "[There are no studies on impacts] on the migratory routes of birds, bats and seals… or how OWFs will be maintained when the sea is frozen. This may be important for the effects on seals, as the noise from construction, operation and decommissioning significantly disturbs seals during calving" [Hiiu Tuul written submission]. A similar concern was expressed by environmental "experts" (the Hiiumaa Environmental Board and the Estonian Fund for Nature), albeit from a purely scientific cognitive viewpoint of nonhuman nature. In the EIA submissions, several of these experts raised concerns that the wellbeing of bats, fish, birds, seals etc. would be harmed through the development of the OWE as proposed, thereby meaning that the OWE proposal reflects a lack of consideration of the wellbeing of different nonhuman species. This is captured in the following quote by the Hiiumaa Environmental Board: "At the time of writing the EIA report, no bat surveys have been carried out in the proposed area of operation, although there are extensive bat and spring migrations of bats in the proposed area of OWFs. The effects of the project on migratory bats at sea remain essentially unexplored, as such an assessment can only be based on fieldwork carried out on the project" [Hiiumaa Environmental Board]. The Hiiumaa Environmental Board also believes that other aspects of the EIA for the OWE project either do not adequately address proper data collection or lack mitigation measures: "Sections 3.3.6 and 5.4 of the EIA state that there is an impact on marine habitats and that the loss of marine habitats may occur in 1.2 km2…, which is directly below the wind turbines. At the same time, the EIA report does not provide numerical correlations for the extent to which the feeding area for demersal birds will decrease…, taking into account both the loss of the food base and the avoidance of the wind farm area" [Hiiumaa Environmental Board].
Drawing on our framing of MBJ, engaging productively in conflict presents opportunities for a contextualized sustainability that is likely to establish conditions and relations for enhanced MBJ, as well as successful project implementation.This would include the OWE developers actually performing site-specific social and environmental studies, rather than extrapolating from studies conducted for OWE projects elsewhere.It would also necessitate engaging more meaningfully with the environmental concerns of the Supreme Court and the environmental "experts," but also with those of Hiiu Tuul in relation to taking better account of the values, capabilities, and wellbeing of humans and nonhuman nature, including their interdependencies and relationalities, in the OWE planning process.Admittedly, similarly to the Hiiumaa Environmental Board and the Estonian Fund for Nature, the Supreme Court ruling was limited to cognitive and regulatory understandings of nonhuman nature as codified in the planning law-it took no explicit interest in relationality between islanders and nature or the socioecological wellbeing of islanders more explicitly.We argue that ecological reflexivity through proxy representation and institutionalized recognition and protection of nonhuman nature rights and needs (e.g., as in Ecuador's Constitution) and humannature relationalities (e.g., as in Bolivia's Constitution) could be contextualized to support a deeper understanding and consideration in the planning process of the wellbeing of nonhuman nature and the different human experiential, cultural, emotional, environmental, and socio-economic relations to nonhuman nature (Dryzek 1995;Schlosberg 2007).Ineffective recognition, including participation and proxy representation in decisions that affect capabilities to live a good life, maintains environmental relations and develops related socio-economic opportunities, undermines capabilities, and is therefore likely to harm the interdependent relations between islanders and nonhumans.Such recognition would work toward an attentiveness to ecological flows of human and nonhuman connectivity (Celermajer et al. 2020), which would then work to illuminate how pathways for interventions such as OWE can at least take into account the implication of breaking or disturbing relations that currently may be regarded as just and sustainable.Advancing and enhancing multispecies capabilities would also require promotion of educational and awareness-raising programs to build more and better engaged environmental citizens and proxy representatives to support more responsible, cognitive, sympathetic, and convivial engagement with nature, and nurturing the wellbeing of placed-based humans in relation to nonhumans.It recognizes that limited ecological reflexivity is likely to lead to diminished wellbeing, human and nonhuman. 
Concluding remarks The MBJ approach employed here focusses on the intrinsic rights of nature and marginalized humans and the relationships or entanglements between human and nonhuman wellbeing.This enables insights into the relationality between how different human actors see the threat to their wellbeing posed by the OWE proposal-socio-economically, socioculturally, and socio-environmentally-as well as threats more intrinsic to the nonhuman nature, with which they have a relation.This framework thus decenters anthropocentric interpretations of blue justice, making a case for the rights of nonhumans, but without subverting (especially in the face of rapid RE transitions) the sociocultural, economic, and ecological relationships of local communities with ocean/ coastal ecosystems and organisms.The approach taken to handling the conflict (in the Supreme Court ruling) does not seem to countenance the threats to the islanders' wellbeing directly, as it implicitly, at least, does not recognize the relationship between human and nonhuman wellbeing.The concerns that have stalled (or in some views canceled) the OWE plan in the Hiiumaa MSP process relate to a lack of evidence presented about the likely impact on different aspects of nonhuman nature and more broadly environmental values, if the proposal were to proceed. There are opportunities within the existing conflicting actor positions to open-up lines of exploratory engagement to find common ground.This would be beneficial both to advance the OWE project as well as to give certainty around the future for islander actors concerned about the project's implications for multispecies wellbeing.Most promising among these relate to the proponent undertaking of more detailed studies of the ocean's intrinsic, relational, and instrumental values and relatedly developing context specific mitigation strategies in concert with islanders and different environmental "experts" and proxy representatives of nature.While seemingly not a formal requirement on the OWE proponents, it would benefit conflict relations if future EIA work would include different islander socio-cultural relations and economic aspirations linked to environmental values.These possible points engagement would need to be grounded with the various actors to see whether they could form the basis of productive exchange between the various conflicting positions.Of course, what would benefit such an engagement toward long-term people-centered and naturepositive OWE deployment beyond the case at hand would be the institutionalization of rights of nature, sociocultural values, and human-nature relationalities in planning regulation. Connected to the above point, adoption of a more ecologically centered viewpoint would recognize the valuable role that the unprotected ecosystems (e.g., the Vinkov shoals when treated as an integrated ecosystem) play in the provision of habitat for migratory birds, bats, and other species.From a MBJ viewpoint, these shoals, among other ecological functions, are integral to supporting species' capacities to fulfil their lifecycle.Recognition of the importance of these shoals also relates to islanders' concerns about the potential of the OWE project transforming a pristine "island landscape" (or seascape) to an industrial landscape (or seascape). 
The energy security imperative exacerbated by the existing situation with Russia implies that there will be increased time pressure on building RE capacity. This may also mean that a more conducive public sentiment toward RE is generated. Regardless, steps should be taken (e.g., through environmental education and awareness-raising as well as institutionalized ecological reflexivity and, relatedly, proxy representation of nature) to ensure that speed in RE deployment does not undermine norms of democracy, ecological integrity, social wellbeing, and socioecological interlinkages. Finally, through a case study of an OWE proposal in Estonia, this article has revealed insights on conflict and potential for interface between energy transition and MBJ. In a more general sense, this provides contextual appreciation of frictions and opportunities for synergies between several of the UN SDGs, particularly in relation to contributions to climate action (and clean energy), biodiversity, health, wellbeing, peace, and justice. While actor positions were found to be rather conflictual, our MBJ approach exposed pathways to constructively engage in these tensions to pursue a more multidimensional and contextualized sustainability that is able to simultaneously deliver additional OWE capacity, meaningfully take into account islanders' economic aspirations and socio-environmental relations, and ensure nonhuman nature's capabilities to flourish.

Appendix: List of interviewees with dates and method

Table 1: Summary of MBJ concerns, and areas of trade-off possibilities/challenges
Evaluation of the Effectiveness of the Separate Anesthesia Induction Rooms on Multidisciplinary Work Flow in Operating Rooms Introduction Operating suites are multidisciplinary units par excellence, and mostly they are the most expensive units in hospitals. Interdisciplinary workflow and efficiency are therefore crucial, which is influenced by floor plans varying from hospital to hospital. Most operating rooms are equipped with adjacent induction rooms, allowing preparation and anesthesia induction of the next patient, while the previous patient is still in the operating room. Parallelizing the working steps is thought to improve turn-over time, thus increasing throughput, number of cases and finally revenue. However, this assumption has never been challenged. Methods We analyzed workflow during regular working hours in an operating suite equipped with a mixture of operating rooms (OR) with next door induction rooms and operating rooms without induction rooms. This allows a direct comparison of both structural elements for efficiency using utilization data over a 24-months period. Both settings were used for gynecological operations. Results Key result is that induction rooms do not improve perioperative workflow including turn-over time. Instead, ORs without adjacent induction rooms have a significantly shorter turn-over time and OR occupancy duration per case, although surgical time and staffing were similar. Discussion Adjacent induction rooms require extra space, funding, and high maintenance costs, but they do not speed up peri-operative processes. Modern anesthetic techniques allow for fast induction of and emergence from anesthesia. Induction rooms adjacent to the OR are no longer needed if general anesthesia without extended monitoring is used for the majority of cases. Introduction Operating suites are high cost and high revenue units, with costs generally on the rise and declining reimbursements. Therefore, it is crucial to optimize efficacy of each operating room (OR) while simultaneously minimizing costs, and thus improving revenue. 1 In this context, reducing idle time is essential in order to process more cases during regular working hours. Most single measures to improve throughput in a given setting, however, save only a few minutes per case, making several measures mandatory, depending on the average time needed per case. 2 Anesthesia induction rooms are supposed to save the most relevant amount of time by parallelizing the entire process of inducing anesthesia, while the OR itself is still blocked by the previous patient, followed by cleaning and the scrub nurses' preparation of all instruments needed for the next case. 3 However, during the last decades, anesthesia techniques have been simplified (eg the widespread use of laryngeal masks instead of endotracheal tubes) and the development of short acting anesthetics helped substantially shorting each patient's wake up time, thus allowing for faster induction of and emergence from anesthesia. 
We therefore challenged the need for anesthesia induction rooms when we expanded our existing operating suite: Due to limited space, we had the option to either add two ORs, each designed with an adjacent room for preoperative induction of anesthesia plus another adjacent room for postoperative emergence from anesthesia, or three ORs with a central holding area equipped with everything needed for induction of anesthesia, or four ORs with a rudimentary holding area, which only serves for having patients from the ward readily available when needed. Although our existing operating suite is designed with an adjacent room for induction of anesthesia plus a room for emergence from anesthesia, we rather decided in favor of more ORs available at the risk of lower effectiveness and lower case load per OR, than having less ORs with the option of better work flow related to adjacent induction rooms. In the present study, we compare the efficacy of ORs equipped with adjacent induction rooms and emergence from anesthesia, with ORs lacking those extra induction rooms. Both ORs were equally equipped and staffed. Since the effectiveness and the resultant case load per OR may differ between surgical disciplines, we took the chance to retrospectively monitor the performance after our Department of Gynecology had to move from an OR with adjacent induction rooms to an OR without these facilities. Materials and Methods Over a period of 24 consecutive months, we looked at the efficiency of our gynecology department by comparing 12 months work flow in an OR equipped with adjacent induction rooms with 12 months work flow in an OR without adjacent induction rooms. Therefore, we retrospectively analyzed the surgical time, OR occupancy time, time for induction of and emergence from anesthesia, and turn-over time (TOT) between cases. The TOT is defined as the entire time between end of surgery (end of wound closure) and begin of the next surgery (next patient's incision) and includes all work steps in-between, such as emergence from anesthesia and patient transport to the recovery room, cleaning of the OR, induction of the next patient's anesthesia, preparing the surgical instruments, disinfection of the surgical area, etc. In case an OR with adjacent induction room was used, the patient was supervised by an anesthetic nurse preparing the patient for induction of anesthesia, which was then started by the anesthesiologist, while the OR next door still was prepared for the next operation. All data were retrospectively extracted from our hospital information system (SAP, Walldorf, Germany). This retrospective study has been exempted from requiring ethical approval by the local ethics committee, since only anonymous data without any reference to patient data were analyzed (Hessian State Medical Association, application number 2022-3122-AF, Frankfurt (FRG), October 5th 2022). An unpaired Students' t-test was used for statistical analysis. Significance was defined as p<0.05. Results We retrospectively analyzed the time needed to complete different peri-operative work steps over two years, half of the time operating an OR with adjacent induction rooms, the other half working in an OR without induction rooms. A total of 1840 surgical cases were performed during regular working hours by our Department of Gynecology. All cases were included into our analysis. 
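For readers who want to reproduce this kind of comparison from exported case logs, the sketch below illustrates the analysis described in the Methods: deriving each case's turn-over time from the wound-closure and next-incision timestamps and comparing the two OR settings with an unpaired t-test at p < 0.05. This is a minimal illustration, not the authors' actual pipeline; the file name, column names and group labels are assumptions.

```python
# Minimal sketch (not the authors' pipeline): compute turn-over time (TOT) per
# case from exported timestamps and compare OR settings with an unpaired t-test.
# Column names ("wound_closure", "next_incision", "or_type") are assumed.
import pandas as pd
from scipy import stats

cases = pd.read_csv("or_cases.csv", parse_dates=["wound_closure", "next_incision"])

# TOT = time between end of wound closure and the next patient's incision (minutes)
cases["tot_min"] = (cases["next_incision"] - cases["wound_closure"]).dt.total_seconds() / 60

with_induction = cases.loc[cases["or_type"] == "induction_room", "tot_min"]
without_induction = cases.loc[cases["or_type"] == "no_induction_room", "tot_min"]

# Unpaired (two-sample) Student's t-test, significance threshold p < 0.05
t_stat, p_value = stats.ttest_ind(with_induction, without_induction)
print(f"mean TOT with induction rooms:    {with_induction.mean():.1f} min")
print(f"mean TOT without induction rooms: {without_induction.mean():.1f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```

The same pattern can be applied to the other work-step durations (induction, emergence, OR occupancy) by swapping the timestamp columns.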
In both settings, each OR was equally staffed: 1 anesthesiology resident, 1 attending anesthesiologist supervising 8 ORs, 1 anesthesia nurse, 2 scrub nurses, surgeons as needed, and 1 cleaner per 4 ORs. The durations of all analyzed work steps of the perioperative workflow are summarized in Table 1. The surgical time of gynecological operations was unchanged, no matter whether these operations were performed in an OR with or without adjacent induction rooms. The relatively large variation is due to the broad spectrum of gynecological operations, ranging from a 3-minute hysteroscopy to a 10-hour cancer-related laparotomy. Interestingly, induction of anesthesia and emergence from anesthesia took significantly less time when performed in the OR itself instead of in separate induction rooms, leading to a significant reduction of TOT by 13 minutes per case on average. The early morning start of the first surgical case was significantly improved when anesthesia was induced in the OR itself. Also, the number of cases started on time, which is before 08:05 a.m. in our institution, was higher in the OR without adjacent induction rooms (101 versus 51 per year). Discussion There exists a wide variety of floor plans in operating suites, depending on the age of the facilities and the surgical disciplines using them. They mostly consist of ORs with adjacent rooms for induction of and emergence from anesthesia. The OR's design is supposed to have a substantial impact on surgical workflow. [4][5][6] Being one of the most expensive units in terms of building costs, running costs and staff salaries, each hospital's administration has a special focus on the effectiveness of these units. However, studies analyzing the impact of architecture and/or changes in workflow are only available as mathematical models. 7 Until now, for example, no data have been available comparing the impact of extra induction rooms on operating suite efficiency. This is all the more surprising, since these extra rooms require rare and expensive extra space. Furthermore, these induction rooms also need to be fully equipped with expensive medical equipment, such as oxygen and vacuum lines, ventilators and monitors. In order to reimburse this investment, throughput of these ORs should be significantly higher when compared to ORs lacking these extra rooms. In recent years, central holding areas with the ability to induce anesthesia were implemented in newly built operating suites to replace the induction rooms for each OR. However, even this new concept has never been analyzed or challenged before, thus looking more like a bad compromise than a safe and sound concept, neither economically nor from the medical point of view. Nonetheless, most newly built operating suites are still based on classic floor plans with ORs, each being equipped with adjacent induction rooms, or at least having implemented a central holding area as described above. We show that anesthesia work steps, the scrub nurses' preparation and the clean-up can easily be parallelized within the OR itself. We further parallelized, wherever medically reasonable, the patient's instrumentation with arterial and central venous lines, while the surgeons simultaneously started to disinfect the surgical area and subsequently began the operation itself.
However, in our setting, any regional blocks or regional catheters needed for postoperative pain control were introduced pre- or postoperatively in the recovery room, no matter whether the patient was operated on in an OR with or without an adjacent induction room. Our nursing staff in the recovery room has been trained to assist with these blocks. 8 Parallelized peri-operative work steps are supposed to minimize the TOT between operations. Relocating anesthesia work steps into different rooms or into a holding area is thought to speed up the peri-operative process, because anesthesia no longer interferes with other peri-operative work steps, e.g. the scrub nurses' preparation of surgical instruments. However, the time needed for induction of and emergence from anesthesia accounts for only a little more than half of the TOT. Also, these anesthesia procedures can easily be parallelized in the OR itself. In our study, mean surgical time and surgical spectrum (gynecology) were comparable in both settings studied. Since the entire staffing remained unchanged, differences in the duration of other peri-operative work steps are solely related to the availability or lack of adjacent induction rooms. Comparing the duration of different work steps, the presence of adjacent induction rooms in our study had no positive effect on workflow and did not increase the OR's case load. Contrary to expectations, having work steps parallelized in the OR itself because of a lack of adjacent induction rooms did not prolong TOT. Instead, TOT was significantly reduced, since induction of and emergence from anesthesia took less time when performed in the OR. Interestingly, even the OR occupancy was significantly lower in the OR without adjacent induction rooms, although in this setting, induction of and emergence from anesthesia is performed in the OR itself, thus contributing to the OR occupancy. But obviously, other work steps before and after surgery take more time than induction of and emergence from anesthesia. First and foremost, the preoperative preparation of all surgical instruments plus handing over all disposable items needed, as well as the postoperative processing of instruments and clean-up, should be named here. The reduction of time needed for induction of and emergence from anesthesia, the reduced OR occupancy per case and the reduced TOT were most likely rendered possible by the fact that the different players of the OR team (scrub nurses, anesthetic staff, surgeons) watch each other's progress and keep pushing and adapting their peri-operative workflow case by case, consciously or unconsciously. This also holds true for the early morning start of the first case, which was significantly earlier in the OR without adjacent induction rooms (08:07 versus 08:12). Accordingly, days with a surgical start on time were doubled. This interplay can not only be observed between anesthesiologists, scrub nurses and surgeons: interestingly, the time available for cleaning was halved in the stand-alone OR compared to the OR with adjacent induction rooms, although the cleaning procedure remained unchanged. Since our study was retrospective, and thus free of observation bias but also without direct observation of the workflow, we cannot identify the underlying cause. However, knowing that anesthesia is being performed next door, rather than having the anesthetist waiting to enter the OR to prepare the next case's anesthesia, may have taken pressure off the cleaning staff.
Both, the significantly earlier start in the morning, plus the significantly reduced TOT sum up to a time saving, which allows for scheduling additional cases per day and OR, thus improving revenues. It is important to note that patient safety was never compromised: In each individual case, there was a brief pause to go through the surgical safety checklist in the presence of the entire team, including surgeon and anesthesiologist. 9 Our study shows for the first time that adjacent induction rooms had no positive impact on peri-operative work flow. Instead, we demonstrated even a slight but significant reduction in TOT, while the mean surgical time showed no difference. Building extra induction rooms for induction of and emergence from anesthesia for the purpose of maximizing each OR's case load seems to be a useless investment. Building an additional OR at the expense of lacking induction rooms is the better investment. Limitations In this study, we focussed on gynecological operations. Extra induction rooms may be beneficial in surgical disciplines requiring complex anesthetic catheterization prior to surgery, such as pediatric cardiac surgery. However, extra staff is needed to allow for induction and catheterization of the next patient, while the previous patient is still in progress. This may or may not have an impact on improved turn-over-times. However, operations requiring complex anesthetic preparation are mostly associated with long surgical time, and the time saved by overlapping the next case does not allow to schedule an extra case of that complexity. Theoretically, extra-anesthesia personnel may have helped to improve the extra induction room's impact on TOT. However, the additional salaries need to be reimbursed by improved TOT. In our study, the TOT in the OR with adjacent induction rooms was 20 minutes longer than the entire time spent in the induction room, indicating that anesthesia is not the limiting factor. Thus, extra-anesthesia personnel would have not improved TOT. Further, the stand-alone OR delivered improved TOT and shortened OR occupancy despite the fact that all anesthesia procedures were to be performed in the OR itself, again proving anesthesia not to be time limiting. Conclusion The present data shows that the availability of an anesthesia induction room does not necessarily result in better turn-over performance in the adjacent operating room. Building extra induction rooms for induction of and emergence from anesthesia for the purpose of maximizing each OR's case load seems to be a useless investment. Building an additional OR at the expense of lacking induction rooms may be the better investment.
S-adenosylmethionine blocks tumorigenesis and with immune checkpoint inhibitor enhances anti-cancer efficacy against BRAF mutant and wildtype melanomas Despite marked success in treatment with immune checkpoint inhibitor (CPI), only a third of patients are responsive. Thus, melanoma still has one of the highest prevalence and mortality rates; which has led to a search for novel combination therapies that might complement CPI. Aberrant methylomes are one of the mechanisms of resistance to CPI therapy. S-adenosylmethionine (SAM), methyl donor of important epigenetic processes, has significant anti-cancer effects in several malignancies; however, SAM's effect has never been extensively investigated in melanoma. We demonstrate that SAM modulates phenotype switching of melanoma cells and directs the cells towards differentiation indicated by increased melanogenesis (melanin and melanosome synthesis), melanocyte-like morphology, elevated Mitf and Mitf activators’ expression, increased antigen expression, reduced proliferation, and reduced stemness genes' expression. Consistently, providing SAM orally, reduced tumor growth and progression, and metastasis of syngeneic BRAF mutant and wild-type (WT) melanoma mouse models. Of note, SAM and anti-PD-1 antibody combination treatment had enhanced anti-cancer efficacy compared to monotherapies, showed significant reduction in tumor growth and progression, and increased survival. Furthermore, SAM and anti-PD-1 antibody combination triggered significantly higher immune cell infiltration, higher CD8+ T cells infiltration and effector functions, and polyfunctionality of CD8+ T cells in YUMMER1.7 tumors. Therefore, SAM combined with CPI provides a novel therapeutic strategy against BRAF mutant and WT melanomas and provides potential to be translated into clinic. Melanocyte differentiation antigens (MDAs) are peptides generated from genes involved in melanogenesis and melanosome generation including TYR, TYRP-1, TYRP-2, MART-1 and PMEL [14] . Although MDAs are self-antigens, autologous cytotoxic T lymphocytes directed against MDAs, can mediate tumor regression, break tolerance to the tumor, and therefore are being evaluated as targets for anti-melanoma immunotherapy and in melanoma vaccines [14] . A major immune evasive mechanism is low intrinsic immunogenicity where tumor cells display reduced levels of immunogenic tumor antigens (or neoepitopes) [17] . Therefore, increasing expression of antigens such as MDAs and melanoma associated antigens (MAAs) can enhance immunogenicity of the tumor cells resulting in greater response in general and to CPIs [ 18 , 19 ]. The resistance to CPI has been associated with alterations in the methylome of cancer cells [ 20 , 21 ]. To distinguish CPI responders from non-responders, DNA hypomethylation was proposed as an essential biomarker for predicting tumor response to host immunity, and DNA hypomethylation could also provide a possible mechanism for immune escape and resistance to CPI [21] . Moreover, global hypomethylation levels were strongly associated with immune evasion signatures independently of aneuploidy and tumor mutational burden [21] . Sadenosylmethionine (SAM), a methyl donor of numerous epigenetic methyl transferases, was shown to counteract DNA hypomethylation and block DNA demethylation [22][23][24] . SAM has significant anti-cancer effects in various malignancies; however, SAM's effect has not been extensively investigated in melanoma. 
Interestingly, SAM is also crucial for activation, proliferation, and survival of T cells [25][26][27][28][29] . Furthermore, SAM levels are reduced by cancer cells via several mechanisms in TME [ 30 , 31 ]. Consistently, the depletion of methionine (the pre-cursor of SAM) in TME results in CD8 + T cells becoming dysfunctional and CD8 + T cells become unresponsive to CPI [30] . This is another essential immune evasive mechanism used by tumor cells [30] . We have previously tested the effect of SAM and anti-PD-1 antibody combination on tumor growth of a syngeneic BRAF WT mouse model and found enhanced anti-cancer efficacy of the combination treatment. However, the effect of SAM alone and in combination with anti-PD-1 antibody along with molecular pathways involved were not extensively investigated in melanoma, in general, and in the BRAF mutant melanoma which represents 50% of patients. Hence, we tested the hypothesis that SAM elevates anti-cancer immune responses, in addition to having anticancer effects, and that an effective novel therapeutic strategy for both BRAF mutant and WT melanoma would be to combine SAM and CPI. We show here using cancer cell lines and mouse models that SAM has significant anti-cancer effects on BRAF mutated and WT melanomas. The anti-cancer effect of SAM involves marked inhibition of cell proliferation and directing phenotype switching of invasive and proliferative melanoma cells into differentiated state. We also show that SAM and anti-PD-1 antibody combination has enhanced anti-cancer ef-ficacy in reducing tumor growth and metastasis, and increasing survival in melanoma mouse models. The combination also markedly elevated adaptive immune responses indicated by a higher immune cell infiltration, higher CD8 + T cells infiltration, activation, and effector functions, and higher polyfunctional CD8 + T cells in TME of YUMMER1.7 tumors. Lastly, the combination also enhanced the frequency and functionality of CD4 + T h 1 cells and reduced immunosuppressive CD4 + FoxP3 + T regs in TME of YUMMER1.7 tumors. SAM has marked anti-proliferative effects on melanoma cells Uncontrolled cellular proliferation is a major hallmark of cancer [32] . Firstly, to determine the anti-cancer effect of SAM, we tested the effect of SAM on proliferation in human (A375) and murine (YUMM1.7, B16 and YUMMER1.7) melanoma cell lines. B16 is a BRAF WT while YUMM1.7, YUMMER1.7 and A375 are BRAF mutant cell lines. While SAM decreased cell proliferation in all cell lines in a dose-dependent manner relative to control, the effect was greatest on YUMMER1.7 cell line at both SAM 200μM and 500μM doses, followed by YUMM1.7, B16, and A375, respectively ( Fig. 1 A). Next, to determine the key players that regulate cell cycle progression in melanoma, we analyzed major cell cycle regulators that are differentially expressed in human melanoma primary tumor, metastatic and normal skin tissues using the Xena platform [33] . Key cell cycle regulators including inhibitors such as p21 had low expression in primary tumors and metastatic tissue compared to normal skin tissue ( Fig. 1 B). In contrast, cell cycle dependent kinases and cyclins that drive cell cycle forward such as CDK1, CDK2 and CCNB1/2 had higher expression in primary tumors and metastatic tissues compared to normal skin tissue ( Fig. 1 C). Importantly, treatment of YUMMER1.7 and B16 cells with SAM reversed the expression of most of the cell cycle regulators tested ( Fig. 1 D and Supplementary Fig. 1). 
For instance, p21 which was highly expressed in the human primary tumors and metastatic tissue was significantly upregulated by SAM in YUMMER1.7 cells and vice versa for CDK1, CDK2 and CCNB1/2 expression ( Fig. 1 D). RNA-sequencing of YUMMER1.7 cells treated with SAM (200μM and 500μM) revealed many differentially expressed genes (DEGs) upon SAM treatment at 200μM (up, 3715; and down, 3557 genes) and 500μM (up, 4235; and down, 3935 genes) (Supplementary Fig. 2A). We carried out pathway analysis of DEGs using GSEA (Supplementary Fig. 2B and C, 3 and 4 and Supplementary Tables 3 and 4). Importantly, key cell cycle pathways and pathways involved in translation and related processes were significantly downregulated upon SAM (500μM) treatment in YUMMER1.7 cells indicating marked cell cycle inhibition (Supplementary Figs. 2B and 3, and Supplementary Table 5). Next, we overlapped the DEGs obtained from SAM treatment with the known melanoma driving genes (n = 422) of the Melanoma Gene Database (MGDB) and found 47% of genes to be common between the MGDB and DEGs obtained after treatment with SAM (500μM) (Supplementary Fig. 2D) [34] . Expectedly, pathway analysis of common genes revealed Melanoma and core cancer pathways enrichment (Supplementary file 1). Together, these results indicate that SAM regulates core genes/pathways in melanoma tumorigenesis and inhibits cell cycle pathways thereby reducing cellular proliferation of melanoma cells. SAM increases melanin and melanosome synthesis of melanoma cells Upon treatment of B16 cells with SAM, we noticed that B16 cells appear darker suggesting increased melanin synthesis. Since melanin pigmentation was reported to affect melanoma behaviour, we investigated the effect of SAM on melanogenesis and melanosome formation [ 11 , 35 , 36 ]. B16 cells produce melanin and phenotypically recapitulates clinical features of human melanoma [37] . The ability to produce melanin is lost upon subsequent cycles during in vitro cell culturing. We (B-C) Expression of key cell cycle regulators in normal skin, primary tumor and metastatic tissue of human melanoma patients and normal healthy individuals extracted from TCGA and GTEx databases using Xena platform [33] . The expression data of genes has been plotted in a scatter-plot graph (1024 samples). (D) Expression of cell cycle regulators in YUMMER1.7 cells after treatment with two doses of SAM, 200μM and 500μM as determined using RT-qPCR. Expression is depicted as fold change ( ± SEM) relative to control. Statistical significance was calculated using (A-D) one-way ANOVA test. took advantage of this and treated non-pigmented B16 cells with varying concentrations of SAM (200-500μM). Increase in SAM concentration induced melanogenesis as B16 cells had increased black pigmentation in a dose-dependent manner ( Fig. 2 A and B). Furthermore, the number of melanosomes and melanin synthesizing B16 cells were elevated as well ( Fig. 2 A). To measure the amount of intracellular melanin, we extracted the melanin from cells and measured absorbance. Treatment with SAM showed a gradual increase in endogenous (intracellular) melanin production in a dose dependent manner ( Fig. 2 B). We also observed slight increase in exogeneous (extracellular) melanin that changed the medium color from red to reddish black in wells treated with SAM (data not shown). 
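The RT-qPCR expression data in this and the following sections are reported as fold change (± SEM) relative to control. A common way to derive such values is the 2^-ΔΔCt method sketched below; the paper does not spell out which normalization was used, so treating ΔΔCt against a housekeeping gene as the reference calculation is an assumption here, and the Ct values shown are invented, purely for illustration.

```python
# Hedged sketch of the standard 2^-ΔΔCt fold-change calculation for RT-qPCR data.
# Not necessarily the authors' exact procedure; Ct values below are illustrative.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (treated vs control) by 2^-ΔΔCt."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt in treated cells
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt in control cells
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# Example: hypothetical Ct values for a target gene normalized to a housekeeping gene
fc = fold_change(ct_target_treated=24.1, ct_ref_treated=18.0,
                 ct_target_control=26.3, ct_ref_control=18.1)
print(f"fold change relative to control: {fc:.2f}")  # > 1 indicates upregulation
```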
SAM regulates phenotype switching of melanoma cells through modulating Mitf expression Melanocytes are thin, elongated cells with branched structures, consisting of a central body and dendrites, and contain numerous melanincontaining melanosomes [ 12 , 38 , 39 ]. Whereas B16 cells are a mixture of short spindle-shaped and epithelial-like cells lacking dendrites, and loose pigmentation (Control group, Fig. 2 C). Treatment with SAM resulted in differentiation of B16 cells into melanocyte-like cells ( Fig. 2 C). These differentiated cells were thin, dark black stained, and had more melanosomes and dendrites ( Fig. 2 C). Furthermore, in heterogeneous B16 cell population, a higher number of B16 cells appeared melanocytelike cells in increasing SAM dose from 200μM to 500μM ( Fig. 2 A). Moreover, the differentiated cells also had reduced proliferative ability ( Figs. 1 A and 2 A). MITF loss or low expression can lead to dedifferentiated melanomas that are resistant to therapies, are highly invasive and metastatic, and results in reduced overall survival of patients [ 10 , 15 , 16 ]. In addition to its clinical significance, MITF is also the master regulator of melanogenesis and hence we investigated its expression. SAM elevated Mitf expression by several folds in all the melanoma cell lines that we tested in a dose-dependent manner ( Fig. 2 D). Mitf was also found to be significantly upregulated upon SAM treatment in RNA-seq data and was a common gene between DEGs upon SAM treatment and the MGDB database. Interestingly, 126 MITF target genes were also differentially expressed in response to SAM treatment ( Supplementary Fig. 2E) [40] . Moreover, the transcriptional activators Sox10 and Creb1 , which induce Mitf expression, melanogenesis, and differentiation phenotype, were also increased several folds by SAM ( Fig. 2 D). Cancer stem cells (CSCs) or melanoma initiating cells (MICs) have high expression of stemness marker genes including NANOG, WNT, SOX2 , and in some studies, BRN2 ( Pou3f2 ) and SLUG ( Snai2 ) [ 9 , 41 , 42 ]. We found that SAM caused significant reduction in essential MICs marker genes including Nanog and Wnt expression in both B16 and YUM-MER1.7 cells, and significant downregulation of Sox2 expression in B16 cells ( Fig. 2 E). Brn2 was also significantly downregulated in B16 and YUMMER1.7 cells, and Slug in B16 cells ( Supplementary Fig. 5). This indicates that SAM reduces the proportion of MICs in heterogeneous melanoma cell population and redirects MICs from dedifferentiated state towards a differentiated phenotype. SAM increases immunogenicity and sensitivity of melanoma cells to CPIs Since melanogenesis involves expression of melanocyte differentiation antigens (MDAs) and recognition of antigens including MDAs is central to immune response and immunotherapy against melanoma, we next determined the effect of SAM on expression of MDAs. Expectedly, SAM increased expression of most MDAs ( Tyr, Tyrp1, Tyrp2/Dct, Mart-1/Melan-A, Pmel/Gp100 ) directly involved in melanogenesis by several folds in B16 and YUMMER1.7 cells ( Fig. 3 A). As expected, YUM-MER1.7 and YUMM1.7 cell lines that do not produce melanin had no/undetectable expression of Tyr ( Fig. 3 A). Expression of Tyrp1 and Mart-1 was also upregulated by SAM while Tryp2 expression was undetectable in YUMM1.7 cells ( Fig. 3 A). 
Both FAS death receptor (FAS/Apo1) and its ligand (FASL/Apo1L), and TRAIL death receptors (TNFRSF10-A/B/C/D or TRAIL-R1/2/3/4) and its ligand (TNFSF10/Apo2L), are major apoptosis pathway that causes instant cell death [43][44][45] . The lack of expression of FAS and TRAIL receptors in tumors can result in immune evasion and is corre-lated with poor prognosis of malignant melanomas, whereas increased FAS and TRAIL receptor expression in tumors can result in killing by cytotoxic CD8 + T and NK cells [45][46][47][48] . Importantly, we found significantly lower expression of FAS receptor in primary tumors and metastatic tissues of melanoma patients as compared to normal skin tissue samples, and this was associated with lower overall survival, progression-free interval, and disease-specific survival ( Fig. 3 D-E). Interestingly, we found SAM significantly increased expression of Fas receptor in YUMMER1.7 cells ( Fig. 3 F). Additionally, SAM lowered expression of genes Il18 and Il18bp in YUMMER1.7 cells which were previously shown to have a crucial role in survival of B16 melanoma cells and inhibit Fas/FasL pathway and NK mediated killing ( Fig. 3 G) [49] . Intercellular adhesion molecule 1 (ICAM-1) is an essential antigen present on antigen presenting cells (APCs) or tumor cells that interacts with lymphocyte function associated antigen 1 (LFA-1) which is a major co-stimulatory molecule present on T cells. LFA-1/ICAM-1 interactions are essential for trans-endothelial migration of CD8 + T cells into TME, and CD8 + T cells initial activation and lytic functions [50][51][52] . Moreover, ICAM-1 overexpression on tumor cells have been reported to cause reduction in tumor growth [ 53 , 54 ]. Parallel to the increase in Fas , TRAIL receptors and immunostimulatory antigens' expression, Icam1 (200μM and 500μM) and Icam2 (500μM only) were also significantly upregulated by several folds upon SAM treatment ( Fig. 3 J). Collectively, these data show that SAM alters melanoma transcriptome that is consistent with increased immunogenicity and sensitivity of melanoma cells to CPIs. SAM modulates Mitf expression that further regulates phenotype switching Next, to elucidate a potential mechanism for upregulation of antigens upon SAM treatment, we examined a direct involvement of the Mitf transcription factor. Therefore, we knocked down (KD) Mitf using siRNA targeting the Mitf gene (siMitf) at 3 sites simultaneously and confirmed that Mitf expression was markedly downregulated ( Fig. 4 A). Most of the MDAs' expression was downregulated upon siMitf KD indicating that Mitf is an important transcription factor controlling the expression of these MDAs ( Fig. 4 A). Parallel to the effect of SAM elevating Mitf levels, KD of Mitf expression significantly increased expression of stemness genes including Nanog, Wnt and Sox2 in B16 and YUMMER1.7 cells ( Fig. 4 B). Hence, showing that Mitf induction is critical for its effect on melanogenesis and differentiation. Collectively, these data suggest that SAM modulates the phenotype switching of melanoma cells. Treating melanoma cells with SAM switches proliferative and invasive stem-cell phenotype towards more differentiated state indicated by low proliferative ability, melanocytelike cell morphology, elevated melanogenesis, decreased stemness ability and increased immunogenicity ( Fig. 4 C). 
SAM reduces tumor growth, progression, and metastasis of melanoma tumors To determine effect of SAM in vivo, we established either YUM-MER1.7 or B16 tumors in immunocompetent B6 mice and treated them with either control (PBS) or SAM (80mg/kg/day) via oral gavage. SAM treatment significantly reduced tumor growth and progression in both mouse models compared to control ( Fig. 5 A). Expression of nuclear protein Ki67 (Ki67) is strongly associated with various tumor parameters including growth, progression, clinical tumor stage, metastasis, and is the most extensively used proliferation marker [55] . Consistent with SAM reducing proliferation and causing cell cycle inhibition, SAM had a significant decrease in Ki67-positive stained tumor cells indicating marked reduction in proliferation, growth, and progression of YUMMER1.7 and B16 tumors ( Fig. 5 B). Moreover, parallel to in vitro results, SAM also showed a significant elevation in Mitf and MDAs (such as Mart-1 and Pmel ) expression in both YUMMER1.7 and B16 tumors ( Fig. 5 C). Both melanosomes and melanin pigmentation have shown to significantly inhibit melanoma metastasis ( in vivo ) [ 35 , 36 ]. SAM increased melanin and melanosome production, increased Mitf expression and reduced the pool of invasive MICs in B16 heterogenous population. Con-sidering this, we investigated the effect of SAM in a model of B16 melanoma lung metastasis through intravenous administration of B16 cells. The mice treated with SAM had significant decrease in proportion of lung metastatic nodules compared to control lungs ( Fig. 5 D). Moreover, treatment of SAM decreased cell migration of YUMMER1.7 and A375 cells in a wound-healing assay ( in vitro ) ( Supplementary Fig. 8). Additionally, SAM reduced invasiveness of B16 cells as shown by us previously [22] . Taken together, these results suggest that SAM can significantly reduce the metastatic potential of melanoma. SAM and anti-PD-1 antibody combination has superior anti-cancer efficacy against melanoma tumors Based on our data above, we further hypothesized that treatment with SAM would complement with CPI therapy. We treated YUM-MER1.7 tumor bearing mice with control (IgG and oral PBS), SAM, anti-PD-1 antibody, and combination of both. Both SAM and anti-PD-1 antibody alone had significant effect in reducing tumor growth and progression as indicated by lower mean tumor volume and high tumor growth inhibition (TGI) (74.4% and 72.0% at day 22), compared to control (0%), respectively ( Fig. 6 A and B). However, SAM + anti-PD-1 combination had significantly high efficacy in blocking tumor growth and progression compared to all other groups indicated by lowest mean tumor volume and maximum TGI (88.6%) at day 22 ( Fig. 6 A and B). We also tested SAM + anti-PD-1 antibody combination effect on lung metastasis ( Fig. 6 E). We found that SAM + anti-PD-1 had a larger effect on reducing lung metastasis as compared to all other groups ( Fig. 6 E). Taken together, these data suggest that SAM and anti-PD-1 antibody combination can significantly reduce tumor growth, progression, and metastasis of melanoma. SAM and anti-PD-1 antibody combination elevates the infiltration, effector functions and polyfunctionality of CD8 + T cells in the TME To further understand the significant reduction in tumor growth and progression by SAM, anti-PD-1 antibody, and the combination, immunophenotyping of the tumors from YUMMER1.7-tumor bearing mice was carried out. 
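Before turning to the immunophenotyping results, note that the tumor growth inhibition (TGI) percentages reported above can be computed from mean tumor volumes of treated and control groups. The paper does not state its exact TGI formula, so the common definition below, with made-up volumes, is only an illustrative sketch, not the authors' calculation.

```python
# Minimal sketch of a common tumor growth inhibition (TGI) definition:
# TGI (%) = (1 - mean treated volume / mean control volume) x 100.
# The exact formula used in the study is not given; this is an assumption.

def tgi_percent(mean_volume_treated, mean_volume_control):
    return (1.0 - mean_volume_treated / mean_volume_control) * 100.0

# Hypothetical mean tumor volumes (mm^3) at a given day - for illustration only
print(f"TGI: {tgi_percent(mean_volume_treated=120.0, mean_volume_control=1050.0):.1f}%")
```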
Despite considerable inter-individual variability within each group, there were significantly higher levels of immune infiltration in the tumors treated with either SAM, anti-PD-1 antibody, or SAM + anti-PD-1 combination compared to control tumors (Supplementary Fig. 9). Similarly, the density of CD3 + T cells was increased in all treatment groups ( Fig. 7 A). Furthermore, their proportion amongst tumor-infiltrating immune cells was increased and correlated inversely with tumor weight ( Fig. 7 A and Supplementary Fig. 10). Cytotoxic CD8 + T cells are the most powerful effectors in the adaptive anti-cancer immune response and increased CD8 + T cell activation and function are hallmarks of response to PD-1 blockade [56] . Hence, we investigated the effect of the treatments on CD8 + T cells' tumor infiltration, activation, and effector functions. Significantly higher densities and frequency of CD8 + T cells infiltrated tumors of mice treated with SAM + anti-PD-1 antibody combination compared to control ( Fig. 7 B). The infiltrated CD8 + T cells had higher activation levels in the SAM + anti-PD-1 antibody combination group as indicated by increased expression of the stimulatory checkpoint molecule ICOS both in terms of frequency and level of expression. Furthermore, this phenotype was inversely correlated with tumor weight (r 2 = 0.34) ( Fig. 7 C). Expression of the transcription factor T-bet, which is an indicator of CD8 + T cells activation and effector functions, was increased in the combination group compared to control as well ( Fig. 7 D). CPIs were designed to counteract the exhaustion of tumor-infiltrating T cells, a state caused by chronic activation and characterized by high levels of PD-1 signaling and loss of effector functions such as cytokinesecreting capacity [56] . This feature was recapitulated in the control tumors, with high proportions of PD-1 expressing cells ( Supplementary Fig. 11) and absence of polyfunctional cells secreting both IFN and TNF ( Fig. 7 E), a subset of CD8 + cells considered as the most cytotoxic and the most potent effector of anti-tumor immunity [57] . Accordingly, the CD8 + T cells in the TME of combination group produced significantly higher levels of T-bet ( Fig. 7 D), which promotes CD8 effector function such as the expression of IFN , and cytokines including IFN and TNF compared to control group ( Fig. 7 E). Moreover, polyfunctional CD8 + cells were observed in higher frequency and density in the combination group and represented up to 60% of CD8 + cells in the mice with lowest tumor burden ( Fig. 7 E). In general, SAM group had a higher density of infiltrating CD8 + T cells ( Fig. 7 A, B, p < 0.05 ), activation (MFI of ICOS + and T-bet + density; p < 0.05 ; Fig. 7 C and D), and a non-significant trend towards higher cytokine expression (IFN and TNF ; Fig. 7 E; p > 0.05 ) and polyfunctionality ( Fig. E; p > 0.05 ) compared to control. This is in line with SAM increasing the immunogenicity (antigen expression) of melanoma cells and tumors ( Figs. 3 and 5 ) which would ultimately lead to higher acti- vation and effector functions of antigen-specific CD8 + T cells. Indeed, loss of MDAs (MART-1, TYR) due to mutation or KD results in increased tumor volume in immunocompetent mice [ 16 , 58 ]. SAM-mediated protection correlates with augmented CD4 + T helper responses in melanoma tumors Consistently, all treatment groups had a significantly higher density and frequency of infiltrating CD4 + T cells compared to the control group ( Fig. 
8 A, p < 0.05 ), in the TME. In the combination group, there was a significant shift in the composition of the CD4 + T cell pool with reduced accumulation of Foxp3 + T reg cells and an increase in the frequency of T-bet + T h 1 cells ( Fig. 8 B and C). Furthermore, the functionality of these T h 1 cells was increased in the combination group, as shown by the increased frequency of IFN and TNF which correlated inversely with tumor volume ( Fig. 8 C). Surprisingly, we observed the presence of a subset of CD4 + IL17 + T cells in groups treated with SAM (SAM and SAM + anti-PD-1) which was absent in the control and anti-PD-1 antibody group ( Fig. 8 D). These cells were confirmed as bona fide Th17 cells through ROR t co-expression and absence of T-bet and IFN expression ( Fig. 8 C, D). Furthermore, expression of IL-6, a cytokine known to promote Th17 polarization in the presence of TGF [59] , was markedly increased in YUMMER1.7 cells cultured with SAM in vitro ( Supplementary Fig. 12). IL-17 re-sponses have not been associated with response to checkpoint blockade in melanoma, and the frequency of Th17 did not correlate with tumor volume in the groups that received SAM ( Fig. 8 D). However, in other tumor types, Th17 cells contribute to the recruitment of CD4 + and CD8 + T cells into TME, and activation of tumor-specific CD8 + T cells [60] . In accordance with this data, the expression of Icam1 which is essential for trans-endothelial migration and lytic functions of CD8 + T cells, was upregulated by several folds by SAM in YUMMER1.7 cells ( Fig. 3 H) [ [70][71][72] . Taken together, these data indicate that combining SAM with anti-PD-1 antibody treatment provides no additional benefit in terms of recruitment of CD8 + and CD4 + T cells in the TME, however, the difference was in higher activation and effector functions of both CD4 + and CD8 + T cells. Discussion Melanoma is one of the most prevalent cancers and has high mortality rates especially after the cancer has metastasized. A high tumor mutational burden (TMB) is strongly correlated with high response and has emerged as a clinically relevant biomarker of CPI efficacy [ 18 , 19 ], however DNA hypomethylation (and demethylation) was strongly correlated with immune evasive and CPI therapy resistant signatures in melanoma, independent of TMB and aneuploidy [21] . In fact, global DNA hypomethylation had a higher predictive power than TMB. Furthermore, recent studies have suggested low SAM levels within TME, and deprivation of CD8 + T cells of the precursor of SAM, methionine, makes them non-functional and unresponsive to CPI therapy in melanoma. Thus, we hypothesized that targeting DNA hypomethylation with SAM would be highly beneficial and complement CPI therapy. Here we propose a novel therapeutic strategy by combining SAM with anti-PD-1 antibody to overcome the development of treatment resistance in highly aggressive BRAF mutant and WT melanomas. During melanogenesis, melanoma cells become less aggressive as genes that repress invasion are upregulated [ 35 , 61 , 62 ]. In addition, melanosomes were shown to inhibit transmigration ability of melanoma cells mechanistically while melanin pigmentation inhibited metastasis [ 35 , 36 ]. SAM increased the number of cells synthesizing melanin and melanosomes in a dose-dependant manner ( Fig. 2 ). 
Increase in melanosomes is also indicated by increased expression of Mitf which regulates melanosome biogenesis, and genes such as Tyr and Pmel which are melanosomal structural proteins required for early melanogenesis and melanosome biogenesis ( Fig. 3 A) [ 11-14 , 63 ]. Indeed, when SAM was tested for metastasis in in vitro assays (migration, invasion) and in vivo experiments (lung metastasis model), it significantly decreased metastasis of melanoma cells and tumors ( Figs. 5 D, 6 E, Supplementary Fig. 8, and [22] ). MITF regulates expression of many pigmentation, MICs marker, and cell cycle regulatory genes. In a previous study, MITF amplification was found to be present in 5-20% of melanomas and MITF was defined as a lineage-addiction oncogene [64] . However, targeted-capture deep sequencing has shown no changes in copy number of MITF in clinical melanoma samples [ 65 , 66 ]. Furthermore, MITF has demonstrated to suppress melanoma invasion and metastasis, and knock-out/KD of MITF increases tumor growth and progression, and metastasis in various melanoma mouse models [ 9 , 10 , 65-69 ]. MITF acts as a rheostat and dynamically controls phenotype switching and this model is extensively discussed and established [ 8-10 , 67 , 69 ]. Various studies have indicated that cells expressing low MITF are intrinsically resistant to MAPK pathway inhibitors (such as BRAFi/ MEKi) and immunotherapies (anti-PD-1 and anti-CTLA-4 antibodies), often persist, and have the highest ability to form tumors and metastasize [ 8 , 9 , 67 , 69-72 ]. Importantly, dedifferentiated melanomas characterized by low MITF and low MDAs are resistant to immunotherapy as well [ 15 , 16 , 73 , 74 ]. Consistently, differentiated dark stained tumors are also infiltrated with higher density of immune cells (immunologically hot) [69] . In addition, tumors with low MITF expression have a pro-inflammatory secretome which ultimately affects recruitment of T cells and function within TME [63] . Interestingly, inhibition of BRAFV600E activity specifically upregulates MITF levels thereby increasing the expression of MDAs which in turn increases immunogenicity of the cancer/tumor cells [ 75 , 76 ]. Indeed, peripheral engineered T cells directed against MDAs led to tumor regression in tumors of human melanoma patients [77] . Recently, a directed phenotype switching was proposed as an effective anti-melanoma strategy . Cells were stimulated for 3h with a cocktail of PMA/Ionomycin/GolgiStop® or cRPMI as control. All histograms are represented as mean ± SD. Statistical significance was calculated using one-way ANOVA test. Data points from all groups were pooled to calculate linear correlations with tumor weight. Treatment groups are indicated by color-coding. The slope's deviation from zero was evaluated using Fisher's test. by elevating MITF levels and switching the highly proliferative cells and highly invasive cells towards differentiation and cell death [ 7 , 9 ]. Interestingly, SAM causes a similar effect by increasing Mitf expression and directing heterogenous proliferative and invasive melanoma cells towards differentiation as indicated by the low proliferation rates, low metastatic ability, melanocyte-like morphology, high melanin and melanosome production, and high MDAs and MAAs expression ( Figs. [1][2][3][4][5]. Parallel to this, upon examining the TCGA melanoma cohort data, MITF expression strongly mirrored the expression of pigmentation genes that are MITF targets [2] . 
Hence, low-MITF tumors had low expression of pigmentation genes and vice versa. In line with this, melanosomes contain acidic proteases that degrades proteins into antigenic peptides in the endolysosomal pathways [63] . Since SAM increased number of melanosomes in B16 cells ( Fig. 2 ), this further supports the finding of increased antigen expression of melanoma cells by SAM. Accordingly, SAM treatment had a significant anti-cancer efficacy and reduced tumor growth and progression, and metastasis in immunocompetent models ( Figs. 5 , 6 ). Compared to control, SAM treated tumors were also more inflamed (hot) as indicated by an increased T cells infiltration (including CD8 + T cells density, CD4 + T cells density and frequency, and Th17 cells frequency), and higher activation of CD8 + T cells (CD8 + T-bet + /g of tumor and MFI ICOS + ) in TME ( Figs. 7 and 8 ). Live CD45 + CD4 + Foxp3 − cells. Cells were stimulated for 3h with a cocktail of PMA/Ionomycin/GolgiStop® or cRPMI as control. All histograms are represented as mean ± SD. Statistical significance was calculated using one-way ANOVA test. Data points from all groups were pooled to calculate linear correlations with tumor weight. Treatment groups are indicated by color-coding. The slope's deviation from zero was evaluated using Fisher's test. IL6 cytokine production is upregulated by endothelial cells and fibroblasts upon IL17D stimulation [78] . Additionally, IL17D is highly induced by Nrf2 and other stress pathways and can lead to tumor rejection and enhanced anti-cancer immune response [ 79 , 80 ]. Our top significantly upregulated pathways with SAM were Nrf2 and oxidative stress ( Supplementary Fig. 2C and 4, and Supplementary Table 6), and SAM increased expression of IL17D (and its receptors) and IL6 (Supplementary Fig. 12). IL17D itself was shown to elevate anti-cancer immune response via NK recruitment [79] . Therefore, Nrf2 pathway inducing IL17D expression and IL6 expression which may then lead to high Th17 cells could be another pathway enhancing immune responses against tumor cells that is induced by SAM. The intriguing effect of SAM on frequency of IL17 + cells in melanoma TME and expression of IL6 and IL17D in melanoma cells will need to be further investigated. IFN is one of the most powerful cytokines that can cause anti-tumor activity, determines the success of CPIs and is a characteristic feature of cytotoxic T cells that produce perforin and granzymes [81][82][83] . IFN has marked anti-tumor pleiotropic effects including inhibition of immunosuppressive T regs , M1 macrophage and CD4 + T h 1 polarization, DCs maturation and MHCI and II upregulation, increased cytotoxic (killing) activity, proliferation, and motility of CD8 + T cells, apoptosis of cancer cells, and inhibition of angiogenesis [81][82][83] . T-bet regulates the transcription of IFN [81] . Similar to IFN , TNF can also cause tumor cell death by apoptosis and inhibit angiogenesis in tumors [83] . Indeed, the greatest reduction in tumor volumes were in the SAM + anti-PD-1 combination group, which is in line with significant upregulation of IFN , T-bet and TNF of cytotoxic CD8 + T cells in these tumors, compared to control. In parallel, there was a trend in increase of IFN , T-bet and TNF by CD8 + T cells with monotherapies which is in line with significant tumor reduction compared to control. 
Consistently, polyfunctional CD8 + T cells which are the most potent cytotoxic CD8 + T cells and can cause effective tumor lysis, were not present in the control tumors, but were significantly highest in the combination group with a trend in increase in monotherapies ( Fig. 7 ). A high percentage ( > 80%) of melanoma patients relapse on the BRAFi/ MEKi cocktail, which is due to, as mentioned above, slowgrowing low-MITF-expressing stem-cell like cells [ 1 , 2 , 8 , 9 , 67 , 69-72 , 84 ]. Hence, therapies that upregulate MITF can have distinct advantage [7] . Another challenge is that 60-70% of melanoma patients do not respond to CPI therapy [ 1 , 5 , 6 , 84 ]. The SAM + anti-PD-1 combination proposed in this study has several advantages: (a) SAM upregulates MITF levels thereby decreasing the pool of slow-growing low-MITF-expressing stemcell like cells that have high probability of initiating tumors, metastasis, and tumor relapse after treatment. Additionally, SAM increases the immunogenicity, in part by elevating MITF levels, of the melanoma cells. In line with this, we have not found development of pharmacological resistance to long-term treatment of SAM in this study or any other cancer model ( in vitro and in vivo ) [ 22 , 23 , 85-87 ]. (b) While anti-PD-1 antibody has some immune-related adverse effects, SAM being an approved supplement has shown no severe adverse effects in pre-clinical and clinical studies except a transient adverse behavioral effect in an individual [ 22 , 23 , 85-88 ]. (c) The SAM + anti-PD-1 antibody therapy is effective against both models of BRAF WT and mutant melanoma subtypes which are representative of 80-85% of the melanoma patients. Additionally, we have also observed beneficial effect of the combination in breast cancer [89] . (d) The current study along with our previous published study [22] was conducted using syngeneic melanoma cell lines and immunocompetent mouse tumor models, instead of xenograft models that are severely immune-deficient. Accordingly, these models avoid interspecies immune responses but have a complete immune system against melanoma tumors, therefore, faithfully represent the human pathophysiology that occurs in tumors of human melanoma patients [ 9 , 90 ]. Along these lines, we have used only one CPI (anti-PD-1 antibody) with SAM. This also has advantages compared to the use of drug cocktails such as BRAFi and MEKi with anti-PD-1 and anti-CTLA-4 antibodies currently employed in the clinic which, for example, could have a higher risk of adverse events and more complicated therapeutic regimes. (e) Lastly, SAM + anti-PD-1 antibody combination led to complete tumor elimination in 20% of YUMMER1.7-tumor bearing mice. This is a promising finding which would need further investigation. Although SAM and anti-PD-1 antibody combination shows significant potential against BRAF mutant and WT melanomas, the current study had some limitations. For instance, since we observed SAM incremented immunogenicity and sensitivity of the melanoma cells in vitro , we had started SAM treatment of the tumor-bearing mice at 2-4 days post-tumor inoculation. The rationale behind this was to investigate the effect of SAM on priming the melanoma cells for anti-PD-1 antibody which has been carried out with other epigenetic therapies like DNA methyltransferase inhibitors (DNMTi) and histone deacetylase inhibitors (HDACi) [ 91 , 92 ]. 
Also, SAM had to be given daily during the treatment period because of low bioavailability of orally taken SAM reported in the past [ 23 , 85 , 88 ]. Future studies could implement other CPIs with SAM. One promising candidate is anti-CTLA-4 antibody. This is because anti-CTLA-4 antibody elevates the expansion of tumor-infiltrating T h 1 (PD-1 + , ICOS + , Tbet + ) cells and CD8 + T cells, and there is some evidence that SAM is required for T cells activation and proliferation [25][26][27][28][29] . Moreover, triple combination of SAM + anti-PD-1 + anti-CTLA-4 antibodies could also serve as a potential therapeutic strategy. However, for triple combination therapy, CPI dose would have to be reduced to avoid immune related adverse effects by CPI therapies. To the best of our knowledge, the current study together with our previous study [22] is the first potential evidence of the beneficial anticancer efficacy of SAM against BRAF WT and BRAF mutant melanomas which represent 80-85% of the melanoma patients. Moreover, our studies also to demonstrate the unique anti-cancer therapeutic effects of the novel SAM and anti-PD-1 antibody combination against melanomas. The particularly attractive nature of these studies is that both SAM and anti-PD-1 antibody are approved agents and therefore can be easily translated into the clinic. Of note, a safe and relatively cheap nutritional supplement, SAM, exhibits anti-cancer/anti-metastatic and immunestimulatory activity which are similar to the effects seen by potentially toxic and more expensive therapies. Our study points out to the potential of this agent in repurposing it for cancer therapy to reduce morbidity and mortality rates of melanoma patients. Murine B16-F1 (B16) BRAF wild-type (RRID:CVCL_F936) melanoma cell line was obtained from ATCC (Manassas, Virginia). Apart from YUM-MER1.7, all cell lines were cultured in DMEM media supplemented with 1% penicillin-streptomycin sulfate and 10% fetal bovine serum (FBS), and 1% non-essential amino acids (NEAA) was also added for YUMM1.7 cells. YUMMER1.7 was cultured in DMEM/F12 media supplemented with 10% FBS, 1%P/S, 1% NEAA. Only early passage cell lines were utilized unless indicated. All cell lines were maintained in incubators at 37°C and 5% CO 2 and found to be mycoplasma-free. Proliferation and wound-healing assays For proliferation assays, YUMMER1.7 (1 × 10 4 cells), YUMM1.7 (0.5 × 10 4 cells), B16 (1.5 × 10 4 cells) and A375 (2.5 × 10 4 cells) were seeded in 6-well plates. The cells were treated with two different concentrations, 200μM and 500μM, of SAM (cat# B9003S, NEB, Canada) on day 2, 4 and 6 after seeding. On day 7, the cells were collected by trypsinization, neutralized by complete media, and counted with Beckman Coulter counter (Hertfordshire, UK). The cell pellets were either frozen or used for downstream applications. Proliferation assay data is the mean of two independent experiments. Percentage proliferation (%) is calculated as: [(Mean number of cells in (treatment group/ Control group)) x100]. Migration assay followed the regular proliferation assay protocol and then YUMMER1.7 (5 × 10 4 cells) and A375 (1 × 10 5 cells) were seeded in a 6-well plate and were confluent on the next day. Next day, the confluent cell layer was scratched in the form of a cross using a 1mL pipette tip. The 6-well plates were kept in IncuCyte® Live-Cell Analysis System and programmed to take images at timed intervals. 
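For readers who want to script the percentage-proliferation calculation quoted in the proliferation assay description above, the following minimal Python sketch illustrates the arithmetic; the cell counts and group labels are hypothetical placeholders, not data from the study.

# Minimal sketch of the percentage-proliferation calculation described above.
# The day-7 Coulter counts below are hypothetical placeholders, not study data.

def percent_proliferation(treatment_counts, control_counts):
    """Mean treated count divided by mean control count, expressed as a percentage."""
    mean_treated = sum(treatment_counts) / len(treatment_counts)
    mean_control = sum(control_counts) / len(control_counts)
    return (mean_treated / mean_control) * 100.0

# Hypothetical counts from two independent experiments
control = [5.2e5, 4.8e5]
sam_200uM = [3.1e5, 2.9e5]
sam_500uM = [1.8e5, 2.0e5]

for label, counts in [("SAM 200uM", sam_200uM), ("SAM 500uM", sam_500uM)]:
    print(f"{label}: {percent_proliferation(counts, control):.1f}% of control")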
Confluency tool of the IncuCyte® was used to analyze the closure of width-gap percentage (compared to T = 0 hr) by the migrating cells and plotted using GraphPad Prism. Analysis of public clinical and molecular data bases RNA expression data of normal tissue, primary tumors and metastatic tissues of the healthy participants and melanoma patients samples from GTEx and TCGA databases was downloaded using the Xena platform [33] and the data were imported into GraphPad Prism for graph plotting. Xena platform was also used for Kaplan Meier survival plots of human gene expression data (e.g. FAS gene). Melanogenesis experiments For determining effect of SAM on melanin synthesis, melaninproducing B16 (1.5 × 10 4 ) cells were seeded in 6-well plate and followed the regular proliferation assay protocol. On day 7, images were taken at different magnifications using bright-field Olympus microscope (IX51) with DPController software. For intracellular melanin determination, the cells on day 7 were trypsinized, centrifuged, washed with PBS and centrifuged again. Then melanin was extracted from cell pellets by following a slight modification of previously published protocol [93] . Essentially, the cell pellets were treated with 1 N NaOH containing 10% DMSO, vortexed and boiled at 80°C for 90 minutes, with vortex after every 15 minutes. The cells were then centrifuged, and supernatant was measured at 490nm using Tecan microplate reader. Percentage relative absorbance (relative to control) was calculated and plotted using Graph-Pad Prism. RNA extraction, reverse transcription, and quantitative real-time PCR (RT-qPCR) Total RNA from cells and tumors was extracted using column extraction method utilizing the RNeasy mini kit (cat# 71404, Qiagen, Germany) and following company's guidelines. The RNA was quantified using BioDrop analyzer according to manufacturer's instructions. For reverse transcription of RNA into cDNA standard thermal cycler was utilized with M-MLV Reverse Transcriptase (cat# 28025013, Ther-moFisher Scientific, Canada) enzyme following standard company's guidelines. Then, quantitative real-time qPCR system (AB StepOne-Plus) with PowerUp TM SYBR TM Green Master Mix (cat# A25742, Ther-moFisher Scientific, Canada) was used to obtain Ct values according to the manufacturer's instructions [ 22 , 85 ]. Analysis of gene expression was carried out using the 2-ΔΔCT method. Primers are tabulated in Supplementary Table 1. RNA sequencing and bioinformatics analysis Total RNA was extracted from cells and checked for quality control (QC) by Bioanalyzer (Agilent) and NanoDrop where only RNA Integrity Number (RIN) > 6.5 and an absorbance A260/280 ratio of > 2.0 was used for RNA-seq. Paired-end RNA sequencing using Illumina NovaSeq 6000 platform (with a depth of 25 million reads) following standard protocols was carried out. The obtained data was checked for QC, normalized, converted into HT-seq count files, and differential gene expression analysis carried out using DESeq2 (RRID:SCR_015687) in Galaxy ( www.usegalaxy.org ) according to writer's recommendations [94] . The final gene list was annotated using the Annotation tool ( "Annotate DE-Seq2/DEXSeq output tables "). Pathway analysis was carried out using SeqGSEA software (RRID:SCR_005724) [ 95 , 96 ]. Mouse studies Male C57BL/6 mice (RRID:IMSR_CRL:027), six to eight weeks of age, were purchased from Charles River Lab (QC, Canada) and housed at ARD division of the RI-MUHC (Montreal, QC, Canada). 
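The 2-ΔΔCT relative-expression analysis mentioned above can be illustrated with a short Python sketch; the Ct values and the choice of target and reference gene are hypothetical and serve only to show the arithmetic.

# Sketch of the 2^-ΔΔCt relative-expression calculation used for the RT-qPCR data.
# Ct values and the reference gene below are hypothetical illustrations only.

def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (treated vs. control), normalised to a reference gene."""
    dct_treated = ct_target_treated - ct_ref_treated   # ΔCt in treated sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt in control sample
    ddct = dct_treated - dct_control                   # ΔΔCt
    return 2 ** (-ddct)

# Hypothetical example: Mitf in SAM-treated vs. control cells, normalised to a housekeeping gene
print(fold_change_ddct(ct_target_treated=24.1, ct_ref_treated=18.0,
                       ct_target_control=26.3, ct_ref_control=18.1))  # ~4.3-fold up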
To generate tumors, 5 × 10 5 YUMMER1.7 (in 20% Matrigel and 80% saline), and 5 × 10 5 B16 cells (in saline), were subcutaneously injected into the shaved right flank of mice. Mice were randomized into four groups; control (IgG and PBS); SAM; anti-PD-1 antibody; and SAM + anti-PD-1 antibody combination. When tumor became palpable (2-4 days), treatments were started wherein SAM (Life Science Laboratories, NJ, US) at 80mg/kg dose was diluted in PBS and given daily via oral gavage using feeding needles [ 22 , 85 ]. Anti-PD-1 antibody (clone RMP1-14, BioXCell cat# BE0146, RRID:AB_10949053) and isotype matched control IgG (IgG2a, clone 2A3, BioXcell, cat# BE0089, RRID:AB_1107769) was given at 10mg/kg via intra-peritoneal (i.p.) injection twice a week and diluted in InVivo Pure pH 7.0 Dilution Buffer (BioXcell, US) [ 22 , 85 ]. The control mice were also given PBS via oral gavage. Measurement of tumor volume (T.V) was carried out by palpation using a digital calliper at timed intervals and determined using the formula; T.V = (length × width 2 )/2. Tumor growth inhibition percentage (%) was calculated as ((1 -[changes of T.V in treatment group/changes of T.V in control group] × 100) [97] . For survival studies, the YUMMER1.7 tumor bearing mice (n ≥ 8/group) were treated with anti-PD-1 antibody until day 22 and continued SAM treatment until the end of the study (day 65). The mice were euthanized as their tumors reached humane endpoint (a T.V of ≥ 2000mm 3 ). The data for survival studies was plotted with Kaplan Meier curve using GraphPad Prism. For generating pulmonary metastasis mouse model of melanoma, B16 (5 × 10 5 ) cells were intravenously injected (I.V) into the tail vein of the C57BL/6 mice (n = 7/group) and treated with either control (IgG and PBS), SAM, anti-PD-1 antibody, or SAM + anti-PD-1 antibody combination. The mice were euthanized at day 15 post tumor injection, lungs harvested and fixed with formalin solution, and metastatic lung nodules counted. Percentage proportion of metastatic nodules (%) was calculated relative to control as ([total lung nodules in treatment group/ mean lung nodules in control group] × 100). Mice were regularly examined physically, measuring body weight, and for other potential adverse effects [98] . All mouse studies were carried out under standard conditions and in accordance with McGill University Facility Animal Care Committee guidelines. Immunophenotyping Immunophenotyping was carried out to study the effect that SAM and anti-PD-1 antibody has on immune cells within TME. Briefly, YUMMER1.7-tumor bearing mice (n = 8/group) were treated with either control (isotype matched IgG and PBS), SAM, anti-PD-1 antibody, or combination. The mice were sacrificed, primary tumors were harvested, processed into single cell suspensions, and stained with extracellular and intracellular markers and cytokines as previously detailed by us [22] . Samples were then acquired using the BD Fortessa LSR-X20 and analysis was performed using FlowJo (RRID:SCR_008520) [ 22 , 99 ]. All fluorescence-conjugated antibodies utilized for flow cytometry are tabulated in Supplementary Table 2. Immunohistochemistry (IHC) Tumors treated with control and SAM were harvested at endpoint (n = 4/group). Tumors were fixed with formalin for 3-5 days and washed with 70% ethanol. An automated IHC was performed on Ventana Discovery Ultra Instrument (Roche, US). 
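The tumor-volume, tumor-growth-inhibition, and metastatic-nodule formulas given in the mouse studies section above translate directly into code. The short Python sketch below uses hypothetical calliper measurements and nodule counts purely for illustration.

# Sketch of the tumour-volume, growth-inhibition and metastatic-nodule calculations
# described above. All measurements and counts below are hypothetical placeholders.

def tumor_volume(length_mm, width_mm):
    """Calliper-based approximation: V = (L x W^2) / 2, in mm^3."""
    return (length_mm * width_mm ** 2) / 2.0

def growth_inhibition_pct(tv_change_treatment, tv_change_control):
    """TGI% = (1 - dTV_treatment / dTV_control) x 100."""
    return (1 - tv_change_treatment / tv_change_control) * 100.0

def metastasis_pct(nodules_treated, mean_nodules_control):
    """Lung nodules in a treated mouse relative to the control-group mean, as a percentage."""
    return (nodules_treated / mean_nodules_control) * 100.0

print(tumor_volume(12.0, 8.0))              # 384 mm^3
print(growth_inhibition_pct(250.0, 900.0))  # ~72% growth inhibition
print(metastasis_pct(14, 40.0))             # 35% of the control nodule burden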
Slides were deparaffinized and rehydrated, treated with EDTA buffer for antigen retrieval, and then incubated with mouse anti-Ki67 antibody (Abcam cat# ab15580, RRID:AB_443209) at 1:300 dilution. Then, an anti-rabbit horseradish peroxidase (HRP)-conjugated secondary antibody was added, and signal was detected using a DAB chromogen kit (Biocare Medical). Slides were counterstained with haematoxylin and eosin (H&E). Slides were scanned with an Aperio AT Turbo digital slide scanner. Images (at 40x magnification) of the Ki67-stained slides were taken randomly (n = 5 images/sample) using ImageScope (RRID:SCR_014311) and analyzed using Fiji (RRID:SCR_002285). In Fiji, the colour deconvolution tool was used to separate the H&E (total cell stain) and DAB (Ki67+ stain) channels, and the Analyze Particles tool was then used to measure the total stained area in each channel. A macro was created to carry out these steps automatically for a single image, and the images for each sample were then processed through the macro one by one. The percentage of Ki67 staining was calculated as (DAB-stained area / H&E-stained area) x 100 and was plotted using GraphPad Prism (RRID:SCR_002798). Data availability statement The data analyzed or generated are available within the main file and in the supplementary files.
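As a worked illustration of the Ki67 area-ratio quantification described in the IHC section above, the following Python sketch reproduces the (DAB area / H&E area) x 100 arithmetic outside of Fiji; all pixel areas are hypothetical placeholders.

# Worked sketch of the Ki67 percentage calculation: DAB-positive area divided by
# total H&E area per image, then averaged per sample. Areas are hypothetical.

def ki67_percentage(dab_areas, hne_areas):
    """Mean per-image Ki67 percentage = (DAB area / H&E area) x 100."""
    per_image = [(d / h) * 100.0 for d, h in zip(dab_areas, hne_areas)]
    return sum(per_image) / len(per_image)

# Hypothetical stained areas (pixels) from n = 5 random 40x fields of one tumour
dab = [12_500, 9_800, 14_200, 11_000, 10_400]
hne = [58_000, 55_500, 60_200, 57_300, 56_100]
print(f"Ki67-positive area: {ki67_percentage(dab, hne):.1f}%")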
2023-01-14T16:06:17.478Z
2023-01-11T00:00:00.000
{ "year": 2023, "sha1": "9163986c687cddf3a5705d0d3e714780ae4419d0", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "68a2a2d6828ffcdac549fba970b9df8b27fa4430", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
253226334
pes2o/s2orc
v3-fos-license
Understanding of bacterial lignin extracellular degradation mechanisms by Pseudomonas putida KT2440 via secretomic analysis Background Bacterial lignin degradation is believed to be primarily achieved by a secreted enzyme system. Effects of such extracellular enzyme systems on lignin structural changes and degradation pathways are still not clearly understood, which remains as a bottleneck in the bacterial lignin bioconversion process. Results This study investigated lignin degradation using an isolated secretome secreted by Pseudomonas putida KT2440 that grew on glucose as the only carbon source. Enzyme assays revealed that the secretome harbored oxidase and peroxidase/Mn2+-peroxidase capacity and reached the highest activity at 120 h of the fermentation time. The degradation rate of alkali lignin was found to be only 8.1% by oxidases, but increased to 14.5% with the activation of peroxidase/Mn2+-peroxidase. Gas chromatography–mass spectrometry (GC–MS) and two-dimensional 1H–13C heteronuclear single-quantum coherence (HSQC) NMR analysis revealed that the oxidases exhibited strong C–C bond (β-β, β-5, and β-1) cleavage. The activation of peroxidases enhanced lignin degradation by stimulating C–O bond (β-O-4) cleavage, resulting in increased yields of aromatic monomers and dimers. Further mass spectrometry-based quantitative proteomics measurements comprehensively identified different groups of enzymes particularly oxidoreductases in P. putida secretome, including reductases, peroxidases, monooxygenases, dioxygenases, oxidases, and dehydrogenases, potentially contributed to the lignin degradation process. Conclusions Overall, we discovered that bacterial extracellular degradation of alkali lignin to vanillin, vanillic acid, and other lignin-derived aromatics involved a series of oxidative cleavage, catalyzed by active DyP-type peroxidase, multicopper oxidase, and other accessory enzymes. These results will guide further metabolic engineering design to improve the efficiency of lignin bioconversion. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13068-022-02214-x. As a lignin-degrading bacteria, Pseudomonas putida KT2440 has catabolic potential against a wide range of natural aromatic compounds. Linger et al. demonstrated the lignin breakdown products bioconversion capacity in P. putida KT2440 [35]. The presence of P. putida KT2440 with fungal secretome enhanced lignin degradation possible due to the catabolism of lowmolecular-weight lignin and partially prevent of repolymerization [36]. Continuous oxidative enzymes that might be involved in lignin break down processes are identified consistently in P. putida. Screening studies reported P. putida KT2440 secreted ligninolytic enzymes such as laccase, Mn 2+ -independent peroxidase (e.g., DyP), and Mn 2+ -oxidizing peroxidase (e.g., MnP and DyP) [17]. Multicopper oxidase (CopA) from P. putida KT2440 was characterized afterward with the lignin model compounds oxidizing ability [37].Furthermore, β-etherase (glutathione S-transferases, GSTs) and dioxygenases (e.g., 2,3-quercetin dioxygenase) were upregulated in the secretome of P. putida when exposed to lignin. Moreover, multicopper oxidase such as CopA was also detected in the secretome when P. putida grew on lignin-free media (glucose), suggesting the possibility of other carbon sources (e.g., glucose) could also induce oxidoreductases that might be involved in Graphical Abstract Page 3 of 16 Xu et al. 
Biotechnology for Biofuels and Bioproducts (2022) 15:117 lignin catabolism process. In addition, spatiotemporal mechanisms for lignin catabolism in P. putida KT2440 have revealed that outer membrane vesicle (OMVs) would encapsulate enzymes involved in the catabolism of lignin-derived aromatic structures [33]. However, the oxidative lignin degradation enzymes in P. putida KT2440 are under-identified, which requires more efforts to explore potential enzymes. Understanding the extracellular degradation pathways and lignin structural changes by Pseudomonas putida KT2440 is critical for developing processes for efficient lignin bioconversion. Lignin structural changes have been investigated in the presence of P. putida. For example, 13 P NMR results revealed the β-5 phenolic group from lignin significantly decreased when Dyp was overexpressed in P. putida [38]. 2D NMR results revealed ferulic acid and p-coumaric acid signals from lignin disappeared at 72 h of fermentation in P. putida KT2440 [33]. Although bacterial lignin conversion examples are accumulating, most of the studies fermented the wild-type or mutated P. putida together with lignin which overlapped the function of secreted enzyme system with the strain. The specific function of extracellular enzyme systems on lignin structural change and degradation pathway is still unknown in P. putida compared to fungi cocktails [33,36,39]. In this study, we tracked the activity of oxidases and peroxidase/Mn 2+ -peroxidase in the secretome of Pseudomonas putida KT2440 that grew on glucose as the only carbon source. The active secretome was isolated to react with alkali lignin in order to measure the lignin degradation rate with or without the addition of H 2 O 2 and Mn 2+ . The degraded lignin products and structural change were analyzed by GC-MS and NMR to investigate the possible reaction mechanisms of lignin degradation by secretome in P. putida KT2440. In addition, global proteomics analysis was carried out for secretome and intracellular proteome to identify potential enzymes involved in the lignin degradation process (Additional file 1: Fig. S1). Cell growth and concentration of extracellular and intracellular protein To generate an effective secretome with active extracellular enzymes in Pseudomonas putida, P. putida cells were first grown on M9 medium with 5 g·L −1 glucose and 1 g·L −1 NH 4 Cl for 8 days. The optical density (OD600) and extracellular and total protein (intracellular + extracellular) concentrations were measured. Figure 1 depicts the cell density (OD600), and extracellular and total protein concentrations over time. As shown in Fig. 1a, with 5 g·L −1 glucose, cells grew fast for the first 72 h of fermentation, and the optical density reached 2.1 and maintained a similar level until the end of the fermentation. A distinct pattern was revealed for protein concentration. As shown in Fig. 1b (blue line), the extracellular protein concentration reached 1.1 mg·mL −1 at 24 h and dropped to 0.6 mg·mL −1 at 48 h, then gradually increased to 0.9 mg·mL −1 at 144 h and further decreased to 0.4 mg·mL −1 at the end of fermentation. The same trend was also observed for total protein concentration (red line). The total protein concentration reached 1.2 mg·mL −1 at 24 h and dropped to 0.7 mg·mL −1 at 48 h, then gradually increased to 1.0 mg·mL −1 at 144 h and further decreased to 0.7 mg·mL −1 at the end of fermentation. The total protein concentrations were higher than the extracellular protein, including the lysate intracellular protein. 
These results indicated that the P. putida cells secreted protein to extracellular space and the extracellular protein concentration fluctuated over time. However, further enzyme activity assays are still required to examine whether these proteins are secreted as active form or not. Extracellular vs. intracellular enzyme activity P. putida cells secreted proteins to extracellular space when grown on glucose. However, whether the secretome harbors active ligninolytic enzymes still requires specific investigation. The phenolic substrate 2,6-dimethoxyphenol (DMP) is usually used to track laccase-like oxidases and peroxidase activities [17]. Therefore, oxidases, peroxidases, and Mn 2+ -oxidizing peroxidases were tracked in the culture supernatants and intracellular over 8-day incubations. In Fig. 1c, extracellular oxidase activity was detected at a low level at 24 h, and gradually increased to 0.25 U/L at 120 h. In Fig. 1d, the profiles for intracellular enzymes are similar to extracellular enzymes. The oxidase activity was maintained at a low level at 24 h and then gradually increased to 0.12 U/L at 120 h. The peroxidase activity (e.g., Dyp) was also measured with DMP and H 2 O 2 addition [17]. In Fig. 1c, extracellular peroxidase activity was detected at a low level at 24 h and gradually increased to 1.8 U/L at 120 h. In Fig. 1d, the profiles for intracellular enzymes are similar to extracellular enzymes. The peroxidase activity was maintained at a low level at 24 h and then gradually increased to 1.6 U/L at 120 h. DMP was also used to measure the reaction's Mn 2+ -peroxidase activity with H 2 O 2 and Mn 2+ [17]. In Fig. 1c, extracellular peroxidase activity was detected at a low level at 24 h, and gradually increased to 2.5 U/L at 120 h. In Fig. 1d, the profiles for intracellular enzymes are similar to extracellular enzymes. The Mn 2+ -peroxidase activity was maintained at a low level at 24 h and then gradually increased to 1.7 U/L at 120 h. Collectively, the results in Fig. 1c demonstrate that P. putida secreted the enzyme to the extracellular space, and the maximum enzyme activity appeared at 120 h of fermentation. The intracellular enzyme activity showed a similar pattern as extracellular in Fig. 1d, with the maximum enzyme activity at 120 h. Therefore, the extracellular enzyme at 120 h of fermentation was selected for the following lignin degradation reaction. Amount of alkali lignin degraded The lignin degradation reaction solution was set up with secretome and 2 g·L −1 alkali lignin with or without 0.1 mM H 2 O 2 and 0.1 mM Mn 2+ addition and reacted for 5 days. Apart from the secretome alone treatment, H 2 O 2 and/or Mn 2+ were added separately to activate the capacity of peroxidase and Mn 2+ -peroxidase performance on lignin degradation. In Fig. 2 GC-MS analysis of lignin breakdown products GC-MS analysis was performed to identify the lignin breakdown products with secretome and the addition of H 2 O 2 and Mn 2+ . The NIST library was used to identify and assign the chromatographic peaks. The amount of each aromatic degradation product was determined from the corresponding peak area in the chromatogram. The GC-MS chromatogram identified aromatic compounds, furan, aldehydes, esters, organic acids, and alkanes among different samples (Additional file 2: Figs. S2-S4, and Table S1, S2). These identifications should be interpreted as unvalidated candidates; some may be incompatible with known or plausible lignin degradation mechanisms. 
Peak area for all observed compounds was analyzed and shown in Additional file 1: Fig. S4, which were further separated as four groups after being compared with their alternatives in lignin control (Additional file 1: Table S2). Notably, when lignin was treated with H 2 O 2 , a wide range of aromatic monomers and dimers (No. 1, 4, 5, 6, 12, 14, 16-18, 20-21) were significantly increased. These compounds were only present in limited amounts or even not detected when treated with secretome alone or in the presence of H 2 O 2 and Mn 2+ , suggesting the non-specific oxidative cleavage (e.g., H 2 O 2 ) showed stronger products release ability compared to enzymatic cleavage. Moreover, the peak area of identified compounds among all the secretome treatments showed a similar distribution pattern (compounds No. 3,7,8,9,10,13,14,25,30,31,33) but with difference abundance, suggesting the increased compounds might be associated with different activated enzymes. For example, phenol, 2-methoxy-4-propyland 3-benzofurancarboxylic acid, 2,3-dihydro-2-methoxy-, methyl ester, trans-(compounds 8 and 13) were increased in secretome alone treatment and maintained the similar level or decreased by H 2 O 2 and Mn 2+ addition, indicating these products might be associated with oxidases behavior. Similarly, the peak area of phenol, 4-ethyl-2-methoxy-and acetovanillone (compounds 3 and 9) were enhanced by H 2 O 2 , and vanillic acid (compound 11) was enhanced by Mn 2+ addition, suggesting these compounds might be associated with peroxidase behavior. How lignin structural changes were impacted by these treatments was further explored by NMR analysis. 2D HSQC NMR spectra from the hydrolyzed lignin The cleavage of lignin C-O-C linkages is vital in the bacterial lignin degradation process. The quantification of each linkage is based on the volume integration of crosspeak contours in the HSQC spectra. In this study, lignin linkages such as β-O-4, β-β, β-5, and β-1 were cleaved with H 2 O 2 or secretome treatments ( Fig. 3a-j, Table 1 and Additional file 1: Fig. S5, Table S3). As shown in Table 1, in general, the cleavage of β-5 was less extensive than the cleavages of β-O-4, β-β, and β-1, which indicated that β-5 bonds were relatively more stable. The cleavage of these linkages was increased when lignin was treated with H 2 O 2 . Among all the detected linkages, β-O-4 bonds exhibited the most extensive cleavage, demonstrating the non-enzymatic chemical reaction with H 2 O 2 mainly cleavage β-O-4 bonds. In contrast, when lignin was treated with secretome alone, β-O-4 bonds only exhibited limited cleavage compared to that with Secretome and cellular proteome profiles in P. putida GC-MS and NMR results demonstrated that P. putida secretome is able to depolymerize lignin. Enzyme assay revealed that the secretome exhibited the oxidase and peroxidase activity. However, isoenzymes with similar activities cannot be distinguished based on simple activity analysis. To deeply characterize the secretome, mass spectrometry-based global proteomics was utilized to profile the proteome of secretome and intracellular extracts in P. putida KT2440. Bacterial secretome and cell pellet were harvested at 120 h of fermentation with the highest oxidase and peroxidase activity. Only those proteins presented in more than three replicates were considered as reliable detection and quantitation. 
Results showed that 1312 proteins were identified in the secretome sample, which was lower than that in the intracellular extracts for 2388 identified proteins. Around 94.8% of the secretome proteins were shared with the intracellular protein, and only 68 proteins were exclusively detected in the secretome (Fig. 4a). Secretome proteins were then organized by function and presented in Fig. 4b. Despite the other function and unknown function groups, the oxidoreductase group was the most abundant among the rest of the functional groups, which accounted for 14.1% in secretome proteome. Moreover, six glutathione S-transferases were detected and grouped in the transferase group. A deeper analysis of oxidoreductase is presented in Fig. 4b as well. The dehydrogenase group was the most abundant (42.2%), followed by reductase (17.3%), peroxidase (7.6%), oxidase (6.5%), monooxygenase (4.9%), and dioxygenase (2.2%). Notably, there are 14 peroxidases detected, such as Dyp-type peroxidase (PP_3248), cytochrome c551 peroxidase (PP_2943), and alkyl hydroperoxide reductase (e.g., ahpC). Besides, 12 oxidases were detected, including three multicopper oxidases (copA/B and CumA). Besides, a couple of dehydrogenases were detected in the secretome, including NAD(P)H dehydrogenase (e.g., PP_1644), choline dehydrogenase (e.g., betA), aldehyde dehydrogenase (e.g., aldB-I), and alcohol dehydrogenase(e.g., PP_2827). In addition, some hydrogen peroxide alleviating enzymes were also detected, such as catalase (e.g., katG), superoxide dismutase (sodB), and thioredoxin (e.g., trx). Overall, proteomics analysis revealed the oxidoreductase enzymes in the secretome, which not only confirmed the enzyme assay results, but also revealed the specific ability to selectively depolymerize lignin. Besides, the addition of H 2 O 2 stimulated the resinol linkage cleavage, suggesting the peroxidase might be involved in β-β bond degradation. Therefore, peroxidase might be involved in the resinol linkage cleavage and forming vanillic acid as the intermediate (Fig. 5). NMR results demonstrated the secretome alone contributed to spirodienone linkage cleavage, suggesting the β-1 bond cleavage capacity in the secretome. NCBI blast revealed there was no homologous protein for lsdE and lsdA in P. putida KT2440 [41].Therefore, there might be other mechanisms for β-1 linkage in P. putida. Besides, the presence of H 2 O 2 stimulating the β-1 cleavage, and Mn 2+ addition further enhanced β-1 cleavage, suggesting the peroxidase might be involved in β-1 cleavage as well. Therefore, the β-1 linkage degradation pathway for P. putida KT2440 is presented in Fig. 5. Discussion Overall, the extracellular lignin degradation mechanism has been revealed by P. putida KT2440 via secretomic analysis. The enzymatic assay demonstrated that the secretome contains active oxidase and peroxidase. The following lignin degradation rate analysis proved that the secretome could degrade 8.6% of the lignin, and the addition of H 2 O 2 and Mn 2+ would increase the lignin degradation rate to 14.5%. C-C bond cleavage was observed by 2D NMR in secretome alone treatment, and β-O-4 bond and C-C bond (β-β and β-1) cleavage were elevated when H 2 O 2 was introduced to secretome due to the activated peroxidase system. Further Mn 2+ addition enhanced β-O-4 and C-C bond (β-β and β-1) cleavage. GC-MS results reinforced the NMR results by demonstrating elevated peak areas of aromatic monomers, such as vanillin and vanillic acid were correlated with C-C and C-O bonds cleavage. 
Proteomics results revealed the different groups of oxidoreductases involved in lignin degradation, and the degradation pathways were proposed. The bacterial lignin degradation process requires extensive reducing power and energy to cope with the polymeric structure and corresponding oxidative stress during aromatic compound catabolism [34]. Therefore, many studies chose to co-fermentation lignin with nutrient-rich substrates like glucose [17,33,35,51]. Our results demonstrated that the secretome harbored active oxidase, suggesting the presence of multicopper oxidase in secretome generated from P. putida grown on glucose. Moreover, active peroxidase was also observed in secretome, suggesting that besides Dyp-type peroxidase, other potential peroxidases are also involved in lignin catabolism. Proteomics analysis revealed that the secretome contained multicopper oxidase (CopA) and Dyp-type peroxidase (PP_3248) and presented abundant oxidoreductase enzymes. Therefore, our results demonstrated that P. putida also secreted active ligninolytic enzymes into extracellular space when grown on nutrient-rich media (e.g., glucose). Our results showed that secretome alone exhibited limited lignin degradation capacity, which was lower than in the P. putida KT2440 strain. Moreover, our results further demonstrated that the addition of H 2 O 2 and Mn 2+ can activate peroxidases and increase lignin degradation, suggesting that the synergistic effect between bacteria and the enzyme system could be enhanced through chemical addition on lignin degradation. H 2 O 2 is the oxidant and electron acceptor for bacterial peroxidase during the enzymatic reaction. Purified dye-decoloring peroxidase from Rhodococcus and Pseudomonas (e.g., Rh_Dyp, and PpDyp) revealed that the addition of Mn 2+ in the presence of H 2 O 2 further enhanced the enzyme activity [31,[52][53][54]. These results point to an alternative way to enhance the bacterial lignin bioconversion efficiency by introducing H 2 O 2 and Mn 2+ to work together with the enzyme system and bacteria cells. NMR results showed secretome alone presented extensive C-C bond cleavage and limited C-O bond cleavage, suggesting the function of oxidases in the secretome. Bacteria oxidases (e.g., multicopper oxidase) are laccaselike oxidase and exhibit the C-C bond cleavage. Our results demonstrated that vanillic acid (compound 11) peak area was presented in secretome + lignin treatments (Additional file 1: Fig. S4), suggesting the involvement of multicopper oxidases in lignin degradation through Cα-Cβ cleavage [40]. Apart from the compounds that were confirmed by previous literature, we also identified some aromatic dimer compounds in different treatments. For example, [1,1'-biphenyl]-3,3'-dicarboxaldehyde, 6,6'-dihydroxy-5,5'-dimethoxy-(compound 30) which might be correlated with 5-5' linkage cleavage and 3-(4-acetoxy-3-methoxyphenyl)-7-methoxy-4-oxo-4H-chromene (compound 26) might be related to tricin cleavage. However, more compounds were not correlated with specific 15:117 reaction mechanisms that showed more reactions in the lignin degradation process. Our results also demonstrated inconsistency in secretome treatments with a lower aromatic compound release but larger linkage cleavage than H 2 O 2 treatment, implying the low aromatic compounds released might be another bottleneck in the bacterial lignin degradation process. Future studies can focus on engineering oxidative degradation enzymes towards a higher aromatic compounds release. 
Proteomics-guided systems-biology approaches have been proven to discover potential pathways involved in lignin catabolism effectively [57]. Previously Dyp only reportedly appeared overexpressed in exoproteome of P. putida from the lignin-rich media [33]. However, our secretomic analysis revealed that Dyp, SOD, CopA, and other accessory enzymes such as catalase, dehydrogenases, reductases, dioxygenases, and oxidases all existed in the secretome from glucose which was different from the previously reported lignin inducing hypothesis. Overall, the extracellular lignin degradation pathways by P. putida are not well understood, primarily due to their broad prospects under varied environmental conditions [21,27,37,[40][41][42][43][44][45][46]. This distinctive study provided detailed information on lignin degradation products and oxidoreductase enzymes, resulting in new insights into the lignin depolymerization pathways in P. putida KT2440 for future metabolic engineering design to improve lignin bioconversion efficiency. Conclusions In summary, based on enzyme assay, GC-MS, NMR, and proteomics analysis, the extracellular lignin degradation mechanisms of the secretome from P. putida KT2440 were elucidated as follows: (1) oxidase (e.g., multicopper oxidase, CopA) exhibited limited β-Ο-4 bond cleavage capacity; (2) peroxidase (e.g., dye-decoloring peroxidase, DyP) was activated by H 2 O 2 and enhanced by Mn 2+ addition, stimulating the β-Ο-4, β-β, and β-1 bond cleavage; (3) degradation reaction mechanisms involved in Cα-Cβ cleavage, Cα-oxidation, Cα-hydroxylation, followed by aromatic intermediates release (e.g., vanillin and vanillic acid); (4) abundant oxidoreductase enzymes in P. putida secretome participated in lignin degradation. To be specific, Dyp-type peroxidase, multicopper oxidase, NAD(P) H dehydrogenase, glutathione reductase, glutathione S-transferase, and choline dehydrogenase participated in β-Ο-4 bond cleavage. Results showed that FAD-binding oxidoreductase involved in β-β bond cleavage, choline dehydrogenase and aldehyde dehydrogenase involved in β-5 bond cleavage, and peroxidases involved in β-β and β-1 bond cleavage. Overall, our study provides a comprehensive understanding and list of functional enzymes of the extracellular lignin degradation pathway in P. putida KT2440 in order to build a roadmap for future metabolic engineering design to improve the efficiency of lignin bioconversion. Further research can be carried out to overexpress identified functional proteins for specific lignin degradation mechanism studies. Protein concentration measurement 1 mL of fermentation broth was taken from each M9 medium culture every 24 h of fermentation with two replicates. The supernatant was separated by centrifuge at 8014 g (8000 rpm) for 5 min. Parallelly, another 1 mL cell culture was taken from each M9 medium for total protein measurement. Cell cultures were frozen at -80 °C for 15 min at first. The cell suspension was taken into a water bath (42 °C) for 5 min and repeated the freeze-thaw cycle three times. The supernatant and total protein concentrations were estimated by the Pierce ™ BCA protein assay kit (Thermo Scientific, San Jose, CA). Ligninolytic activity assays for intracellular and secretome A 1 mL cell culture was taken from each M9 medium culture at different fermentation times and stored in a 1.5-mL centrifuge tube for total enzyme activity assay measurement (intracellular + secretome) with two replicates. Cell cultures were frozen under -80 °C for 15 min at first. 
The cell suspension was placed in a water bath (42 °C) for 5 min, and the freeze-thaw cycle was repeated three times. Another 1 mL cell culture was also taken from each M9 medium culture and centrifuged at 8000 rpm for 5 min. Then, the supernatant was transferred to a new 1.5-mL centrifuge tube for secretome enzyme activity assay measurement. Laccase and peroxidase activity were examined daily in the culture supernatants by the oxidation of 5 mM DMP (synonym syringol) to dimeric cerulignone (ε469 = 55 000 M−1 cm−1) in 0.1 mM sodium malonate buffer at pH 7. For peroxidase activity assays, 0.1 mM H2O2 was also added to initiate the reaction. In addition, the latter assays were performed in the absence and the presence of Mn2+ (0.1 mM MnSO4). Absorbance from peroxidase activity was corrected for that caused by laccase activity. Measurements were carried out at room temperature. One unit (1 U) of activity is defined as the amount of enzyme releasing 1 μmol of product per minute under the defined reaction conditions [17]. Secretome harvest P. putida strains were grown in 100 mL M9 medium in a 250-mL flask with 5 g·L−1 glucose and 1 g·L−1 NH4Cl for 5 days. The fermentation broth was transferred to 50-mL centrifuge tubes. The supernatant and cells were separated by centrifugation at 8014 g (8000 rpm) for 5 min. The supernatant was transferred to a new pre-cooled flask (1 L) in an ice bath. The supernatant was centrifuged again at 8014 g (8000 rpm) for 5 min and then passed through a 0.22-μm sterile filter (Millipore® Stericup® filtration system) into a new flask (1 L) for the subsequent degradation experiments. Ligninolytic enzyme activities and protein concentration were measured prior to the lignin degradation reaction. Lignin degradation reaction with secretome Corn stover alkali lignin was purified following a published methodology (more details about the alkali lignin purification process, composition, and structural analysis can be found elsewhere) [60]. Degradation reactions combined the harvested secretome with 2 g·L−1 alkali lignin, with or without 0.1 mM H2O2 and 0.1 mM Mn2+, as described above. The reaction was stopped after 120 h, and the reaction solution was transferred to a 50-mL centrifuge tube. Lignin quantification followed the NREL Laboratory Analytical Procedures (LAPs) [61,62]. The reaction solution was first adjusted to pH 2 by adding hydrochloric acid (1 mol·L−1). The acid-insoluble lignin was harvested by centrifugation at 8014 g (8000 rpm) for 5 min. The supernatant was carefully transferred to a new 50-mL centrifuge tube for degradation product analysis. The acid-insoluble lignin was freeze-dried and weighed. The acid-insoluble lignin from secretome treatments was corrected against an acid-precipitated and weighed secretome control to remove the protein contribution. The lignin degradation rate was calculated as the difference between the lignin remaining after treatment and in the control, divided by the control [17]. Analysis of lignin degradation products The lignin degradation products in the reaction solution were determined by gas chromatography–mass spectrometry (GC-MS). Every treatment was run in three replicates. 15 mL of ethyl acetate was added to 15 mL of solution from the different samples in a 50-mL centrifuge tube and vortexed for 5 min at room temperature. The top ethyl acetate layer was then transferred to a glass tube. The remaining reaction solution was extracted again with 15 mL ethyl acetate and the two extracts were merged. The glass tubes were left in the fume hood for 7 days to let the ethyl acetate evaporate naturally to around 2 mL.
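The activity-unit definition and degradation-rate formula above imply straightforward arithmetic, sketched here in Python using the stated extinction coefficient; the absorbance slope, dilution factor, and lignin weights are hypothetical placeholders rather than measured values.

# Sketch of the DMP-based activity calculation (1 U = 1 umol product per minute;
# epsilon_469 = 55,000 M^-1 cm^-1) and of the lignin degradation rate. Inputs are
# hypothetical placeholders.

EPSILON_469 = 55_000.0  # M^-1 cm^-1 for the dimeric oxidation product of DMP

def volumetric_activity_u_per_l(dA_per_min, path_cm=1.0, dilution_factor=1.0):
    """Activity of the assayed sample in U/L (dA_per_min: slope of A469 over time)."""
    molar_per_min = dA_per_min / (EPSILON_469 * path_cm)  # mol L^-1 min^-1 in the cuvette
    umol_per_l_min = molar_per_min * 1e6                  # = U per litre of assay mix
    return umol_per_l_min * dilution_factor               # back-calculated to the sample

def lignin_degradation_rate(residual_treated_mg, residual_control_mg):
    """(control - treated) / control, as a percentage of lignin removed."""
    return (residual_control_mg - residual_treated_mg) / residual_control_mg * 100.0

print(volumetric_activity_u_per_l(dA_per_min=0.011, dilution_factor=10.0))  # ~2 U/L
print(lignin_degradation_rate(17.1, 20.0))                                  # ~14.5 %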
The ethyl acetate was concentrated to around 1 mL by passing nitrogen for 20 min and then filtered with 0.45 μm PTFE membrane into the GC vials [63]. The organic solvent-extracted samples (1 μL) were injected into a stream of He (carrier gas) flowing at 1.2 mL·min −1 into a DB5 (30 m × 0.250 μm × 0.25 μm) capillary column fitted in an Agilent Technologies 7890A GC system set in the splitless mode. The GC oven was programmed to reach 45 °C and maintained this temperature for 2 min; then ramp up at the rate of 15 °C min −1 until the temperature reached 200 °C, and held at this temperature for 1 min, after which the temperature was increased at a rate of 5 °C min −1 until the temperature reached 280 °C. At this temperature, it was held for 7 min. Eluting compounds were detected with an MS (Agilent Technologies 5975C) inert XL EI/CI MSD with a triple-axis detector and compared using NIST libraries [64]. 2D HSQC NMR analysis 2D-1 H-13 C heteronuclear single-quantum coherence (HSQC) nuclear magnetic resonance (NMR) spectra were obtained using a Bruker Avance III HD 500 MHz spectrometer operating at a frequency of 125.12 MHz for the 13 C nucleus. 30-50 mg of the dry lignin samples were dissolved in 0.6 mL deuterated dimethylsulfoxide (DMSO)-d 6 and the spectra were collected at 298 K. A standard Bruker adiabatic HSQC pulse sequence (hsqcetgpsisp2.2) was used with the following spetra acquisition condition: 1.0 s pulse delay, 64 scans, 1024 data points for 1 H, 256 increments for 13 C, and a 1 J C-H of 145 Hz. The 1 H and 13 C spectral widths are 13.0 and 220.0 ppm, respectively. The central DMSO solvent peak (δ 13 C/δ 1 H = 39.5/2.49 ppm) was used for chemical shifts calibration. HSQC spectra were processed and analyzed with Mestrenova (version 12.0.2) with a matched cosinebell apodization and 2 × zero filling in both dimensions. The content of the inter-linkages are expressed as a percentage relative to the total lignin subunits (G + H + S) [65]. Protein extraction and tryptic digestion P. putida strains were fed in 100 mL M9 medium in a 250 mL flask with 5 g·L −1 glucose and 1 g·L −1 NH 4 Cl. The experiment was conducted with four replicates. The cells and supernatant were harvested at 120 h of fermentation. The fermentation broth was centrifuged at 8014 g (8000 rpm) for 5 min using Eppendorf 5804 to separate the supernatant and cell pellet. The cell pellet was washed twice with 5 mL of 0.9% sodium chloride solution. The supernatant was filtered with 0.22-μm PTFE membrane into new 50-mL centrifuge tubes. Cell pellet and supernatant were stored at − 80 °C fridge for further protein extraction. Cell pellets were stored in regular 1.5-mL centrifuge tubes. Cell pellets were resuspended in a 250 μL lysis buffer solution (8 M urea, 75 mM NaCl in 100 mM NH 4 HCO 3 , pH 7.8), transferred to a 1.5-mL safelock centrifuge tube. A scoop of zirconia/silica beads (~ 100 μL) was added to each tube, and the bead beating experiment was performed by 8 rounds of 30 s using a Bullet Blender (Homogenizers, Atkinson, NH) [66]. After bead beating, a needle was used to poke a hole at the bottom of the 1.5-mL tube and put on a 15-mL falcon tube to collect the supernatant by centrifugation at 2000 rpm, 4 °C for 5 min. The beads were then washed with 100 µL lysis buffer and centrifuged at 2000 rpm at 4 °C for 5 min. The lysate was then transferred to a new 2-mL centrifuge tube and pellet the cellular debris at 14,000 rpm for 10 min at 4 °C. 
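The linkage quantification described above, where integrated HSQC cross-peak volumes are expressed relative to the total G + H + S aromatic subunits, can be illustrated with a small Python sketch; the integral values are hypothetical.

# Sketch of inter-unit linkage content expressed per 100 aromatic (G + H + S)
# subunits, as described for the HSQC analysis. Volume integrals are hypothetical.

def linkage_per_100_aromatics(linkage_volumes, g_volume, h_volume, s_volume):
    """Return each inter-unit linkage as a percentage of total lignin subunits."""
    total_aromatics = g_volume + h_volume + s_volume
    return {name: vol / total_aromatics * 100.0 for name, vol in linkage_volumes.items()}

volumes = {"beta-O-4": 4.2, "beta-beta": 0.9, "beta-5": 0.6, "beta-1": 0.3}
print(linkage_per_100_aromatics(volumes, g_volume=52.0, h_volume=30.0, s_volume=18.0))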
The supernatant was transferred to a new 2-mL centrifuge tube for further protein digestion procedures. Parallelly, 20 mL of supernatant was collected, and the proteins were first denatured by adding 15 g urea (final concentration 8 M) and incubating for 1 h at 37 °C. The supernatant was then concentrated using a 30-kDa filter (EMD Millipore, Billerica, MA) by centrifugation at 4000 rpm, 4 °C for 30 min. The concentrated supernatant was transferred to a clean 2-mL tube for further protein digestion procedures. The protein purification and digestion of intracellular protein and supernatant samples were conducted with the FASP Protein Digestion Kit (Expedeon, San Diego, CA) with trypsin (Promega, Madison, WI) following the manufacturer's instruction. The protein concentration was estimated by the Pierce ™ BCA protein assay (Thermo Scientific, San Jose, CA) and normalized to 0.1 μg μL −1 before LC-MS/MS analysis. Four biological replicates were applied during the entire process [34]. Proteomic data acquisition and analysis LC-MS/MS analysis was performed using an Orbitrap Fusion Lumos mass spectrometer (Thermo Scientific, San Jose, CA). Tryptic peptide digests were separated using a nanoACQUITY UPLC systems (Waters, Milford, MA) by reversed-phase HPLC with 110 min gradient time at a column flow rate of 200 nL/min. The detailed equipment parameters setup was described in a recent publication by Wang et al. [67] In terms of proteomic data analysis, raw MS/MS data files were processed with MaxQuant (version 1.6.7.0). After loading all the raw data and giving appropriate names, labelfree quantification (LFQ) algorithm was used with a minimum LFQ ratio count of 2 for relative quantification in the Group-specific parameters section. Trypsin was selected for digestion mode with a maximum of two missed cleavages. The peptide tandem mass spec raw data were searched against the Uniport FASTA files of strain P. putida KT2440 (released at 07, April 2017, Taxonomy ID: 160488). In the global parameters section, the second peptides and match between runs features were enabled with a 0.7-min match time window and 20-min alignment time window. The spectral level false discovery rate (FDR, q value) was < = 1% based on a decoy search [68]. Other parameters just followed the default settings. The protein intensities obtained from Maxquant software were log2 transformed. The Venn chart was generated by Venn-Diagram-Plotter (version 1.6.7458, https:// github. com/ PNNL -Comp-Mass-Spec/Venn-Diagram-Plotter/releases/ tag/v1.6.7458). Pie chart was generated by Origin software (version 9.8.5.204).
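A brief Python sketch of the post-MaxQuant bookkeeping mentioned above, log2-transforming LFQ intensities and counting shared versus secretome-only identifications, is shown below; the protein IDs and intensities are hypothetical placeholders, not values from this dataset.

# Sketch of the post-MaxQuant steps: log2 transform of LFQ intensities and a
# simple shared / secretome-only tally for the Venn comparison. Values are hypothetical.
import math

lfq_secretome = {"PP_3248": 2.3e7, "copA": 8.1e6, "katG": 4.4e6}
lfq_intracellular = {"PP_3248": 1.9e7, "copA": 6.0e6, "ahpC": 3.2e7}

log2_secretome = {p: math.log2(i) for p, i in lfq_secretome.items() if i > 0}

shared = set(lfq_secretome) & set(lfq_intracellular)
secretome_only = set(lfq_secretome) - set(lfq_intracellular)
print(f"shared: {len(shared)}, secretome-only: {len(secretome_only)}")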
2022-10-31T14:04:05.692Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "b64793fa848fc89bffac9b349ba0fd80868c8aef", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "2f8009e62add7d8e4698df916a4ab6708f1e2768", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
7884249
pes2o/s2orc
v3-fos-license
Eating Behaviour and Weight in Children Objective: To test the hypothesis that quantitative variation in eating behaviour traits shows a graded association with weight in children. Design: Cross-sectional design in a community setting. Subjects: Data were from 406 families participating in the Physical Exercise and Appetite in CHildren Study (PEACHES) or the Twins Early Development Study (TEDS). Children were aged 7 to 9 years (PEACHES) and 9 to 12 years old (TEDS). Measurements: Weights and heights were measured by researchers. BMI SD-scores were used to categorise participants into healthy-weight, overweight and obese groups, with an additional division of the healthy-weight group into higher- and lower-healthy-weight at the 50th centile. Eating behaviour traits were assessed with the Child Eating Behaviour Questionnaire (CEBQ), completed by the parent on behalf of their child. Linear trend analyses compared CEBQ sub-scale scores across the five weight groups. Results: Satiety Responsiveness/Slowness in Eating and Food Fussiness showed a graded negative association with weight, while Food Responsiveness, Enjoyment of Food, Emotional Overeating and Desire to Drink were positively associated. All effects were maintained after controlling for age, sex, ethnicity, parental education and sample. There was no systematic association with weight for Emotional Undereating. Conclusion: These results support the idea that approach-related and avoidance-related appetitive traits are systematically (and oppositely) related to adiposity, and not exclusively associated with obesity. Early assessment of these traits could be used as indicators of susceptibility to weight gain. Introduction Research into the behavioural correlates of obesity has identified a variety of eating behaviour traits in obese children and adults (1)(2)(3). In the paediatric literature, behavioural studies have shown that obese children have lower responsiveness to internal satiety signals (4)(5)(6), eat faster during the course of a meal (7)(8)(9) and are more sensitive to external food cues (10) than healthy-weight children. Heightened desire to consume may go beyond food to include palatable drinks, with intake of high-energy drinks being associated with weight gain (11). Food fussiness (or pickiness) has also been linked with weight, with picky girls having lower BMI and lower rates of overweight (12;13), although this finding has not always been replicated (14)(15)(16). Ingestive responses to stress or distress have also been proposed as significant influences on food intake. The idea originated in the 'psychosomatic theory' of obesity, which proposed that eating in response to emotional distress rather than hunger was one cause of excessive weight gain (17), while more recent analyses suggest that stress can deregulate eating through effects on the reward system (18). In practice, empirical studies analysing associations between food intake, weight and stress have not produced entirely consistent results (19;20). However, one recent study in young adults found that those who had either gained or lost weight reported more stress than those whose weights were stable (21), suggesting that individuals may vary in the extent to which they experience hyperphagic or hypophagic responses to emotional distress. It has recently been argued that appetitive traits are not only pathogenic for obesity per se, but vary quantitatively across the entire weight distribution and could determine risk of positive energy balance (1;2). 
This is consistent with evidence that the same genetic influences determine variation in weight within the healthy range as determine the extremes of obesity (22). Positive responses to food -enjoyment, responsiveness to palatability and rapid consumption -are hypothesised to promote food intake, while sensitivity to internal cues of satiety or fussiness about food are likely to reduce intake. In environments with multiple opportunities to eat highly palatable, energy-dense foods, these appetitive traits will moderate the risk of weight gain. One recent study used the Child Eating Behaviour Questionnaire (CEBQ) (23) to index a number of positive and negative appetitive traits, and compared obese/overweight and healthy-weight children, as well as a small sample of underweight children (24). Consistent with previous work, obese/overweight children showed higher responsiveness to food cues, more emotional eating, lower satiety responsiveness and less fussiness than healthy-weight children. In addition, the underweight group differed from the healthy-weight group, with lower responsiveness to food cues, lower emotional eating, higher satiety responsiveness and greater fussiness. These results supported the idea of a graded association between appetite and weight, rather than an aberrant eating style that is specific to clinically overweight populations, although the authors noted that the small sample size for the underweight group, and the different recruitment methods for overweight/obese and healthyweight/underweight children, limited the confidence with which conclusions could be drawn. However, a very similar association was reported in a recent community-based study, although this study assessed only two aspects of appetite, Satiety Responsiveness and Enjoyment of Food (1). In the present study we examined associations with weight for the seven appetitive traits included in the CEBQ scale in a large, community-based, sample of 7 to 12-year-old children. To examine the patterning of associations between appetite and weight in more detail, we not only compared obese, overweight, healthy-weight and underweight groups, but also subdivided the healthy-weight group into higher-healthy-weight and lower-healthyweight at the 50 th centile. We predicted that CEBQ sub-scales indicating 'food approach' (Enjoyment of Food, Food Responsiveness, Emotional Overeating, Desire to Drink) would show a positive association with weight, while Satiety Responsiveness/Slowness in Eating, Food Fussiness and Emotional Under-eating would be highest in the thinnest children and show a negative association with weight. Method Participants The data for these analyses came from two community-based samples. One sample consisted of 7 to 9-year-old children and their parents recruited through five schools around London as part of the Physical Exercise and Appetite in CHildren (PEACHES) study (Sample 1). This is a longitudinal study examining associations between eating behaviour, physical activity and adiposity during childhood, and the data reported here come from the first year of the study. Out of a total of 531 families invited to take part, 400 parents (206 parents of boys and 194 parents of girls), replied to the invitation giving consent to their child's participation. Parents were sent a questionnaire containing the CEBQ and sociodemographic questions (including education level and ethnicity), of which 268 were returned, a response rate of 67%. 
The second sample was from a sub-sample of 214 families from the Twins Early Development Study (TEDS) -a cohort study beginning when the twins were 4 years-old (25), enrolled in a longitudinal study of eating behaviour and weight. Of these, 100 families had overweight or obese parents and 114 had normal-weight parents. Families were matched on social class. 173 (81%) families participated in the 2006 follow-up. The data here come from one child per family selected at random from the follow-up when the children were aged 9 to 12 years. Children were weighed and measured and parents were asked to complete the CEBQ and questions relating to their socio-economic status (including education level and ethnicity) during a study home visit. Psychometric measures The CEBQ is 35-item validated instrument designed to assess a range of eating behaviours in children (23). It has good internal consistency, test-retest reliability, and stability over time (23;26) and has been shown to be related to food intake in behavioural tests (27). It includes four sub-scales that measure food approach behaviours (Enjoyment of Food, Food Responsiveness, Emotional Overeating, Desire to Drink) and four that index more avoidanttype responses (Satiety Responsiveness/Slowness in Eating, Emotional Under-eating, Food Fussiness). Because Satiety Responsiveness and Slowness in Eating have been found to load onto the same factor (23) they were combined to form a single sub-scale. Response options for all sub-scales were from 'never' to 'always' on a 1-5 likert scale. Anthropometrics In both samples, a stadiometer (Leicester height measure, Seca, Birmingham, UK) was used to measure children's height to the nearest millimetre. Children were weighed to the nearest tenth of a kilogram using Tanita TBF-300MA Body Composition Analyser in sample 1 and a digital Tanita scale in sample 2 (Tanita Corporation, Tokyo, Japan). In both samples, waist circumference was measured using standard instructions (28). BMI was calculated from children's weight and height and the Imsgrowth macro (http:// homepage.mac.com/tjcole) was used to transform BMI into age and sex appropriate BMI SD-scores from British 1990 reference data (29). BMI SD-scores were categorised according to recommended groupings for underweight (thinness grade 1, 2, 3; (30)) and IOTF (International Obesity Taskforce) categories for healthy-weight, overweight and obese (31). The healthy-weight group was subdivided into lower-healthy-weight (≤50th centile) and higher-healthy-weight (>50th centile but not meeting criteria for overweight) for comparison of eating behaviours across the weight spectrum. Statistical analysis Relationships between possible covariates (ethnicity, education, age, sex) and BMI SD-score were assessed using Pearson's correlations and t-tests where appropriate. Mean scores for each CEBQ sub-scale were calculated if at least 70% of the items (or 67% in the case of the 3-item 'Desire to Drink' sub-scale) were completed, in line with recommendations for dealing with missing data when calculating sub-scale scores (32). Cases with any missing sub-scales were excluded. To establish that it was appropriate to combine data from the two samples, we tested the sample-by-CEBQ interactions for each sub-scale adjusting for age, sex, ethnicity and parent education in regression analyses on BMI SD-score. 
When significant, we inspected regression coefficients, confidence intervals and mean scores to check the direction and pattern of the association between CEBQ sub-scale scores and BMI SD in each of the two samples separately. Samples were combined if the patterning of these figures were comparable. For each sub-scale, trend analysis was performed across the five weight groups in order to examine whether adiposity showed a graded association with eating behaviours. When the weighted linear term was significant, each sub-scale was entered into separate regression models, adjusted for age, sex, ethnicity, parent education and sample to determine the proportion of variance explained in BMI SD-score. All analyses were carried out using SPSS v14.0 (SPSS Inc., Chicago, IL). Statement of Ethics Full ethical approval was gained from University College London Committee on the Ethics of Non-NHS Human Research. All applicable institutional and governmental regulations concerning the ethical use of human volunteers were followed in both studies. Sample characteristics Complete data were available for 406 children (Sample 1: n=239; Sample 2: n=167) and they are described in Table 1. In Sample 1, 49% were girls (n=117). Based on IOTF cutoffs, 13.0% of the children were overweight and 3.3% were obese which is low compared with population data for the UK (Health Survey for England, 2006; available from the UK Data Archive: www.data-archive.ac.uk). BMI SD-scores were significantly higher in boys (0.28 ± 1.28) than girls (−0.12 ± 1.27) (t(238)=2.41, p=.016). There was high ethnic diversity compared with UK population data, with 46% of participating children classified as non-white (Office of National Statistics, 2005, available at: www.ons.gov.uk/). Parent education level was similar to the UK average (National Statistics, 2005: www.dcsf.gov.uk/). In Sample 2, 60.5% were girls. Based on IOTF cut-offs, 18.6% were overweight and 7.2% were obese, which is again slightly lower than UK population data (HSE, 2006). Differences in BMI SD-scores between boys (0.55±1.2) and girls (0.40±1.2) were not significant. The majority of children were white (92.9%), which is comparable to UK population data (ONS, 2005). Parent education level was slightly lower than the national average (National Statistics, 2005). CEBQ sub-scales Sub-scales were normally distributed (skewness and kurtosis between 1 and −1) and had good internal consistency, with Cronbach's alphas over .74. There was a significant interaction between study sample and Enjoyment of Food for BMI SD-score in the adjusted model (F(1,398) Linear Trend Analysis CEBQ sub-scale scores are shown by IOTF category in Table 2. There were significant positive linear trends by weight group for Food Responsiveness (p<.001), Emotional Overeating (p<.001), Enjoyment of Food (p<.001) and Desire to Drink (p=.037) (Box 1), and significant negative linear trends for Satiety Responsiveness/Slowness in Eating (p<. 001) and Food Fussiness in girls (p=.02) and boys (p=.05). The linear trend was not significant for Emotional Undereating (Box 2). Table 3 presents a summary of the regression analyses. The baseline model was significant (p=.001) and explained 4.9% of the variance in BMI SD-score. All sub-scales significantly predicted BMI SD-score, explaining 1.0% to 6.3% of additional variance. 
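Purely as an illustration of the scoring rule and the covariate-adjusted models described in the Methods (the original analyses were run in SPSS; the file name and all column names here are hypothetical), such an analysis could be sketched in Python with pandas and statsmodels as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

def subscale_mean(items: pd.DataFrame, min_complete: float = 0.70) -> pd.Series:
    # Mean of the 1-5 Likert items for one CEBQ sub-scale; returns NaN when fewer
    # than min_complete of the items were answered (the 70% rule described above,
    # relaxed to 67% for the 3-item Desire to Drink sub-scale).
    enough = items.notna().sum(axis=1) >= min_complete * items.shape[1]
    return items.mean(axis=1, skipna=True).where(enough)

df = pd.read_csv("cebq_children.csv")                     # hypothetical data file
df["satiety_resp"] = subscale_mean(df.filter(regex=r"^sr\d+$"))

# Covariate-adjusted regression of BMI SD-score on one sub-scale
# (age, sex, ethnicity, parental education and sample, as in the paper).
model = smf.ols(
    "bmi_sds ~ satiety_resp + age + C(sex) + C(ethnicity) + C(parent_edu) + C(cohort)",
    data=df.dropna(subset=["satiety_resp", "bmi_sds"]),
).fit()
print(model.params["satiety_resp"], model.pvalues["satiety_resp"])
```

The linear trend across the five ordered weight groups can be examined analogously by entering the group code as a numeric predictor in place of the sub-scale score.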
For every unit increase in Food Responsiveness, Enjoyment of Food, Emotional Overeating and Desire to Drink, BMI SD increased by 0.39 (p<.0001), 0.25 (p=.003), 0.41 (p<.0001) and 0.16 (p=.04) respectively. Every unit increase in Satiety Responsiveness/Slowness in Eating corresponded to a 0.49 decrease in BMI SD-score (p<.0001). Every unit increase in Food Fussiness corresponded to a 0.27 decrease in girls' BMI SD-score but was not significantly associated with adiposity in boys. Analyses with waist SD-score showed similar results and they are not presented here. DISCUSSION An appetitive profile characterised by more responsiveness to and enjoyment of food, more emotional eating, lower responsiveness to internal satiety cues, and lower fussiness, was quantitatively associated with weight in a large sample of 7 to 12-year-old, ethnically-diverse, British children. Similar results were obtained using BMI SD-scores and waist SDscores as outcomes and all effects were independent of age, ethnicity and parental education. These findings are consistent with earlier research demonstrating that obese children tend to eat more rapidly (7;8), are less sensitive to satiety cues (4;5), and are more responsive to food cues (10) than healthy-weight children. They are also comparable to results recently reported by Viana and colleagues in 3 to 13-year-old Portuguese children, although theirs was a clinical sample with a very small underweight group and they did not subdivide the healthy-weight group (24). They found that the lean children had the lowest positive responses to food and highest avoidant responses, and we found the same results in this study. These results might equally be construed as showing that certain traits are protective from contemporary obesogenic environments as that the opposite traits are risk factors, lending support to the behavioural susceptibility theory of obesity (27). Our results add to debate regarding the association between fussy eating and BMI (14-16;33). They suggest that, at least in childhood, fussiness could be protective against overeating by reducing the effective choices for a child because of a greater number of dislikes. Alternatively, fussiness may reflect the reverse of enjoyment, so that if eating per se is not as well liked, then desire to eat may not be enough to overcome fussiness. As far as we are aware, this is the first study to report an association between wish to drink and adiposity, at least in an environment where many of the available drinks are high in energy. The literature in this area is limited and the mechanism by which desire to drink might be related to weight needs more investigation. At one level, it could simply be an expression of wanting something in the mouth; if such an individual is offered caloric drinks, energy intake would be a side-effect, and energy intake from drinks is notoriously non-compensated (34). Alternatively, it could be a specific liking for higher calorie drinks. One study found that higher-risk children (based on maternal pre-pregnancy weight) consumed more fruit juice and less milk at 3 and 4 years, and more sweetened-drinks at 6 years, than lower-risk children, suggesting that caloric drink consumption is a behavioural phenotype for obesity risk (35). Extension of the Desire to Drink subscale to include specific types of drink is necessary to explore these ideas. 
Another possibility is that increased thirst is associated with weight as a consequence of snack intake; salt -often present in savoury snacks -increases thirst, and recent research describes an association between salt and total fluid and sweet-drink consumption in 4 to 18-year-olds (36). Assessing associations between thirst and food and drink intake is an interesting line of further study. Our finding that heavier children show higher emotional eating supports the hypothesis that a hyperphagic response to stress and distress increases the risk of weight gain (37). Intake in emotional states tends to favour sweet or fatty foods, which would augment the impact of energy intake (38;39). Emotional undereating was not related to BMI SD-score in this study, and has only been weakly related to BMI in previous work (24). However, we observed higher emotional undereating in both the heaviest and lightest groups, and further research is needed to determine whether this is a true effect, but it does at least indicate that emotional overeating and undereating are not opposites. Associations between food responsiveness and enjoyment of food and weight group were stronger in sample 2 (from families with twins). Age differences did not explain the effect, because it persisted after controlling for age. It is possible that these effects are due to residual cultural differences as our sample size did not allow full adjustment of the variety of ethnic groups in sample 1. However, because there was no significant sample interaction with the other eating behaviours, it is possible that the findings are due to sampling error. Future research will be needed to identify environmental factors that might moderate the associations between eating behaviour and weight outcomes. Although limited to English speakers, this study is strengthened by having an ethnically diverse, community-based sample. Including waist circumference as an alternative measure of adiposity -and finding the same results as with BMI SD score -was also valuable in light of evidence that abdominal fatness is associated with negative health outcomes in children (40)(41)(42). These features permit generalisability of results to the wider UK population. The study also has limitations. The cross-sectional design means it is not possible to ascertain whether the eating behaviours we assessed were consequences or determinants of adiposity. However, this issue has been most troublesome in studies comparing clinical samples of obese children with healthy-weight groups, because appetitive differences in the obese might reasonably be thought to be a consequence of the worry, dieting, family arguments, school teasing, etc, which obese children receive. The fact that eating behaviours show significant stability (26), and that very thin children were shown to have lower appetites than thinner healthy-weight children, who in turn had lower appetites than fatter healthy-weight children makes it less likely that changes in eating behaviour are a psychosocial consequence of higher weight. Children in sample 1 will be followed up and so it will be possible to assess whether developmental changes during early adolescence (e.g. pubertal growth) influence appetite. Another limitation is that the sample was relatively lean in comparison with UK population data, which we assume to be due to self-selection out of the study of families with heavier children. 
In sample 1, it is more likely that heavier children opt-out because measurement takes place at school, whereas in sample 2, measurements took place at home. This may explain weight differences between the samples. However a smaller variation in weight compared to the general population means associations with eating behaviours are more likely to be underestimated than overestimated. The evidence that CEBQ scores are markers of risk of weight gain makes it important to understand the determinants of these behaviours. Recent work points to a genetic basis (43; 44) with one study demonstrating that genetic influences on weight are partially mediated by satiety (45). As obesity has a strong heritable component, eating behaviours may be one mechanism by which familial risk is transmitted. As a result, food environments in which high energy foods (and drinks) are easily accessible will allow at-risk eating behaviours to manifest. Genetically sensitive, longitudinal designs are necessary to understand the respective influence of genes and environment on the expression of eating behaviours and subsequent weight gain. Because weight tracks into adulthood, early intervention to prevent expression of 'at risk' eating behaviours could be fundamental in curbing further increases in obesity. This study makes it possible to identify healthy-weight individuals who are at risk of weight gain by virtue of their eating behaviour profiles. The challenge then is to discover whether appetitive traits can be modified in ways that reduce obesity risk. Knowing that eating behaviours are quantitatively distributed, it may also be possible to learn more about the behavioural profile that protects children from weight gain and constitutes a 'lean phenotype'. This could be used as a point of departure from which to develop interventions to modify risky eating behaviours, as well as offering insights into tackling the obesogenic environment. Table 2 Unadjusted mean (SD) CEBQ scale score by international obesity taskforce category (IOTF).
2017-11-08T19:00:20.879Z
2008-11-11T00:00:00.000
{ "year": 2008, "sha1": "6eceb6e6ce0f8a00de205f05d6e2156562157445", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc2817450?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "2d1f638f579a169787ceef6b45e8265e75746fde", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
5972789
pes2o/s2orc
v3-fos-license
Conformal mapping of some non-harmonic functions in transport theory Conformal mapping has been applied mostly to harmonic functions, i.e. solutions of Laplace's equation. In this paper, it is noted that some other equations are also conformally invariant and thus equally well suited for conformal mapping in two dimensions. In physics, these include steady states of various nonlinear diffusion equations, the advection-diffusion equations for potential flows, and the Nernst-Planck equations for bulk electrochemical transport. Exact solutions for complicated geometries are obtained by conformal mapping to simple geometries in the usual way. Novel examples include nonlinear advection-diffusion layers around absorbing objects and concentration polarizations in electrochemical cells. Although some of these results could be obtained by other methods, such as Boussinesq's streamline coordinates, the present approach is based on a simple unifying principle of more general applicability. It reveals a basic geometrical equivalence of similarity solutions for a broad class of transport processes and paves the way for new applications of conformal mapping, e.g. to non-Laplacian fractal growth. Introduction Complex analysis is one of the most beautiful subjects in mathematics, and, in spite of involving imaginary numbers, it has remarkable relevance for 'real' applications. One of its most useful techniques is conformal mapping, which transforms planar domains according to analytic functions, w = f (z), with f ′ (z) = 0. Geometrically, such mappings induce upon the plane a uniform, local stretching by |f ′ (z)| and a rotation by arg f ′ (z). This 'ampli-twist' interpretation of the derivative implies conformality, the preservation of angles between intersecting curves (Needham 1997). The classical application of conformal mapping is to solve Laplace's equation, i.e. to determine harmonic functions, in complicated planar domains by mapping to simple domains. The method relies on the conformal invariance of Eq. (1.1), which remains the same after a conformal change of variables. Before the advent of computers, important analytical solutions were thus obtained for electric fields in capacitors, thermal fluxes around pipes, inviscid flows past airfoils, etc. (Needham 1997;Churchill & Brown 1990;Batchelor 1967). Today, conformal mapping is still used extensively in numerical methods (Trefethen 1986). Currently in physics, a veritable renaissance in conformal mapping is centering around 'Laplacian-growth' phenomena, in which the motion of a free boundary is determined by the normal derivative of a harmonic function. Continuous problems of this type include viscous fingering, where the pressure is harmonic (Saffman & Taylor 1958;Bensimon et al. 1986;Saffman 1986), and solidification from a supercooled melt, where the temperature is harmonic in some approximations (Kessler et al. 1988;Cummings et al. 1999). Such problems can be elegantly formulated in terms of time-dependent conformal maps, which generate the moving boundary from its initial position. This idea was first developed by Polubarinova-Kochina (1945a;1945b) and Galin (1945) with recent interest stimulated by Shraiman & Bensimon (1984) focusing on finite-time singularities and pattern selection (Howison 1986;Tanveer 1987;Dai et al. 1991;Ben Amar 1991;Howison 1992;Tanveer 1993;Ben Amar & Brener 1996;Ben Amar & Poiré 1999;Feigenbaum et al. 2001). 
Stochastic problems of a similar type include diffusion-limited aggregation (DLA) (Witten & Sander 1981) and dielectric breakdown (Niemeyer et al. 1984). Recently, Hastings & Levitov (1998) proposed an analogous method to describe DLA using iterated conformal maps, which initiated a flurry of activity applying conformal mapping to Laplacian fractal-growth phenomena (Davidovitch et al. 1999(Davidovitch et al. , 2000Barra et al. 2002aBarra et al. , 2002bStepanov & Levitov 2001;Hastings 2001;Somfai et al. 1999;Ball & Somfai 2002). One of our motivations here is to extend such powerful analytical methods to fractal growth phenomena limited by non-Laplacian transport processes. Compared to the vast literature on conformal mapping for Laplace's equation, the technique has scarcely been applied to any other equations. The difficulty with non-harmonic functions is illustrated by Helmholtz's equation, (1.2) which arises in transient diffusion and electromagnetic radiation (Morse & Feshbach 1953). After conformal mapping, w = f (z), it acquires a cumbersome, non-constant coefficient (the Jacobian of the map), Similarly, the bi-harmonic equation, which arises in two-dimensional viscous flows (Batchelor 1967) and elasticity (Muskhelishvili 1953), transforms with an extra Laplacian term (see below), (1.5) In this special case, conformal mapping is commonly used (e.g. Chan et al. 1997;Crowdy 1999Crowdy , 2002Barra et al. 2002b) because solutions can be expressed in terms of analytic functions in Goursat form (Muskhelishvili 1953). Nevertheless, given the singular ease of applying conformal mapping to Laplace's equation, it is natural to ask whether any other equations share its conformal invariance, which is widely believed to be unique. In this paper, we show that certain systems of nonlinear equations, with nonharmonic solutions, are also conformally invariant. In section 2, we give a simple proof of this fact and some of its consequences. In section 3, we discuss applications to nonlinear diffusion phenomena and show that single conformally invariant equations can always be reduced to Laplace's equation (which is not true for coupled systems). In section 4, we apply conformal mapping to nonlinear advection-diffusion in a potential flow, which is equivalent to streamline coordinates in a special case (Boussinesq 1905). In section 5, we apply conformal mapping to nonlinear electrochemical transport, apparently for the first time. In section 6, we summarize the main results. Applications to non-Laplacian fractal growth are a common theme throughout the paper. (See sections 2, 4, and 6.) The standard application of conformal mapping is based on two facts: Mathematical Theory 1. Any harmonic function, φ, in a singly connected planar domain, Ω w , is the real part of a analytic function, Φ, the 'complex potential' (which is unique up to an additive constant): φ = Re Φ(w). Presented like this, it seems that conformal mapping is closely tied to harmonic functions, but Fact 2 simply expresses the conformal invariance of Laplace's equation: A solution, φ(w), is the same in any mapped coordinate system, φ(f (z)). Fact 1, a special relation between harmonic functions and analytic functions, is not really needed. If another equation were also conformally invariant, then its non-harmonic solutions, φ(w, w), would be preserved under conformal mapping in the same way, φ(f (z), f (z)). (See Fig. 1.) 
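A quick finite-difference check of Fact 2 (an illustration added here, not taken from the paper; the particular choices phi(w) = Re(w^2) and f(z) = z + 1/z are arbitrary) shows that composing a harmonic function with an analytic map leaves it harmonic:

```python
import numpy as np

h = 1e-3
x = np.arange(2.0, 2.2, h)
y = np.arange(1.0, 1.2, h)
Z = x[None, :] + 1j * y[:, None]      # grid well away from the singularity of f at z = 0

f = Z + 1.0 / Z                        # an (arbitrary) analytic map w = f(z)
phi = np.real(f**2)                    # phi(w) = Re(w^2) is harmonic; evaluated at w = f(z)

# 5-point Laplacian of phi(f(z)) with respect to the z variables
lap = (phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
       - 4.0 * phi[1:-1, 1:-1]) / h**2
print(np.abs(lap).max())               # close to zero (discretization error only)
```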
In order to seek such non-Laplacian invariant equations, we review the transformation properties of some basic differential operators. Following Argand and Gauss, it is convenient to represent two-dimensional vectors, a = a xx + a yŷ , as complex numbers, a = a x + a y i. We thus express the gradient vector operator in the plane as a complex scalar operator †, which has the essential property that ∇f = 0 if and only if f is analytic, in which case, ∇f = 2f ′ (Needham 1997). Since a · b = Re ab, the Laplacian operator can be expressed as, ∇ · ∇ = Re ∇∇ = ∇∇ (if mixed partial derivatives can be taken in any order). Similarly, the 'advection operator', which acts on two real functions φ and c, takes the form, ∇φ · ∇c = Re (∇φ)∇c. Under a conformal mapping of the plane, w = f (z), the gradient transforms as, ∇ z = f ′ ∇ w . This basic fact, combining the ampli-twist property and the chain rule, makes it easy to transform differential operators (Needham 1997). The Laplacian transforms as, where ∇ z f ′ = 0 because f ′ is also analytic. This immediately implies the conformal invariance of Laplace's equation (1.1), and the non-invariance of Helmholtz's equation (1.2). The transformation of the bi-harmonic equation (1.4) in Eq. (1.5) is also easily derived with the help of Needham's identity, ∆|f | 2 = 4|f ′ | 2 , applied to f ′ . Everything in this paper follows from the simple observation that the advection operator transforms just like the Laplacian, Each operator involves a 'dot product of two gradients', so the same Jacobian factor, |f ′ | 2 , appears in both cases. The transformation laws, Eq. (2.2) and Eq. (2.3), are surely well known, but it seems that some general implications have been overlooked, or at least not fully exploited in physical applications. (b) Conformally Invariant Systems of Equations The identities (2.2) and (2.3) imply the conformal invariance of any system of equations of the general form, where the coefficients a i (φ) and a ij (φ) may be nonlinear functions of the unknowns, φ = (φ 1 , φ 2 , . . . , φ N ), but not of the independent variables or any derivatives of the unknowns. Thus we arrive at our main result: Theorem 2.1. (Conformal Mapping Theorem.) Let φ (w, w) satisfy Eq. (2.4) in a domain Ω w of the complex plane, and let w = f (z) be a conformal mapping from Ω z to Ω w . Then φ(f (z), f (z)) satisfies Eq. (2.4) in Ω z . Whenever the system (2.4) can be solved analytically in some simple domain, the Theorem produces a family of exact solutions for all topologically equivalent domains. Otherwise, it allows a convenient numerical solution to be mapped to more complicated domains of interest. This is an enormous simplification for free boundary problems, where the solution in an evolving domain can be obtained by time-dependent conformal mapping to a single, static domain. Conformal mapping is most useful when the boundary conditions are also invariant. Dirichlet (φ i = constant) or Neumann (n · ∇φ i = 0) conditions are typically assumed, but here we consider the straight-forward generalizations, respectively, where b i (φ) and b ij (φ) are nonlinear functions of the unknowns, α i is a constant, andn is the unit normal. The conformal invariance of the former is obvious, so we briefly consider the latter. It is convenient to locally transform a vector field, F , along a given contour as,F = t F , so that ReF and ImF are the projections onto the unit tangent, t = dz/|dz|, and the (right-handed) unit normal, n = −it, respectively. 
Since the tangent transforms as, t w = t z f ′ /|f ′ |, and the gradient as, ∇ z = f ′ ∇ w , we find, ∇ z = |f ′ |∇ w . The invariance of Eq. (2.5) follows after taking the imaginary part on the boundary contour. (c) Gradient-Driven Flux Densities Generalizing ∇φ for Laplacian problems, we define a 'flux density' for solutions of Eq. (2.4) to be any quasi-linear combination of gradients, where c ij (φ) are nonlinear functions. The transformation rules above for the gradient apply more generally to any flux density, These basic identities imply a curious geometrical equivalence between solutions to different conformally invariant systems: Theorem 2.2. (Equivalence Theorem.) Let φ (1) and φ (2) satisfy equations of the form (2.4) with corresponding flux densities, F (1) and F (2) , of the form (2.6). If F (1) z = a F (2) z on a contour C z for some complex constant a, then F (1) w on the image, C w = f (C z ), after a conformal mapping, w = f (z). An important corollary pertains to 'similarity solutions' of Eqs. (2.4) and (2.5) in which certain variables {φ i } involved in a flux density depend on only one Cartesian coordinate, say Re w, after conformal mapping: φ i = G i (Re f (z)). (Our examples below are mostly of this type.) Such special solutions share the same flux lines (level curves of Im f (z)) and iso-potentials (level curves of Re f (z)) in any geometry attainable by conformal mapping. They also share the same spatial distribution of flux density on an iso-potential, although the magnitudes generally differ. An important physical quantity is the total normal flux through a contour, often called the 'Nusselt number', Nu . For any contour, C, we define a complex total flux, I(C) = CF |dz| = C F dz, such that Re I(C) is the integrated tangential flux and Im I(C) = Nu (C). From Eq. (2.7) and dw = f ′ dz, we conclude, I(C z ) = I(C w ). Therefore, flux integrals can be calculated in any convenient geometry. This basic fact has many applications. For example, ifF w is constant on a contour C w = f (C z ), then for any conformal mapping, we have, is simply the length of C w . For fluxes driven by gradients of harmonic functions, this is the basis for the method of iterated conformal maps for DLA, in which the 'harmonic measure' for random growth events on a fractal cluster is replaced by a uniform probability measure on the unit circle (Hastings & Levitov 1998). More generally, a non-harmonic probability measure for fractal growth can be constructed for any flux law of the form (2.6) for fields satisfying Eq. (2.4). According to the results above, the growth probability is simply proportional to the normal flux density on the unit circle for the same transport problem after conformal mapping to the exterior of the unit disk. A nontrivial example is given below in section 4(c). This allows the Hastings-Levitov method to be extended to a broad class of non-Laplacian fractal-growth processes (Bazant, Choi & Davidovitch 2003). (d ) Conformal Mapping to Curved Surfaces The Conformal Mapping Theorem is even more general than it might appear from our proof: The domain, Ω z , may be contained in any two-dimensional manifold. This becomes clear from the recent work of Entov and Etingof (1991;1997), who solved viscous fingering problems on various curved surfaces by conformal mapping to the complex plane, e.g. via stereographic projection from the Riemann sphere. 
They exploited the fact that Laplace's equation is invariant under any conformal mapping, w = f(z), from the plane to a curved surface because the Laplacian transforms as ∇ 2 z = J ∇ 2 w , where J(f(z)) is the Jacobian. The system (2.4) shares this general conformal invariance because the advection operator transforms in the same way, ∇ z φ·∇ z c = J ∇ w φ·∇ w c. The application of these ideas to non-Laplacian transport-limited growth phenomena on curved surfaces is work in progress with J. Choi and D. Crowdy; here we focus on conformal mappings in the plane, described by analytic functions. Physical Applications to Diffusion Phenomena Conformally invariant boundary-value problems of the form (2.4) and (2.5) commonly arise in physics from steady conservation laws, for gradient-driven flux densities, Eq. (2.6), with algebraic (b(c i ) = 0) or zero-flux (n · F i = 0) boundary conditions, where c i is the concentration and F i the flux of substance i. Hereafter, we focus on flux densities of the form, where D i (c i ) is a nonlinear diffusivity, u i is an irrotational vector field causing advection, and φ is a (possibly non-harmonic) potential. Examples include s advectiondiffusion in potential flows and bulk electrochemical transport. Before discussing these cases of coupled dependent variables, it is instructive to consider nonlinear diffusion in only one variable. The most general equation of the type (2.4) for one variable is, This equation arises in the Stefan problem of dendritic solidification, where c is the dimensionless temperature of a supercooled melt and a(c) is Ivantsov's function, which implicitly determines the position of the liquid-solid interface via a(c) = 1 (Ivantsov 1947). In two dimensions, Bedia & Ben Amar (1994) prove the conformal invariance of Eq. (3.3) and then study similarity solutions, c(ξ, η) = G(η), by conformal mapping, w = ξ + iη, to a plane of parallel flux lines, where an ordinary differential equation is solved. More generally, reversing these steps, it is straight-forward to show that any monotonic solution of Eq. it is Kirchhoff's transformation (Crank 1975 which is equivalent to the KPZ equation without noise (Kardar et al. 1986), it is the Cole-Hopf transformation (Whitham 1974), φ = G −1 (h) = e λh/2ν , which yields the diffusion equation, ∂φ ∂t = ν∇ 2 φ, and thus Laplace's equation in steady state. In summary, the general solutions to Equation (3.3) are simply nonlinear functions of harmonic functions, so, in the case of one variable, our theorems can be easily understood in terms of standard conformal mapping. For two or more coupled variables, however, this is no longer true, except for special similarity solutions. The following sections discuss some truly non-Laplacian physical problems. Steady Advection-Diffusion in a Potential Flow We begin with a well known system of the form (2.5), the only one to which conformal mapping has previously been applied (see below), albeit not in the present, more general context. Consider the steady diffusion of particles or heat passively advected in a potential flow, allowing for a concentration-dependent diffusivity. For a characteristic length, L, speed, U , concentration, C, and diffusivity, D(C), the dimensionless equations are where φ is the velocity potential (scaled to U L), c is the concentration (scaled to C), b(c) is the dimensionless diffusivity, and Pe = U L/D is the Péclet number. 
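Written out, the dimensionless model consists of Laplace's equation for the velocity potential together with a steady advection-diffusion equation for the concentration (the standard form, consistent with the flux density given in the next sentence):

```latex
\nabla^{2}\phi = 0, \qquad
\mathrm{Pe}\,\nabla\phi\cdot\nabla c \;=\; \nabla\cdot\!\bigl(b(c)\,\nabla c\bigr).
```

Expanding the right-hand side as b(c)\nabla^{2}c + b'(c)|\nabla c|^{2} shows that the pair is of the conformally invariant form (2.4); together with \nabla^{2}\phi = 0, the second equation is equivalent to \nabla\cdot F = 0 for the flux density F = Pe\,c\nabla\phi - b(c)\nabla c.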
The latter equation is a steady conservation law for the dimensionless flux density, F = Pe c∇φ − b(c) ∇c (scaled to DC/L). For b(c) = 1, these classical equations have been studied recently in two dimensions, e.g. in the contexts of tracer dispersion in porous media (Koplik et al. 1994(Koplik et al. , 1995, vorticity diffusion in strained wakes (Eames & Bush 1999;Hunt & Eames 2002), thermal advection-diffusion (Morega & Behan 1994;Sen & Yang 2000), and dendritic solidification in flowing melts (Kornev & Mukhamadullina 1994;Cummings et al. 1999). (a) Similarity Solutions for Absorbing Leading Edges Let us rederive a classical solution in the upper half plane, w = ξ + iη (η > 0), which we will then map to other geometries. As shown in the top left panel of Fig. 2, consider a straining velocity field, which is straightforward to solve, at least numerically. For b(S) = 1, Equation (4.2) has a simple, analytical solution, S(η) = erf (η) (e.g. Cummings et al. 1999). If extended to the entire w-plane, where two fluids of different concentrations flow towards each other, this solution also describes a Burgers' vortex sheet under uniform strain (Burgers 1948). In that case, (φ ξ , φ η , c) is a three-dimensional velocity field satisfying the Navier-Stokes equations, and Pe is the Reynolds number. Inserting a boundary, such as the stationary wall on the real axis, however, is not consistent with Burgers' solution because the no-slip condition cannot be satisfied. The wall is crucial for conformal mapping to other geometries because it enables singularities to be placed in the lower half plane. For every conformal map to the upper half plane, w = f (z), we obtain a solution, which describes the nonlinear advection-diffusion layer in a potential flow of concentrated fluid around the leading edge of an absorbing object. For a linear diffusivity, S(η) = erfη, various examples are shown in Fig. 2. The choice, f (z) = √ z − a, in the middle left panel, describes a parabolic leading edge, x = (y/2α) 2 − α 2 , where α = Im a ≥ 0. The limit of uniform flow past a half plate (a = 0), in the upper right panel is a special case discussed below. The less familiar mapping, f (z) = z 1/2 + z −1/2 , which plays a crucial role in non-Laplacian growth problems (see below), places a cylindrical rim on the end of a semi-infinite flat plate, as shown in the lower left panel. The solution has a pleasing form in polar coordinates, where we have shifted the velocity potential, Φ = f (z) 2 − 2 = z + z −1 . Far from the rim, we recover the half-plate similarity solution, since f (z) ∼ √ z as |z| → ∞, but close to the rim, as shown in Fig. 3, there is a nontrivial dependence on Pe . For Pe ≫ 1, a boundary layer of O(Pe −1/2 ) thickness forms on the front of the rim and extends to within an O(Pe −1/2 ) distance from the rear stagnation point. The flux density is easily calculated in the w-plane and then mapped to the z-plane using Eq. (2.7): where the first term describes advection and the second, diffusion. The lines of advective and diffusive flux, which are level curves of Im f (z) 2 and Re f (z), respectively, are independent of Pe and b(c), as required by the Equivalence Theorem. In particular, the diffusive flux lines have the same shape for any flow speed or nonlinear diffusivity as in the case of simple linear diffusion (Pe = 0, b(c) = 1, c ∝ Im f (z)), even though advection and nonlinearity affect the lines of total flux. 
The lines of total flux, called 'heatlines' in thermal advection-diffusion, are level curves of the 'heat function' † (Kimura & Bejan 1983), which we define in complex notation via ∇H = iF . For a linear diffusivity, we integrate Eq. (4.6) to obtain the heat function for any conformal mapping, (4.7) which shows how the total-flux lines cross over smoothly from fluid streamlines outside the diffusion layer (H ∼ Pe Im f (z) 2 , Pe Im f (z) ≫ 1) to diffusive-flux lines near the absorbing surface (H ∼ 2 Pe /π Re f (z), Pe Im f (z) ≪ 1). On the absorbing surface, Im f (z) = 0, the flux density is purely diffusive and in the normal direction. Its spatial distribution is determined geometrically by the conformal map, and only its magnitude depends on Pe , as predicted by the Equivalence Theorem. (For a linear diffusivity, S ′ (0) = 2/ √ π.) What appears to be the only previous result of this kind is due to Koplik et al. (1994Koplik et al. ( , 1995 in the context of tracer dispersion by linear advection-diffusion in porous media. In the case of planar potential flow from a dipole source to an equipotential absorbing sink, they proved that the spatial distribution of surface flux is independent of Pe . Here we see that the same conclusion holds for all similarity solutions to Eq. (4.1), even if (i) diffusive flux is not directed along streamlines; (ii) the diffusivity is a nonlinear function of the concentration; and (iii) the domain is on a curved surface. (b) Streamline Coordinates In proving their equivalence theorem, Koplik et al. (1994Koplik et al. ( , 1995 transform Eq. (4.1) in the linear case, b(c) = 1, to 'streamline coordinates', where Φ = φ + iψ is the complex potential, φ, the velocity potential, and ψ, the streamfunction. Because the independent and dependent variables are interchanged, † Sen & Yang (2000) have recently shown that the heat function satisfies Laplace's equation, ∇ 2 H = 0, in certain potential-dependent coordinates,∇ ≡ e −Pe φ ∇. This might seem related to our theorems, but it does not provide a basis for conformal mapping of the domain because the coordinate transformation is not analytic. Its value is also limited by the fact that the boundary conditions on H are not known a priori. For example, on a surface where the concentration is specified, the unknown flux is also required. These difficulties underscore the fact that the solutions of Eq. (4.1) are fundamentally non-harmonic. this is a type of hodographic transformation (Whitham 1974;Ben Amar & Poiré 1999). The physical interpretation of Eq. (4.9) is that advection (the left-hand side) is directed along streamlines, while diffusion (the right-hand side) is also perpendicular to the streamlines, along iso-potential lines. In high-Reynolds-number fluid mechanics, this is a well known trick due to Boussinesq (1905) still in use today (Hunt & Eames 2002). Streamline coordinates are also used in Maksimov's method for dendritic solidification from a flowing melt (Cummings et al. 1999). Boussinesq's transformation is simply a conformal mapping to a geometry of uniform flow. Any obstacles in the flow are mapped to line segments (branch cuts of the inverse map) parallel to the streamlines. Among the solutions (4.3), streamline coordinates correspond to the map, f (z) = √ z, from a plane of uniform flow past an absorbing flat plate on the positive real axis (the branch cut), as shown in the top right panel of Fig. 2. 
In this geometry, we have the boundary-value problem, Pe ∂c ∂x = ∇ 2 c, c(x > 0, 0) = 0, c(−∞, y) = 1, which Carrier et al. (1983) have solved using the Weiner-Hopf technique. More simply, Greenspan has introduced parabolic coordinates (as in Greenspan 1961), to immediately obtain the similarity solution derived above, c(x, y) = erf ( √ Pe η), where 2η 2 = −x + x 2 + y 2 . The reason why this solution exists, however, only becomes clear after conformal mapping to non-streamline coordinates in the upper half plane. (See also Cummings et al. 1999.) As this example illustrates, streamline coordinates are not always convenient, so it is useful to exploit the possibility of conformal mapping to other geometries. For similarity solutions, it is easier to work in a plane where the diffusive flux lines are parallel. Streamline coordinates are also often poorly suited for numerical methods because stagnation points are associated with branch-point singularities. This is especially problematic for free boundary problems: For flows toward infinite dendrites, it is easier to determine the evolving map from a half plane (Cummings et al. 1999); for flows past finite growing objects, it is easier to map from the exterior of the unit circle (Bazant, Choi & Davidovitch 2003). (c) Non-similarity Solutions for Finite Absorbing Objects It is tempting to try to eliminate the plate from the cylindrical rim in Fig. 3 by conformal mapping from the exterior of a finite object to the upper half plane. Any such mapping in Eq. (4.3), however, requires a quadrupole point source of flow (mapped to ∞) on the object's surface. This is illustrated in the lower right panel of Fig. 2 by a Möbius transformation from the exterior of the unit circle, f (z) = (1 + z)/i(1 − z), where a source at z = 1 ejects concentrated fluid in the +1 direction and sucks in fluid along the ±i directions. Thus we see that, due to the boundary conditions at ∞, uniform flow past an absorbing cylinder (or any other finite object) is in a different class of solutions, where the diffusive flux lines depend nontrivially on Pe . In streamline coordinates, this includes the problem of uniform flow past a finite absorbing strip, which requires solving Wijngaarden's integral equation (Cummings et al. 1999). Here, we study only the high-Pe asymptotics of advection-diffusion layers around finite absorbing objects. Consider again the example of flow past a cylindrical rim on a flat plate (Fig. 3). Because disturbances in the concentration decay exponentially upstream beyond an O(Pe −1/2 ) distance, removing the plate on the downstream side of the cylinder has no effect in the limit Pe → ∞, except on the plate itself (the branch cut), so the solution (4.4)-(4.5) is also asymptotically valid near a finite absorbing cylinder (without the plate). More generally, if z = h(q) is the conformal map from the exterior of any singly connected finite object to the exterior of the unit circle, then the non-harmonic concentration field has the asymptotic form, as Pe → ∞ everywhere except in the wake near the pre-image of the positive real axis, a branch cut corresponding to the 'false plate'. The convergence is not uniform, since the false plate always spoils the approximation sufficiently far downstream, for a fixed Pe ≫ 1. The validity of Eq. (4.10) near the surface of the object, however, allows us to calculate the normal flux density using Eq. (4.8), n · ∇c ∼ 2 Pe π sin θ 2 (4.11) as Pe → ∞ for all θ = arg h(q) ≫ Pe −1/2 away from the rear stagnation point, θ = 0. 
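For a linear diffusivity the asymptotic flux in Eq. (4.11) can be written out explicitly. On the unit circle the rim map gives |f'(e^{i\theta})| = |e^{i\theta} - 1|/2 = \sin(\theta/2), and the surface gradient of the similarity solution c = S(\sqrt{Pe}\,\mathrm{Im}\,f) is \sqrt{Pe}\,S'(0)\,|f'| with S'(0) = 2/\sqrt{\pi} (prefactors as implied by these relations; \theta is measured from the rear stagnation point):

```latex
\hat{n}\cdot\nabla c \;\sim\; 2\sqrt{\frac{\mathrm{Pe}}{\pi}}\,\sin\frac{\theta}{2},
\qquad
\mathrm{Nu} \;\sim\; \int_{0}^{2\pi} 2\sqrt{\frac{\mathrm{Pe}}{\pi}}\,\sin\frac{\theta}{2}\,\mathrm{d}\theta
\;=\; 8\sqrt{\frac{\mathrm{Pe}}{\pi}},
\qquad \mathrm{Pe}\to\infty .
```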
The limiting Nusselt number, Nu ∼ 8 Pe /π, is also easily calculated by mapping the rim (with the false plate) to the upper half plane where the normal flux density is uniform, 2 Pe /π, on a line segment of length four (from -2 to 2). As explained in section 2(c), Equation (4.11) describes the non-harmonic probability measure for fractal growth by steady advection-diffusion in a uniform potential flow in the limit Pe → ∞. This model, which we might call 'advectiondiffusion-limited aggregation' (ADLA), is perhaps the simplest generalization of the famous DLA model of Witten and Sander (1981) allowing for more than one bulk transport process. The resulting competition between advection and diffusion produces a crossover between two distinct statistical 'phases' of growth. As expected from renormalization-group theory (Goldenfeld 1992), the crossover connects 'fixed points' of the growth measure, describing self-similar dynamics. For small initial Péclet numbers, Pe (0) ≪ 1, the growth measure of ADLA is well approximated by the uniform harmonic measure of DLA and the concentration by the similarity solution, c(q, q) ∝ Im log h(q), but this is an unstable fixed point. Regardless of the initial conditions, the Péclet number diverges, Pe (t) = U L(t)/D → ∞, as the object grows, so the concentration eventually approaches the new similarity solution in Eq. (4.10). At this advection-dominated stable fixed point, the growth measure obeys Eq. (4.11). The sin θ/2 dependence causes anisotropic fractal growth at long times favoring the direction of incoming, concentrated fluid, θ = π, and the total growth rate (Nu ) is proportional to Pe (t). Such analytical results serve to illustrate the power of conformal mapping applied to systems of invariant equations. Electrochemical Transport (a) Simple Approximations and Conformal Mapping Conservation laws for gradient-driven fluxes also describe ionic transport in dilute electrolytes. Because the complete set of equations and boundary conditions (below) are nonlinear and rather complicated, the classical theory of electrochemical systems involves a hierarchy of approximations (Newman 1991). Conformal mapping has long been applied in the simplest case where the current density, J, is proportional to the gradient of a harmonic function, φ, the electrostatic potential (Moulton 1905;Hine 1956). This approximation, the 'primary current distribution', describes the linear response of a homogeneous electrolyte to a small applied voltage or current, as well as more general conduction in a supporting electrolyte (a great excess of inactive ions). The assumptions of Ohm's Law, J = σE = −σ∇φ (with a constant conductivity, σ) and no bulk charge sources or sinks, ∇ · J = 0, are analogous to those of potential flow and incompressibility describe above. Each electrode is assumed to be an equipotential surface (see below), so the potential is simply that of a capacitor -harmonic with Dirichlet boundary conditions. Naturally, classical conformal mapping from electrostatics (Churchill & Brown 1990;Needham 1997) have been routinely applied, but it seems conformal mapping has never been applied to any more realistic models of electrochemical systems. The 'secondary current distribution' introduces a kinetic boundary condition, n·J = R(φ), which equates the normal current with a potential-dependent reaction rate, e.g. given by the Butler-Volmer equation (see below). In this case, conformal mapping could be of some use. 
Although the boundary condition acquires a nonconstant coefficient, |f ′ |, from Eq. (2.7), Laplace's equation is preserved. A more serious complication in the 'tertiary current distribution' is to allow the bulk ionic concentrations to vary in space (but not time). Ohm's law is then replaced by a nonlinear current-voltage relation. Our main insight here is that conformal mapping can still be applied in the usual way, even though the equations are nonlinear and the potential, non-harmonic. (b) Dilute-Solution Theory In the usual case of a dilute electrolyte, the ionic concentrations, {c 1 , c 2 , . . . , c N }, and the electrostatic potential, φ, satisfy the Nernst-Planck equations (Newman 1991), which have the form of Eqs. (3.1) and (3.2), where the 'advection' velocities, u i = −z i eµ i ∇φ, are due to migration in the electric field, E = −∇φ. Here, z i e is the charge (positive or negative) and µ i the mobility of the ith ionic species. The diffusivities are given by the Einstein relation, D i = k B T µ i , where k B is Boltzmann's constant and T , the temperature. Scaling concentrations to a reference value, C, potential to the thermal voltage, kT /e, length to a typical electrode separation, L, and assuming that D i , T , and ε are constants, the steady-state equations take the dimensionless form, Because dissolved ions are very effective at charge screening, significant diffuse charge can only exist in very thin (1 − 100nm) interfacial double layers, where boundary conditions break the symmetry between opposite charge carriers. The 'bulk' potential (outside the double layers) is then determined implicitly by the condition of electroneutrality (Newman 1991), N i=1 z i c i = 0, which is trivially conformally invariant. Therefore, the most common model of steady electrochemical transport, Eq. (5.1), satisfies the assumptions of the Conformal Mapping Theorem for any number of ionic species (N ≥ 2). Although the equations differ from those of advection-diffusion in a potential flow, we can still map to electric-field coordinates (the analog of streamline coordinates), or any other convenient geometry. Although the equations are conformally invariant, the boundary conditions are so only in certain limits. General boundary conditions express mass conservation, eithern · F i = 0 for an inert species, or for an active species at an electrode, where R i (c i , φ) is the Faradaic reaction-rate density (scaled to D i C/L). It is common to assume Ahrrenius kinetics, where k + and k − are rate constants for deposition and dissolution, respectively (scaled to D i /L), α ± are transfer coefficients, c r is the concentration of the reduced species (scaled to C) and φ e is the electrode potential (scaled to kT /e). Taking diffuse interfacial charge into account somewhat modifies R(c i , φ), but the basic structure of Eq. (5.2) is unchanged (Newman 1991;Bonnefont et al. 2001). Conformal mapping introduces a non-constant coefficient, |f ′ |, in Eq. (5.2), but conformal invariance is restored in the case of 'fast reactions' (k + ≫ 1, k − c r ≫ 1), in which equilibrium conditions prevail, R = 0, even during the passage of current. For a single active species (say i = 1), the bulk potential at an electrode is then given by the (dimensionless) Nernst equation, where k = k + /k − c r is an equilibrium constant †. 
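Collecting the assumptions of this subsection, the bulk boundary-value problem treated next can be summarised, in the standard dimensionless form, as the steady Nernst-Planck equations with local electroneutrality, closed by the fast-reaction Nernst condition (c_1 denotes the concentration of the single active species):

```latex
\nabla\cdot\bigl(\nabla c_{i} + z_{i}\,c_{i}\,\nabla\phi\bigr) = 0
\quad (i = 1,\dots,N),
\qquad
\sum_{i=1}^{N} z_{i}\,c_{i} = 0,
\qquad
\phi = \phi_{e} - \ln\!\bigl(k\,c_{1}\bigr)\ \text{on each electrode}.
```

Expanding the divergence gives \nabla^{2}c_{i} + z_{i}\nabla c_{i}\cdot\nabla\phi + z_{i}c_{i}\nabla^{2}\phi = 0, which is of the form (2.4), so the bulk system is conformally invariant; the Nernst relation is a nonlinear Dirichlet-type condition of the kind discussed in section 2(b) and is invariant as well.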
(c) Conformal Mapping with Concentration Polarization The voltage across an electrochemical cell is conceptually divided into three parts (Newman 1991): (i) the 'Ohmic polarization' of the primary current distribution, (ii) the 'surface polarization' of the secondary current distribution, and (iii) 'concentration polarization', the remaining voltage attributed to non-uniform bulk concentrations. Although concentration polarization can be significant, especially at large currents in binary electrolytes, it is difficult to calculate. Analytical results are available only for very simple geometries (mainly in one dimension), so our method easily produces new results. For example, consider a symmetric binary electrolyte (N = 2) of charge number, z = z + = −z − , where the concentration, c = c + = c − , and the potential satisfy, ∇c) = 0, and (to break degeneracy) a constraint on the integral of c, which sets the total number of anions (Bonnefont et al. 2001). In the limit of fast reactions, the bulk potential at each electrode is given by the Nernst equation, φ = φ e − log kc, where we scale φ to k B T /ze and assume α + − α − = 1. A class of similarity solutions is obtained by conformal mapping, w = f (z), to a strip, −1 < Im w < 1, representing parallel-plate electrodes. We set φ e = 0 at the cathode (Im w = −1) and φ e = V , the applied voltage (in units of k B T /ze), at the anode (Im w = 1). We then solve c ′′ = 0 and (cφ ′ ) ′ = 0 with appropriate boundary conditions to obtain a general solution for any conformal mapping to the strip: where J = tanh(V /4) is the uniform current density in the strip, scaled to its limiting value, J lim = 2zeD + C/L. As J → 1, strong concentration polarization develops near the cathode, as shown in Fig. 4 for J = 0.9. At J = 1, the bulk concentration at the cathode vanishes, and the cell voltage diverges due to diffusion limitation. The classical conformal map, z = f −1 (w) = πw+e πw (Churchill & Brown 1990), unfolds the strip like a 'fan' to cover the z-plane and maps the electrodes onto two half plates (Im z = ±π , Re z < −1). As shown in Fig. 4, this solution describes the fringe fields of semi-infinite, parallel-plate electrodes. The field and current lines are cycloids, z a (η) = πa + iπη + e πa e iπη , as in the limit of a harmonic potential at low currents, φ ∼ J Im f (z) − log k. At high currents, the magnitude of the electric field is greatly amplified near the cathode (the lower plate) by concentration polarization, but the shape of the field lines is always the same. This conclusion also holds for all other conformal mappings to the strip, such as the Möbius-log transformation, w = f (z) = i(1 + log(5z − 3)/(5 − 3z)), in Fig. 4 from the region between two non-concentric circles. It is interesting to note that the Equivalence Theorem applies to some physical situations and not others. Similarity solutions like the ones above can only be derived for two equipotential electrodes by conformal mapping to a strip, where the current is uniform. In all such geometries, the electric field lines have the same shape as in the primary current distribution. For three or more equipotential electrodes, however, this is no longer true because conformal mapping to the strip is topologically impossible, and thus similarity solutions do not exist. When the bulk potential varies at the electrodes according to Eq. (5.2), the electric field lines generally differ from both the primary and secondary current distributions, even for just two electrodes. 
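A minimal numerical sketch of the strip construction above: the explicit similarity solution is taken here to be c = 1 + J Im w with phi = log c + const (so that c'' = 0, the mean of c is 1, and c phi' is constant), an inference from the conditions stated in the text rather than a quotation, but one that reproduces J = tanh(V/4) and the vanishing of c at the cathode as J approaches 1; the cycloid field lines and the map z = pi*w + exp(pi*w) are as given above.

```python
import numpy as np

V = 4.0                          # cell voltage in units of k_B T / (z e)
J = np.tanh(V / 4.0)             # current scaled to its limiting value, J = tanh(V/4)

a = 0.2                          # label of the field line: Re(w) = a in the strip
eta = np.linspace(-0.99, 0.99, 9)
w = a + 1j * eta
z = np.pi * w + np.exp(np.pi * w)    # the cycloid field line z_a(eta) in the physical plane

c = 1.0 + J * eta                    # assumed strip solution (c'' = 0, mean value 1)
amplification = 1.0 / c              # |grad phi| relative to the Ohmic field at the same J
                                     # (phi = log c + const, so grad phi = J grad(Im w)/c)

for zk, ck, ak in zip(z, c, amplification):
    print(f"z = {complex(zk):.2f}   c = {ck:.2f}   field amplification = {ak:.2f}")
```

For V = 4 this gives J close to 0.76, and the field at the cathode end of the line is amplified roughly four-fold relative to the harmonic (Ohmic) estimate, illustrating the concentration polarization described above.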
Conclusion We have observed that the nonlinear system of equations (2.4) involving 'dot products of two gradients' is conformally invariant. This has allowed us to extend the classical technique of conformal mapping to some non-harmonic functions arising in physics. Examples from transport theory are steady conservation laws for gradient-driven fluxes, Eq. (3.2). For one variable, the equations in our class (including some familiar examples in nonlinear diffusion) can always be reduced to Laplace's equation. For two or more variables, the general solutions are not simply related to harmonic functions, but all similarity solutions exhibit an interesting geometrical equivalence. For two variables, there is one example in our class, steady advection-diffusion in a potential flow, to which conformal mapping has previously been applied. In this case, our method is equivalent to Boussinesq's streamline coordinates, but somewhat more general. A nonlinear diffusivity is also allowed, and the mapping need not be to a plane of uniform flow (parallel streamlines). In a series of examples, we have considered flows past absorbing leading edges and have generalized a recent equivalence theorem of Hinch (1994, 1995). We have also considered flows past finite absorbing objects at high Péclet number. Our class also contains the Nernst-Planck equations for steady, bulk electrochemical transport, for which very few exact solutions are known in more than one dimension. In electrochemistry, conformal mapping has been applied only to harmonic functions, so we have presented some new results, such as the concentration polarizations for semi-infinite, parallel-plate electrodes and for misaligned coaxial electrodes. More generally, we have shown that Ohm's Law gives the correct spatial distribution (but not the correct magnitude) of the electric field on any pair of equipotential electrodes in two dimensions, even if the transport is nonlinear and non-Laplacian, although this is not true for three or more electrodes. Such results could be useful in modeling micro-electrochemical systems, where steady states are easily attained (due to short diffusion lengths) and quasi-planar geometries often arise. As mentioned throughout the paper, our results can be applied to a broad class of moving free boundary problems for systems of non-Laplacian transport equations (Bazant, Choi & Davidovitch 2003). In contrast, the vast literature on conformal-map dynamics (cited in section 1) relies on complex-potential theory, which only applies to Laplacian transport processes. Nevertheless, standard formulations, such as the Polubarinova-Galin equation for continuous Laplacian growth (Howison 1992) and the Hastings-Levitov (1998) method of iterated maps for DLA, can be easily generalized for coupled non-Laplacian transport processes in our class. In the stochastic case, non-harmonic probability measures for fractal growth can be defined on any convenient contour, such as the unit circle. As an example, we have derived the stable fixed point of the growth measure for an arbitrary absorbing object in a uniform background potential flow, Eq. (4.11). This sets the stage for conformal-mapping simulations of ADLA, which might otherwise seem intractable.
Pharmacogenetics of Lethal Opioid Overdose: Review of Current Evidence and Preliminary Results from a Pilot Study There has been a substantial worldwide increase in accidental opioid-overdose deaths. The aim of this review, along with preliminary results from our pilot study, is to highlight the use of pharmacogenetics as a tool to predict causes of accidental opioid-overdose death. For this review, a systematic literature search of PubMed® covering January 2000 to March 2023 was carried out. We included study cohorts, case–controls, or case reports that investigated the frequency of genetic variants in opioid-related post-mortem samples and the association between these variants and opioid plasma concentrations. A total of 18 studies were included in our systematic review. The systematic review provides evidence of the use of CYP2D6, and to a lesser extent, CYP2B6 and CYP3A4/5 genotyping in identifying unexpectedly high or low opioid and metabolite blood concentrations from post-mortem samples. Our own pilot study provides support for an enrichment of the CYP2B6*4-allele in our methadone-overdose sample (n = 41) compared to the anticipated frequency in the general population. The results from our systematic review and the pilot study highlight the potential of pharmacogenetics in determining vulnerability to opioid overdose. Introduction Opioids are commonly prescribed as pain medication but are also abused as illicit drugs. Opioid-related mortality caused by adverse drug reactions and unintentional overdose is a serious and global health concern. According to provisional data from the National Center for Health Statistics at the Centers for Disease Control and Prevention (CDC), there were more than 100,000 drug overdose deaths in the United States in 2021 alone, including illicit and prescription opioids, which is a nearly 17% increase in opioid-related mortality compared to the same period in 2020 [1]. In Canada, recent statistics show that the apparent opioid-related mortality rate increased by 91% over the two years of the COVID-19 pandemic (from April 2020 to March 2022) compared with previous years (2018-2019) [2]. Specifically, between January and June 2022, there were at least 3500 apparent opioid-related deaths in Canada, of which 97% were accidental. The increased availability of potent synthetic opioids, mainly fentanyl and fentanyl analogs, contributed to the increase in overdose fatalities; these opioids were involved in nearly 75% of those deaths [2]. Furthermore, opioid-related deaths involved multiple drugs, including psychostimulants (e.g., cocaine, amphetamine), benzodiazepine and alcohol, highlighting the polysubstance nature of the opioid crisis [1,2]. In Canada, opioid prescribing for pain management is strongly regulated and controlled, and opioid use is legal only when the drugs are prescribed by licensed practitioners [3]. In 2018, almost one in eight people in Canada were prescribed an opioid, mainly codeine, hydromorphone, morphine, oxycodone, or fentanyl [3]. While the dispensing of opioids is strictly regulated, the increased demand for pain relief medications has led to the widespread redistribution or re-selling of prescription opioids (i.e., prescription diversion) via illicit street markets [3]. In a study conducted on the patterns of opioid prescribing in Canada, it was revealed that 37% of opioid-dependent individuals received their opioids solely from a licensed physician, while 21% received their opioids from the illicit street markets [4].
Furthermore, prescription opioids not marketed in Canada can be diverted illegally into the country [3]. This increased availability of non-prescription or illicit opioids has contributed to the rise in accidental overdose deaths [2]. The long-term administration of opioids should be medically supervised as their chronic use can lead to the development of physiological dependence, addictive behavior and misuse [5]. Opioids work by activating one or more of the several opioid receptors (mu, kappa and delta) and the nociceptin peptide receptors in the brain [6]. In the early stages of use, opioids stimulate the mesolimbic (midbrain) reward system. The compulsion for continued opioid use is related to the development of tolerance (the need to take higher and higher doses of opioids to achieve the same reward) and dependence (susceptibility to withdrawal symptoms) [7]. Lethal overdose with opioids occurs through excessive activation of the mu-opioid receptors in the locus coeruleus neurons in the brain, resulting in central nervous system (CNS) depression, drowsiness, suppressed respiration and a severe drop in blood pressure [5,6]. The considerable rise in opioid-related fatalities has prompted an increase in public health interventions aimed at curtailing the impact of prescription opioid analgesics on the current overdose epidemic. These interventions involve opioid prescribing, monitoring and tapering guidelines for healthcare providers, in addition to educational courses for patients that describe the risks and misuse associated with opioid analgesics [8]. Currently, there are limited proposed biological strategies available to address the global opioid overdose problem [9]. Strategies that target genetic and epigenetic factors may accelerate the development of effective interventions. For example, recent studies showed that there is genetic vulnerability to the development of substance abuse [10] and a genetic risk score composed of single nucleotide polymorphisms (SNPs) can be used to predict the risk of opioid addiction [9]. Therefore, there is a need for pharmacogenetic-based strategies to predict which individuals are at a greater risk of unintended lethal adverse effects from opioids. Pharmacogenetics describes how genetic variations affecting drug pharmacokinetics (i.e., metabolism or transport) or pharmacodynamics (i.e., receptors) may contribute to the interindividual differences in response, tolerance and adverse effects to medications [11]. The pharmacogenetics of opioids has been extensively described in clinical studies including individuals receiving pain management therapies to maximize therapeutic effects, improve treatment outcomes and minimize toxicity [12][13][14]. According to the Clinical Pharmacogenetics Implementation Consortium (CPIC), polymorphisms in CYP2D6, a drug-metabolizing enzyme, have been clearly associated with large interindividual variations in codeine and tramadol response-ranging from poor analgesia to life-threatening CNS depression at standard doses [15]. Therefore, prescribing guidelines recommending pharmacogenetic testing prior to the selection and dosing of clinically relevant opioids have been published [15]. In addition, polymorphisms in other opioid metabolizing enzymes (i.e., CYP2B6, CYP3A4), in the opioid receptors (OPRM1), and the opioid transporters (ABCB1), have shown significant associations with variability in opioid dosing requirements, efficacy and adverse effects [13]. 
The majority of those associations were based on clinical prospective or retrospective studies or case reports. Limited research has been done on the pharmacogenetics of opioid overdose using post-mortem data. Pharmacogenetics may be helpful in the field of post-mortem toxicology, as it can be used to definitively identify deaths related to suicide, accidents and unknown causes. Retrospective analysis of post-mortem cases revealed that polymorphisms in CYP enzymes, which metabolize selected opioids including codeine, tramadol, methadone and fentanyl, correlate with the serum level of opioids and their metabolites and may serve as an adjunct in certifying the cause of death in unexpected high or low metabolite/parent drug ratios [16][17][18]. As such, the purpose of this review is to summarize evidence of opioid pharmacogenetics in post-mortem cases to highlight the importance of using pharmacogenetics as a tool to identify causes of accidental opioid-overdose deaths. Furthermore, this brief review also provides preliminary results from our pilot study on the association of genetic variation and opioid overdose in post-mortem cases investigated by the Office of the Chief Coroner of Ontario, Canada. Identification of Data through Public Databases and Registers A systematic literature search of published articles was conducted using PubMed, from January 2000 to March 2023. The following keywords were used: (pharmacogenetics OR variants OR polymorphisms OR SNPs) AND (opioids OR *NameOfTheDrug*) AND (post-mortem OR deaths OR fatalities). Bibliographies of included research articles were hand-searched for additional references not identified in our primary searches. This systematic review followed the 2020 PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting recommendations. Data Selection Articles were included if they were: (1) cohort studies, case-control, or case reports, (2) published in English between January 2000 and March 2023, (3) investigated gene variants in post-mortem studies where opioids were the cause of death. Data Extraction and Quality Assessment All articles identified by the search strategy were assessed for eligibility independently by both reviewers (LM and IG). Information extracted from each eligible article included: (1) author names, study design and publication year; (2) sample size; (3) case characteristics (i.e., age, sex ethnicity/ancestry) (4) name of opioid investigated; (5) phenotype assessed; (6) genes and SNPs assessed; and (7) main findings of the study. An assessment of study quality was conducted independently by two reviewers (LM and IG). Recruitment This study was approved by the Office of the Chief Coroner of Ontario (OCC) and the Research Ethics Board of the Centre of Addiction and Mental Health (CAMH) in Toronto, Canada. Accidental opioid-related fatalities were identified through our collaboration with the Regional Supervising Coroner, RW, and Deputy Chief Coroner, RJ, from the OCC and Ontario Forensic Pathology Service (OFPS). In accordance with Ontario's Coroners Act, all deaths that are sudden, unexpected and/or unnatural must be reported to the OCC. The coroner classifies the manner of death according to five categories: natural, accident, suicide, homicide and undetermined. The investigating coroner ascertains the cause and manner of death according to data collected in the course of the investigation, which may include autopsy, post-mortem examination and detailed toxicological testing and chemical analysis. 
A post-mortem chemical examination usually includes detailed toxicological testing for drugs by immunoassay and gas chromatography-mass spectrometry (GC-MS), and screening for volatiles by headspace GC. This is followed by confirmation and quantitation by GC-MS or liquid chromatography (LC)-MS/MS, as required. Deaths related to opioid overdose were identified by the OCC based on a toxicological analysis that revealed (1) an opioid concentration sufficiently high (above the fatal reference range) to cause death, or (2) a combination of drugs, including at least one opioid present at a high concentration and other intoxicants, such as CNS stimulants, benzodiazepines, or alcohol. Deaths were not considered related to opioid use if another drug was present at a high enough concentration to cause death. Deaths in which other circumstantial factors could have on their own resulted in death (i.e., suicide, homicide, external injuries, motor vehicle collisions, and disease) were not included. Based on the above criteria, 119 accidental opioid overdose cases (78 from 2021-2023 and 41 from 2013-2014) were included in this pilot study. The data collected in this study were coded and analyzed anonymously. No personal identifiers were collected. Blood Sample Collection and Genotyping A total of 41 blood samples (methadone-overdose cases only) obtained from the OCC/OFPS were sent for DNA extraction and genotyping at the CAMH Biobank and Molecular Core Facility (Centre for Addiction and Mental Health, Toronto, ON, Canada). Genomic DNA was extracted from blood samples using a modified version of the Flexi-Gene DNA kit (QIAGEN, Hilden, Germany). Genotyping was performed using standard TaqMan ® Assays (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol. Two SNPs were genotyped in the CYP2B6 gene, rs3745274 (*9, C516G/T) and rs2279343 (*4, A785G) and one SNP in the OPRM1 gene, rs1799971 (A118G). SNP minor allele and genotype frequencies were determined. Genotyping results were reviewed by two laboratory staff blind to the clinical data. Ten percent of the sample was re-genotyped for quality control. Statistical Analysis All analyses were conducted using R Version 4.0.4. (R Foundation for Statistical Computing Platform, 2021) and RStudio Version 1.4.1106 (RStudio Inc., Boston, MA, USA, 2021). Descriptive statistics for demographic and clinical characteristics were generated using the Fisher exact test for categorical variables and the Kruskal-Wallis test for continuous variables. Systematic Review The systematic search produced a total of 266 articles. A summary of the article selection process is presented in the PRISMA flow diagram (Figure 1). After title, abstract and full-text screening, a total of 18 articles were eligible for inclusion in this systematic review. The characteristics of available reported data from each article are presented in Table 1. Briefly, more than 66% (n = 12) of the included studies were retrospective cohorts in which forensic autopsy cases were reviewed and gene-variant frequencies were tested for an association with the opioid and respective metabolite concentrations. Only six studies were case-controls in which the frequencies of gene variants in opioid-related fatality cases were compared to the frequencies in control samples (which were identified as either healthy volunteers, individuals with opioid addiction, fatalities caused by suicide, or drugs other than opioids). 
The commonly investigated opioids were methadone (n = 7), codeine (n = 3), tramadol (n = 3), oxycodone (n = 3), hydrocodone (n = 1), morphine (n = 1), and fentanyl (n = 1). The most investigated gene was CYP2D6, followed by CYP2B6, OPRM1, ABCB1, CYP3A4/5 and COMT. Benzodiazepines (BZD), alcohol and other drugs were present in some of the cases. Among the per-study findings summarized in Table 1: the CYP2B6 G516T and A785G variants were more frequent in the post-mortem population than in the control group, although the difference was not statistically significant; the prevalence of the OPRM1 118G variant was significantly higher in the control population (p = 0.0046), which might indicate a protective mechanism against opioid toxicity; individuals who carried the ABCB1 1236T variant had statistically lower morphine concentrations than wild-type carriers (p = 0.004); no significant association was found between OPRM1, COMT and UGT2B7 variants and opioid concentrations; SNPs rs3745274 (*9) and rs8192719 exhibited significant differences in the methadone-only group compared to the control group, with greater minor allele frequencies in the methadone-only cases; higher blood methadone concentrations were observed in individuals genotyped homozygous for SNP rs3211371 (*5); and one study (Richards) reported that no co-intoxicants were present in its cases. CYP2D6 and Opioids CYP2D6 is a drug-metabolizing enzyme involved in the metabolism of approximately 20% of clinically used drugs, including some important opioids (see Figure 2) [36]. The CYP2D6-encoding gene is highly polymorphic, with over 130 identified variants. The combination of CYP2D6 genetic variants constitutes four different phenotypic subgroups based on the rate of drug metabolism. These four phenotypic groups are ultra-rapid metabolizers (UM), normal metabolizers (NM) or formerly extensive metabolizers (EM), poor metabolizers (PM) and intermediate metabolizers (IM) [36]. A recent retrospective cohort including 75 US military veteran deaths has found that approximately 7% of individuals who died due to an opioid overdose carried a UM phenotype [32].
Figure 2. Genes involved in opioids' action, metabolism, and transport. CYP2D6 metabolizes tramadol, codeine, hydrocodone and oxycodone to their more active metabolites, O-desmethyltramadol, morphine, hydromorphone and oxymorphone, respectively. Morphine is further metabolized by UGT2B7 to M3G and M6G (which has pharmacological activity). Fentanyl is metabolized by both CYP3A4 and CYP3A5 to an inactive metabolite, norfentanyl. Methadone is mainly metabolized by CYP2B6, with little contribution from CYP3A4, CYP2D6, or CYP2C19, to form an inactive metabolite, EDDP. Several opioids are substrates of the P-glycoprotein transporter, encoded by the ABCB1 gene, which is located in various locations in the body, including the liver and intestine and at the blood-brain barrier. In the brain, opioids bind and activate the mu-opioid receptor encoded by the OPRM1 gene. The COMT gene may also affect the opioid's action. ABCB1 = P-glycoprotein encoding gene; COMT = catechol-o-methyltransferase encoding gene; DRD2 = dopamine receptor D2 subtype encoding gene; EDDP = 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine; M3G = morphine-3-glucuronide; M6G = morphine-6-glucuronide; OCT1 = organic cation transporter 1 encoding gene; OPRM1 = mu-opioid receptor encoding gene; UGT2B7 = UDP-glucuronosyltransferase 2B7. CYP2D6 metabolizes codeine, tramadol and oxycodone into their more pharmacologically active metabolites, which are morphine, O-desmethyltramadol and oxymorphone, respectively (Figure 2) [13]. Tramadol, which is a synthetic opioid, was commonly investigated in post-mortem fatalities. This is because tramadol concentrations have shown a consistent correlation with the CYP2D6 metabolizer phenotypes in clinical investigations [15]. In this review, several studies examined the association between CYP2D6 metabolizer phenotypes and tramadol post-mortem blood concentrations. These studies concluded that CYP2D6 genotyping may be important in identifying the cause of unexpectedly high or low tramadol-metabolite ratios in post-mortem blood samples [19,27]. Furthermore, one study looked at 16 variants from five genes involved in tramadol pharmacokinetics and pharmacodynamics and concluded that a set of 16 loci from these five genes can predict the tramadol/metabolite ratio with over 90% accuracy, which is greater than using CYP2D6 alone [30]. Similarly, the oxycodone-to-oxymorphone concentration ratio showed a significant correlation with CYP2D6 activity when death was unrelated to intoxication, and CYP2D6 PMs and IMs had significantly higher oxycodone concentrations compared to EMs and UMs [35]. In contrast, for codeine, studies concluded that there was a large variability in the calculated post-mortem concentration ratios of codeine to its metabolite, morphine, which was not explained by the CYP2D6 genotypes alone [16,24,28]. A recent systematic review has shown that the reliability and validity of measuring morphine concentrations in post-mortem samples are low [37]. This is because there are post-mortem changes, including post-mortem morphine metabolism and redistribution, that could result in a wide range of morphine blood concentrations reported in deaths [37].
Morphine is mainly metabolized into morphine-3-glucuronide (M3G, an inactive metabolite) and morphine-6-glucuronide (M6G, which has equal or greater affinity at the mu-opioid receptor than morphine) by the UDP-glucuronosyltransferase 2B7 (UGT2B7) enzyme (Figure 2). However, there is reported evidence of post-mortem de-glucuronidation of M3G and M6G back to morphine by bacterial beta-glucuronidase, or spontaneously [37]. Moreover, drugs with a high volume of distribution and high lipophilicity such as morphine are quickly distributed to tissues. After death, these drugs are released into plasma, resulting in a post-mortem increase in concentrations, a phenomenon known as post-mortem redistribution [37]. Therefore, when interpreting post-mortem codeine findings, analysis of morphine and its glucuronide metabolites should be considered. In summary, evidence suggests a correlation between CYP2D6 genotypes and opioid blood concentration in forensic cases of tramadol and oxycodone toxicity. As for codeine, the relevance of CYP2D6 genotyping in the determination of unexpected codeine/metabolite post-mortem blood concentrations has still to be shown. CYP2B6 and Methadone CYP2B6, which mediates the metabolism of methadone, is also a highly polymorphic gene, with more than 30 variant alleles identified [38]. CYP2B6*4 is an increased-function allele, while the *9-allele produces an enzyme with decreased activity. The CYP2B6*4 allele is usually present along with the *9-allele to form the CYP2B6*6 haplotype, which is the most common and clinically significant haplotype that results in reduced CYP2B6 hepatic expression and activity [38]. CYP2B6 is the main enzyme involved in the metabolism of methadone to the inactive metabolite 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP) (Figure 2) [13]. Methadone has been the most investigated opioid in post-mortem cases. Methadone is a widely used medication in opioid replacement therapy with the goal of transitioning opioid-use disorder individuals from an abused faster-acting opioid to a clinically controlled slower-acting opioid and attenuating the occurrence of opioid withdrawal, craving and opioid-seeking behaviors [39]. Methadone is an opioid agonist, with (R)-methadone having a 10-fold higher affinity for the opioid receptors (mu, delta and kappa) compared to (S)-methadone [13]. Seven studies examined the frequencies of several gene variants in methadone-related deaths and their association with methadone or EDDP plasma concentrations. In one retrospective cohort including 40 cases of Caucasian ethnicity, CYP2B6*4, *9 and *6-alleles were associated with higher post-mortem methadone concentration [21]. In two case-control studies, CYP2B6*4, *9 and *6-allele frequencies were enriched in methadone-related deaths, compared to the frequencies in deaths caused by drugs other than opioids [29], or the frequency in healthy volunteers [23]. In summary, CYP2B6 genotypes, mainly the CYP2B6*6 haplotype, have been linked with increased susceptibility to unintentional methadone fatality [21]. Because CYP2B6*6 is the most clinically significant haplotype, future post-mortem investigation should examine the CYP2B6*6 haplotype in methadone-related overdose deaths. CYP3A4/5 and Opioids CYP3A4 is the most abundant cytochrome P450 enzyme in the liver.
Numerous SNPs have been identified in the CYP3A4-encoding gene; however, most of the exonic SNPs have a minor allele frequency of less than 5% in the majority of populations, and the impact of genetic variation appears to be relatively modest compared with that seen for CYP2D6 poor metabolizers. One commonly investigated variant, CYP3A4*1B (rs2740574), has been associated with a lower CYP3A4 enzymatic activity [25]. In contrast to CYP3A4, the presence of a nonfunctional CYP3A5 (CYP3A5*3, *6, or *7) is the norm in many populations and it is present in 80-85% of Europeans [40]. CYP3A4 and CYP3A5 share common substrates, and both enzymes are involved in the metabolism of fentanyl into an inactive metabolite, norfentanyl (Figure 2) [13]. Fentanyl is a synthetic opioid and overdose with fentanyl is most likely due to illicit non-prescription use, notably now that fentanyl or fentanyl derivatives are often present in the illicit drug supply. Fentanyl has high potency, being 100 times more potent than morphine, because of its high lipophilicity [41]. This greater potency significantly lowers the threshold for the risk of overdose and death [41]. Only one study examined gene variants in CYP3A4 and CYP3A5 and fentanyl-related deaths [18]. With a small sample size (n = 25), the study found a nominal association between CYP3A4*1B and CYP3A5*3 gene variants and fentanyl/norfentanyl ratios, where homozygous CYP3A5*3 individuals showed impaired metabolism of fentanyl if they additionally carried the CYP3A4*1B variant, especially the homozygous genotype [18]. The study concluded that these genes may serve as an adjunct in certifying fentanyl toxicity in post-mortem cases. CYP3A4 is also involved in the metabolism of methadone. One study involving 136 accidental methadone-only fatalities demonstrated a correlation between polymorphisms of the CYP3A4 gene and increased likelihood of accidental fatal methadone intoxication and showed significant enrichment of the CYP3A4*1B allele in the post-mortem cases, compared with the general population [25]. In summary, CYP3A4/5 variants are found across the majority of populations, but most occur at low frequency. Therefore, larger studies are needed to accurately assess the correlation of CYP3A4 and CYP3A5 gene variants with impaired opioid metabolism and accidental opioid toxicity. ABCB1, OPRM1, COMT and Opioids The ABCB1 gene encodes a P-glycoprotein efflux transporter, located in the liver and intestine and at the blood-brain barrier (BBB) [42]. The P-glycoprotein pump at the BBB regulates the concentration of certain opioids in the brain (e.g., methadone, morphine) [42,43]. Common polymorphisms in the ABCB1 gene, including rs1045642 (C3435T), rs2032582 (G2677T/A) and rs1128503 (C1236T), have shown an association with decreased P-gp expression and/or function [42]. These gene variants were investigated in studies including methadone- and codeine-related deaths. The studies reported that individuals who carried the 1236T variant had statistically lower morphine (codeine's metabolite) blood concentrations than wild-type carriers [24] and individuals with a 3435T genotype had higher methadone brain/blood concentration ratios [34]. These results indicate that ABCB1 genetic variants, which alter P-gp expression or function, may play a role in determining active opioid concentrations reaching their site of action in the brain. The OPRM1 gene encodes the mu-opioid receptor, which is the main site of action for all opioids.
The most commonly studied variant in the OPRM1 gene is the A118G SNP, which results in reduced protein expression and reduced signal transduction [13]. The A118G SNP has been consistently associated with increased morphine dosing requirements and has been linked with susceptibility to drug addiction [44,45]. The A118G SNP has also been shown to reduce the analgesic effects of opioids, providing a rationale for dose escalations [37]. For morphine, the A118G SNP was shown to have a protective effect against respiratory depression mainly caused by morphine's metabolite, M6G, in several case reports [37]. In this review, one study showed a higher prevalence of the A118G variant in a population of healthy volunteers (n = 100), compared with 84 post-mortem methadone-related fatalities [23], while another study reported no significant difference in the frequency of this variant in deceased individuals with opioid addiction (n = 274) compared to individuals living with opioid-use disorder (n = 309) [26]. Furthermore, one study has demonstrated that the 118G variant was associated with higher benzodiazepine concentrations when benzodiazepines were present as co-intoxicants in methadone-related fatalities, but not with methadone or EDDP concentrations [21]. Other genes are known to be involved in the development of addiction and response to opioids, such as the catechol-O-methyltransferase-encoding gene, COMT. The COMT enzyme affects opioids' action via modulation of the dopamine-enkephalin pathway [13]. A common polymorphism in the COMT-encoding gene is the Val158Met (rs4680) [13]. The presence of this polymorphism leads to a three-to-four-fold decrease in enzyme activity, and several studies involving postoperative pain or cancer cohorts reported lower morphine dosing requirements in individuals carrying the 158Met variant [13]. While polymorphisms in the COMT-encoding gene were not as commonly investigated in post-mortem fatalities, one study reported a significantly lower frequency of the 158Met variant in methadone- and morphine-related deaths and concluded that there is a possible association between the presence of the Val158Met variant and reduced risk of death [26]. In summary, assessing ABCB1, OPRM1 and COMT polymorphisms in opioid-related deaths needs further evaluation, especially when morphine and/or methadone are involved. Other Interactions with Opioids The present analysis of opioid-related fatalities revealed that the majority of opioid-related deaths involved mixed intoxications by CNS depressants (i.e., benzodiazepines, hypnotics and/or alcohol), antidepressants, or other co-medications. In a study that examined 68 codeine-related deaths (CRDs), the presence of CNS depressants was significantly associated with lower codeine concentration in CRDs compared to CRDs in which CNS depressants were not detected [24]. This indicates that the presence of co-intoxicants may lead to toxicity at lower opioid doses. Furthermore, in another study involving 174 oxycodone-related deaths, the CYP2D6 metabolizer phenotypes correlated with the oxymorphone/oxycodone concentration ratio when death was unrelated to intoxications by benzodiazepines, alcohols, or other opioids [35]. In summary, when analyzing opioid/metabolite concentrations in post-mortem samples, the presence of co-intoxicants should be considered. Pilot Study Results Demographic data for the 119 cases are outlined in Table 2. The majority of cases were male (65.5%; n = 78/119).
Fentanyl and fentanyl analogs (carfentanil) were involved in 60.5% of opioid-related deaths (n = 72), followed by methadone, oxycodone, morphine, hydromorphone and heroin. Co-intoxication with CNS stimulants, mainly amphetamine, methamphetamine and cocaine, was present in 82.4% of cases. Co-intoxication with alcohols, benzodiazepines, or antidepressants was present in 25.2%, 8.4% and 5.0% of accidental deaths, respectively. For the cases involving methadone-related deaths (n = 41), 25 samples were genotyped for the CYP2B6*4 and CYP2B6*9 variants and 41 samples for the OPRM1 A118G variant. Table 3 shows the observed genotype and minor allele frequency for CYP2B6*4 and CYP2B6*9, respectively. Our initial results suggested a minor allele frequency of 0.3 for the CYP2B6*9 variant, which is consistent with the reported global allele frequency (www.ensembl.org). Notably, we detected a difference in the distribution of CYP2B6*9 between males and females in our sample (p-value = 0.007) (Table 3). As for CYP2B6*4, we showed enrichment of this minor allele (0.3) compared to the reported global allele frequency (0.13) (www.ensembl.org). With respect to the OPRM1 A118G gene variant, we observed a minor allele frequency of 0.05 in our sample of methadone overdose (Table 3). Discussion Accidental death due to opioid overdose is a major public health concern worldwide. The results of this systematic review have shown that methadone, codeine, tramadol, oxycodone and fentanyl were the most commonly investigated opioids in post-mortem cases. According to the CDC, drug overdose deaths involving prescription opioids (including methadone) in the US increased from 3442 in 1999 to 17,029 in 2017 [1]. From 2017 to 2021, this number decreased to 16,706 reported deaths [1]. In contrast, synthetic opioids other than methadone (primarily fentanyl) were the main driver of drug overdose deaths, with a nearly 7.5-fold increase from 2015 to 2021 [1], which is consistent with the predominance of fentanyl-involved deaths among our more recently collected cases (2021 to 2023). In this study, we underscore how pharmacogenetics can be used in interpreting the cause and manner of opioid-related deaths. This is mainly because gene variants of enzymes that metabolize or transport opioids change the bioavailability and the therapeutic/toxic dose of the drugs. Specifically, the results of the systematic review showed that gene variants in CYP2D6 corresponding to the different metabolizer phenotypes were strongly correlated with differences in the O-desmethyltramadol/tramadol concentration ratios in post-mortem samples [19,27,35]. To a lesser extent, gene variants in CYP2B6 and CYP3A4/5 showed a correlation with the methadone and fentanyl concentrations, respectively [18,21]. In our pilot study, the *4-variant in the CYP2B6 gene was enriched in our methadone-overdose cases compared to the global reported variant frequency. Similarly, selected variants in the CYP2B6 and CYP3A4 genes were enriched in post-mortem opioid-overdose cases, compared to a control population [25,29]. Therefore, CYP2D6, CYP2B6 and CYP3A4 genotyping may be used as a supplementary tool to certify opioid toxicity and to interpret unexpected post-mortem opioid and metabolite concentrations. More research analyzing variants in the ABCB1, OPRM1 and COMT genes, especially in methadone- and morphine-related deaths, is needed.
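To make the pilot-study frequency calculations above concrete, the following is a minimal sketch of how a minor allele frequency and a male-versus-female comparison can be computed from genotype counts. The analyses in this study were run in R; the Python code below, its function names, and the illustrative genotype counts are ours and are hypothetical, not the study data.

```python
from scipy.stats import fisher_exact

def minor_allele_frequency(n_ref_hom: int, n_het: int, n_alt_hom: int) -> float:
    """MAF from genotype counts: each individual carries two allele copies."""
    n_people = n_ref_hom + n_het + n_alt_hom
    n_minor = n_het + 2 * n_alt_hom          # copies of the minor allele
    return n_minor / (2 * n_people)

# hypothetical genotype counts for CYP2B6*9 (rs3745274, 516G>T), split by sex
males   = {"GG": 10, "GT": 6, "TT": 2}
females = {"GG": 2,  "GT": 3, "TT": 2}

maf_all = minor_allele_frequency(males["GG"] + females["GG"],
                                 males["GT"] + females["GT"],
                                 males["TT"] + females["TT"])

def allele_counts(genotypes):
    """Return (minor, major) allele counts for one group."""
    minor = genotypes["GT"] + 2 * genotypes["TT"]
    major = genotypes["GT"] + 2 * genotypes["GG"]
    return minor, major

# 2x2 Fisher exact test on allele counts, males versus females
# (one simple way to test a sex difference; the study's exact test setup may differ)
odds_ratio, p_value = fisher_exact([allele_counts(males), allele_counts(females)])
print(f"overall MAF = {maf_all:.2f}, Fisher exact p = {p_value:.3f}")
```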
The use of a single-gene approach (i.e., targeted single-nucleotide polymorphisms in one gene) to explain variations in post-mortem opioid concentrations has been employed in the majority of the included studies. For example, the investigation of the A118G SNP in the OPRM1 gene did not show significant associations with opioid or metabolite concentrations in post-mortem cases [21,24,26]. However, a pathway-driven predictive model including genes that are involved in the absorption, distribution, metabolism, excretion and response of opioids may be more useful in predicting the opioid/metabolite concentrations in post-mortem samples [30]. For example, Wendt et al. (2019) reported that a pathway-driven model using five genes (CYP2D6, UGT2B7, ABCB1, OPRM1, COMT) predicted the tramadol and metabolite concentrations in 208 post-mortem cases with over 90% accuracy compared to using one gene alone [30]. Furthermore, a shift from a single-gene approach to using a genome-wide genotyping approach (i.e., genome-wide association studies or GWAS) in identifying opioid response and toxicity is warranted. One study (n = 37), which used a genome-wide screen approach, identified five additional single nucleotide polymorphisms that were associated with decreased metabolite/tramadol ratios in post-mortem cases [33]. An alternative aggregated genome-wide approach, such as polygenic risk scores (PRS), which is a single variable that predicts the risk of a given trait by considering the additive effects of common variants across the human genome, should be considered. To date, it has only been used in one study related to opioid misuse [46]. Currently, a PRS for the prediction of lethal opioid overdose has not been developed. The potential role of pharmacogenetics in accidental fatal opioid overdose cases remains complex, as it can be confounded by other factors such as post-mortem drug redistribution, the presence of co-intoxications, co-medication, the risk of phenoconversion and the variation of allele frequencies across populations of different ancestries. Post-mortem tissue redistribution of opioids, especially those with a high volume of distribution or high lipophilicity, may lead to artifactually decreased or increased opioid blood concentrations at the time of sample collection [28,41,47]. However, peripheral or femoral blood sampling, which is commonly used for post-mortem analysis, is less subject to post-mortem redistribution than central blood sampling [17]. The presence of co-intoxicants can represent another confounder, and our pilot study has shown that co-intoxicants with a CNS depressant (mainly benzodiazepines and alcohol) or a CNS stimulant (mainly cocaine) are present in 33.6% and 30.3% of cases, respectively. The presence of CNS depressants can lead to an additive pharmacological effect, with respiratory depression and sedation occurring at lower opioid doses [47], while the presence of cocaine, specifically in chronic administration, can lead to a rapid clearance of opioids, mainly methadone [48]. This demonstrates that the interplay of several factors such as genetic polymorphisms and co-intoxications may affect opioid concentrations in post-mortem samples. The presence of co-medication can present another complexity. The metabolic activity of a drug-metabolizing enzyme is not only modulated by genetics but also by the presence of co-medications.
Co-medications which are inducers or inhibitors of a drug-metabolizing enzyme can alter the genotype-predicted phenotype of the enzyme, a process known as phenoconversion. For example, if an individual carrying a genotype-predicted CYP2D6 normal metabolizer (NM) phenotype is administered a strong CYP2D6 inhibitor, the patient's CYP2D6 genotype-predicted phenotype will likely be converted to a poor metabolizer [49]. A study by Lam et al. 2014, demonstrated that the presence of a strong CYP2D6 inhibitor (i.e., antidepressants including paroxetine or fluoxetine) had a large influence on the concentrations of codeine metabolites and there were wide variations in the morphine/codeine ratios in post-mortem samples that were not explained by the CYP2D6 genotype alone [16,24]. Finally, the frequencies of gene variants in cytochrome P450 genes and other genes involved in the metabolism and response of opioids vary greatly across populations of different ancestries. For example, the frequency of the 118G-allele is reported to be 1% in Africans, 16% in Europeans, but 42% in South Asians (www.ensembl.org). Therefore, information on the ancestry of the deceased must be considered when associations to show allele enrichment between post-mortem cases and control populations are conducted. To date, genetic investigation is not routinely used in sudden or accidental opioid deaths, which can lead to inaccurate determinations of the cause of death. Pharmacogenetics may hold promise in the field of forensic toxicology. Gene variants in drug-metabolizing enzymes, drug transporters and drug receptors can alter the therapeutic/toxic doses of opioids and the opioid-receptor sensitivity, thus, resulting in fatal opioid toxicity. Therefore, pharmacogenetic analysis should be considered in unintentional deaths associated with opioids. Furthermore, the documented efficacy of genome-wide approaches for predicting opioid concentrations warrants a shift from single-gene approaches to capture the polygenic nature of opioid toxicity. In addition to pharmacogenetics, a complete and thorough post-mortem toxicological investigation should be conducted, including the identification of co-intoxications, co-medications, complete medical and demographic history and site of sample collection. Once limitations are overcome, findings validated and multigene panels made available to identify subjects at risk, there is hope and promise that the number of accidental overdoses can be decreased as research progresses. Conclusions Opioid-related mortality is a worldwide concern. Here, we systematically analyzed published literature, including results from our pilot study, on the relevance of using pharmacogenetics to determine the cause of accidental opioid toxicity using post-mortem samples. Our present analysis highlights two important findings: (1) genetic variants in opioid-metabolizing enzymes, opioid transporters and opioid receptors can be analyzed in post-mortem blood samples, and (2) genetic variation in drug-metabolizing enzymes, mainly CYP2D6, CYP2B6 and CYP3A4, showed a significant correlation with the parent-opioid-to-metabolite ratios and may serve as an adjunct in certifying accidental opioid toxicity. Informed Consent Statement: Written informed consent was obtained from the next-of-kin to participate in the study. 
Data Availability Statement: The data that support the findings of this study are not publicly available because they contain genetic information that could compromise the privacy of research participants but are available from the corresponding author, D.J.M., upon reasonable request.
GuiltyWalker: Distance to illicit nodes in the Bitcoin network Money laundering is a global phenomenon with wide-reaching social and economic consequences. Cryptocurrencies are particularly susceptible due to the lack of control by authorities and their anonymity. Thus, it is important to develop new techniques to detect and prevent illicit cryptocurrency transactions. In our work, we propose new features based on the structure of the graph and past labels to boost the performance of machine learning methods to detect money laundering. Our method, GuiltyWalker, performs random walks on the bitcoin transaction graph and computes features based on the distance to illicit transactions. We combine these new features with features proposed by Weber et al. and observe an improvement of about 5pp regarding illicit classification. Namely, we observe that our proposed features are particularly helpful during a black market shutdown, where the algorithm by Weber et al. was low performing. INTRODUCTION Money laundering is a serious financial crime that consists of the illegal process of obtaining money from criminal activities, such as drug or human trafficking, and making it appear legitimate. Cryptocurrencies, such as Bitcoin [7], are particularly susceptible to money laundering schemes due to their pseudo-anonymity and the relative lack of control by authorities. Preventing money laundering is an international effort and Anti-Money Laundering (AML) laws have been trying to cope with the new threats posed by criminals using cryptocurrencies [8,15]. In 2019, Weber et al. [16] released the Elliptic data set. It contains anonymized labeled Bitcoin transactions and enables researchers to study illicit behaviour in cryptocurrencies. The data set consists of a time-series graph with 200K labeled bitcoin transaction nodes and tabular data with 166 anonymized features describing each transaction. Weber et al. [16] assesses the performance of several supervised learning algorithms on the task of detecting nodes associated with illicit activities. To improve existing supervised learning results found in the literature, we propose a new set of features that leverage the structure of the network and the existence of hubs or pockets of illicit transactions. We extract these new features with GuiltyWalker. This random walker traverses a given network starting from a seed node and computes features related to the distance of the seed node to other nodes known to be illicit. GuiltyWalker consists of two main components: • Random walker: Given a transaction graph, a set of seed nodes, and the number of desired random walks for each of the seeds, GuiltyWalker samples random walks for each seed node.
Due to the temporal nature of the graph, the walker only travels to past nodes (i.e., transactions) and stops at the first illicit node found or when there are no more valid nodes to visit. • Feature extractor: Given a set of random walks for each of the seeds, GuiltyWalker computes aggregated features that summarize these walks, e.g., the average number of steps needed to reach one illicit node or the total number of different illicit nodes found. In our experiments on the Elliptic data set, we observe that adding the features computed by GuiltyWalker improves the performance of machine learning methods; namely, we achieve a 5pp increase in F1-score when compared against machine learning methods that use only the original anonymized features from Weber et al. [16]. Furthermore, the gains in performance are more pronounced during a black market shutdown, where the original performance by Weber et al. [16] dropped significantly. This paper is organized as follows. Section 2 details the implementation of GuiltyWalker and how we generate new features from its output to enrich the data and convey additional information to supervised learning methods. Section 3 describes the experimental setup, and Section 4 the results obtained. Section 5 presents the related work. Finally, we set out the main conclusions in Section 6. GUILTYWALKER GuiltyWalker consists of a random walker that traverses a given transaction network from a seed node and extracts features based on the distance of that node to known illicit nodes. It includes two main components, a random walker and a feature extractor, explained in the following subsections. Random Walker The random walker receives as input the original transaction graph G, a list of seed nodes S ⊆ V(G), and the desired number of successful random walks, n. Successful random walks are explained later in this section. GuiltyWalker's output for each seed node s ∈ S is the list of sampled random walks X_s = {x_1, x_2, ..., x_n}. A random walk x consists of a sequence of nodes (v_1, v_2, ...) such that v_1 = s. Due to the temporal nature of the transaction graph, the random walker can only walk backward in time. That is, it is only valid to go from node u to node v if v represents a transaction older than u. This transaction network is represented as a directed graph connecting older nodes to newer nodes by an outgoing edge. Then, to address the former condition, during the random walk process, GuiltyWalker chooses a node uniformly at random from the incoming neighbors of the current node. When GuiltyWalker is in a given state of a random walk x = (v_1, v_2, ..., v_i), the process stops and returns x as the final random walk if at least one of the following criteria is met: • v_i is a known illicit node/transaction. • The set of eligible nodes to pick from is empty. This scenario happens when a given node has no incoming neighbors and, consequently, the random walker has no possible moves. Otherwise, GuiltyWalker randomly picks the next node v_{i+1} to add to x, and the process continues. Note that since the edges only connect older transactions to newer transactions, there is always an end node in any random walk. In other words, the properties of our transaction graph guarantee that GuiltyWalker will not be stuck in an endless loop. The number of successful random walks, given by the user as input, sets the desired number of random walks ending in an illicit node from each seed node s ∈ S. However, as discussed before, the random walker may find a node with no incoming neighbors. In this case, a random walker finishes traversing the graph without reaching an illicit node (a minimal sketch of this sampling procedure is given below).
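For illustration, the following is a minimal Python sketch of the walker and of the per-seed feature summary described in the next subsection. It assumes a NetworkX DiGraph whose edges point from older to newer transactions and a set of known illicit node identifiers; the function names, the attempt cap, and the treatment of an illicit seed are our assumptions, not the authors' code.

```python
import random
import statistics
import networkx as nx

def backward_walk(graph: nx.DiGraph, seed, illicit: set, rng=random):
    """One backward-in-time walk from `seed` on a graph whose edges point
    from older to newer transactions. Returns (path, ended_on_illicit)."""
    path, current = [seed], seed
    while True:
        preds = list(graph.predecessors(current))  # incoming neighbours = older transactions
        if not preds:
            return path, False                     # dead end: unsuccessful walk
        current = rng.choice(preds)                # uniform random step backwards in time
        path.append(current)
        if current in illicit:                     # stop at the first illicit node found
            return path, True

def guilty_walker_features(graph, seed, illicit, n_successful=100, max_attempts=10_000):
    """Sample walks until `n_successful` of them end on an illicit node, then summarise them.
    `max_attempts` is our safety cap; the paper instead pre-filters seeds that can reach
    an illicit node (e.g. via nx.descendants on graph.reverse())."""
    lengths, endpoints, attempts = [], set(), 0
    while len(lengths) < n_successful and attempts < max_attempts:
        attempts += 1
        path, hit = backward_walk(graph, seed, illicit)
        if hit:
            lengths.append(len(path) - 1)          # number of steps to the illicit node
            endpoints.add(path[-1])
    if not lengths:                                # seed cannot reach any illicit node
        feats = dict.fromkeys(("min", "max", "mean", "std", "median", "q25", "q75", "illicit"), -1)
        feats["hit"] = 0.0                         # mirrors the -1 / 0 fill described in Section 3.2
        return feats
    q = statistics.quantiles(lengths, n=4) if len(lengths) > 1 else [lengths[0]] * 3
    return {
        "min": min(lengths),
        "max": max(lengths),
        "mean": statistics.mean(lengths),
        "std": statistics.pstdev(lengths),
        "median": statistics.median(lengths),
        "q25": q[0],
        "q75": q[2],
        "hit": len(lengths) / attempts,            # fraction of successful walks
        "illicit": len(endpoints),                 # distinct illicit nodes reached
    }
```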
To ensure the number of desired successful random walks, GuiltyWalker performs as many random walks as needed, and only the successful ones are used for feature extraction. As we mention in Section 2.2, one of the features extracted from GuiltyWalker is the fraction of successful random walks out of the total number of random walks performed to reach that number of successful ones. This is a way of also considering the number of unsuccessful walks made from each node, which may be informative. It is important to note that some nodes in a transaction graph might have no paths to any illicit node. Thus, it is impossible to obtain successful random walks (as per our definition) for those nodes. To avoid this problem, GuiltyWalker first determines which nodes can actually reach an illicit node. To do that, we first invert the direction of the graph. Then, we use the descendants algorithm for directed acyclic graphs from NetworkX [4, 9]. It returns all nodes reachable from a source node in the graph G. Afterwards, we inspect whether at least one of them is illicit. This procedure is applied to all nodes in the transaction graph, and only those that can reach an illicit node are given as input to the random walker. Features Computation The second component of GuiltyWalker receives the list of random walks from each seed node and returns a data frame of features corresponding to each transaction, summarizing the random walks. In particular, GuiltyWalker obtains the following features: • Minimum size of the random walks (min); • Maximum size of the random walks (max); • Mean size of the random walks (mean); • Standard deviation of the random walk sizes (std); • Median size of the random walks (median); • First quartile of the random walk sizes (q25); • Third quartile of the random walk sizes (q75); • Fraction of successful random walks out of all the random walks performed by the Random Walker (hit rate); • Number of distinct illicit nodes in the random walks (illicit). We also add the transaction nodes with no possible paths to fraudulent nodes to the data frame of features, with all features set accordingly (see Section 3.2), due to the lack of information regarding the distance to an illicit node. EXPERIMENTAL SETUP 3.1 Elliptic Data Set In this work, we use the Elliptic Data Set, a graph network of Bitcoin transactions. Elliptic, a company focused on combating financial crime in cryptocurrencies, released this data set. The data set includes 203,769 node transactions and 234,355 directed edges, representing the flow of Bitcoin currency (BTC) going from one transaction to the next. Each transaction can be categorized into three classes: "licit", "illicit" or "unknown", based on the category of the entity that created it. Licit categories include exchanges, wallet providers, miners, and financial service providers. Illicit categories include scams, malware, terrorist organizations, and Ponzi schemes. From the total number of transactions, 21% (42,019) are labeled as licit, 2% (4,545) as illicit, and the remaining 77% (157,205) are unknown. Besides the graph structure, the data set has 166 anonymized features associated with each transaction. The first 94 relate to information about the transaction itself, such as the time step, number of inputs/outputs, and transaction fee.
The remaining features relate to aggregated information about the direct neighbors of the transaction, giving the maximum, minimum, standard deviation, and correlation coefficients over the neighbouring transactions. In addition, a time step from 1 to 49 is associated with each node. It represents an estimate of when the Bitcoin network confirmed the transaction. The time steps are evenly spaced with an interval of about two weeks, and each one contains a single connected component of transactions that appeared on the blockchain within less than three hours of each other. Therefore, it can be considered that this data set includes 49 directed acyclic graphs associated with different sequential moments in time. Figure 1 provides an idea of the structure of this data set. Methodology This section gives an overview of the models used in our experiments and discusses our experimental setup. Following Weber et al. [16], we perform a 70/30 temporal split of training and test data, respectively, for all experiments. Therefore, the train set includes all labeled samples up to the 34th time step, and the test set includes all labeled samples from the last 15 time steps, up to the 49th. We use random forest for licit versus illicit prediction. First, we train the model on the train set using all 166 features and evaluate it on the entire test set. We use the scikit-learn [10] implementation of random forest, with 50 estimators, corresponding to the number of trees in the forest, and 50 max features, the maximum number of features considered when looking for the best split. By doing so, we mimic the method in Weber et al. [16], enabling a fair comparison of the results. We also set the random state to 0 for the purpose of results reproducibility. Then, we train a random forest model (using the same parameters as before) using (i) only the new set of features obtained by GuiltyWalker and (ii) both the features obtained by GuiltyWalker and the original 166 features. We extract the GuiltyWalker features after performing 100 successful random walks. Missing values for the transaction nodes that cannot reach an illicit node are filled with -1 values, except feature hit, which is filled with 0 values, as it represents the fraction of random walks ending in a fraudulent node. We see that the utilization of some of these alternative sets of features improves performance in Section 4. To further improve the results, we filter the set of features obtained by GuiltyWalker to keep just the most important ones, using permutation feature importance [13]. This method characterizes each feature's importance as the decrease in the performance score after randomly shuffling that feature's values across the samples. After applying this method and assessing every feature's importance, the new features kept for further classification purposes are hit, std, illicit, max and mean. We also analyse the model's performance with these features together with the 166 original ones in Section 4. Similarly to Weber et al. [16], we evaluate the random forest classifier's performance with each set of features using the F1-score for the illicit class, hereafter referred to as illicit F1-score. This score is the harmonic mean of precision and recall. Moreover, it is suitable for imbalanced tasks, which is the case of our dataset (91% of licit nodes and 9% of illicit ones). We also use the ROC curve (and AUC value) and precision and recall measures to evaluate the models' performance. A sketch of this experimental pipeline is given below.
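The following is a minimal sketch of the setup just described: the temporal split, the random forest with the stated hyper-parameters, the illicit F1-score, permutation importance, and the recall-at-fixed-false-positive-rate metric used later in the results. It is not the authors' code; the column names `time_step` and `label`, the helper names, and the handling of feature sets smaller than 50 columns are our assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score, roc_curve

def temporal_split(df: pd.DataFrame, last_train_step: int = 34):
    """70/30 temporal split: labeled samples up to time step 34 for training, 35-49 for testing."""
    return df[df["time_step"] <= last_train_step], df[df["time_step"] > last_train_step]

def fit_and_evaluate(df: pd.DataFrame, feature_cols, label_col: str = "label"):
    train, test = temporal_split(df)
    clf = RandomForestClassifier(
        n_estimators=50,
        max_features=min(50, len(feature_cols)),   # 50 in the paper (with the full 166-feature set)
        random_state=0,
    )
    clf.fit(train[feature_cols], train[label_col])
    preds = clf.predict(test[feature_cols])
    scores = clf.predict_proba(test[feature_cols])[:, 1]
    illicit_f1 = f1_score(test[label_col], preds, pos_label=1)   # assumes illicit encoded as 1
    # permutation importance: drop in score when one feature's values are shuffled
    importance = permutation_importance(
        clf, test[feature_cols], test[label_col], n_repeats=10, random_state=0
    )
    return clf, illicit_f1, scores, importance

def recall_at_fpr(y_true, scores, max_fpr: float) -> float:
    """Recall (TPR) at the operating point whose false-positive rate does not exceed max_fpr."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    return tpr[fpr <= max_fpr].max()
```

With the Elliptic features loaded into `df`, calling `fit_and_evaluate(df, original_cols + gw_cols)` corresponds to the AF+GWF setting, and `recall_at_fpr(test_labels, scores, 0.01)` gives a recall@1% figure of the kind compared in the results section.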
RESULTS In this section, we present the results obtained by using the standard model, random forest, with the 166 baseline features (referred to as AF), as well as only the new features extracted from GuiltyWalker (referred to as GWF) and the former ones together with the latter (referred to as AF+GWF). Furthermore, we show the results obtained using the 166 features in conjunction with the new ones obtained after performing feature reduction (referred to as AF+GWF*). Table 1 shows the testing results in terms of precision, recall and F1-score concerning the illicit class. For the sake of completeness, we also show the micro-averaged F1 score. An important thing to note from Table 1 is that the GuiltyWalker features alone are not informative enough. The F1-score value obtained using only these features is very low (0.20). We can also observe higher precision, recall, and F1-score when using GuiltyWalker's additional features, suggesting the importance of the graph structure. Using GuiltyWalker features, we improved precision, recall, and F1-score by 2 percentage points (pp), 4pp, and 4pp, respectively. In order to understand the importance of each one of the features created, we performed feature importance analysis, using the method described in the previous section. We kept only the most important features to train together with the original ones. Results show that by filtering GuiltyWalker features and keeping only the most important ones (hit, std, illicit, max and mean), the performance of the model slightly improves (we improved F1-score by 1pp, compared with the model AF+GWF). To give additional insights about the performance of the new model AF + GWF* compared against the original model, we plot the ROC curve of both models. Note that the ROC curve shows the trade-off between sensitivity/recall and specificity. Moreover, the area under the curve (AUC) can be seen as a measure of separability. In other words, it represents how much a model is capable of distinguishing between classes. Therefore, from the observation of Figure 2, we can infer that both models are quite good at predicting illicit nodes as illicit and licit ones as licit. However, AF + GWF* is slightly better (it improves the AUC value by 1pp). In particular, for very low false positive rates, our method seems to be significantly better. In a real scenario, we would be more interested in low false-positive regions of the ROC curve, since raising too many alerts is not practical. With this in mind, we compare the recall at specific low false positive rates, namely 1%, 5% and 10%, and AF+GWF* shows considerable gains when compared against AF: recall@1% increases from 73% to 78% (5pp), recall@5% increases from 75% to 80% (5pp), and recall@10% increases from 76% to 82% (6pp). Therefore, while the gain of using GuiltyWalker's features is only 1pp in the full region, in the region of interest the gain is considerably higher. As noted by Weber et al. [16], a sudden dark market shutdown occurring at time step 43 severely affects the model performance. In particular, the random forest model trained on the 166 features, from that time step forward, cannot achieve an illicit F1-score value above 0.25. The introduction of the new set of features extracted from GuiltyWalker improves F1 results in the entire test set (i.e., time steps 35-49). However, this improvement is more pronounced after this dark market shutdown (from time step 43 to 49).
In fact, from time step 43 to time step 49, we observe, on average, an F1-score improvement of about 10pp and 16pp with the AF+GWF and AF+GWF* models, respectively. Note that for time steps 48 and 49, both of these models still perform poorly. As we can see in Figure 3, both the AF+GWF and AF+GWF* models are able to reliably capture new illicit transactions after the dark market shutdown, in comparison with the original model. To understand the additional information those models are capturing, we compute the confusion matrices of the models AF, AF+GWF and AF+GWF*. We obtain 784, 828 and 831 true positives (referred to as TP), respectively. We also determine the new TP found and the ones lost with the AF+GWF and AF+GWF* models, in comparison to the ones found when training the random forest with the original set of features. By doing so, we verify that with the new sets of features, AF+GWF and AF+GWF*, we find 48 and 50 new TP and lose 4 and 3 TP that the original model could find, respectively. Concerning the AF+GWF* model, we observe that, for almost all new TP found, the features extracted from GuiltyWalker (max, mean, std, illicit and hit) have positive values. Only 2 of the 50 elements have -1 values for max, mean, std and illicit and 0 for hit. Recall that these values indicate that the associated transaction nodes have no possible paths to known illicit nodes. This lets us infer that the new set of features adds information based on the graph's structure, which allows the model to make better predictions. However, we note that a given node having a path to an illicit transaction does not necessarily imply that it is itself illicit, and vice-versa. This information alone is not enough to make good predictions concerning the labels of the transaction nodes, as we verified from the results obtained for the GWF model. Nonetheless, it provides extra information that complements the original features in a way that boosts the performance of the overall model.

RELATED WORK

Besides the work of Weber et al. [16], which was the baseline for our study, more recently, Lorenz et al. [6] proposed active learning techniques to study the minimum number of labels necessary to achieve high detection of illicit activity in cryptocurrencies, and also tested them on the Elliptic data set. Thus, although addressing the same problem with a different approach, the authors did not aim to improve upon the baseline results. Moreover, Alarab et al. [1] proposed an ensemble learning method, combining the given supervised learning models, and applied it to the Elliptic data set, improving the baseline results. Although they improved upon existing results, our results, using the new set of features, are better. While Alarab et al. [1] achieve higher precision than we do (97.38% versus 96.5%), we achieve higher recall (76.7% versus 72.2%), higher F1-score (85.47% versus 82.92%), and higher accuracy (98.3% versus 98.06%). As far as we know, previous work on the application of graph-related features and, in particular, random walks in a supervised learning setting is scarce. Hu et al. [5] worked with Bitcoin transaction graphs and used various graph characteristics to differentiate money laundering transactions from regular transactions. They found that the main difference between them lies in their output values and neighbourhood information.
The authors also evaluated a set of classifiers based on different types of extracted features, namely immediate neighbours, curated features, deepwalk embeddings [11], and node2vec embeddings [3], to classify money laundering and regular transactions. This approach differs from ours, as we are not trying to embed the graph or a particular node's neighbourhood but instead to describe distances to a specific target (i.e., malicious activity). Nonetheless, the descriptive power of random walks in networks is well recognized. Bhagat and Muthukrishnan [14] studied methods based on the iterative application of traditional classifiers using graph information as features, as well as methods that propagate existing labels via random walks. Moreover, concerning the application of random walks in the context of classification problems, Hassan and Banea [12] proposed a new approach for estimating term weights in a document based on a random walk model; they showed that the new random walk-based approach outperforms the traditional term-frequency approach to feature weighting. Therefore, with this work, we extend the existing knowledge on using random walks to improve classifiers' performance on graph datasets.

CONCLUSION

In this study, we set out to improve the performance of supervised models in an anti-money laundering classification task. Given a transaction network, we propose a method called GuiltyWalker that extracts information from the structure of the network and the existence of past labels to create new features for a supervised model. It consists of a random walker that traverses the transaction network starting from a seed node and a feature extractor that computes features related to the distance of the seed node to other nodes known to be illicit. We test our method on a public dataset of Bitcoin transactions published by Weber et al. [16]. Using a supervised setting similar to that of the original authors as our baseline, we showed that by training the same classifier on the original 166 features together with the new ones extracted from GuiltyWalker, we could obtain better results. In particular, by filtering the features extracted from GuiltyWalker and considering only the most important ones, the results were even better. The performance differences were most pronounced for the time steps associated with the dark market shutdown, where the baseline model performed poorly. Moreover, we observed that the models that considered GuiltyWalker features could reliably capture new illicit transactions that were not captured by the model from Weber et al. [16].
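As a concrete illustration of the walker-plus-feature-extractor idea summarized in this conclusion, the sketch below implements one plausible GuiltyWalker-style extractor. It assumes the transaction graph is a networkx DiGraph whose edges point from a transaction to its inputs (i.e., backwards in time) and whose labeled nodes carry a boolean illicit attribute; the graph orientation, the walk-termination rule, and the attribute names are our assumptions, not the authors' exact implementation.

```python
import random
import statistics
import networkx as nx

def guilty_walker_features(graph: nx.DiGraph, seed, n_walks: int = 100, max_len: int = 50):
    """Random-walk features describing the distance from `seed` to known illicit nodes.

    Each walk follows outgoing edges (interpreted here as pointing to earlier
    transactions) until it reaches an illicit node, a dead end, or max_len steps.
    """
    hit_lengths, illicit_hits = [], set()
    for _ in range(n_walks):
        node, steps = seed, 0
        while steps < max_len:
            successors = list(graph.successors(node))
            if not successors:
                break  # dead end: this walk fails to reach an illicit node
            node = random.choice(successors)
            steps += 1
            if graph.nodes[node].get("illicit", False):
                hit_lengths.append(steps)
                illicit_hits.add(node)
                break
    if not hit_lengths:  # no path to a known illicit node was found
        return {"hit": 0.0, "min": -1, "max": -1, "mean": -1, "std": -1, "illicit": -1}
    return {
        "hit": len(hit_lengths) / n_walks,   # fraction of walks ending in an illicit node
        "min": min(hit_lengths),
        "max": max(hit_lengths),
        "mean": statistics.mean(hit_lengths),
        "std": statistics.pstdev(hit_lengths),
        "illicit": len(illicit_hits),        # distinct illicit nodes reached
    }
```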
Karen Tei Yamashita and Magical Realism: Re-Membering Community, Undoing Borders : Yamashita’s use of mythic verism in Tropic of Orange and a reimagined doppelgänger trope in I Hotel depicts the ir/real nature of the taxonomy of identity and of Asian America and other minority groups being constituted in and beyond the mainstream or conventional understanding of the idea of America and of the identity of the US nation-state as being built upon discursive technologies of amnesia and misinterpellation of the subject of US history and its Other. Through the Arc of the Rain Forest (Yamashita 2017a) is Karen Tei Yamashita's first novel and the one for which her writing is most identified as "magic realist". 1 Scholars of Yamashita's corpus have noted that her fiction never significantly repeats plot or character; if there is a diegetic Yoknapatawpha County in Yamashita's fiction, it is global in scale, consisting of Asian America, the Americas, and Asia (particularly, Japan, China, Vietnam, the Philippines, Korea). 2,3 However, Yamashita's writing (novels, drama, short stories, essays) do exhibit recurring themes that are presented in various diegetic universes, nonlinear chronotopes, and manipulations of realist and nonrealist Western and Asian literary traditions. Yamashita's innovative storytelling is often satirical, parodic, erudite, and ultimately unbeholden to any single genre or literary aesthetic. Her writing has been described as an example of Asian American literary avant-garde (Ling 2012, p. 17). 4 Yamashita's body of work shows an abiding commitment to representing her understanding of the complex transnational flows between and among geopolitical locations of history, power, identity in Asia and the Americas, and, most of all, the depiction of fictional characters whose lives can be thought of as mundane and yet beneath the surface are filled with vitality and courage. Moreover, a recurring effect of reading Yamashita is the effect of a reader perceiving that culture consists of cultures, produced through everyday living, and that geographical and political identifiers, such as national identity, consists of constructed, ir/real space and time. I use the virgule (slash, solidus) in ir/real to mean that the real and the ir-real are mutually constitutive, linked in dynamic contestation (not binary opposition) and, hence, make clear that borders of segregation and alienation are impossible. The ontological and existential ramifications of the idea that identity at all scaler levels is a discursive construct is urgently relevant to the present moment of worldwide and polycentric existential crisis; identity (regional, national, group, individual) as a discursive construct and the extremist and nihilistic violence that an essentialist view of identity rationalizes is one of the issues that this essay on Yamashita's use of magical realism aims to unpack. 5 Briefly, at this introductory stage, I would like to explain that this essay examines two types of magical realism-classical magical realism in Tropic of Orange and a more secular version, located in Yamashita's Ulysses-like I Hotel, through her use of the trope of the doppelgänger. My analysis of magical realism in Tropic of Orange focuses on Mesoamerican mythological elements in the characterization of Arcangel and Rafaela as well as their roots in Christian mythology; when analyzing these two characters within the Mesoamerican traditions, I will borrow the term, mythic verism, first proposed by Vizenor (1989, p. 190). 
The core argument of this essay is that Yamashita's use of mythic verism and a reimagined doppelgänger trope depicts the ir/real nature of the taxonomy of identity and of Asian America and other minority groups being constituted in and beyond the mainstream or conventional understanding of the idea of America and of the identity of the United States (US) nation-state as being built upon discursive technologies of amnesia and misinterpellation of the subject of US history and its Other. 6 Asian Americans-in terms of historical and cultural representations-have been part of a global and longstanding network of peoples and beliefs both within and outside of the political borders of the US; Asian Americans are distinct, yet have similar (not to be read as identical) worldviews and should be viewed as allies with similarly othered peoples-no matter how contingently-through our experience of Western colonialism and a rapidly deteriorating Earth. (Yamashita's (2017a) Through the Arc of the Rain Forest is a prescient portrayal of environmental degradation brought about largely by transnational corporations and globalizing capitalism.) Yamashita's representation of people of color is expansive, consisting of multiplicity and dynamism across time and space; Asian American and other minority-group characters are agential storytellers and not the stock or flat character types found in popular texts on television and film. In order to underline the breadth and depth of Yamashita's deployments of magical realism, I will contextualize the previous points within the larger canvas of her literary corpus. Her writing revolves around the lives of politically, socially, and economically marginalized peoples in the Americas and in Asia; her work deals with the many ravages of global capitalism, colonialism, and imperialism; with the racial and ethnic and gendered taxonomy enabling systemic oppression; and with the many forms of resistance, including the victories and defeats, of the marginalized. Consistently, her storytelling foregrounds the inherent dignity of ordinary persons and their lives. Dissertations, monographs, anthologies, chapters in books, essays in academic journals, and numerous book reviews are being written and have been written on her work, from the disciplinary perspective of economics, ethnic studies, critical cultural studies, aesthetics, human geography, women's studies, environmental studies, literature departments, and many more. Her writing is taught in numerous college departments (Hsu and Thoma 2022). Additionally, her writing has been credited with repositioning Asian American literary studies-its rhizomatic roots and eruptive evolutionary paths-from one that is captive, the minor, to the white supremacist nationalistic mythos of the US to one that foregrounds a nonwhite, native, and transnational and transcultural worldview. Further, the transnational turn in Asian American literary scholarship is a notable feature of the hemispheric turn of US literary studies as a whole. Yamashita's may not be a household name; nonetheless, her writing has a devoted following in and beyond academia, in the US, and in academic communities in Brazil, Okinawa, Japan, Germany, Ukraine, Austria, the People's Republic of China, and more. The National Book Foundation awarded Yamashita the 2021 Lifetime Achievement Medal for Distinguished Contribution to American Letters. 
The NBF's description of Yamashita's work reads partly, "Yamashita's deeply creative body of work has made a lasting impact on our literary landscape" and "[t]hrough adept crafting, passionate research, and timely narratives, Yamashita defines, and re-defines . . . what storytelling can do . . . she compels and challenges readers to engage with ideas, identities, and complicated worlds that mirror the complexity of life". 7 Yamashita's innovative use of mythic verism in Tropic of Orange and a secularized magical realism in I Hotel reveals her complex view of identity and reality, itself; indeed, both these novels consist of thematic clusters connecting to other clusters on notions of the past, of space-time, and of radically non-Western forms of storytelling and worldviews. The topical concerns found in Yamashita's books often assume the shape of a potent rhizome, consisting of multiple and centripetal thematic nodes. 8,9 Tropic of Orange-We Are Still Here The innovative aspects of the magical realist characters, Arcangel and Rafaela, reside in how they are depicted within dual ontological registers. Both agential characters can be read as part of the Christian angelic pantheon and, simultaneously, as part of Mesoamerican traditions. Arcangel is at least 500 hundred years old; he possesses superhuman strength (for example, he uses hooks and chains to drag a truck and its load of oranges many meters in distance) and supernatural powers (he pulls the latitudinal line of the Tropic of Cancer northward with him to the US). Arcangel's supernatural vision enables him to map a geohistory of the Americas consisting of the enormous suffering of the people over many centuries. Arcangel, 'Perhaps, I have seen more than a man may ever wish to see.' He closed his eyes for a long moment. He could see, Haitian farmers burning and slashing cane, Workers stirring molasses into white gold. Guatemalans loading trucks with Crates of bananas and corn. Chewing coca and drinking aguardiente to Dull the pain of their labor . . . All of them crowded into his memory in a single moment (Yamashita 2017b, Arcangel-"Chapter 23: To Labor: East and West Forever" in the column "Thursday: The Eternal Buzz," p. 125). 10 Arcangel in "Chapter 7: To Wake: The Marketplace" in "Monday: Summer Solstice", laments, Ah woe is the great land of Brazil, In this year also, Hernan Cortes discovered Mexico. (Yamashita 2017b, p. 45) 11 The narration constructs Arcangel as the figure of a spirit of the Americas; for several hundred years, he has been both witness to and has taken on the suffering endured by the many peoples of the land; he is also a warrior-El Gran Mojado-on behalf of native peoples, and he journeys to the US to battle SUPERNAFTA, the textual iteration of Western colonialism and imperialism. It is important to note that Arcangel is not a transcendental abstraction; he consists of muscle, sinew, sweat, blood. For instance, earlier in the book, in a scene that evokes for readers familiar with the Passion of Christ, Arcangel "clenched his fist and moved forward" (Yamashita 2017b, p. (Yamashita 2017b, "Chapter 11: To Wash: On the Tropic" on "Tuesday: Diamond Lane," p. 66) "And so Arcangel, attached to his great burden, inched his way down the street . . . women and children had run forward to cup their hands to catch the blood and sweat from his torn stigmata . . . " (Yamashita 2017b, "Chapter 11: To Wash: On the Tropic" on "Tuesday: Diamond Lane," pp. 66-67). 
Arcangel literally and figuratively takes on the suffering of the people. This is an appropriate location in the essay to unpack the term mythic verism. Warnes (2020) and Camayd-Freixas (2020), in their separate chapters in Magic Realism and Literature, unpack the complicated history of magical realism: in the period of the European Enlightenment, concepts of rationality, consciousness, and scientific objectivity came to dominate the production of knowledge, aesthetic judgement, and the realm of ethics and morality. Enlightenment ideas were thought to mean the inherent superiority and supremacy of Western man above all other creatures, including native peoples. In short, every part of the world outside of Europe was recast by Western epistemology as the Other, that is, as inherently irrational, primitive-minded, and subjugated by beliefs in savage gods and witches and their witchcraft and magic. Moreover, reality consists of the physical, observable, and tangible, whereas the magical-phenomenon that cannot be explained by scientific methods or by applications of logic-is therefore false or not real. Magical realism is a term that refers to an inter/ir/ruption of reality, which in Western thought is the norm. Bowers, in "Indigeneity and Magic Realism: From Appropriation to Resurgence", explains the ways in which Native scholars rightly argue that magical realism does not adequately describe the function of and value of myth, story, or the understanding of the natural in works by Native authors (in Warnes and Sasser 2020, pp. 49-63). Magic, for many non-Western cultures, is indelibly part of the mundane world; magic and reality are both forms of existence that are mutually constitutive and complementary. Bowers writes that for Native scholars, myth and story do not connote "no basis in reality"; instead, these words point to natural and necessary dimensions of human expressions and ways of being in the world. Bowers further notes that what is understood to be myth in the west has been replaced by the word, story, by many Native American scholars and writers, for "[s]tory is a concept understood from within Native American cultures to be at the heart of the creation and continuity of Native American cultural thought" (Bowers 2020 in Warnes and Sasser 2020, p. 57). Stated somewhat differently, within Western epistemology, myth is false, "made-up", stories of fantastical beasts and gods that existed only in the human imagination; story, on the other hand, implies a kernel of truth garnered from the daily affairs of Man. Contrastively, in the view of many Native American scholars, the term story consists of words that create reality. 12 Mythic verism, the term favored by Vizenor, consists of two equally valued concepts. Vizenor, Bowers argues, envisions "a version of [literary] realism that is holistic and inclusive of aspects of the world beyond what can be seen" (Bowers 2020 in Warnes and Sasser 2020, p. 57). Native American authors, poets, and scholars-by using mythic verism instead of magical realism-aim to undo Western epistemological colonialism. The use of mythic verism in Tropic of Orange, and the use of a secular form of magical realism in I Hotel, asks readers to engage in "crossreading", in which the reader attempts a transcultural "crossing over between 'epistemologically new worlds'", rather than tethering herself to a singular, and often essentialized, worldview (Owens 1998qtd. in Bowers 2020 in Warnes and Sasser 2020, p. 56). 
Admittedly, a transcultural reading persona is a difficult one to build in a sustainable fashion; nonetheless, reading Yamashita is a means of acquiring the skills of a crossreader. Arcangel, a.k.a., El Gran Mojado, battles SUPERNAFTA, while Rafaela battles drug and human organ traffickers, epitomized by Hernando, Dona Maria's son; Arcangel and Rafaela are doubles of each other, fighting on behalf of the dispossessed. In the penultimate battle scene between Rafaela and Hernando, Rafaela manifests as the Mesoamerican god called the Plumed or Feathered Serpent. This stirring battle is worth quoting at some length: The sound of her screams traveled south but not north. He jammed her into the leather cavern of the black Jaguar [Hernando's car]-suddenly a great yawning universe in the night . . . Her writhing twisted her body into a muscular serpentsinuous and suddenly powerful. She thrashed at him with vicious fangs-ripping his ears, gouging his neck, drawing blood . . . Her mouth gaped a torch of fire, scorching his black fur. Two tremendous beasts wailed and groaned, momentarily stunned by their transformations, yet poised for war. Battles passed as memories: massacred men and women, their bloated and twisted bodies black with blood, stacked in ruined buildings and canals; one million more decaying with smallpox; kings and revolutionaries betrayed . . . As night fell, they began their horrific dance with death, gutting and searing the tissue of their existence, copulating in rage, destroying and creating at once-the apocalyptic fulfillment of a prophecy-blood and semen commingling among shredded serpent and feline remains. (Yamashita 2017b, "Chapter 38: Nightfall Aztlán" on "Saturday: Queen of Angels" p. 189) Rafaela most certainly survives this battle with Hernando, whose trafficking business is a consequence, if not a systemic slice of the global capitalist economy, and its brutal rationale of supply and demand. Rafaela's physical injuries attest to the human aspect of the powerful Plumed Serpent. In my essay, "Karen Tei Yamashita's Tropic of Orange and Chaos Theory", I offer an extensive description of this powerful god, also known as Quetzalcoatl, who possibly dates back to "the middle of the second millennium" (Hsu 2018 in Lee 2018, p. 114). The Plumed Serpent assumed a protean quality in Mesoamerican traditions and came to mark renewal and "the fluid force of the wind" (Brundage, qtd. in Hsu 2018 in Lee 2018, p. 115). Rafaela, as the modern embodiment of the Plumed Serpent, illustrates that Mesoamerican gods are alive and working to bring more justice to indigenous peoples. This character, in its human form, is also a maternal figure (to Sol) and a counterweight to Bobby's materialistic ambitions. Her decision to leave Bobby jolts him into realizing that material possessions are poor substitutes to Sol and to Rafaela; in short, she cures him of his addiction to wealth and his addiction to the role of the model minority, which is to be enslaved to the nationalistic trope of the immigrant climbing the ladder of the American Dream. In James Martel's terms, the regime of capitalist and neoliberal interpellation-to which Bobby seemingly willingly submits-always contains gaps, that is, the distance between the promise of financial success and belonging and actual results-most of the time, persons identified as people of color are prevented from achieving as much as they are promised. 
The magical nature of Rafaela derives in addition from her links to the Archangel Raphael, who is considered a healer in three religions, Christianity, Judaism, and Islam. The figure of the Archangel Raphael in Tropic of Orange illustrates the close kinship of these three major religions and the political exigencies accumulated over the centuries that have erased the history and memory of their kinship; the followers of these three monotheistic religions have become enemies. The essential point in Tropic of Orange is that identity in terms of religion is constructed from discursive and historical conditions. 13 In short, it is the clinging to fundamentalist identity that enables the othering of those whom we perceive as different from us and that enables our rationalizing our feelings of hatred and enmity toward them. Yamashita's use of the Archangel Raphael connects this figure to the Archangel Gabriel (the character Gabriel in Tropic of Orange) and Archangel Michael (Arcangel). Crossreading Rafaela and Arcangel from within the Mesoamerican and Judaic, Islamic, and Christian traditions will bring readers closer to the way that Rafaela is a blending of these traditions and to the way that Tropic of Orange is thematically organized. I Hotel-We Are Multitudes Karen Tei Yamashita's I Hotel is a 604-page tour de force novel focusing on the Asian American Movement within the historical context of the Civil Rights, anti-Vietnam War, and counter-cultural movements of the 1950s to the 1970s. The novel consists of ten novellas, representing the years 1968 to 1977; in 1977, the multiethnic battle to save the International Hotel ended when police stormed the International Hotel building and ejected any remaining residents, mostly indigent Asian Americans. As in Tropic of Orange, Yamashita's narrative canvas consists of polycentric thematic clusters to do with identity constructs; and the common cause that exists among racial and ethnic groups; and the idea that social, political, and economic justice is an ongoing battle that must be persistently fought for. Well-known philosophers, politicians, and revolutionaries make cameo appearances in I Hotel, and the diegesis focuses predominantly on the everyday living of ordinary characters. Due to space constraints, I focus this essay on the chapter Theatre of the Double Ax from the novella, 1971: Aiiieeeee! Hotel, and specifically on the literary double or doppelgänger trope. The figure of the doppelgänger, in Theatre of the Double Ax, is not supernatural nor mythical figures from the mists of prehistory; however, the doubles in this chapter can be read as secularized magical characters, particularly when linked with the chapter Chiquita Banana! (in 1971: Aiiieeeee! Hotel). I will unpack the significance of the doppelgänger as an instantiation of secular realism. I use the term secular magic realism to mean that the doppelgängers in Theatre of the Double Ax are diegetically constructed as if they have magical powers, but only in the very narrow sense that these characters appear out of nowhere, as if out of the air; however, they are not other-worldly nor supernatural beings. Rather, the doppelgängers are Homo sapiens situated in the physical and material world represented in I Hotel-Yamashita tells a story about revolutions that rise and fall due to human actions; the divine has no part. In the section "Doppelgangsters" in Theatre of the Double Ax, Gerald K. Li encounters a white man who appears suddenly walking toward Gerald. 
This white man looks like him and is also named Gerald K. Li. Gerald then runs into another twin, also with the same name, before he encounters a third double. The nonrational, almost magical way in which the Gerald K. Li doubles show up in the story is worth exploring in detail: Looking down the road, he sees a man approaching with two black cases, one in [sic] each hand . . . "What do you mean you're Gerald K. Li?" "You don't know Gerald K. Li, the great Chinese saxophonist?" "Well, yeah, but you're not even Chinese". . . . Gerald looks hard at the white guy and thinks it could be true. This guy could be the white version of his Chinese self (Yamashita 2019, p. 266). Even though Gerald recognizes that the man could be his twin, he fights the white Gerald and takes the other man's money, one hundred dollars (later, Gerald finds only ten) earned playing saxophone as Gerald K. Li, the "great Chinese saxophonist"; Gerald K. Li also takes the two black cases presumably containing saxophones-the original Gerald is able to play a saxophone from each side of his mouth. This magical ability stands for this character's ability to blow simultaneously hot and cold, or to speak fraudulently out of both sides of his mouth. Next, near Stockton, Gerald runs into a Chinese American who confesses that he is known as Gerald K. Li. Then, Gerald arrives in Merced, decides he badly needs a drink, walks into a bar and " [T]he bartender looks at Gerald significantly, and when Gerald bothers to look back, he freezes in shock. He's staring at his twin, his spitting image, his actual doppelgänger" (Yamashita 2019, p. 271). Gerald is then surprised to be addressed by his "doppelgänger" as Jack, Jack Sung, the poet. The Merced Gerald K. Li tells Gerald/Jack that he is Joe, Jack's brother. Gerald tells Gerald K. Li/Joe Sung at one point, "'You do look like me'. Gerald is still amazed at the mirror image" (Yamashita 2019, p. 274). Gerald K. Li/Joe Sung and Gerald/Jack Sung switch places; Joe Sung, the former bartender, wants to work actively on revolution. After Gerald/Joe leaves, Gerald/Jack takes over the bar. For over two weeks, no one notices that it is Gerald and not Joe behind the bar, or perhaps, Gerald is Jack's double, making him also Joe's twin. It is the magical quality of these doubles' interjection into realist diegesis (ordinary streets, bars along Route 99, an actual storied highway) that brings these Gerald K. Lis and Jack Sungs under the category of magical realism. However, unlike the supernatural entities that appear in classical magical realist fiction, the Gerald K. Li/Jack Sungs are not angelic or supernatural. For that reason, I would use the term secular magical realism to describe this literary device in I Hotel. Eran Dorfman in Double Trouble: The Doppelgänger from Romanticism to Postmodernism writes that the dopplegänger in Western literature is frequently portrayed as a furtive or secretive figure, a malignant shadow portending disaster, or a figure with which the "real" original, the protagonist, must combat. The doppelgänger becomes in the first half of the twentieth century the container of aspects of the self that must be denied, that must be rejected and exiled outside of the self. This move sets up an existential battle between different parts of the self, which is ultimately self-destructive. Dorfman argues that, instead, "the crucial thing about the double . . . is to reconcile it with the pretentious original . . . 
the question is how to make them complement rather than contradict and fight each other" (Dorfman 2020, p. 2). To Dorfman, the rejected pieces of the self are aspects of the self's "multiple identity, namely, an identity that accepts the double as an integral part of itself . . . the double is never a singular entity . . . the double is not simply an alter ego, a similar Other with which I somehow need to cope. The double . . . incarnates the inevitable remainder and opaque element of singular subjectivity and stable self-identity, connecting them to other beings and identities. The double is what defies unicity and opens up the subject to multiplicity, since it reveals that the boundaries between I and world, I and Other, I and me, are far from being clear" (Dorfman 2020, p. 3). The dopplegängers in Theatre of the Double Ax significantly reimagine the conventional Western metaphor by taking it out of the service of Western narratives based on fear, insecurities, and denial. Yamashita, in short, has appropriated the Western dopplegänger for her own purpose in I Hotel. The first important point to note about the doubles in Theatre of the Double Ax is that Gerald does not fear his doubles; he does not see them as malignant entities: Gerald is initially merely incredulous that a white man can look like his twin; the Stockton Gerald K. Li resembles Gerald to such a degree that he does not even remark on Stockton Gerald's features; the Merced doubling is the most astounding-the Gerald K. Li/Jack/Joe Sung doubling occurs as if it is an optical illusion, a magic trick. Additionally, the doubles do not evoke the trope of the abject, alienated other nor the trope of the vengeful return of the rejected or orphaned self nor the intervention of angelic visitations in the affairs of humanity. 14,15 Gerald fights his doubles but not in mortal combat, not in desperate fear nor aversion of the othered self. The kung fu-style fight scenes consist of drawings and descriptions of the 108 moves of a t'ai qi set, but these creative and exciting descriptions are not identical to the descriptions of actual t'ai qi sets (Yamashita 2019, pp. 267-68, 270-71, 273-74, 276-77). Gerald K. Li is intertextually associated with Iron Ox of the 108 outlaws in The Water Margin. Like Iron Ox, Gerald is a big eater and likes to drink alcohol (Yamashita 2019, p. 269); he likes to get into fights; he has a double saxophone and Iron Ox wields two axes. On the other hand, even though Gerald easily gets into fights, he is a reluctant revolutionary and does not see himself as a champion of the masses. Nonetheless, Gerald is well-known in local jazz scenes as the "Chinese saxophonist". Additionally, in the narrative world of I Hotel, which seeks to reimagine Asian American culture as central to the narrative of America, Gerald is a cultural revolutionary in that he helps to undercut the racist stereotype of Chinese and Asian Americans as merely coolie labor or computer nerd; in I Hotel, Asian American and other people of color are revolutionaries, musicians, singers, architects, and protagonists. The second key point regarding Yamashita's innovative secularization of magical realism has to do with her extended allusion to The Water Margin, a novel that has been passed down through generations in the Sinophone world and that has been adapted in modern times into comics, films, and television. 
Yamashita brings The Water Margin, specifically the story of Iron Ox, into modern-day US in order to spotlight the power of the common people in resisting and forging meaningful changes in their lives. Notably, one of the distinctive features of the epic The Water Margin is that it is the first novel to be written in the vernacular (speech used by ordinary persons as opposed to the Chinese language used by intellectuals or by court officials) of the time, the Northern Sung Dynasty. In terms of the larger canvas of I Hotel, Theatre of the Double Ax points to the close similarities between the narrative world of The Water Margin and the narrative world of I Hotel of the US Civil Rights, counter-cultural, and Asian American movements, when outnumbered and relatively powerless individuals and small groups combat mendacious and corrupt society on behalf of commoners. Like much of the history of Asian Americans, The Water Margin tells the story of honorable men and women forced into exile or a marginalized life and labeled as merely bandits. Yamashita's allusion to the 108 outlaws links these heroes to Asian American cultural and political revolutionaries fighting an oppressive US socio-political and economical system on behalf of the marginalized and disenfranchised. The overall structure of I Hotel asks readers to perceive the marginalized as multiracial and multiethnic, and not only as a particular minority group. Yamashita's intertextual deployment of the outlaw, Iron Ox, foregrounds his filial piety toward his mother, the central woman in his story. This character's heroism, in I Hotel, is not narcissistic individualism modeled after an Enlightenment-based American individualism; it consists of acts of valor in the service of the disempowered and of the aged. The third key point is that Yamashita's dopplegänger depicts the vital, life-giving utility of the legacy of Asian cultural heroes in Asian America. The story arc in Theatre of the Double Ax illustrates that descendants of Asian immigrants should not feel shame or ambivalence about their cultural ancestry; instead, they can claim and integrate aspects of their past into their American subjectivity. "Where is Asian America?" is a refrain tactically embedded in the story that eventually elicits from a reader the realization that Asian America is everywhere in America, where railroads were built, where migrant workers set up camp, where urban centers took root and grew, in jazz clubs, in sit-ins, and in political demonstrations. Asian American culture as portrayed in I Hotel is dynamic and blended with non-Asian worldviews. The fourth key point about Yamashita's doppelgänger is that it helps to unpack the discursive and material nature of identity constructs: Gerald's double is the white Gerald and the Stockton Gerald and the Merced Gerald and Jack/Joe Sung. The narrative technique of doubling re-members all marginalized peoples as sharing a common historya marginalized subject is constituted by the socio-political superstructure of wherever that subject is located. A racist discourse would have the racialized subject split off parts of the self that a Euro-centric, white supremacist, normative discourse signifies as the Other. The doppelgänger figure in this chapter works against that reductive and essentialist formulation of identity. 
Further, while magical realism-the intervention of magical beings, frequently angelic in nature-focuses the core of magical realist stories on supernatural interventions; one of the essential implied theses of I Hotel is that the future of Asian Americans lies in the hands of Asian Americans (and their allies) rather than in otherworldly interventions. This reading of the doppelgänger is aligned with the interlinked narratives of the ten novellas consisting of I Hotel, that is, revolutions or substantial changes in common lives is forged in the red-hot crucible of everyday living. "Chiquita Banana!", immediately preceding "Doppelgangsters", is a parody of the masculinist worldview that dominates The Theatre of the Double Ax and its analogous text, The Water Margin. First, Chiquita Banana's two daughters (drawn as bananas)-Suzie and Anna May Wong 16 -are conjoined twins, a double of Chang and Eng, the actual conjoined twins, originally from Siam, and presumably from which the phrase Siamese twins derives. Readers in the US may recognize the name Suzie Wong and its connotative meaning in American popular culture. Anna May Wong has become better known in recent years; her image is on one of an unprecedented set of quarters rolled out in 2022 by the US Mint. Chiquita Banana tries to shoot her lover, Don Juan Samuel, who has been drugging and pimping out Anna May and Suzie. Instead, Don Juan grabs Chiquita Banana's gun and shoots her. Suddenly, out of nowhere, Suzie's and Anna May's sister, Moulan Rouge from China, magically arrives on the scene; she chops Don Juan in half with one of her two broad swords and separates the conjoined twins, Suzie and Anna May, with a blow from her other sword. The final frame of this short graphic narrative shows one of the newly separated (or released?) twins asking, "Now what?", while Moulan Rouge from China hugs her. This narrative allows for multiple layers of interpretation. Chiquita Banana is dressed to remind the reader of Carmen Miranda. Additionally, the name Chiquita stands for Chiquita International, an American company that was the largest producer and distributer of bananas and other produce. The company in late 2014 merged with two Brazilian companies. Banana (and pineapple and orange production in the Americas) has a long and convoluted colonial history that is best dealt with in a different venue. For purposes of this analysis of the doppelgänger in I Hotel, Suzie and Anna May are conjoined twins; both cope with "low self-esteem;" both have secret dreams of becoming performers, they are the children of a colonial history primarily via their mother, Chiquita. Suzie and Anna May mirror each other and they are constituted in the transnational discursive Imaginary of Asia and the Americas. Mother and daughters continue to be exploited by a neoimperialist, multinational capitalistic system. Significantly, it is Moulan Rouge who magically appears to save Suzie and Anna May; Chiquita cannot be saved. "Chiquita Banana" underscores the necessity of understanding colonialism in Asia in order to more fully grasp the history of the indigenous and Asian America in the US. The taxonomy of identity is constructed by a Euro-centric and white supremacist mythology; on occasion, lasting and meaningful transgression of US nationalistic hegemony needs outside intervention, in the form of legacy stories and cultural heroes transported from stories from beyond the confines of the US. 
The narrative world of I Hotel is not defined as an essentialist or singular cultural reality. In "War & Peace", the narrative gradually undoes or, at least, troubles essentialist and binary definitions of identity. The novella that includes this chapter is entitled 1971: Aiiieeeee! Hotel (Chan et al. 2019), a reference to the two anthologies that attempted to delimit the real (versus fake) Asian American and Chinese American and Chinese cultures. The first page of "War & Peace" consists of two drawings of Frank Chin and Maxine Hong Kingston. The captions say "Son" and "Daughter" (Yamashita 2019, p. 244); the backgrounds of the two drawings are two nondescript houses, probably representations of their childhood homes. On the next page are two drawings of a more mature or older Maxine Hong Kingston and Frank Chin; the captions say "Sister" and "Brother", meaning that they are figurative siblings. The backgrounds of both borderless drawings, separated by a gutter, are drawings of the Golden Gate bridge, one larger than the other. The next set of drawings furthers the suggestion that these two literary personae are doubles of each other (Yamashita 2019, p. 246): the drawing of a monkey is in the background of the drawing of Frank Chin; the caption says "Wittman Ah Sing", which is the name of the protagonist of Tripmaster Monkey: His Fake Book, written by Maxine Hong Kingston. Kingston, in that novel, links Chin intertextually with one of the most famous and powerful mythical characters in Chinese literature, the Monkey King. Simply stated, Chin is the contemporary and American Monkey King (Wu and Yu 1977). The caption under the drawing of Kingston is Pandora Toy, which is the name of a character in Gunga Din Highway by Chin; Pandora Toy is Chin's unforgiving satire of Kingston. Under both captions on this page is the word "Fake". Both Wittman Ah Sing and Pandora Toy are fake in the sense that they are fictional creations. "Fake" also refers to the polemic that Chin launched at Kingston, criticizing her writing as disseminating fake Chinese culture: Chin accused Kingston of pandering to her white readers and their fetish for the myths, legends, and Chinese American characters in her breakout novel, Woman Warrior. The obvious problem with this argument is that no one has the authority or ability to determine what makes an authentic culture. The two drawings on page 246 (Yamashita 2019) are drawn as if they are moving toward each other, across the gutter, and will soon merge. On the next page, readers see Kingston associated with the famous Fa Mulan, supposedly "Real" (the caption under the caption, Fa Mulan) but only in the sense that this character is the protagonist of a poem about a woman who disguises herself as a man in order to fight in an army against rebels. Frank Chin is likened to Kwan Kung, who is both the god of war and literature. Kwan Kung stands for the belief that literature can be a method of warfare, and that a warrior should be adept in both the pen and in martial arts. The last two pages of "War & Peace" are noteworthy. They show a more elderly Chin and Kingston; the captions say father and mother and father and mother in Chinese writing. The backgrounds show parts of a Chinese dragon, which is a supreme entity, god of both the sky and waters of the Earth, and the emblem of emperors. 
The captions for the two drawings on the last page say, simply, Patriarch and Matriarch, meaning that they are both, in their own way, the father and mother, the patriarch and matriarch of Asian American literature. In conclusion, Karen Tei Yamashita's deployment of mythic verism in Tropic of Orange and of a secularized magical realism in the form of the doppelgänger in I Hotel bring about radical deconstructions of the underlying structuring that enables binary and essentialist formulations of identity categories. These identity categories restrain, distort, and discipline the bodies of immigrants and of people of color under the regime of heteronormativity and Euro-centric, white supremacy. Yamashita's work seeks to foreground nonessentialist worldviews and favors multiplicity-in-identity. Her idea of community is expansive, inclusive, and always open, fluid in membership; her vision of community is that of a coming-community, or many such communities of ordinary persons. Rody writes about Yamashita and "[h]er gigantic canvasses and striking designs, as they accommodate a transnational scope and histories of global migration, become arenas for the dramatic interactions of people of multiple histories, languages memories, and tastes in food and music who tend to morph into crowds of distinct classes or ethnicities, which then converge in spectacular crowd-meets-crowd scenes" (Rody 2022, p. 99). 4 Jinqi Ling, in analyzing Yamashita's novels, offers a complex and extended analysis of the term avant garde, "I redeploy the term to show that the concept "avant-garde" is not a self-sufficient aesthetic category tied to particular historical situations or moments, but a multiaccentual configuration able to link radically experimental aesthetics to radical political critique beyond the context of its birth. 'Asian American literary avant-garde' thus implies, in my usage, an expansion of the term beyond its traditional emphasis on Euro-American formalism rooted either in modernist or, according to Robert Boyers, certain early Euro-American postmodernist (thence high modernist) literature (1991,(726)(727)(728)". (Ling 2012, p. 17). 5 Eran Dorfman writes in Double Trouble: The Doppelgänger from Romanticism to Postmodernism, "I consider the significance of the double in an age of identity politics. The double unveils the interdependency not only between me and the Other but also between different social and political groups. The personal drama of love and hate provoked by the double can serve as a model to understand the broader convolutions in which peoples and social groups are enmeshed. I consider the bilateral complex relationship that binds together Israelis and Palestinians. These two peoples are actually doubles of each other, and I use Sartre's text Anti-Semite and Jew, as well as Girard's theory of the surrogate victim, to show that rival groups need each other to define themselves, yet they refuse to admit that this is so" (Dorfman 2020, p. 8). 6 James Martel's The Misinterpellated Subject has deeply influenced my thinking while preparing this essay on Yamashita's use of magical realism in Tropic of Orange and I Hotel. Martell's thesis is that the ruling elite's interpellation, that is, the construction of the other, reveals its own lack, its inability to completely subsume the target of its call to submit to the ruling elites' disciplinary regime. Much of Yamashita's writing undercuts "global capitalism and liberal ideology" (Martel 2017, p. 4). 
Martell continues, "I see the subject as having always been anarchist, decentralized, and multiple within herself . . . it takes a phenomenon like misinterpellation to make that evident to the subject herself" (Martel 2017, p. 6). Yamashita uses magical realism (and satire and parodic laughter, and a vast number of blended genres) to unseat the notion of the unitary subject, of narrative linearity, of Western literary elements of the protagonist and antagonist, and much more. In Yamashita's writing, literature mirrors societal oppressive, disciplinary regimes at the same time as her writing reveals the dominant regime's opposite double. 7 Other awardees over the years: Toni Morrison, Maxine Hong Kingston, Ursula K. Le Guin, Don Delillo. The NBF also noted that even though Yamashita's writing may not have a wide popular following, she enjoys a committed readership among scholars and other avid, general readers. https://www.nationalbook.org/national-book-foundation-to-present-lifetime-achievementaward-to-karen-tei-yamashita/ (accessed since November 2021). 8 The image of the rhizome is commonly associated with its appearance in Deleuze and Guattari's A Thousand Plateaus (Felix and Guattari 1987); its anarchical organization fits with Yamashita's experimental and highly unconventional stories. 9 Vernon Cisney's analysis of A Thousand Plateaus, Cisney, Vernon W. "The Writer Is a Sorcerer: Literature and the Becomings of A Thousand Plateaus". Deleuze and Guattari Studies, vol. 14, no. 3, 2020Studies, vol. 14, no. 3, , pp. 457-80, https://doi.org/10.3366/dlgs.2020 A brief explanation of the dual chapter headings in the body of the text and in the citations: Tropic of Orange contains two tables of contents. "Contents" is a list that begins with "Monday: Summer Solstice" and concludes with "Sunday: Pacific Rim". Each day of the week consists of a list of chapters, from 1 to 49 in total. Readers can decide to read the novel from the first page to the final page, 230; from chapter 1 to 49. The novel has another table of contents, in a grid format. The top row of this grid runs from Monday to Sunday; the left-most column of this grid lists the names of the seven characters of Tropic of Orange: Rafaela Cortes, Bobby Ngu, Emi, Buzzworm, Manzanar Murakami, Gabriel Balboa, and Arcangel. Readers can read from column to column or from row to row-column to column will mean reading what happens with each character on Monday, then Tuesday, and so on. Reading row to row allows readers to follow a character, say, Bobby Ngu, day by day, from Monday to Sunday. In this essay, I will give the entries from "HyperContexts". The sequence of chapters used to read through Tropic of Orange does affect a reader's interpretation of the novel. 11 Yamashita reworked the litany of Western colonial invasion in Tropic of Orange into an incisive, comedic satire entitled "Manifesto Anthrobscene", published in an issue of McSweeney's, "Plundered". Artwork in "Manifesto Anthrobscene" is by Ronaldo Lopes de Oliveira. 12 In Leslie Marmon Silko's Ceremony (Silko 1977), words do not only tell a story, words can heal or bring about destruction. 
13 As this essay was being finalized, Salman Rushdie was stabbed multiple times on stage at the Chautauqua Institution; media reports and responses from other writers and literary institutions show the widespread belief that the attacker was acting on the fatwa placed on Rushdie in 1989 by the Grand Ayatollah Khomeini due to Rushdie's The Satanic Verses, deemed by clerics such as Khomeini to be blasphemous against Mohammad and, hence, Islam. 14 Dorfman, in discussing the doppelgänger in Guy de Maupassant's "The Horla", writes, " . . . the Horla . . . does not seem to represent anything ideal but rather everything that is dark and demonic" (Dorfman 2020, p. 30). On "The Shadow" by Hans Christian Andersen, Dorfman argues that "the conflict between the man and his shadow signifies a split between essence and appearance, that is, two forms of sight. The result of the split is the gradual transformation of the learned man himself into a shadow" to the utter abnegation of the original (Dorfman 2020, p. 6). In the epilogue of Double Trouble, Dorfman writes about the doublings-the feared Other, the denied Other-in the Palestinian-Jewish conflict. I am minimizing what I find to be one of the most compelling chapters of Dorfman's book when I quote, "The double is a savage, uncontrollable force, and the individual, if it wants to live and love, must admit its existence yet never completely possess it. It is by addressing myself but also others that I may come to terms with my doubles: others who preceded me and others who will come after me" (Dorfman 2020, p. 189). Yamashita's use of the doppelgänger is an attempt to embrace our multitudes of doubles, including the heroic and the shadow. 15 For example, in Gabriel Garcia Marquez's "A Very Old Man with Enormous Wings". 16 The caption of the first frame reads, " . . . by some guy named Wong" (Yamashita 2019, p. 262).
The polarised gluon density from di-jet events in DIS at a polarised HERA

We present a possible direct measurement of the polarised gluon density $\Delta G(x)$ in LO from di-jet production in polarised deep inelastic ep scattering, assuming the kinematics of the HERA collider. We show the sensitivity to the x-dependence of $\Delta G(x)$ and to the first moment $\int \Delta G(x) dx$ in the range $0.002<x<0.2$, assuming the electron and proton beams of HERA to be polarised to 70% and an integrated luminosity of at least 200 pb$^{-1}$. We include in our study hadronisation and higher order effects, as well as realistic detector smearing and acceptance. We find that the statistical and systematic uncertainties are small enough to distinguish between different parametrizations for $\Delta G(x)$, all of which are in accordance with present data. We stress that at HERA an x-range could be measured that is not accessible to any other present or proposed experiment.

Introduction

The precise study of the nucleon spin structure has evolved over the last years into a broad field that allows many aspects of QCD to be tested. The surprising EMC [1] result, that the quarks carry only a small fraction of the nucleon spin, has been confirmed by new high precision measurements. This dramatic improvement in the quality of the data and of the theoretical analysis has led to a generally accepted range of polarised parton distribution parametrizations, which imply that the quarks carry only about 30% of the nucleon spin. Nearly all models predict a substantial polarisation for both the gluons and the strange quarks, which has to be confirmed by direct experimental tests before the present standard interpretation of the data can be regarded as established. The polarised gluon distribution is of special interest since it could be surprisingly large. In the next-to-leading order (NLO) evolution equations the quark and gluon distributions mix, hence polarised gluons also contribute to g_1(x, Q^2). The quality of the present g_1 data allows QCD fits to be made and ∆G to be extracted. The precision is however rather poor, and only some information on the first moment of ∆G is obtained. Therefore the hunt for ∆G by direct measurements is one of the key issues in polarised scattering physics for the foreseeable future. The unpolarised gluon distribution has been studied at the ep collider HERA. The large centre-of-mass energy (√s = 300 GeV), resulting from 27.5 GeV electrons colliding with 820 GeV protons, allows several techniques to be used. So far the gluon distribution at HERA has been accessed via scaling violations of F_2, di-jet production, charm production and exclusive vector meson production. While the first method gives an indirect measurement of the gluon, as does the NLO analysis of g_1, for the other methods the gluon enters directly at the Born level. In this paper we will use the method of extracting the gluon via di-jet event rates. In LO two diagrams can lead to di-jet events, shown in Fig. 1: the Photon-Gluon Fusion process (PGF) and the QCD-Compton process (QCDC). The PGF process is directly sensitive to the gluon density, while the QCDC process is sensitive to the quark densities and constitutes the background. The H1 collaboration has performed an analysis of di-jets to extract the PGF contribution and thus the gluon distribution [2]. Presently both the H1 and ZEUS collaborations attempt to extract the gluon distribution at NLO from di-jet event rates [3,4].
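As a quick numerical cross-check of the quoted centre-of-mass energy, the short sketch below uses the massless, head-on beam approximation s ≈ 4 E_e E_p; this approximation is our assumption for the illustration, not a statement from the text.

```python
import math

E_e, E_p = 27.5, 820.0     # HERA electron and proton beam energies in GeV
s = 4.0 * E_e * E_p        # s ≈ 4 * E_e * E_p for head-on, massless beams
print(math.sqrt(s))        # ≈ 300 GeV centre-of-mass energy
```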
If both beams at HERA would be polarised, this method could be used to extract ∆G. Due to the Sokolov-Ternov effect the electron beam gets transversely polarised in the machine. Spin rotators can flip transverse into longitudinal polarisation, which is more useful for physics studies. The only possibility for a polarised proton beam at HERA is to start from a polarised source and accelerate and store the beam, keeping the polarisation on the way [5]. First feasibility studies indicate the possibility of such a scenario in case the accelerators get upgraded with partial and full Siberian snakes. For this report it is assumed that a polarisation of 70% can be reached for both beams, and that the luminosity will be as large as for the unpolarised case (roughly 200 to 500 pb −1 , integrated over several years) First studies on extracting ∆G from di-jet event rates were made in [6,7]. In this paper we make a full Monte Carlo simulation of the signal and background processes, include hadronisation, higher order effects via parton showers, and detector effects. Starting from three different sets of polarised gluon distributions, shown in Fig. 2, we check the sensitivity of the measurements and extract ∆G(x). These distributions are the Gehrmann-Stirling (GS) sets A and C [8], which result from a QCD analysis of g 1 data, and the instanton-gluon distribution [9]. The latter results from a calculation of the polarised parton distribution in the Instanton Liquid Model [10]. The distributions shown in Fig. 2, purposely selected, indicate how poorly ∆G(x) is constrained by the present polarised data. All of these distributions are compatible with the available data, stressing the need for direct measurements of ∆G(x). The GS-A and GS-C distribution show a similar small x behaviour, but differ considerably in the region around x ∼ 0.1. The GS-C distribution is negative for this x region. The instanton-gluon is quite different from the GS sets. It remains negative over the full x range. The latter gluon is used in combination with the GS-A quark distributions for the study in this paper. For the unpolarised parton density functions the parametrizations of Glück, Reya and Vogt in LO were used [11]. Jet cross sections and Monte Carlo programs Deep inelastic electron-proton scattering with several partons in the final state, e − (l) + p(P ) → e − (l ′ ) + remnant(p r ) + parton 1(p 1 ) + . . . + parton n(p n ) (1) proceeds via the exchange of an intermediate vector boson V = γ * , Z. Z-exchange and γ * /Z interference become only important at large Q 2 (> 1000 GeV 2 ) and are neglected in the following. We denote the momentum of the incoming proton by P , the momentum of the virtual photon, γ * , by q = l − l ′ , (minus) its absolute square by Q 2 , and use the standard scaling variables Bjorken-x x = Q 2 /(2P · q) and inelasticity y = P · q/P · l. The general structure of the unpolarised n-jet cross section in DIS is given by where the sum runs over incident partons a = q,q, g which carry a fraction x a of the proton momentum.σ a denotes the partonic cross section from which collinear initial state singularities have been factorized out (in next-to-leading order (NLO)) at a scale µ F and implicitly included in the scale dependent parton densities f a (x a , µ 2 F ). For longitudinally polarised lepton-hadron scattering, the hadronic (n-jet) cross section is obtained from Eq. (2) by replacing (σ had , f a ,σ a ) → (∆σ had , ∆f a , ∆σ a ). 
The polarised hadronic cross section is defined by ∆σ had ≡ σ had ↑↓ − σ had ↑↑ , where the left arrow in the subscript denotes the polarisation of the incoming lepton with respect to the direction of its momentum. The right arrow stands for the polarisation of the proton parallel or anti-parallel to the polarisation of the incoming lepton. The polarised parton distributions are defined by ∆f a (x a , µ 2 Here, f a↑ (f a↓ ) denotes the probability to find a parton a in the longitudinally polarised proton whose spin is aligned (anti-aligned) to the proton's spin. ∆σ a is the corresponding polarised partonic cross section. The subprocesses γ * + q → q + g, γ * +q →q + g, γ * + g → q +q contribute to the di-jet cross section (Fig. 1). The photon-gluon fusion subprocess γ * + g → q +q dominates the di-jet cross section at low Bjorken-x for unpolarised protons (see below) and allows for a direct measurement of the gluon density in the proton. The full NLO corrections for di-jet production in unpolarised lepton-hadron scattering are implemented in the ep → n − jets event generator MEPJET [12] which allows to analyse arbitrary jet definition schemes and general cuts in terms of parton 4-momenta. Recently, MEPJET has been extended to NLO for polarised scattering [13], and the NLO QCD corrections are found to be moderate. In LO the total unpolarised di-jet cross section is the sum of the contributions from photon gluon fusion processes, σ P GF di-jet , and QCD-Compton scattering, σ QCDC di-jet and can be written as: where G and q are the gluon and quark densities and A and B can be calculated in perturbative QCD. Similarly, for the polarised case we can write: with ∆G and ∆q being the polarised gluon and quark densities. The di-jet asymmetry is therefore sensitive to ∆G/G, especially at low x where the PGF cross section dominates. with A ≡ a/A and B ≡ b/B. The experimentally accessible asymmetry A meas is smaller than A di-jet due to the incomplete polarisations of the electron and proton beams, given by P e , P p , and the depolarisation of the γ * with respect to the electron. The latter effect is described by the depolarisation factor D = (y(2 − y))/(y 2 + 2(1 − y)(1 + R)), where y is the inelasticity and R is the ratio of longitudinal and transverse γ * p cross sections. The quantities N ↑↓ (N ↑↑ ) are the total number of observed di-jet events (N ↑↓ = N ↑↓ P GF + N ↑↓ QCDC ) for the case that proton and electron spin are antiparallel (parallel), respectively. The kinematic quantities to describe the PGF process are the momentum fraction of the proton carried by the gluon x g , the four-momentum transfer Q 2 and the square of the invariant mass of the two jets s ij . They are related to the Bjorken-x, x, by: For s ij > 100 GeV 2 , and in the Q 2 range relevant for HERA of 5 < Q 2 < 100 GeV 2 , x g is larger than Bjorken-x by about an order of magnitude and the accessible range at HERA is therefore about 0.002 < x g < 0.2. H1 has demonstrated that in this region the unpolarised gluon density G(x) can be extracted from di-jet cross sections [2]. The program MEPJET has been used to study di-jet production in (un)polarised DIS at (N)LO at the level of parton-jets [6,7,12]. To perform a study for a possible future measurement it is, however, desirable to include also hadronisation and detector effects. Therefore in this study we use the Monte Carlo event generator program PEPSI 6.5 [14,15]. 
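To make these relations concrete, the short Python sketch below (the function names and the illustrative numbers are ours, not taken from the analysis) evaluates the depolarisation factor D, the gluon momentum fraction x_g probed by a di-jet event, and the dilution of a parton-level asymmetry by the beam polarisations.

import math

def depolarisation_factor(y, R=0.0):
    # D = y(2 - y) / (y^2 + 2(1 - y)(1 + R)), as defined in the text above.
    return y * (2.0 - y) / (y ** 2 + 2.0 * (1.0 - y) * (1.0 + R))

def x_gluon(x_bj, s_ij, q2):
    # Gluon momentum fraction probed in photon-gluon fusion: x_g = x_Bj * (1 + s_ij / Q^2).
    return x_bj * (1.0 + s_ij / q2)

def measured_asymmetry(a_dijet, y, p_e=0.7, p_p=0.7, R=0.0):
    # A_meas = P_e * P_p * D * A_dijet: the observable asymmetry after polarisation
    # and photon-depolarisation dilution.
    return p_e * p_p * depolarisation_factor(y, R) * a_dijet

# Illustrative values only: x_Bj = 0.002, Q^2 = 10 GeV^2, s_ij = 150 GeV^2, y = 0.5.
print(x_gluon(0.002, 150.0, 10.0))        # 0.032 -- about a factor 16 above x_Bj
print(measured_asymmetry(-0.10, 0.5))     # roughly -0.029 for a -10% parton-level asymmetry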
It is a full, LO leptonnucleon scattering Monte Carlo program based on LEPTO 6.5 [16] for unpolarised and polarised interactions, including fragmentation, and unpolarised parton showers to simulate higher order effects. A comparison of PEPSI and MEPJET, used with identical conditions and cuts, shows that, at parton level, both programs give very similar results for di-jet cross sections and asymmetries for the HERA kinematic range [17]. As an example Fig. 3 shows the comparison for the di-jet asymmetry (A p (2-jets) ≡ D A di-jet ) as function of x g . The agreement between the MEPJET and PEPSI calculation is very good. The calculations were done for HERA energies, 820 GeV protons and 27.5 GeV electrons. The kinematic range was restricted to 0.3 < y < 0.8 and Q 2 < 100 GeV 2 . The minimum Q 2 was varied from 2 GeV 2 (plots on the left) to 5 GeV 2 (plots on the right). In PEPSI the so-called z-ŝ recombination scheme [18] has been used to define the phase space available for the LO matrix elements. The parameters for this scheme, z min = 0.04 andŝ min = 100 GeV 2 , were chosen such that the phase space region for di-jet events using a cone jet scheme for di-jets with p t > 5 GeV and s ij > 100 GeV 2 was not affected. z min is the minimum of z = (P · p jet )/(P · q) for the two jet momenta p jet in a di-jet event. The variableŝ is defined viaŝ ≡ (p + q) 2 , where p is the momentum of the incoming quark. For the jet detection a cone jet algorithm was used with R min = 1, R min being the minimal distance which two partons must have in order to belong to different jets. R is given by: R = (∆η) 2 + (∆φ) 2 with η being the pseudo rapidity and φ being the azimuthal angle in the laboratory frame. The two upper plots show the asymmetries for events with 100 < s ij < 400 GeV 2 , for the two lower plots s ij > 400 GeV 2 . The division into two s ij bins was made, because studies for the unpolarised di-jet cross sections [12,6] have shown that NLO corrections are expected to be small above s ij > ∼ 400 GeV 2 . In ref. [17] a study was made to further optimize the cuts in order to get a better sensitivity to ∆G. They concluded that the cuts used in this analysis are already very close to the optimum choice. Recently, some disagreement at low Q 2 has been reported between the newly, more precise, measured jet cross sections and the NLO calculations at HERA, using the cone jet algorithm [19]. This discrepancy may hint towards a 'resolved' photon component in the data, and is presently under study. We expect however that at the time the measurement described in this paper can be made, this matter will be settled and will have the effect of an additional small background to be subtracted from the di-jet event rates, in order to access the gluon distribution. Measured asymmetries We present a detailed study using the PEPSI program on the expected size of the measurable asymmetries for di-jet production at HERA. We show the influence of parton showers, which simulate higher order effects, and hadronisation and detector effects. We also show the sensitivity of the measurement to different polarised gluon distributions. The kinematic cuts applied for this study are similar to the ones discussed in the previous section, 5 < Q 2 < 100 GeV 2 and 0.3 < y < 0.85. Again two bins of s ij were analysed with 100 < s ij < 400 GeV 2 and s ij > 400 GeV 2 , respectively. 
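A minimal sketch of the jet-pair selection described above may be useful; the jet four-vector layout and the helper names are our own convention, while the thresholds are the ones quoted in the text.

import math

def delta_r(eta1, phi1, eta2, phi2):
    # Cone distance R = sqrt((delta eta)^2 + (delta phi)^2), with delta phi wrapped to [-pi, pi].
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def passes_dijet_cuts(jet1, jet2, pt_min=5.0, sij_min=100.0, r_min=1.0):
    # Each jet is a tuple (pt, eta, phi, E, px, py, pz) in GeV / radians.
    if jet1[0] < pt_min or jet2[0] < pt_min:
        return False
    if delta_r(jet1[1], jet1[2], jet2[1], jet2[2]) < r_min:
        return False  # the two jets must lie in separate cones of radius r_min
    e = jet1[3] + jet2[3]
    px = jet1[4] + jet2[4]
    py = jet1[5] + jet2[5]
    pz = jet1[6] + jet2[6]
    s_ij = e * e - px * px - py * py - pz * pz  # invariant mass squared of the jet pair
    return s_ij > sij_min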
Jets are defined using the cone scheme, are required to have p t > 5 GeV and are restricted to the acceptance of a typical existing HERA detector by |η jet | < 2.8, where η jet is the pseudo-rapidity in the laboratory system. The expected measurable asymmetry for the input polarised gluon density GS-A, assuming the beam polarisations P e = P p = 0.7 and 200 pb −1 for the luminosity, is shown in Fig. 4a at the parton level. The expected asymmetry is negative and of the order of a few %. Jets induced by parton showers tend to reduce the size of the asymmetry. This is due to the fact that parton showers can produce a hard jet, which is then misidentified as a PGF induced one. This rather small reduction at parton level is more pronounced if hadronisation and detector smearing effects are included (see Fig. 4b). The reason for this is that both effects broaden the jets, and therefore the measured p t , which is related to the energy in the cone of fixed size, is smaller than the p t of the parton jet. The reconstruction of the kinematics of the event (s ij , x g ) is influenced and the correlation with the parton jets is reduced. For the hadronisation the Lund fragmentation model, implemented in JETSET [20], was used, and an energy resolution for the hadronic calorimeter of ∆E had /E had = 0.5/√(E had [GeV]) was assumed. The results were cross checked using a realistic simulation program of the H1 calorimeter [21], which takes into account the energy resolution, the absolute energy scale and dead material in the detector.
Figure 4: Expected measured asymmetries for di-jet events as a function of x g calculated with PEPSI on the parton level (a) and detector level (b). For each case the asymmetries are shown with (PS) and without (NOPS) parton showers for two different ranges of s ij . The assumed integrated luminosity is 200 pb −1 . The input polarised gluon distribution is GS-A.
In order to optimize the signal to background ratio, cuts were introduced demanding the two jets to be produced with a restricted difference in pseudo-rapidity and back to back in azimuth, as is expected for real PGF events: |η jet1 − η jet2 | < 2 and 150° < φ jet1 − φ jet2 < 210°. After all these cuts, for 100 pb −1 about 70,000 di-jet events are selected. The ratio of QCDC to PGF events is of the order of 1:6. The average Q 2 of this event sample is very close to 20 GeV 2 , therefore results for ∆G are presented at this value. All cuts are applied in the asymmetry shown in Fig. 4b. Although the asymmetries are smaller due to the parton showers, and the statistics is reduced compared to the result at parton level, the expected asymmetry is still large enough to allow a statistically significant measurement for 200 pb −1 . These asymmetries form the basis of the studies in this paper. Due to the split-up in the s ij -bins, for the second and third x g bin there are two measurements. For simplicity we choose in the following one measurement per x g -bin, i.e. the one with the better significance. However, in principle the other points could be used as well, and add to the statistical significance. Table 1 shows the expected asymmetries A meas and their statistical errors δ(A) for the six x g -bins shown in Fig. 4:
x g       A meas     A corr     δ(A)      (5 < Q 2 < 100 GeV 2 )
0.002    -0.016     -0.016     0.008
0.006    -0.012     -0.012     0.004
0.014    -0.015     -0.018     0.005
0.034    -0.032     -0.032     0.009
0.084    -0.026     -0.047     0.018
0.207    -0.032     -0.069     0.040
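The statistical errors δ(A) in Table 1 are essentially counting errors on the two spin-sorted event samples; the small Python sketch below (the event counts are invented, chosen only to be of the same order as one x g bin of the selected sample) shows how they scale with the number of selected di-jet events.

import math

def asymmetry_and_error(n_antiparallel, n_parallel):
    # Counting asymmetry A = (N_ap - N_p) / (N_ap + N_p) and its statistical error.
    n = n_antiparallel + n_parallel
    a = (n_antiparallel - n_parallel) / n
    return a, math.sqrt((1.0 - a * a) / n)

# Illustrative only: the text quotes ~70,000 selected di-jet events per 100 pb^-1,
# spread over the x_g and s_ij bins; the split below is invented.
a, da = asymmetry_and_error(n_antiparallel=11_500, n_parallel=11_800)
print(f"A_meas = {a:.3f} +/- {da:.3f}")   # about -0.013 +/- 0.007, comparable to Table 1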
The two lowest x g -bins correspond to 100 < s ij < 400 GeV 2 . Also shown is A corr and corresponds to the first term in the sum of Eq. 5, which is the part sensitive to ∆G/G. In other words, it is the measured asymmetry corrected for the QCDC contribution. The numbers in Table 1 show that a significant contribution from QCDC processes is expected only for the two highest x g -bins. The right part of Table 1 shows results for A corr and its statistical error if the low Q 2 cut is released to 2 GeV 2 and the data are divided into to Q 2 bins. The mean Q 2 values for the two bins are 4.5 GeV 2 and 30 GeV 2 , respectively. Fig. 5 shows the expected measurable asymmetries for different sets of polarised gluon densities, i.e. the ones shown in Fig. 2: GS-A, GS-C, and the instanton-gluon. For the latter the polarised quark densities were taken from [8]. The assumed luminosity is 200 pb −1 . It can be seen here that the measurable asymmetry is very sensitive to the gluon input. The negative instanton-∆G leads to a positive asymmetry and can be clearly distinguished from the other two sets. GS-A and GS-C can be discriminated in the higher x-range, which is where they are maximally different. Extraction of ∆G In this section we will quantify the sensitivity to the shape of ∆G/G and discuss systematic uncertainties. In a real measurement one could obtain ∆G/G from the measured asymmetry by an unfolding method, where the background would be subtracted statistically and correlations between bins are fully taken into account. Such a method was used by H1 to extract the unpolarised gluon density [2]. If correlations between bins are small one can use a simpler method performing a bin-by-bin correction. For our study we consider the latter method to be sufficient. We simulate 500 pb −1 of di-jet events, as described in Sect. 3 with GS-A as input gluon density. This would in a real measurement correspond to the Monte Carlo generation of events and will therefore be called 'MC-set' here. Assuming that for each x-bin ∆G/G and A corr are GS-A GS-C Instanton-Gluon where i indicates the x-bin, we compute these factors using the MC-set. These factors F i were then multiplied with the asymmetries A corr that correspond to the three measured asymmetries in Fig. 5. The three sets of events used here represent the possible measurements and are called 'data sets'. The result is shown in Figs. 6a -6c. Within the statistical accuracy the input (solid lines) ∆G/G is found back for all cases and the statistics is sufficient to discriminate between them. The six x-points allow in particular a measurement of the shape of the polarised gluon distribution. (The statistical fluctuations for GS-A are smaller than can be expected because the data set of 200 pb −1 which was used to produce the asymmetry was also included in the MC-set used to determine the correction factors F i .) The errors shown reflect the statistics of the data sets (200 pb −1 ). The statistical uncertainty of the F i is not included here, since in a real measurement it would be computed with very high statistical precision. However, the limited statistics here is reflected in the fluctuations in Fig. 6. Figures 6d -6f shows the same result presented for the theoretically more interesting quantity x∆G. The error bars are scaled errors of the left column, i. e. no uncertainty was assigned to G at this stage. 
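The bin-by-bin correction can be sketched as follows; the helper names and all numerical values are placeholders for illustration, not the actual PEPSI output.

def correction_factors(dg_over_g_mc, a_corr_mc):
    # F_i = (Delta G / G)_i^MC / A_corr,i^MC, one factor per x_g bin, taken from the 'MC-set'.
    return [dg / a for dg, a in zip(dg_over_g_mc, a_corr_mc)]

def extract_dg_over_g(a_corr_data, factors):
    # Bin-by-bin correction: (Delta G / G)_i = F_i * A_corr,i measured on a 'data set'.
    return [f * a for f, a in zip(factors, a_corr_data)]

# Invented placeholder numbers for six x_g bins:
f_i = correction_factors(dg_over_g_mc=[-0.08, -0.06, -0.09, -0.16, -0.24, -0.35],
                         a_corr_mc=[-0.016, -0.012, -0.018, -0.032, -0.047, -0.069])
print(extract_dg_over_g([-0.015, -0.013, -0.017, -0.030, -0.050, -0.065], f_i))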
After we have shown that the measurable asymmetries are sensitive to the input of ∆G and that we are able to extract the polarised gluon densities in several x-bins, the sensitivity to the shape of ∆G/G and x∆G for an integrated luminosity of 500 pb −1 is shown in Fig. 7. The statistical errors for the x points are shown on the curves for ∆G/G (a-c) and x∆G (d-f). Again we notice the good separation between the different distributions. A study was performed on the systematic errors for the GS-A polarised gluon distribution. The error sources considered were: an uncertainty of 2% of the calibration of the hadronic energy scale, an error on the total unpolarised di-jet cross section σ di-jet of 2% and an error on the unpolarised gluon density G(x) of 5% [22]. We assume that the unpolarised quantities will be measured before with high statistical precision with the HERA high-luminosity upgrade. The uncertainty on the ratio of the polarised and unpolarised quark densities, ∆q/q, was considered to be 10%, based on present fixed target measurements of g 1 (x), and the error on the polarisation measurement was taken to be 5% for each beam. These contributions were added in quadrature and the result is displayed as a shaded band in the Fig. 7a and 7d. The largest contribution is due to the uncertainty on the beam polarisations and, for the two highest x-bins, due to the QCDC contribution. Other studies, such as the influence of the choice of the fragmentation model on the result, have also been performed, but no significant change of the results could be observed. In summary we see that for all x bins the statistical uncertainty is dominating. In Table 2 the expected statistical errors, corresponding to a luminosity of 500 pb −1 , are detailed for the measurable x-bins and for the first moment ∆Gdx in the range of 0.0015 < x < 0.32. The result we obtain is: A comparison of the accessible x-ranges for this and other proposed experiments is also shown in Fig. 8. Displayed is the expected di-jet result from polarised ep scattering at HERA (HERA 2+1 jet), the HERA-N γ+jet measurement, and the expected accuracy from a measurement of γ+jet in polarised pp collisions with the STAR detector at RHIC [24] for Q 2 = 20 GeV 2 . The di-jet measurement at a polarised HERA clearly extends into a region of x which is not accessible to any other experiment! Also shown are four different parametrizations for the polarised gluon densities. In addition to the previously used 'gluons sets A and C' of Gehrmann and Stirling also the 'gluon set B' of the same authors and the 'standard scenario' of Glück, Reya, Stratmann and Vogelsang (GRSVs) [25] (all LO) are shown. All these parametrizations are in agreement with present data. Table 2: Expected uncertainties on ∆G/G and the first moment of ∆G(x) for the measured x-bins. The upper seven rows show errors for each x-bin and the sum over the full measured range of the di-jet analysis. In the lower three rows the di-jet measurement has been combined with a possible measurement of ∆G/G from prompt photon + jet production at HERA-N . The first moments of all the Gehrmann-Stirling sets are very similar (2.6, 2.6, and 2.5 at Q 2 = 20 GeV 2 for the sets A, B, C, respectively, and 1.8 for GRSVs). The shape of ∆G(x), however, can be very different, which shows the importance of this kind of measurement with respect to e.g. extractions of the first moment of ∆G in a NLO-QCD analysis of the polarised structure The low-Q 2 cut was reduced to Q 2 > 2 GeV 2 . 
The assumed luminosity is 500 pb −1 . function g 1 . However, since these two approaches give rather complementary information it could be advantageous to combine the di-jet analysis and the NLO fits to g 1 into a common fit. A case study of such an analysis for a polarised HERA has been performed [26]. Another point to stress here is that for all four parametrizations the part of the first moment 0.32 0.0015 ∆Gdx which can be measured with the di-jet events is 60% for GS-C and about 75% for GS-A, GS-B, GRSVs of the total first moment 1 0 ∆Gdx. As an example, assuming GS-A, in this experiment we would measure 0.32 0.0015 ∆Gdx = 2.0 ± 0.21(stat.), hence a 10% uncertainty for the first moment in the measured range. To show that more information could be extracted, Fig. 9 shows the expected statistical uncertainty on ∆G/G (a,b) and x∆G (c,d) for two bins of Q 2 . A luminosity of 500 pb −1 and the GS-A polarised gluon distribution are assumed. The data were divided as given in Table 1. It shows that such analysis can provide direct information on the interesting question on the Q 2 dependence of ∆G. Conclusions We have shown in this study that an analysis of the di-jet rate at HERA allows a measurement of ∆G/G(x) in an x-range from 0.002 < x < 0.2, a region where large differences are observed between present models for the polarised gluon distribution. This x range is largely uncovered by any other proposed experiment. The precision of the measurement, both statistical and systematical, is large enough such that shape of ∆G/G could be measured and discrimination between different polarised gluon distributions would be possible. The first moment of ∆G can be determined with a precision of about 10%, in the range 0.0015 < x < 0.32. The results are complimentary to extractions of the first moment of ∆G from structure function measurements, and measurements at COMPASS [27], RHIC or HERA-N . The proposed measurement is vital for our understanding of the spin structure of the nucleon.
2014-10-01T00:00:00.000Z
1997-11-18T00:00:00.000
{ "year": 1997, "sha1": "6b9a608ceeab948917825dad4df0d819d1d5480c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3c7dbf7258f9f8915819052194a4288213205079", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
260035938
pes2o/s2orc
v3-fos-license
A Pilot Study Evaluating LV Diastolic Function with M-Mode Measurement of Mitral Valve Movement in the Parasternal Long Axis View This pilot study aimed to develop a new, reliable, and easy-to-use method for the evaluation of diastolic function through the M-mode measurement of mitral valve (MV) movement in the parasternal long axis (PSLA), similar to E-point septal separation (EPSS) used for systolic function estimation. Thirty healthy volunteers from a tertiary emergency department (ED) underwent M-mode measurements of the MV anterior leaflet in the PSLA view. EPSS, A-point septal separation (APSS), A-point opening length (APOL), and E-point opening length (EPOL) were measured in the PSLA view, along with the E and A velocities and e’ velocity in the apical four-chamber view. Correlation analyses were performed to assess the relationship between M-mode and Doppler measurements, and the measurement time was evaluated. No significant correlations were found between M-mode and Doppler measurements in the study. However, M-mode measurements exhibited high reproducibility and faster acquisition, and the EPOL value consistently exceeded the APOL value, resembling the E and A pattern. These findings suggest that visually assessing the M-mode pattern on the MV anterior leaflet in the PSLA view may be a practical approach to estimating diastolic function in the ED. Further investigations with a larger and more diverse patient population are needed to validate these findings. Introduction Point-of-care ultrasound (POCUS) is a vital tool in the emergency department (ED) for the evaluation of heart function in time-sensitive situations, particularly in patients presenting with dyspnea, chest pain, shock, or cardiac arrest [1][2][3]. Echocardiography, a non-invasive imaging technique that uses ultrasound to produce real-time images of the heart, can assess various aspects of cardiac function, including systolic and diastolic function [4,5]. The E-point septal separation (EPSS) method, which measures the distance between the ventricular septal wall and anterior leaflet of the mitral valve (MV) from the parasternal long axis (PSLA) view, is a reliable and easy-to-use method for the evaluation of systolic function that does not require specialized equipment or complex calculations, making it particularly useful in emergency patient care, where obtaining high-quality cardiac ultrasound images can be challenging [6][7][8][9][10][11][12]. Spectral Doppler echocardiography can generally be used to assess diastolic function by measuring various parameters, such as the E/A ratio (the ratio of the early diastolic velocity to the late diastolic velocity of the mitral inflow), the deceleration time, and the E/e' ratio (the ratio of the early diastolic velocity of the mitral inflow to the early diastolic velocity of the mitral annulus) in the apical four-chamber (A4C) view [13,14]. Although there are several studies on how emergency physicians can evaluate and diagnose diastolic dysfunction [15][16][17], it can still be challenging owing to difficulties in obtaining an accurate Doppler signal, particularly in patients with poor acoustic windows, arrhythmias, or breathing difficulties. Additionally, spectral Doppler measurements require careful calibration and angle correction, and errors in these adjustments can lead to significant measurement errors [18,19]. Therefore, there is a need for a diastolic function evaluation method that is as simple and reliable as the EPSS method for systolic function evaluation. 
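As a concrete illustration of the Doppler indices just described, the small Python sketch below (the function is ours; the input values approximate the mean velocities reported later in the Results) computes the two ratios from measured velocities.

def mitral_inflow_ratios(e_vel, a_vel, e_prime):
    # E/A from PW Doppler of the mitral inflow and E/e' using the tissue-Doppler annular velocity.
    # e_vel, a_vel and e_prime must share the same units (e.g. m/s).
    return {"E/A": e_vel / a_vel, "E/e'": e_vel / e_prime}

# Roughly the study means: E = 0.8 m/s, A = 0.5 m/s, septal e' = 13.4 cm/s = 0.134 m/s.
print(mitral_inflow_ratios(0.8, 0.5, 0.134))   # {'E/A': 1.6, "E/e'": ~6.0}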
The investigators observed, in some cases, that the pattern of mitral inflow velocity assessed by pulsed-wave (PW) Doppler in the A4C view, specifically the E/A pattern, was similar to the movement pattern of the MV anterior leaflet observed while measuring the EPSS with the M-mode in the PSLA view. When the E/A ratio was reversed, the M-mode movements of the MV anterior leaflet in the PSLA view were also reversed. Patients with normal diastolic function exhibited M-mode movements of the MV anterior leaflet that resembled the normal E/A pattern. The MV is located between the left atrium (LA) and left ventricle (LV). During the relaxation period, the MV opens and blood flow in the LA moves to the LV. Therefore, the amount of blood flow entering the ventricle changes according to the diastolic function of the LV, and MV movement is affected accordingly. Some studies have evaluated diastolic function by observing the movement of the MV using the M-mode in the PSLA view [20][21][22][23][24]; however, no studies have conducted a quantitative evaluation comparing it to the Doppler blood flow rate in the A4C view. Therefore, analyzing the correlation between these measurements could provide a new diastolic measurement method that simply evaluates the septal separation of the MV anterior leaflet using M-mode in the PSLA view, instead of relying on spectral Doppler evaluation, which requires accurate Doppler angle matching in the A4C view. This study aimed to develop a new, reliable, and easy-to-use method for the evaluation of diastolic function that can be implemented in ED clinical practice, similar to the EPSS used for systolic function estimation. Study Design This was a prospective observational pilot study to develop a novel diastolic function evaluation method using M-mode measurements of the distances between the anterior leaflet of the MV and the septum in the PSLA view. This study was conducted at an urban tertiary academic ED in Seoul, Republic of Korea, with more than 70,000 annual ED visits. The study was performed in accordance with the Declaration of Helsinki and approved by the Samsung Medical Center Institutional Review Board (IRB file number 2022-11-033). All participants provided written informed consent prior to inclusion in the study. Participants Between December 2022 and February 2023, we recruited 30 healthy volunteers for bedside echocardiography who met the following inclusion criteria: age of at least 18 years, no history of cardiovascular disease, and no structural or valvular heart abnormalities. Exclusion criteria consisted of specific findings on the myocardium, pericardium, or valve on echocardiography; failure to measure all indices required for the study of echocardiography; or refusal of consent. Study Protocol All the participants were asked to complete a simple demographic survey. Before performing the evaluation required for the study, an investigator performed a screening Diagnostics 2023, 13, 2412 3 of 11 test using a simple scan to determine whether there were any functional or structural abnormalities in the heart (global LV function, regional wall motion abnormality, valve dysfunction, and pericardial effusion). The participants were positioned in the supine or left lateral supine position unless they experienced discomfort. Electrocardiogram (ECG) leads were then attached to their chests. 
The study protocol involved taking M-mode measurements of the MV anterior leaflet through a PSLA view scan, followed by PW Doppler and tissue Doppler measurements on an A4C view for diastolic function evaluation. The time required for each measurement was recorded. An EM resident and faculty member with experience in performing more than 200 echocardiography assessments conducted the study. One investigator measured the research image, while the other performed pre-screening. The investigators used pre-designated cardiac presets of Venue Go with a 1-3 MHz phased array transducer (GE Healthcare, Chicago, IL, USA). M-Mode Measurements in PSLA View This study used the M-mode technique to obtain mitral valve separation measurements in the PSLA view. Specifically, EPSS refers to the separation distance between the anterior leaflet and the septum during early diastole. Additionally, the distance between the anterior leaflet of the MV and the interventricular septum during late diastole is defined as the A-point septal separation (APSS).
Furthermore, the investigators measured the vertical distance between the imaginary line where the MV is closed at systole and the tip of the anterior leaflet of the mitral valve during early diastole, which is defined as the E-point opening length (EPOL). Similarly, the distance to late diastole is defined as the A-point opening length (APOL). Using the M-mode technique, the time between the peak early diastolic point and nadir (EPSS deceleration time) was obtained ( Figure 1A). To ensure reproducibility, M-mode measurements were performed twice in the PSLA view, and the time required for M-mode measurements was recorded only during the first measurement. Spectral Doppler Measurement in A4C View PW Doppler was used to obtain mitral inflow measurements, which included the peak trans-mitral inflow velocities during early diastole (E) and late diastole (A), the E/A ratio, and the deceleration time of early diastolic flow (E deceleration time) ( Figure 1B). Tissue Doppler imaging (TDI) in the A4C view was used to obtain the septal mitral annular excursion velocity (e') in early diastole. The measurement times for the E/A ratio and the E/e' ratio were recorded, and the LV ejection fraction (EF) was measured using Simpson's method. Measures Demographic data, including vital signs, sex, and age, were collected at the time of examination. The following M-mode measurements were obtained from the PSLA view: EPSS, APSS, APOL, EPOL, and EPSS deceleration time. The following Doppler measurements were obtained in the A4C view: E velocity, A velocity, E deceleration time, e' velocity, and EF. The time required for each measurement was recorded. Data Analysis Continuous variables are reported as the mean (standard deviation, SD) or median (interquartile range, IQR), and categorical variables as a number and percentage. The mean values of the first and second M-mode measurements were used for analysis. Nonnormally distributed variables were log-transformed prior to the analysis. Spearman's and Pearson's correlation analyses were used to evaluate the correlation between the MV measurements in the PSLA view and spectral Doppler measurements in the A4C view. Wilcoxon's signed-rank test was used to assess the time differences between measurement methods, and p-values were adjusted using Bonferroni's method. The reproducibility of the primary and secondary MV measurement values was evaluated using a Bland-Altman plot and the intra-class correlation coefficient (ICC). Sample size calculation was not performed as this was a pilot study, and the relationship between M-mode measurements of the MV anterior leaflet in the PSLA view and Doppler measurements of the MV inflow velocity in the A4C view has not yet been established. Statistical significance was set at p < 0.05 for all analyses. SAS version 9.4 (SAS Institute, Inc., Cary, NC, USA) and R version 4.1.0 (Vienna, Austria; http://www.R-project.org/, accessed on 21 April 2023) were used for all statistical analyses. Outcomes The primary outcomes of this study were the correlation between the E, A, and E/A ratio measured by PW Doppler in the A4C view and the EPSS, APSS, APSS/EPSS ratio, EPOL, APOL, and EPOL/APOL ratio measured by M-mode in the PSLA view. Secondary outcomes were the comparison of the measurement times for both methods and the ICC of the primary and secondary measurement values for the MV measurements. Results Thirty healthy participants were recruited between December 2022 and February 2023. 
Of these patients, 20 (67%) were male, with a mean age of 29 years (Table S1). In the PSLA view, the EPSS (median [IQR]) and APSS (mean [SD]) distances were 2.7 (2.2-4.3) mm and 1.3 (0.3) cm, and the median APSS/EPSS ratio was 4.3 (3.2-5.6). The mean (SD) EPOL and APOL lengths were 2.6 (0.4) cm and 1.8 (0.4) cm, and the median EPOL/APOL ratio was 1.4 (1.3-1.5). In the A4C view, the mean (SD) E and A velocities were 0.8 (0.2) m/s and 0.5 (0.1) m/s, respectively, and the E/A ratio was 1.6 (0.4). The mean septal e' was 13.4 (2.4) cm/s, and the mean E/e' ratio was 6.3 (1.2) (Table 1). The E/A ratio in the A4C view was normal for all 30 participants, with no instances of reversal. Similarly, the EPOL/APOL ratio obtained through M-mode measurement of the MV anterior leaflet in the PSLA view did not demonstrate any reversal and remained within the range of 1 to 2 (Figure 2). The correlation between the APSS/EPSS ratio in M-mode measurements and the E/A ratio in Doppler measurements was moderately positive (Pearson's correlation coefficient = 0.4, p = 0.045). However, this correlation was not statistically significant when reanalyzed using a rank-based method (Spearman's correlation analysis) because of the skewed APSS/EPSS ratio data. Correlation analysis revealed only one significant correlation between M-mode measurements in the PSLA view and Doppler measurements in the A4C view: a moderate positive correlation between the APSS/EPSS ratio and the E value (Spearman's correlation coefficient = 0.4, p = 0.026). No other significant correlations were observed (Table 2). Table 2. Correlation between M-mode measurements in the PSLA view and Doppler measurements in the A4C view. The time required for M-mode measurements was significantly shorter than that for the Doppler measurements (Table 3). The ICC values for both primary and secondary M-mode measurements in the PSLA view showed a high degree of correlation, ranging from 0.8 to 1.0 (p-value < 0.001) (Table 4, Figure 3). Discussion This pilot study explored a new method for the evaluation of LV diastolic function using M-mode measurements of MV motion in the PSLA view. The study found that although there was no statistically significant correlation between the EPOL/APOL and E/A ratios, both values showed a notable trend in participants with normal diastolic function. The EPOL value was consistently higher than the APOL value and the E value was higher than the A value in all patients. In addition, the EPOL/APOL ratio had a more limited distribution range, from 1 to 2. The M-mode measurements were also quick and highly reproducible compared with the Doppler measurements. These findings suggest that visually assessing the M-mode pattern on the MV anterior leaflet in the PSLA view may be a practical approach to estimating diastolic function in the ED, similar to the EPSS method used for systolic function estimation. The accurate evaluation and diagnosis of diastolic function are crucial in the ED because of its association with a wide range of heart diseases and clinical symptoms. Diastolic dysfunction can be observed in conditions such as hypertension, coronary artery disease, and diabetes and can also serve as a predictor of mortality and morbidity [25,26]. Importantly, diastolic dysfunction can occur even in patients with normal systolic function, emphasizing the need to evaluate both aspects of cardiac function.
Echocardiography, a rapid and safe diagnostic tool, plays a critical role in assessing left ventricular relaxation function and can be readily utilized in clinical practice for the accurate evaluation of diastolic function [15][16][17]. The assessment of diastolic function commonly employs PW Doppler to evaluate mitral inflow velocities. The E and A velocities represent early and late diastolic filling and are influenced by factors such as preload, LV relaxation and compliance, and LA contractile function [13,14,27]. In healthy individuals, the E/A ratio is typically greater than 1 (e.g., 8:2 or 7:3). As relaxation function declines, the E/A ratio decreases below 1, although reduced LV compliance can lead to an E/A ratio greater than 1. Age, arrhythmia, LV capacity, and electric recoil also affect the E/A ratio [28,29]. Dilated cardiomyopathy or coronary artery disease with normal EF has a weak correlation with ventricular filling pressure.
Therefore, the E/e' ratio, obtained by measuring e' using tissue Doppler, is a reliable index for the evaluation of diastolic function as it reflects the left ventricular relaxation rate accurately [30]. However, accurately obtaining Doppler measurements in the ED is challenging. Compared with the apical view, the PSLA view offers several advantages. It provides superior visualization of the LV, mitral and aortic valves, and LA, resulting in higherresolution images owing to the perpendicular imaging plane. In contrast, the oblique angle required in the apical view makes it more challenging to obtain clear images and precise measurements; thus, emergency physicians prefer the PSLA view to other views when performing echocardiography [26]. EPSS measurements in the PSLA view involve straightforward distance measurements between the anterior leaflet of the MV and interventricular septum, ensuring high reproducibility [6,10,11]. Similarly, APSS, EPOL, and APOL measurements rely on distance measurements using the anterior leaflet movement of the MV in early and late diastole, leading to accurate measurements and high-quality images. In this study, it was found that the APSS, EPOL, and APOL measurements were also reproducible; the ICC values were between 0.8 and 1.0 (p-value < 0.001), and the time required for M-mode measurements was significantly less (p < 0.001). Therefore, in a time-sensitive ED setting where patient cooperation may be limited, the convenience of M-mode measurements in the PSLA view makes it a valuable method for the evaluation of diastolic function. In this study, no significant correlations were found between most M-mode and Doppler measurements, including the EPOL/APOL and E/A ratios, making it challenging to establish a clear index linking M-mode measurements to E/A values. However, a significant correlation was observed between the APSS/EPSS ratio and E value (Spearman's correlation coefficient = 0.4). The study included 30 healthy participants with normal cardiac function, a small median EPSS value of 2.7 mm, a median APSS/EPSS ratio of 4.3, and a mean E-value of 0.8 m/s. These findings suggest that in individuals with normal diastolic function, the LV inflow velocity (E) increases during early diastole through active volume suction from the LA, resulting in a decrease in EPSS and an increase in EPOL. Consequently, LV filling during late diastole (A) may be reduced, leading to a decrease in A and an increase in APSS, ultimately increasing the APSS/EPSS ratio. Therefore, this new diastolic function measurement method using the M-mode in the PSLA view provides a better understanding of relaxation function in individuals with normal diastolic function. However, the clinical application of this correlation to quantitatively assess diastolic dysfunction may have limitations. The evaluation of diastolic function using the MV anterior leaflet has inherent limitations, offering only a partial assessment and disregarding factors such as MV stenosis or regurgitation. It may not be applicable to patients with abnormal LV systolic function or significant mitral valve pathology, necessitating a comprehensive evaluation with multiple parameters. The objective of this pilot study was to develop a reliable and user-friendly method that does not require spectral Doppler. 
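For readers who want to reproduce this type of analysis, the correlation and paired measurement-time comparisons described above map onto standard SciPy calls; the arrays below are placeholders for illustration, not the study data.

import numpy as np
from scipy import stats

# Placeholder arrays (one entry per participant); these are NOT the study data.
apss_epss = np.array([4.1, 3.5, 5.2, 4.8, 3.9, 4.4])
e_vel     = np.array([0.82, 0.75, 0.88, 0.79, 0.80, 0.85])   # m/s
t_mmode   = np.array([25.0, 30.0, 28.0, 26.0, 31.0, 27.0])   # seconds
t_doppler = np.array([61.0, 70.0, 66.0, 58.0, 73.0, 64.0])   # seconds

rho, p_rho = stats.spearmanr(apss_epss, e_vel)    # rank-based correlation
r, p_r     = stats.pearsonr(apss_epss, e_vel)     # linear correlation
w, p_w     = stats.wilcoxon(t_mmode, t_doppler)   # paired comparison of measurement times

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Pearson  r   = {r:.2f} (p = {p_r:.3f})")
print(f"Wilcoxon p   = {p_w:.3f}")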
While the M-mode approach may not be directly incorporated into routine cardiac POCUS protocols, it shows potential as an auxiliary tool to estimate diastolic function in the ED, where challenges with image quality and complex calculations exist. Limitations Our study had several limitations. First, this was a pilot study conducted in a small number of healthy volunteers with normal cardiac function. We did not perform sample size calculation because the relationship between M-mode measurements of the MV anterior leaflet in the PSLA view and Doppler measurements of the MV inflow velocity in the A4C view has not yet been established. Therefore, generalizing these findings to a wider range of cardiac functions and the general population is challenging. To overcome this limitation, a follow-up study involving a larger and more diverse population is necessary to determine whether similar EPOL and APOL patterns can be applied to patients with reduced systolic and diastolic function due to underlying cardiopulmonary diseases, for the evaluation of diastolic function. Second, the evaluation of diastolic function through echocardiography typically includes additional measurements, such as pulmonary vein flow Doppler, the LA volume index, and systolic pulmonary arterial pressure using the maximal tricuspid regurgitation velocity. These comprehensive evaluations help to determine the stage of diastolic dysfunction and the presence of increased LV filling pressure. However, in our study, we focused only on the mitral inflow velocity and M-mode measurements from the PSLA perspective, which provided a simplified assessment. While this approach has its advantages, it should be noted that it does not encompass the entire spectrum of diastolic function evaluation. Lastly, in our study, echocardiography was performed by two emergency physicians, and we did not assess the image quality or interrater reliability. Furthermore, the median EPSS value was very small, 2.7 mm, raising the possibility of measurement error. These factors could have influenced the accuracy and consistency of the results. Further studies should include assessments of image quality and interrater reliability to enhance the reliability and reproducibility of the diastolic function evaluation method. Although our study provides valuable insights into the evaluation of diastolic function using the MV anterior leaflet in the PSLA view, it is important to acknowledge these limitations. Future studies should address these limitations to validate and refine the proposed method. Conclusions Although the study aimed to establish correlations between M-mode and spectral Doppler measurements for LV diastolic function assessment, no statistically significant correlations were observed. However, M-mode measurements were quick and reproducible. Additionally, the EPOL value consistently exceeded the APOL value in all patients with normal diastolic function, similar to the E/A pattern. This pilot study suggests that visually assessing the pattern by utilizing M-mode on the MV anterior leaflet in the PSLA view may be a practical approach to estimating overall diastolic function in the ED. Further studies with larger sample sizes are necessary to validate these findings, particularly in patients with diastolic dysfunction. Informed Consent Statement: All participants provided written informed consent prior to inclusion in the study. 
Data Availability Statement: The datasets used and/or analyzed in the current study are available from the corresponding author upon reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
2023-07-22T15:11:48.316Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "8709e8af78edaa3b602bb69d33007f0dbf9473e0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/13/14/2412/pdf?version=1689762757", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a58109986edd9be2e05287d7714f4e72666f175", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
119650728
pes2o/s2orc
v3-fos-license
Modular Lattice for $C_{o}$-Operators We study modularity of the lattice Lat $(T)$ of closed invariant subspaces for a $C_0$-operator $T$ and find a condition such that Lat $(T)$ is a modular. Furthermore, we provide a quasiaffinity preserving modularity. then Lat(T ) is called modular. We study Lat(T ) where T is a C 0 -operator which were first studied in detail by B.Sz.-Nagy and C. Foias [4]. In this paper D denotes the open unit disk in the complex plane. This paper is organized as follows. Section 1 contains preliminaries about operators of class C 0 and the Jordan model of C 0 -operators. For operators T 1 ∈ L(H 1 ) and T 2 ∈ L(H 2 ), if X ∈ {A ∈ L(H) : AT 1 = T 2 A}, then we define a function X * : Lat(T 1 ) → Lat(T 2 ) as following: 1. C 0 -Operators Relative to D 1.1. A Functional Calculus. It is well-known that for every linear operator A on a finite dimensional vector space V over the field F , there is a minimal polynomial for A which is the (unique) monic generator of the ideal of polynomials over F which annihilate A. If the dimension of F is not finite, then generally there is no such a polynomial. However, to provide a function similar to a minimal polynomial, B. Sz.-Nagy and C. Foias focused on a contraction T ∈ L(H) which is called to be completely nonunitary, i.e. there is no invariant subspace M for T such that the restriction T |M of T to the space M is a unitary operator. Let H be a subspace of a Hilbert space K and P H be the orthogonal projection from K onto H. We recall that if A ∈ L(K), and T ∈ L(H), then A is said to be a dilation of T provided that for n = 1, 2, ..., If A is an isometry (unitary operator) then A will be called an isometric (unitary) dilation of T . An isometric (unitary) dilation A of T is said to be minimal if no restriction of A to an invariant subspace is an isometric (unitary) dilation of T . B. Sz.-Nagy proved the following interesting result: Every contraction has a unitary dilation. Let T ∈ L(H) be a completely nonunitary contraction with minimal unitary dilation U ∈ L(K). For every polynomial p(z) = n j=0 a j z j we have (1.2) p(T ) = P H p(U )|H, and so this formula suggests that the functional calculus p → p(T ) might be extended to more general functions p. Since the mapping p → p(T ) is a homomorphism from the algebra of polynomials to the algebra of operators, we will extend it to a mapping which is also a homomorphism from an algebra to the algebra of operators. By Spectral Theorem, since U ∈ L(H) is a normal operator, there is a unique spectral measure E on the Borel subsets of the spectrum of U denoted as usual by σ(U ) such that Since the spectral measure E of U is absolutely continuous with respect to Lebesgue measure on ∂D, for g ∈ L ∞ (σ(U ), E), g(U ) can be defined as follows: It is clear that if g is a polynomial, then this definition agrees with the preceding one. Since the spectral measure of U is absolutely continuous with respect to Lebesgue measure on ∂D, the expression g(U ) makes sense for every g ∈ L ∞ = L ∞ (∂D). We generalize formula (1.2), and so for g ∈ L ∞ , define g(T ) by While the mapping g → g(T ) is obviously linear, it is not generally multiplicative, i.e. it is not a homomorphism. Evidently it is convenient to find a subalgebra in L ∞ on which the functional calculus is multiplicative. Recall that H ∞ is the Banach space of all (complex-valued) bounded analytic functions on the open unit disk D with supremum norm [4]. 
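For reference, recall the standard lattice-theoretic definition being used here: with ∨ denoting closed linear span and ∧ denoting intersection, Lat(T) is called modular if
M ∨ (N ∧ P) = (M ∨ N) ∧ P whenever M, N, P ∈ Lat(T) and M ⊆ P.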
It turns out that H ∞ is the unique maximal algebra making the map a homomorphism between algebras. We know that H ∞ can be regarded as a subalgebra of L ∞ (∂D) [1]. We note that the functional calculus with H ∞ functions can be defined in terms of independent of the minimal unitary dilation. Indeed, if u(z) = ∞ n=0 a n z n is in H ∞ , then ∞ n=0 a n r n T n , where the limit exists in the strong operator topology. B. Sz.-Nagy and C. Foias introduced this important functional calculus for completely nonunitary contractions. Proposition 1.2. Let T ∈ L(H) be a completely nonunitary contraction. Then there is a unique algebra representation Φ T from H ∞ into L(H) such that : We simply denote by u(T ) the operator Φ T (u). B.Sz.-Nagy and C. Foias [4] defined the class C 0 relative to the open unit disk D consisting of completely nonunitary contractions T on H such that the kernel of Φ T is not trivial. If T ∈ L(H) is an operator of class C 0 , then ker Φ T = {u ∈ H ∞ : u(T ) = 0} is a weak * -closed ideal of H ∞ , and hence there is an inner function generating ker Φ T . The minimal function m T of an operator of class C 0 is the generator of ker Φ T , and it seems as a substitute for the minimal polynomial. Also, m T is uniquely determined up to a constant scalar factor of absolute value one [1]. The theory of class C 0 relative to the open unit disk has been developed by B.Sz.-Nagy, C. Foias ( [4]) and H. Bercovici ([1]). Jordan Operator. We know that every n × n matrix over an algebraically closed field F is similar to a unique Jordan canonical form. To extend that theory to the C 0 operator T ∈ L(H), B.Sz.-Nagy and C. Foias [4] introduced a weaker notion of equivalence. They defined a quasiaffine transform of T which is bounded operator T ′ defined on a Hilbert space H ′ such that there exists an injective operator Instead of similarity, they introduced quasisimilarity of two operators, namely, T and T ′ are quasisimilar, denoted by Given an inner function θ ∈ H ∞ , the Jordan block S(θ) is the operator acting on H(θ) = H 2 ⊖ θH 2 , which means the orthogonal complement of θH 2 in the Hardy space H 2 , as follows : where S ∈ L(H 2 ) is the unilateral shift operator defined by (Sf )(z) = zf (z) and P H(θ) ∈ L(H 2 ) denotes the orthogonal projection of H 2 onto H(θ). For every inner function θ in H ∞ , the operator S(θ) is of class C 0 and its minimal function is θ. Let θ and θ ′ be two inner functions in H ∞ . We say that θ divides θ ′ (or θ|θ ′ ) if θ ′ can be written as θ ′ = θ·φ for some φ ∈ H ∞ . It is clear that φ ∈ H ∞ is also inner. We will use the notation θ ≡ θ ′ if θ|θ ′ and θ ′ |θ. Let γ be a cardinal number and Θ = {θ α ∈ H ∞ : α < γ} be a family of inner functions. Then Θ is called a model function if θ α |θ β whenever card(β) ≤card(α) < γ. The Jordan operator S(Θ) determined by the model function Θ is the C 0 operator defined as We From Theorem 1.6 and Theorem 1.7, we can conclude that " ≺ " is an equivalence relation on the set of C 0 -operators. We will think about Lat(T ) for a C 0 -operator T . Let T 1 and T 2 be operators in L(H). Suppose that X ∈ {A ∈ L(H) : AT 1 = T 2 A}. If M is in Lat(T 1 ), then (XM ) − is in Lat(T 2 ). By using these facts, we define a function X * from Lat(T 1 ) to Lat(T 2 ) as following : The operator X is said to be a (T 1 , T 2 )-lattice-isomorphism if X * is a bijection of Lat(T 1 ) onto Lat(T 2 ). We will use the name lattice-isomorphism instead of (T 1 , T 2 )-lattice-isomorphism if no confusion may arise. 
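A concrete example (ours, not taken from the paper) may help fix ideas. Every invariant subspace of S(θ) has the form φH 2 ⊖ θH 2 for some inner divisor φ of θ (this fact is used again in Section 3 below), and the inner divisors of θ(z) = z^n are, up to unimodular constants, the powers z^k with 0 ≤ k ≤ n. Hence
Lat(S(z^n)) = { z^k H 2 ⊖ z^n H 2 : k = 0, 1, . . . , n },
a finite chain {0} = z^n H 2 ⊖ z^n H 2 ⊂ z^{n−1} H 2 ⊖ z^n H 2 ⊂ · · · ⊂ H(z^n), where H(z^n) consists of the polynomials of degree less than n. A totally ordered lattice is in particular modular.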
Thus, if $T$ has property (P), then $H$ is separable and $T^*$ also has property (P).

Proposition 2.6. Assume that $T_1 \in L(H_1)$ and $T_2 \in L(H_2)$ are two operators, and $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$. The mapping $X_*$ is onto $\mathrm{Lat}(T_2)$ if and only if $(X^*)_*$ is one-to-one on $\mathrm{Lat}(T_2^*)$.

Corollary 2.7. Assume that $T_1 \in L(H_1)$ and $T_2 \in L(H_2)$ are two operators, and $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$. The mapping $X_*$ is one-to-one on $\mathrm{Lat}(T_1)$ if and only if $(X^*)_*$ is onto $\mathrm{Lat}(T_1^*)$.

Proof. Since $XT_1 = T_2 X$, we have $T_1^* X^* = X^* T_2^*$. By Proposition 2.6, $(X^*)_*$ is onto $\mathrm{Lat}(T_1^*)$ if and only if $(X^{**})_* = X_*$ is one-to-one on $\mathrm{Lat}(T_1)$.

From Proposition 2.6 and Corollary 2.7, we obtain the following result.

Corollary 2.8. If $T_1 \in L(H_1)$ and $T_2 \in L(H_2)$ are two operators, and $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$, then $X$ is a lattice-isomorphism if and only if $X^*$ is a lattice-isomorphism.

Proposition 2.9. [1] (Proposition 7.1.21) Assume that $T_1 \in L(H_1)$ and $T_2 \in L(H_2)$ are two quasisimilar operators of class $C_0$, and $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$ is an injection. If $T_1$ has property (P), then $X$ is a lattice-isomorphism.

Recall that if $T$ is an operator on a Hilbert space, then $\ker T = (\operatorname{ran} T^*)^\perp$ and $\ker T^* = (\operatorname{ran} T)^\perp$.

Corollary 2.10. Assume that $T_1 \in L(H_1)$ and $T_2 \in L(H_2)$ are two quasisimilar operators of class $C_0$, and $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$ has dense range. If $T_2$ has property (P), then $X$ is a lattice-isomorphism.

Corollary 2.11. Suppose that $T_i \in L(H_i)$ ($i = 1, 2$) is a $C_0$-operator and $T_1$ has property (P). If $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$ and $X$ is an injection, then $X$ is a lattice-isomorphism.

Proof. Let $Y \in L(H_1, (XH_1)^-)$ be defined by $Yh = Xh$ for $h \in H_1$. Since $X$ is an injection, so is $Y$. Clearly, $Y$ has dense range. Note that $(XH_1)^-$ is invariant for $T_2$. By definition of $Y$, $YT_1 = (T_2|(XH_1)^-)Y$. It follows that $T_1 \prec T_2|(XH_1)^-$ and so $T_1 \sim T_2|(XH_1)^-$. By Proposition 2.9, it is proven.

Corollary 2.12. Suppose that $T_i \in L(H_i)$ ($i = 1, 2$) is a $C_0$-operator and $T_2$ has property (P). If $X \in \{A \in L(H_1, H_2) : AT_1 = T_2 A\}$ and $X$ has dense range, then $X$ is a lattice-isomorphism.

Proof. By assumption, $X^* T_2^* = T_1^* X^*$. Since $T_2$ has property (P), by Proposition 2.4, so does $T_2^*$. Because $X$ has dense range, $X^* : H_2 \to H_1$ is an injection. By Corollary 2.11, $X^*$ is a lattice-isomorphism. From Corollary 2.8, $X$ is also a lattice-isomorphism.

Suppose that $Y \in \{B \in L(H_1, H_2) : BT_1 = T_2 B\}$. If $Y$ is invertible, that is, if $T_1$ and $T_2$ are similar, and $\mathrm{Lat}(T_1)$ is modular, then clearly $\mathrm{Lat}(T_2)$ is also modular. In this section, we consider the case when $T_1$ and $T_2$ are quasisimilar instead of similar, and find an assumption in Theorem 2.14 such that $\mathrm{Lat}(T_2)$ is modular whenever $\mathrm{Lat}(T_1)$ is modular. Then for any

Proof. Assume that $N_i \in \mathrm{Lat}(T_2)$ and $M_i = Y^{-1}(N_i)$ for $i = 1, 2$. Then by assumption, we obtain

Theorem 2.14. Let $T_1 \in L(H_1)$ be a quasiaffine transform of $T_2 \in L(H_2)$ and let $Y \in \{B \in L(H_1, H_2) : BT_1 = T_2 B\}$ be a quasiaffinity. If $Y_* : \mathrm{Lat}(T_1) \to \mathrm{Lat}(T_2)$ is onto and $\mathrm{Lat}(T_1)$ is modular, then $\mathrm{Lat}(T_2)$ is also modular.

Proof. Suppose that $\mathrm{Lat}(T_2)$ is not modular. Then there are invariant subspaces $N_i$ ($i = 1, 2, 3$) for $T_2$ such that

Thus $M_i$ is a closed invariant subspace for $T_1$. Condition (2.7) implies that

Since $Y_*$ is onto, there is a function $\varphi : \mathrm{Lat}(T_2) \to \mathrm{Lat}(T_1)$ such that $Y_* \circ \varphi$ is the identity mapping on $\mathrm{Lat}(T_2)$.
Hence for $i = 1, 2, 3$,

It follows that for $i = 1, 2, 3$,

Since $Y_* \circ \varphi$ is the identity mapping on $\mathrm{Lat}(T_2)$, (2.10) implies that for $i = 1, 2, 3$,

By (2.9) and (2.11), we get

and, by the same way as above, we obtain (2.14). By equations (2.12) and (2.14), we obtain

and from equations (2.13) and (2.15), we can conclude that

Therefore $\mathrm{Lat}(T_1)$ is not modular.

Modular Lattice for $C_0$-Operators with Property (P)

We provide some operators, say $T$, of class $C_0$ such that $\mathrm{Lat}(T)$ is modular. Let $\theta$ be a nonconstant inner function in $H^\infty$. Then every invariant subspace $M$ of $S(\theta)$ has the form $\varphi H^2 \ominus \theta H^2$ for some inner divisor $\varphi$ of $\theta$. In this section, we will consider a sufficient condition for $\mathrm{Lat}(T)$ of a $C_0$-operator $T$ to be modular. Let $H$ and $K$ be Hilbert spaces and let $H \oplus K$ denote the algebraic direct sum. Recall that $H \oplus K$ is also a Hilbert space with the inner product $\langle (h_1, k_1), (h_2, k_2) \rangle = \langle h_1, h_2 \rangle + \langle k_1, k_2 \rangle$.

Theorem 3.5. Let $T \in L(H)$ be an operator of class $C_0$ with property (P). Then $\mathrm{Lat}(T)$ is a modular lattice.

Thus if $T$ has property (P), then by (3.8) and (3.13), we obtain that
Prioritising patients for bariatric surgery: building public preferences from a discrete choice experiment into public policy

Objectives To derive priority weights for access to bariatric surgery for obese adults, from the perspective of the public. Setting Australian public hospital system. Participants Adults (N=1994), reflecting the age and gender distribution of Queensland and South Australia. Primary and secondary outcome measures A discrete choice experiment in which respondents indicated which of two individuals with different characteristics should be prioritised for surgery in repeated hypothetical choices. Potential surgery recipients were described by seven key characteristics or attributes: body mass index (BMI), presence of comorbid conditions, age, family history, commitment to lifestyle change, time on the surgical wait list and chance of maintaining weight loss following surgery. A multinomial logit model was used to evaluate preferences and derive priority weights (primary analysis), with a latent class model used to explore respondent characteristics that were associated with variation in preference across the sample (see online supplementary analysis). Results A preference was observed to prioritise individuals who demonstrated a strong commitment to maintaining a healthy lifestyle as well as individuals categorised with very severe (BMI≥50 kg/m²) or (to a lesser extent) severe (BMI≥40 kg/m²) obesity, those who already have obesity-related comorbidity, with a family history of obesity, with a greater chance of maintaining weight loss or who had spent a longer time on the wait list. Lifestyle commitment was considered to be more than twice as important as any other criterion. There was little tendency to prioritise according to the age of the recipient. Respondent preferences were dependent on their BMI, previous experience with weight management surgery, current health state and education level. Conclusions This study extends our understanding of the public's preferences for priority setting to the context of bariatric surgery, and derives priority weights that could be used to assist bodies responsible for commissioning bariatric services.
Strengths and limitations of this study
▪ This study uses a robust methodology grounded in welfare choice theories to derive weights that could be used to prioritise patients for bariatric surgery, from the perspective of the general public.
▪ This study represents the preferences of a large sample of adults, representative of the general population in Australia by age and gender.
▪ The sample was recruited from a research panel in Australia, which may limit generalisability of the findings.

INTRODUCTION
Obesity is a substantial public health problem with increasing prevalence in most countries. Bariatric surgery is recognised as a cost-effective intervention for the management of adult obesity, leading to sustained weight loss and remission from obesity-related conditions (most notably, type II diabetes mellitus). [1][2][3][4][5] Guidelines recommend bariatric surgery be considered after non-surgical interventions have failed for those with a body mass index (BMI) greater than 40 kg/m², or greater than 35 kg/m² with comorbid conditions. 4 6 Waiting lists for bariatric surgery are growing fast and outstripping the availability of the procedure in many high-income countries. The capacity of health systems, especially publicly funded systems, to expand service provision is limited in terms of budgetary allocation and the required medical expertise. In this limited resource setting, criteria are inevitably required to prioritise access. There is also increasing evidence that the distribution of bariatric surgery is not associated with need. For example, in Australia, access is extremely limited in the public hospital system, with only 7% of bariatric surgeries performed in public hospitals. 7 Perversely but perhaps unsurprisingly, the lowest rates of access are reported in lower socioeconomic groups who have the highest obesity prevalence and would be likely to benefit most. 8 Provision of bariatric surgery varies across the six Australian States 3 7 and anecdotally has resulted in a lack of access in some areas, with long waiting lists where access is limited. Given the socioeconomic inequality, and its demonstrated cost-effectiveness for the treatment of obesity, there is pressure to expand the provision of bariatric surgery services in public hospitals. 3 Public opinion is widely acknowledged as an important consideration in priority setting, [9][10][11][12] and several models of public participation are available to guide engagement approaches. [13][14][15][16][17] A clear consensus on how public opinion should be incorporated in healthcare decision-making and the impact of its inclusion is lacking. 10 18-20 Nevertheless, normative ethical (eg, procedural justice), 21 economic 22-24 and political (eg, deliberative democracy) 25 arguments provide strong support for the consideration of public preferences alongside other clinical and economic evidence when developing prioritisation criteria. Rationales for public engagement in priority-setting include promoting public confidence in the health system, increasing the transparency, accountability and legitimacy of rationing decisions and improving the responsiveness of the health system.
9 21 26 27 Moreover, considering public preferences is likely to be of particular importance when policy decisions allocate priorities across population groups or incorporate social value judgements, as may be anticipated in the case of obesity. 25 28-30 In the context of priority-setting for public resource allocation, it has been argued that it is the preferences of the general public rather than any subgroup who benefit that should be considered. The public as a whole fund the health system through taxation and pay any opportunity cost associated with funding a particular intervention. Moreover, using the 'average' preference of a representative sample of the general public avoids any self-interest that might be associated with decision-making. 24 31 Consequently, this preference study aimed to assess the relative importance of potential criteria and tradeoffs the public would make when prioritising access to bariatric surgery for obese adults in Australia, and to use these preferences to develop 'priority weights' that could be assigned to criteria to prioritise access to bariatric surgery for adults. METHODS This paper presents a substudy of a larger project aiming to investigate methods for engaging the public in healthcare decision-making. 32 33 A discrete choice experiment (DCE) was used to measure preferences and derive importance for different criteria that might be used to prioritise bariatric surgery for individuals. The DCE is a stated preference method that has gained popularity as an approach to eliciting preferences in health, [34][35][36][37] including for setting priorities. 38 In the context of priority setting, it allows the derivation of 'priority weights' for different criteria on a common interval scale, and quantification of the trade-offs people would be willing to make between different criteria. DCE survey instrument The DCE was undertaken according to best practice guidelines. 39 In the DCE, respondents were asked to make 19 (18+1 repeat choice; explained below) hypothetical choices between two different patients who would both benefit from surgical management for their obesity. Potential surgery recipients were described according to seven different characteristics or attributes which were chosen to represent possible prioritisation criteria (table 1). These attributes and the description of their levels were developed using a two stage process. First, a literature review was undertaken to indicate generic criteria of potential importance to the public in priority setting. 38 Second, the initial generic criteria were refined in consultation with research partners and an expert focus group to include potential conditionspecific criteria to prioritise obese patients for bariatric surgery. The levels of the attributes varied between the hypothetical patients in the choice sets according to a systematic D p -efficient design, utilising prior coefficient values obtained from a pilot study. This approach maximised the statistical efficiency of the design while ensuring that all main effects and selected two-way interaction effects could be estimated independently. 40 The final design consisted of 162 different choice sets (example choice set in figure 1), which were divided into 9 blocks of 18 choice sets. A 10th D-efficient block of 18 choice sets was also used to allow comparison of the data to other samples who completed this block only, for purposes related to the wider project which are beyond the scope of the current paper. 
32 Thus, there were 10 survey versions, each consisting of 18 different choice sets. One choice set was reversed and repeated as a 19th choice set in each version as an indicator for internal choice consistency; responses to the 19th repeat choice set were excluded from the DCE analysis (as this was a duplicate choice set and not part of the experimental design). Respondents were randomised to one of the 10 survey versions. Extensive pilot testing was undertaken to confirm the face validity of the instrument, prior to main data collection. This involved face-to-face completion of the survey by an adult convenience sample (n=20), with qualitative exploration of understanding of the instrument along with estimation of a preliminary choice model. The final survey (see online supplementary material) presented some background information on obesity, an explanation of the choice task, followed by the 19 choice sets. It also collected information about the respondent's sociodemographic characteristics, health including their current health state (AQoL-8D 41 ), and self-reported height and weight (which was used to estimate respondent BMI).

Sample
The DCE was administered between November 2013 and February 2014 as part of an online survey to a target sample of 2000 adults residing in Queensland and South Australia, recruited from an online survey panel. Quotas were used to ensure the sample was representative by age and gender for each State. A target sample of 2000 was chosen to ensure precise estimation of preference parameters while also allowing flexibility in modelling heterogeneity. 42

Data analysis
A multinomial logit model (MNL) was used to evaluate preferences across the whole sample. 42 The model coefficients indicate the relative importance of each attribute level in explaining respondent choice. While the main focus of this paper is on the average preferences (based on the MNL model) of the sample, the extent to which preferences differed across respondent subgroups was explored in an online supplementary analysis using a latent class model. 43 The latent class model can be understood as a process of clustering groups of individuals with similar preferences into a defined number of distinct preference classes. The modelling approach is detailed in the online supplementary appendix. To develop a prioritisation system based on the preferences of the public that could be used to prioritise individuals for bariatric surgery, 'priority weights' were derived based on the MNL model coefficients, to indicate the relative importance of the different criteria. This was achieved by estimating the marginal rate of substitution between each prioritisation criterion and effectiveness (ie, chance of maintaining weight loss). 39 The marginal rate of substitution (and therefore priority weight) for each criterion was estimated by dividing the marginal utility for that criterion level by the marginal utility for effectiveness. For example, the weight for prioritising an individual with 'very severe obesity' rather than 'obesity' is equal to the difference between the coefficients from the MNL model for these two attribute levels, divided by the coefficient for a one percentage point increase in the chance of maintaining weight loss (ie, priority weight = ((0.28751 − (−0.30626))/0.01530) = 38.80850; from results presented in tables 3 and 4; calculations performed prior to rounding of decimal places).
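To make the arithmetic concrete, the following short Python sketch (not part of the paper's analysis code; function and variable names are illustrative) reproduces the marginal rate of substitution calculation just described and shows how coefficients from an estimated MNL model can be converted into priority weights.

def priority_weight(coef_level, coef_referent, coef_effectiveness_per_pct):
    # Weight expressed in percentage points of 'chance of maintaining weight
    # loss' that respondents would trade to move a patient from the referent
    # level to this level of the criterion.
    return (coef_level - coef_referent) / coef_effectiveness_per_pct

# Coefficient values quoted in the text (from tables 3 and 4).
coef_very_severe_obesity = 0.28751   # BMI >= 50 kg/m2
coef_obesity_referent = -0.30626     # BMI >= 30 but < 40 kg/m2 (referent level)
coef_effectiveness = 0.01530         # per 1% chance of maintaining weight loss

w = priority_weight(coef_very_severe_obesity, coef_obesity_referent, coef_effectiveness)
print(round(w, 2))   # 38.81, matching the worked example above

Because the weights are on a common interval scale, the weights corresponding to a patient's level on each criterion can simply be summed, relative to the referent case described below, to give that patient's total priority score.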
This priority weight represents the amount of effectiveness that respondents were willing to trade in order to prioritise an individual who met other desirable criteria that were considered to be relevant. Importantly, this approach ensures the priority weights are presented on an interval scale; thus, the weights can be summed for any individual patient requiring surgery in order to rank patients. We illustrate how the priority weights may be used in practice, using three hypothetical patients.

Priority criteria
The MNL raw coefficients are presented in table 3 and the prioritisation criteria considered to be important for the public are presented in table 4 and graphically in figure 2. On average, there was a strong preference to prioritise those who had shown commitment to lifestyle change before surgery (weight 79.81, 95% CI 75.79 to 83.88). There was also a significant preference to prioritise very severely obese individuals (BMI≥50 kg/m²) over obese individuals (BMI≥30 kg/m²). However, this criterion (weight 38.81, 95% CI 36.41 to 41.23) was considered to be only half as important as prioritising those who had shown lifestyle commitment. The preference to treat severe obesity (BMI≥40 kg/m²) over obesity (BMI≥30 kg/m²) was less strong. Respondents also wanted to prioritise those who already have obesity-related comorbidity, with a family history of obesity, with a greater chance of maintaining weight loss, or who had spent a longer time on the wait list. There was little inclination to prioritise by age. A small weight was assigned on average to treating a 50-year-old (3.62; 95% CI 1.30 to 5.93) rather than a 20-year-old. The priority weight assigned to treating a 35-year-old (3.84; 95% CI −0.31 to 8.00) was greater than for a 50-year-old, but not significantly different to that for a 50- or 20-year-old. Given the small and non-linear weights given to prioritising by age, we would not recommend including age as a prioritisation criterion in the development of any policy. The estimated prioritisation criteria from the public perspective could be adopted into decision-making. A 'referent case', an individual who is obese (BMI≥30 but <40 kg/m²), is at risk of comorbid conditions rather than having developed them, has no family history of obesity, has not maintained a healthy lifestyle, has spent a maximum of 6 months on the waiting list, and is assumed to have a 30% chance of maintaining a substantial (at least 50%) reduction in excess weight, scores zero points. Other patients in need of surgery could be prioritised relative to this benchmark 'referent case'. Table 5 indicates the priority weights given by the public sample to three hypothetical patients; if managed according to public preferences, priority would be allocated to the patient with the most points. While the MNL model provides the results of the average respondent from a public sample that reflects the age and gender distribution of the Australian population and therefore provides the relevant weights from a policy perspective, four sociodemographic characteristics (BMI, history of weight loss surgery, AQoL utility score and education level) were significantly associated with membership of a particular preference class in the latent class model (p≤0.05; see online supplementary appendix).
Table 4 footnotes: *Referent levels for marginal rates of substitution (MRS): obesity, at risk of comorbid conditions, 20 years old, no family history of obesity, has not maintained a healthy lifestyle, has spent a maximum of 6 months on the waiting list and has a 30% chance of maintaining a substantial (at least 50%) reduction in excess weight. †Bracket indicates 95% CI, estimated using the delta method. 64 MNL, multinomial logit model.

Figure 2 Priority weights for surgery according to criteria (from the multinomial logit model). Footnote to figure 2: Priority weights are relative to a score of zero for an individual who has obesity, is at risk of comorbid conditions rather than having developed them, has no family history, has not maintained a healthy lifestyle, has spent 6 months on the waiting list, and has a 30% chance of maintaining a substantial (at least 50%) reduction in excess weight. Priority points for time on wait list are per each month over 6 months and for chance of maintaining weight loss are for each % over 30%.

Notably, respondents who were not overweight or obese, who had no experience of weight loss surgery, or with better overall health were more likely to belong to a preference class for whom lifestyle commitment was considered to be particularly important. Respondents who were not overweight or obese or who had attained a lower education level were more likely to belong to a class for whom lifestyle commitment was considered to be unimportant. Finally, respondents who were overweight or obese were more likely to belong to a class who considered age should be a prioritisation criterion; though, some prioritised 20-year-olds and some prioritised 50-year-olds. Therefore, individuals differing on these characteristics (BMI, history of weight loss surgery, AQoL utility score and education level) may systematically allocate different priorities across patients requiring surgery than the general public.

DISCUSSION
This is the first study to derive preferences of the public that could be used to prioritise elective surgery in the contentious policy area of bariatric surgery, where current demand strongly exceeds the health system's willingness and capacity to supply. The public clearly consider a demonstrated commitment to establishing and maintaining a healthy lifestyle to be the most important prioritisation criterion. Severity of obesity at baseline, the existence of comorbidities and the likely sustained effectiveness of the intervention were all considered to be important, and consistently so, across all preference subgroups. Prioritising surgery for those with a family history of obesity was relevant for the sample overall, but to a lesser extent than the other criteria. Time on the waiting list was also important for the sample overall. The priority weights developed in this study according to a rigorous and systematic methodology can be used to assign priority for access to individuals who may benefit from bariatric surgery. Although this study was undertaken in Australia, it has relevance for other countries, especially relatively high-income countries with well-developed public health systems. The indicated importance of these criteria, particularly a desire to prioritise the most severely obese and those with comorbidities, is largely consistent with previous studies that suggest public preferences in other health priority setting contexts would prioritise those who are most severely affected by the condition being treated. 38 They are also largely consistent with existing obesity guidelines, which recommend the use of BMI and/or comorbidities as criteria for surgery. 4 6 However, the strong preference to prioritise those who have shown a prior commitment to changing their lifestyle in support of weight loss, which was by far the most important criterion in this study, is somewhat of an exception. In general, the importance of lifestyle or personal responsibility for illness (when previously explored in preference studies) suggests these may be relevant to the public, but they have generally been found to be of relatively minor importance compared to other prioritisation criteria, and may well be context dependent. 38 45 However, personal responsibility has been found to be a strong predictor of public opinion around the allocation of donor livers, where public preferences have favoured allocation to naturally occurring rather than alcoholic liver disease. 46 The public have also supported rationing treatment for patients with 'unhealthy lifestyles' in opinion polls. 47 Furthermore, the perceived importance of lifestyle commitment is also rational, in that weight loss maintenance after most forms of surgery requires continued lifestyle change; that is, there will be regression in any weight loss if an appropriate diet and physical activity regimen are not adopted. Nevertheless, we are not aware of any previous preference study that has attempted to quantify priorities for bariatric surgery from a public perspective, and as such this applied study makes an important contribution to develop priority weights that could be assigned to encapsulate the general public's preferences in prioritising access to bariatric surgery for adults. Age as a criterion for access to care is a contentious issue, and has been found to be of varying importance for the public in previous studies. 38 48-56 Age was not important for most respondents to this study in the context of prioritising bariatric surgery. This is consistent for example with the deliberations of the National Institute for Health and Care Excellence Citizens' Council in the UK, that age should not be considered as a prioritisation criterion (in health technology assessment), unless it is associated with the level of health outcome. 57 However, variation in preferences was observed across respondents in the latent class model, including for the age criterion (which was important for some classes). Interestingly, the age of the respondent was not found to be associated with preference for prioritising bariatric surgery in the latent class model, suggesting self-interest does not explain choices for priority setting by age. The supplementary finding of variation of preferences across respondents highlights the need to ensure a relevant and representative sample is achieved when canvassing preferences to inform policy. It seems likely that the differing opinions around prioritising by age found in previous studies may be explained at least in part by the distinct preference samples involved. 38 Our results suggest that, at least in the context of prioritising for bariatric surgery in Australia, recipient age should not be a prioritisation criterion (beyond any capacity it has to impact on outcomes). Whether this also applies in other contexts and countries is an empirical question requiring further investigation.
The choice tasks given to respondents in this study were of necessity somewhat simplified to enable their administration to laypersons in a survey format. However, the clinical decision-making context around the appropriateness of bariatric surgery for specific individuals and who would benefit most, is complex. For example, the benefits of surgery may extend beyond weight loss and include metabolic outcomes, leading to the emergence of 'metabolic surgery' which has differing therapeutic goals and a lower BMI criterion threshold, with some effects occurring independent of weight loss. 58 59 Thus, the potential criteria used in this DCE may not be the only criteria of clinical relevance for selection of individuals for surgery. The inadequacy of BMI as a primary clinical criterion for selection for surgery and potential of other clinical criteria to augment selection has been highlighted. 58 60 Further, those with a higher BMI and comorbidities such as diabetes, obstructive sleep apnoea and cardiac disease, may be at greater risk of adverse events from surgery. 58 Thus, the optimal selection of candidates for bariatric surgery from a clinical perspective so as to balance the benefits and risks of surgery is not straight forward. Nevertheless, despite these potential limitations, the current study focused on prioritising individuals for surgery assuming surgery was considered to be clinically appropriate. Respondents were instructed in the survey to imagine that each of the potential surgery recipients had been clinically assessed to be in equal need of surgery to manage their obesity. Thus, any 'real world' clinical consideration around the benefit of surgery was held constant in each hypothetical choice and should not have impacted the hypothetical decisions. Individuals participating in this study differed in their preferences for the importance of different prioritisation criteria. While respondent age was not observed to affect priority choices, BMI was perhaps unsurprisingly associated with preference class membership, reaffirming the need to give careful consideration to whose preferences are sought to inform priority decisions-the public or individuals with some direct or indirect experience of the condition. 31 33 61 62 This study takes the normative position that it is the preferences of the public, rather than individuals with a specific condition, that are relevant for informing priority setting decisions. Moreover, for health services funded by taxation of the public, the public are a key stakeholder in how those funds are used. Therefore, the publics' perspective is important for allocating funds to specific services. While this is an accepted approach in health economics in the context of priority setting, 38 the exploratory latent class analysis in this study suggests that the preferences of an obese population around priorities for bariatric surgery may differ to those of the general public. Although associations between preference and individual characteristics were tested for many sociodemographic characteristics, it is perhaps surprising that only four sociodemographic characteristics were associated with membership of different preference classes at conventional levels of significance in this large sample. 
It seems possible that class membership, particularly for potentially contentious decisions, might depend more strongly on attitudes and beliefs, cultural differences, and/or individual tastes, all of which are challenging to observe or measure, than on sociodemographic characteristics. However, we can conclude that the representativeness of the sample should be a key methodological consideration for preference studies that seek to inform public policy; and is likely to matter in particular where recipient age or personal responsibility is a criterion under consideration. Consequently, the main limitation of this study is that we recruited from a panel sample. Although the sample was representative of the Australian public by age and gender, these two characteristics were not found to be significantly associated with preferences and we cannot be sure whether the sample reflects the diversity of the population on a wider range of characteristics that might be associated with preferences-not least because we have been unable to identify what those characteristics are. Although the sample only recruited from two Australian states, these states account for 27.2% of the Australian population. The sample differed descriptively from the Australian population on a number of characteristics (education level, employment status, household income and health status). Of these, only health status and education level were found to be associated with preference class in secondary latent class analyses (see online supplementary material). Therefore, it is not known to what extent this recruitment approach may have impacted the representativeness of the overall sample preferences. Further research into characteristics beyond sociodemographics that might impact preferences, such as attitudes and beliefs, and the extent to which samples are representative on these less tangible characteristics, is needed. The implementation of the findings may also be limited since application of the priority weights requires an ability to predict the category into which each patient fits for each of the attributes, before their treatment. This may be challenging for the attribute 'chance of maintaining weight loss', since effectiveness is difficult to predict a priori. Nevertheless, estimates of effectiveness are available in the international literature. Alternatively, if distinguishing likely effectiveness between potential patients is considered to be unreliable, this attribute could be excluded from the priority estimates for all potential patients. To support their capacity to make decisions in the DCE, respondents were provided with some basic information on obesity, its consequences, and its management at the start of the survey. However, obesity and its management is a complex issue and although the pilot study suggested the survey was easy to understand, respondent understanding of the obesity information was not tested in the main survey. Further studies investigating public opinion for prioritising bariatric surgery using a Citizens' Jury, which represents a deliberative approach in which participants are informed and can challenge experts before making recommendations on the issues, are planned as part of the parent study within which this DCE is undertaken. 32 In conclusion, this study extends our understanding of public preferences for priority setting in the allocation of bariatric surgery in public health services, and derives weights that could be used to prioritise patients for surgery. 
As such, it provides an exemplar for the growing interest in deriving public preferences to inform prioritisation decisions in healthcare. As preference for prioritisation criteria varied across respondents, achieving a representative sample on relevant characteristics, including those that may be difficult to measure, is likely to be an important methodological challenge when determining preferences to inform public policy. When setting priorities for the allocation of health services, evidence of public preferences offers a valuable contribution to political debate about the need for prioritisation and the defence of chosen priorities.

Ethics approval Ethical approval for this study was granted by Griffith and Flinders' University Human Research Ethics Committees (MED/09/12/HREC; 6088 SBREC). The project was assessed as being low risk from an ethical perspective and informed consent was inferred by completion of the survey.

Data sharing statement The full data set and statistical code are available on an individual basis (with restrictions on use) from the corresponding author at j.whitty@uq.edu.au. Consent was not obtained but the data are anonymised and risk of identification is low.
Anti-bias training for (sc)RNA-seq: experimental and computational approaches to improve precision Abstract RNA-seq, including single cell RNA-seq (scRNA-seq), is plagued by insufficient sensitivity and lack of precision. As a result, the full potential of (sc)RNA-seq is limited. Major factors in this respect are the presence of global bias in most datasets, which affects detection and quantitation of RNA in a length-dependent fashion. In particular, scRNA-seq is affected by technical noise and a high rate of dropouts, where the vast majority of original transcripts is not converted into sequencing reads. We discuss these biases origins and implications, bioinformatics approaches to correct for them, and how biases can be exploited to infer characteristics of the sample preparation process, which in turn can be used to improve library preparation. Introduction RNA-seq has become one of the most important tools in molecular biology. It allows straightforward measurement of RNA expression levels in transcriptome-wide fashion. It is now available in countless variants that allow sequencing of different types of RNAs, from different starting materials, using different experimental approaches, and more [1]. Although developed early [2], RNA-seq from single cells (scRNA-seq) increased dramatically in its popularity recently [3]. The power of scRNAseq lies in its ability to potentially visualize variability that is masked by the ensemble averaging of standard RNA-seq; it can be used to identify allelic exclusion based on single-nucleotide polymorphisms [4] and can reveal non-genetic heterogeneity. The latter is believed to be important in diseases [5] and can offer insights into transcriptional mechanisms [6,7]. In this review, we will discuss current limitations of RNAseq with respect to its main application of quantifying transcript abundances. Since this is particularly relevant for absolute From RNA to sequencing reads The main goal of RNA-seq in most contexts is the accurate quantification of the original RNAs' abundances in a sample, whether that refers to 'bulk' RNA from a homogenized cell population, or single cells. In practice, this amounts to correctly interpreting the number of sequencing reads that are obtained for each transcript. This problem is non-trivial due to several confounding factors preventing precise quantification, most of which are owed to the complexity of RNA-seq sample preparation. Several steps are necessary to convert the RNAs in cell lysates into sequencing reads. Common to the vast majority of protocols are selecting which RNA is to be sequenced, the cDNA production steps of reverse transcription (RT) (often referred to as first-strand synthesis) and second-strand synthesis. The reason for selecting RNA to be sequenced is that the vast majority of RNA in cell lysates is ribosomal RNA, which is normally undesired. Removing it allows for more reads to be used towards the detection of less abundant RNA species of interest, such as mRNA. This is achieved by removing rRNA ('ribodepletion') or positive selection of RNAs of interest. RNA is replaced with DNA because RNA is problematic to work with; it is subject to degradation through RNases and metal ion catalyzed hydrolysis at higher temperatures. It has a propensity to form secondary structures and cannot easily be amplified due to a lack of suitable enzymes and its compromised stability during thermal cycling. 
Synthesis of the second cDNA strand is necessary to enable adapter ligation for next generation sequencing, unless special adaptations are used [8]. Other protocols use the RT step to add adapter sequences directly, for instance by using a RT primer with overhanging adapter sequences. This idea is taken further in scRNA-seq protocols where the RT primer is often oligo-(dT)s (to capture polyadenylated mRNAs) with an overhang including adapter sequences, cell barcodes and unique molecular identifiers (see section UMIs e.g. 10x Chromium [9], Drop-seq [10] or InDrop [11]). The only RNA-seq strategy that avoids cDNA conversion is direct RNA sequencing, as implemented by the ill-fated Helicos sequencing machine [12] or nanopore sequencing [13]. The latter is promising as a future technology producing long reads for single molecules; it records the base sequence of individual nucleic acid strands as they are electrophoretically pulled through channels in a membrane. The system is plagued with high error rates, though, and most studies have exploratory character and/or use additional second generation (e.g. Illumina) sequencing to bolster sequencing quality [14]. RNA-seq libraries are usually fragmented by various means and size-selected in order to produce more sequencing reads at optimal length. This can occur before or after cDNA production. Direct fragmentation of RNA often uses metal-ion catalyzed hydrolysis at high temperatures (e.g. TruSeq) and cDNA fragmentation often uses physical methods (e.g. sonication) or enzymatic methods. 'Tagmentation' is a convenient enzymatic way to combine fragmentation and adapter ligation [15]. It uses transposase Tn5 to internally cleave double-stranded DNA and ligate oligonucleotides to both resulting ends in the same reaction. The material is usually further amplified by PCR. Often, an extended first PCR cycle is used to synthesize the secondstrand. An alternative to PCR is linear amplification by in vitro transcription (IVT), as implemented by the CEL-seq protocol [16]. Each of these steps can skew the representation of original transcripts by sequencing reads. It is worth noting that there is a difference between variability and bias. Statistically, the average of a repeatedly sampled value needs to deviate from the true value to make it an actual bias; random variation per se is not enough. Biases in RNA-seq can have very different effects and it is important to understand, classify and quantify these. Two key properties that help categorize biases are their scale (local -bias is specific to one gene or individual positions, or global -bias occurs across genes in a systematic overall pattern) and their visibility (can be seen on a coverage plot, e.g. Figure 1A), which are explained in more detail below. These properties are not always independent. In the next section, we introduce the two major methods for quantifying the abundance of RNA in a sample. We discuss how the sample preparation process introduces bias for coverage-based approaches, avoids these biases for UMI-based approaches, and how these approaches compare otherwise. Quantitation approaches Read numbers alone are not sufficient to quantify the abundance of RNA in a sample and need to be expressed in terms of transcript numbers to draw conclusions about biological processes in many cases. Here, we discuss the two main approaches, read-coverage and UMIs, and their strengths and limitations. 
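Before turning to coverage-based quantitation, it may help to sketch how the UMIs mentioned above are typically used: reads are collapsed to unique (cell barcode, gene, UMI) combinations so that PCR duplicates of the same original molecule are counted only once. The following Python snippet is a deliberately minimal illustration (exact-match collapsing only, with made-up barcodes and gene names; real pipelines additionally merge UMIs within a small edit distance and handle multimapping reads).

from collections import defaultdict

def umi_counts(records):
    # records: iterable of (cell_barcode, umi, gene) tuples, one per aligned read.
    # Returns {(cell_barcode, gene): molecule_count}; PCR duplicates of the same
    # molecule share a UMI and therefore contribute a single count.
    seen = defaultdict(set)
    for cell, umi, gene in records:
        seen[(cell, gene)].add(umi)
    return {key: len(umis) for key, umis in seen.items()}

reads = [
    ("ACGT", "AACCGG", "GeneA"),  # original molecule 1
    ("ACGT", "AACCGG", "GeneA"),  # PCR duplicate of molecule 1
    ("ACGT", "TTGGCC", "GeneA"),  # molecule 2
    ("ACGT", "AAAATT", "GeneB"),
]
print(umi_counts(reads))  # {('ACGT', 'GeneA'): 2, ('ACGT', 'GeneB'): 1}

Molecule counts obtained this way are largely insensitive to how many amplification cycles each fragment went through, which is the main motivation for UMI-based protocols.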
Coverage-based approaches Coverage (the number of sequencing reads that align to known reference bases)-based approaches have characteristic biases which are likely to affect quantitation of expression levels. These can occur on a well-studied local scale, or an as yet undercharacterized global scale. It is generally assumed that expected sequencing read numbers for a particular transcript are proportional to its length, i.e. a linear relationship, giving rise to the RPKM/FPKM (reads/fragments per kilobase transcript length per million total sequencing reads) or transcripts per million (TPM) measures [17]. These have been recognized to be inadequate in their original conception and are frequently subjected to various bias correction algorithms, although the fundamental notion of lengthproportionality is usually kept [18]. It is worth noting that application of correction algorithms subverts the physical unit/dimensions character of their names. Local biases If the sequencing read density is plotted along gene bodies, usually a spikey peak landscape emerges ( Figure 1A). Frequently, abrupt changes in coverage coincide for independent replicate samples, suggesting that the local sequence environment causes an actual bias and not just experimental variability ( Figure 1A). This corresponds to local bias that is highly visible ( Figure 1B, bottom left). Its causes are debated, but are likely to include RNA secondary structure, non-uniform hydrolysis of RNA, RNA binding proteins and others [19,20]; most of these factors are speculated to prevent cDNA production at certain spots and/or stop cDNA production in the spots' vicinity, thus causing free ends that might facilitate adapter ligation (unless tagmentation is employed). A potentially powerful experimental solution to this problem could be provided by reverse transcriptases found in mobile group II introns. These introns are retroelements that are mainly found in prokaryotes, fungal and plant organellar genomes. They consist of an autocatalytic intron RNA and an intronencoded reverse transcriptase which act jointly to excise the intron and reverse-splice it into DNA, thereby propagating themselves. Engineered versions of such reverse transcriptases have been shown to have high fidelity and processivities, and are thermostable, which permits increased incubation temperatures during RT, thereby reducing RNA secondary structures. In addition, they exhibit template switching activity that foregoes the need to ligate primers or adapters to the RNA [21]. However, increasing temperatures decreases the stability of RNA [22] and reduces processivity [23]. Therefore, a delicate balance has to be struck between minimizing secondary structures and degradation while maximizing processivity. The picture is not entirely conclusive, though; RNA-seq libraries that are based on poly-A tail priming still feature many (often short) genes with peak-valley-peak formations where the 5 peaks are larger than the 3 ones ( [23] and Figure 4A). This appears hard to reconcile with the idea of obstacles to RT. In general, this local type of bias does not necessarily have a strong effect on quantification [24]. Although the estimation of splice variant abundances can be skewed depending on differential inclusion of individual peaks or valleys, the local variability might average out for longer RNAs. A related local bias concerns the apparent non-uniform binding of random oligonucleotides [25], which are used in some protocols to prime RT. 
This bias manifests as unequal nucleotide frequencies at the ends of sequencing reads, which probably affects coverage similarly as the aforementioned examples for local biases. Global biases A reasonably well understood and intuitive bias arises from reduced fragmentation efficiency close to the ends of DNA fragments. Tagmentation requires a minimum sequence of ∼10 bases on either end of the integration sites [15]. Similarly, physical fragmentation methods, such as sonication, probably exert higher tensile stress on longer strands which facilitates breakage in longer DNA. The results are fewer sequencing reads from regions with ineffective fragmentation, which causes noticeable dips in coverage at the ends of transcripts. However, fragmentation bias is more complex than it seems at first glance; cDNA production might stop before the end of the transcript is reached (see below), which potentially biases fragmentation internally, making the bias less visible and harder to correct. In addition, even in the absence of internal fragmentation bias, RNAs that are too short for effective fragmentation will become depleted. Thus, a global bias is introduced that affects transcript representation in a non-linear, length-dependent way. This is an example of a bias that is visible and has both local and global effects. In general, different combinations of local and global bias might occur ( Figure 1B, bottom right). Potentially, a global bias severely skews the representation of transcripts, e.g. by underestimating long ones [20], but is invisible as far as coverage profiles are concerned ( Figure 1C). An important example for global bias is the unequal amplification of different sequences by PCR at exponential rates [26]. This has been of particular concern for scRNA-seq, due to the many PCR cycles that are required. This bias is well recognized and efforts have been made to tackle it by using IVT [16] and by employing UMIs as described below [27], albeit these measures are not compatible with all RNA-seq protocols. In fact, the protocols themselves introduce strong global biases which warrant a closer examination. . Each heatmap displays transcripts spanning 100 bases to 10 kb that are aligned at 5 and 3 ends and are ordered from shortest to longest (top to bottom, respectively). Read coverages are indicated by color in 20 bins along transcripts (color key, top right). The global bias exhibits non-linear lengthdependent scaling, from uniform coverage, to 5 bias, to a bimodal distribution (dark streaks). Typical underrepresentation of transcript ends due to inefficient fragmentation is indicated by orange arrows (shown for one of the three bottom plots affected by it). The origins of global bias Heatmaps are a convenient way to simultaneously depict the local and global scale of a visible bias [23]; if the density of sequencing reads along RNAs is color-coded and RNAs are ordered by length, patterns emerge. A selection of simulated datasets illustrates this for varying degrees and types of bias ( Figure 2). The local bias component appears as the noisy color fluctuations throughout the center and right images ( Figure 2) and which resemble video noise. Fragmentation dips at either RNA ends, which have both local and global character as explained above, are visible for the bottom row of images ( Figure 2). Finally, a strong length-dependent (non-linear) global bias is present in this example as black vertical streaks at the ends of long transcripts ( Figure 2). 
This type of bias appears in similar fashion in many actual datasets and is due to the RNAseq library preparation process; it has substantial effects on quantification -what is its origin? One difference in the sample preparation process is how RNA is selected to be sequenced; this can cause bias through the mechanism of RNA degradation. As mentioned earlier, RNA (compared to DNA) is an inherently unstable molecule, with a reactive 2 hydroxyl group which (when deprotonated) can attack the neighboring phosphodiester bond [22], resulting in self cleavage. This degradation is complicated further by the presence of RNase's in both the surrounding environment and endogenously in the sample being studied [22]. The instability of mRNA is one of the reasons why mRNA is converted into DNA in the early stages of most protocols -however some degradation is likely to still occur in the process. A visible global bias can be introduced with RNA degradation when RNA is selected from one end of the transcript (poly-(A) + selection) [28]. If one mRNA strand is split in two, and only one strand is selected for (the poly-(A) + strand), then the other will be missed. If the assumption is made that cuts in mRNA occur with equal probability across the whole strand, long mRNAs will have more cuts than short mRNAs and therefore resulting reads are more biased towards the 3 end. To assess this bias, we used a collection of RNA-seq degradation datasets of human dorsolateral prefrontal cortex tissue [29] that had been prepared using either poly-(A) + selection or ribodepletion. Standard ribodepletion based protocols work by removing ribosomal RNA through sequence specific hybridization followed by bead separation or enzymatic degradation. Inspection of read distribution heatmaps ( Figure 3) for these datasets show a more pronounced 3 bias with increased degradation times, particularly with longer genes with pA+ selection, but not with ribodepletion. This also suggests that 5 to 3 exonucleases do not cause the bias on the poly-(A) + selected samples, because if they did, the bias would also be present in the ribodepleted samples. Another major difference between RNA-seq protocols concerns the strategies for producing cDNA. The first step, RT, is initiated from primers that are designed to either bind random positions or that target the 3 poly-A tail of mRNAs. 'Randompriming' was and is common in RT-PCR and is used in strandspecific RNA-seq systems such as the ScriptSeq or Ovation kits (Illumina and NuGEN, respectively), while the latter, 'oligo-(dT) priming' is very popular for scRNA-seq. This is because the primers will not target rRNA and therefore eliminate the need for purification of mRNA, thus potentially reducing losses of the limiting starting material. Depending on the protocol, second-strand synthesis may once again start from a random position or from the terminus of the first-strand. The enzymes used in these reactions, reverse transcriptase and DNA polymerase, are both processive. This means that large numbers of nucleotides are incorporated before the enzyme drops off or the reaction stops otherwise (e.g. by reaching the end of the template strand). However, the exact stopping points cannot be predicted and are best described as probabilities for certain positions. These positional dependencies between first-and secondstrand priming cause global biases which have been noticed early [30]. 
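A toy Monte Carlo model makes it easy to see how such probabilistic stopping points translate into a length-dependent global bias. The Python sketch below is purely illustrative: the per-base drop-off probability is an assumed value rather than an estimate from any dataset, and oligo-(dT)-primed first-strand synthesis is reduced to a single geometric stopping process.

import random

def first_strand_lengths(transcript_len, n_molecules, per_base_dropoff=0.0005):
    # Simulate oligo-dT-primed first-strand synthesis with a fixed per-base
    # probability that the reverse transcriptase stops (a toy processivity model).
    # Returns the number of bases copied from the 3' end for each molecule.
    lengths = []
    for _ in range(n_molecules):
        copied = 0
        while copied < transcript_len and random.random() > per_base_dropoff:
            copied += 1
        lengths.append(copied)
    return lengths

random.seed(1)
for tlen in (500, 2000, 10000):
    runs = first_strand_lengths(tlen, n_molecules=2000)
    full = sum(1 for r in runs if r == tlen) / len(runs)
    print(f"{tlen} nt: {full:.0%} of first strands reach the 5' end")

Under a SMART-type selection, where only first strands that reach the 5′ end of the mRNA are carried forward into amplification, these numbers translate directly into a depletion of long transcripts; with other priming and second-strand strategies, the same positional dependencies instead reshape the coverage profile.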
Attempts to fully understand these are scarce, presumably owing to their complexity, but approaches to experimentally tackle them have been developed. One way to strongly reduce cDNA bias is to perform fragmentation on the original RNA instead of the cDNA. Since the resulting fragment length (∼200 bp) is usually an order of magnitude shorter than the enzyme processivities [31], internal synthesis stops become negligible. The resulting coverage is more uniform on a global scale as simulated in the top row of heatmaps in Figure 2. However, RNA fragmentation is not practiced with scRNA-seq; it risks degradation of RNA and requires ligation of the first-strand primer directly to RNA, which is presumably of low efficiency since no scRNA-seq study has done it. There does not appear to be a reference for this, though, and an actual investigation of this might be prudent [32]. The protocol-specific global bias is thus hard to experimentally avoid for scRNA-seq. However, it can be understood and thus corrected based on its shape. In fact, the shapes of the global bias are highly characteristic for the library preparation protocol that was used; datasets prepared with 'SMART'mechanism-based protocols ('Smart-seq' and its derivatives, see below) [33], which are prominently employed in scRNA-seq applications, resemble a Star Trek insignia on the heatmap; its bias shifts from central to bimodal with increasing mRNA length ( Figure 4A). Subsequent improvements in this protocol resulted in Smart-seq2 [34], shows the same global bias. We envision the recently published Smart-seq3 [35] with further improvements in enzymes and buffers as well as the addition of a 5 UMI will also show a similar bias shape as the mechanisms resulting in the bias remain (see below). Datasets based on random-primed first strands, as implemented by the Ovation kit (NuGEN), for instance, display a faint 'ridge' slightly off center that diminishes with increasing transcript length. Coverage profiles at different lengths might be described as having whale-like shapes (or perhaps the hat from 'The Little Prince'), Figure 4B. A summary of the major types of biases and their effects is shown in Table 1. The effect of these non-linear length-dependent global biases is to miscalculate expression levels when assuming a linear relationship between expression levels and transcript length (e.g. TPM, RPKM, FPKM). For example, using poly-(A) + selection on degraded RNA would result in the underestimation of long transcripts expression levels as these are missing more reads at the 5 end than shorter transcripts. In a similar fashion, libraries based on Smart-seq also underestimate the expression of long transcripts [23] (see Section Global bias estimation). The size of this effect can be quite dramatic, for example Dyer et al. found if a FPKM/TPM is used to compare short (200 bp) and long (20 000 bp) transcripts on a dataset prepared with smart-seq2, there would be a ∼9-fold error [36]. In some experiments, this bias could be less important than others. One example could be an experiment designed to discover differentially expressed genes between two conditions using poly-(A) + selected mRNA. A common pipeline would be to generate counts (the number of sequencing reads overlapping each gene) and pass this into a differential expression tool. Here, the global bias due to RNA degradation might affect both samples equally and not affect which genes are differentially expressed. 
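To see what this means for length-normalised measures in practice, the following Python sketch computes TPM and applies a toy length-dependent capture model; the exponential form and its decay constant are assumptions chosen only to produce an effect of roughly the magnitude reported above, not a fitted model.

import math

def tpm(counts, lengths):
    # Transcripts per million: counts scaled by transcript length, then
    # normalised so that the values sum to one million.
    rate = {g: counts[g] / lengths[g] for g in counts}
    scale = 1e6 / sum(rate.values())
    return {g: r * scale for g, r in rate.items()}

# Two transcripts present at identical true molecule numbers.
lengths = {"short": 200, "long": 20000}
true_molecules = {"short": 1000, "long": 1000}

# Toy capture model: the chance that a molecule yields an amplifiable,
# full-length cDNA decays with transcript length (illustrative constant).
capture = {g: math.exp(-lengths[g] / 9000) for g in lengths}

# Expected read counts: captured molecules times length (longer captured
# cDNAs yield more fragments and hence more reads).
counts = {g: true_molecules[g] * capture[g] * lengths[g] for g in lengths}

est = tpm(counts, lengths)
print(est)                          # TPM should be equal for both, but is not
print(est["short"] / est["long"])   # roughly 9-fold apparent difference

Both transcripts are present at the same true abundance, yet the short one appears roughly nine times more highly expressed. As noted above, such a distortion can cancel out in a two-group differential expression comparison if it is identical in both groups.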
However, if degradation differs between samples, the global bias will affect quantitation. Of course, for experiments measuring expression between genes in the same sample, these global biases will have a large effect [23,36].

One way to tackle bias is via spike-in controls [37]. Spike-ins are RNA molecules of known concentrations and lengths that are added to samples at early stages of library preparation. After sequencing, the reads aligning to these spike-ins can then be compared to their known concentrations to correct for differences in library preparation efficiency between samples (see section Technical noise in scRNA-seq for further information on the uses of spike-ins in (sc)RNA-seq). In theory, the length-dependent global biases present in bulk and scRNA-seq could also be monitored and corrected using spike-ins. This could be achieved by comparing known concentrations against sequencing reads for different lengths of spike-ins and adjusting sequencing reads up/down depending on their lengths [23]. Unfortunately, these spike-in probes are fairly short; for instance, ERCC probes, which dominate use in existing datasets [37], are roughly between 250 and 2000 nucleotides in length and so cannot currently be used for measuring global bias in long RNAs. Longer alternatives are now becoming available, though [38].

Analysis approaches to combat bias

Multiple computational methods have been developed to combat some of the biases in RNA sequencing experiments and to more accurately quantify expression levels in coverage-based protocols. As mentioned before, the spiky peak landscape arising from local biases may not have a strong effect on quantitation, particularly in longer genes where it might average out - we do note that isoform quantitation will be affected by it, though [39,40]. For a gene with only one isoform and in the absence of any sources of bias, coverage would be uniform across exons. Local and global biases mean this often is not the case. Roberts et al. (2011) address the local bias by redistributing reads within a transcript to make the coverage more uniform. Many tools either implement this strategy directly (Cufflinks [41], Salmon [40]) or take similar approaches [17,42,43]. A problem with this course of action is that it ignores the non-linear, global length-dependent bias and does not change expression estimates for single-isoform genes. This results in the underestimation of long genes' expression levels for SMART and poly-(A)+ selected samples [23].

(Figure caption fragment: LiBiNorm software [36] was used to fit a bias model to the data, with predicted coverages shown for two transcript lengths; (B) as A, for a typical random-priming-based dataset (Ovation system, NuGEN; GEO accession number GSE84724). Typical global patterns and coverage shapes are indicated in orange for Smart-seq and random priming in A and B, respectively.)

Some protocols use random hexamers to prime RT, which causes sequence-specific bias [25]. This results in preferential sequencing of fragments starting with particular motifs and cannot be corrected by simply trimming the ends of reads, as this will only shift the bias to the sequences adjacent to the random hexamer. The standard approaches to tackle this are algorithms that try to 'learn' bias patterns, that is, find sequences associated with lower or higher count density around the start of a read, and then adjust read counts up or down accordingly.
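As a sketch of what 'learning' such a bias pattern can look like in its simplest form (a toy reduction of the published approaches, with invented placeholder sequences and a deliberately tiny k-mer size), one can compare how often each k-mer occurs at read starts with its background frequency in the reference and use the ratio as a per-read weight:

from collections import Counter

K = 2  # toy k-mer size; published methods use longer sequence contexts

reference = "ACGTACGGTTACGAACCGGTACGTTAGCACGT" * 10      # placeholder reference sequence
read_starts = ["AC", "AC", "GG", "AC", "TT", "AC", "CG"]  # placeholder read-start k-mers

background = Counter(reference[i:i + K] for i in range(len(reference) - K + 1))
observed = Counter(read_starts)
bg_total, obs_total = sum(background.values()), sum(observed.values())

weights = {}
for kmer, obs_count in observed.items():
    expected_freq = background[kmer] / bg_total
    observed_freq = obs_count / obs_total
    weights[kmer] = expected_freq / observed_freq  # <1 down-weights over-primed k-mers

for kmer in sorted(weights):
    print(f"reads starting with {kmer}: weight {weights[kmer]:.2f}")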
GC content can also contribute to under-representation of sequences, presumably due to incomplete PCR amplification, which can also be modeled and corrected for [39,40].

Global bias estimation

There are few tools which attempt to understand and correct for global biases. Maxcounts [48] tries to avoid this bias altogether by taking the maximum number of overlapping reads found at a position along a transcript as the measure of its expression level. The downside of this approach is its rejection of the majority of sequencing reads, which could otherwise provide useful insights, and its potential vulnerability to local biases [36]. Wan et al. [49] model the non-linear global bias as an exponential decrease from the 5′ end, which appears to be an oversimplification, though, given the presence of 5′ bias, or clear bimodal biases at both the 5′ and 3′ ends of many datasets (Figure 4). The Flux simulator tool [50] can simulate enzymatic reactions in library preparation and in silico reproduces aspects of the global bias, but it does not provide any bias correction and uses a model with some shortcomings [23].

Our group has developed the tool LiBiNorm [36], which follows from the work of Archer et al. [23] and fundamentally differs from the vast majority of existing software tools for bias correction. Here, the probabilistic aspects and logic of the enzymatic conversion steps of different protocols are taken into account, which allows the construction of mathematical models that predict certain coverage shapes. Fitting the predicted coverages to datasets yields parameter values for characteristics of the RNA-seq library preparation process, such as processivity estimates. This information can be used to derive improved estimates for the relative expression levels of the original mRNAs. Importantly, this approach is based on inference of the reaction mechanisms underlying library preparation, which thus provides biochemical reasons for systematic under- or over-representation of transcripts by sequencing reads. This is exemplified by Smart-seq and similar SMART-based protocols, where under-representation of long transcripts is expected based on the library preparation logic; SMART refers to the 'Switching Mechanism At the 5' terminus of the RNA Transcript,' which introduces the second-strand primer at the end of the first strand [33]. Due to the SMART mechanism and a PCR selection step [51], incomplete first strands, where RT fails to reach the mRNA's 5′ end, are not targeted for PCR amplification. This occurs more frequently for long mRNAs, which get depleted in the process. The protocol also leads to more even coverage, which serves to render the global bias less obvious. However, imperfections in the protocol, such as the spurious occurrence of the SMART mechanism inside some transcripts (and not at 5′ ends only), usually yield non-uniform coverage of the observed shape (Figure 4A). This allows fitting models which produce estimates for enzyme processivities and similar parameters, in turn allowing for correction of the length bias and also providing a way to diagnose potential library preparation issues. While local bias is too strong to permit precise bias estimation in many cases, we found LiBiNorm to perform well on Smart-seq2 datasets; these are prepared using additional measures to reduce local bias [34] and allow improved quantitation upon LiBiNorm processing and global bias correction [36]. A summary of bias correction tools is given in Table 2.
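To illustrate the mechanics of fitting a parametric global-bias model (this is not LiBiNorm's mechanistic model; it deliberately uses the over-simplified exponential 5′-to-3′ shape discussed above, fitted to synthetic data), a minimal curve fit could look as follows; a per-transcript correction would then divide the observed coverage by the fitted shape before summing reads into an expression estimate.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def coverage_model(x, amplitude, decay):
    # coverage as a function of relative transcript position x in [0, 1],
    # lowest towards the 5' end (x = 0) and highest at the 3' end (x = 1)
    return amplitude * np.exp(-decay * (1.0 - x))

positions = np.linspace(0.0, 1.0, 100)
synthetic = coverage_model(positions, 50.0, 2.0) + rng.normal(0.0, 2.0, positions.size)

params, _ = curve_fit(coverage_model, positions, synthetic, p0=[10.0, 1.0])
amplitude_hat, decay_hat = params
print(f"fitted amplitude {amplitude_hat:.1f}, fitted decay {decay_hat:.2f}")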
(Faux) 'Gold standards'

Many bias correction methods use RT-PCR as a gold standard to measure the success of the correction and to benchmark against other tools. However, RT-PCR involves cDNA production as well and is therefore subject to the global bias, too. Employing a different gold standard might be advisable, such as datasets prepared with the 'TruSeq' protocol, which fragments RNA (not cDNA) and thus reduces the bias from RT (although we note here that RNA degradation can still cause 3′ bias in TruSeq samples - Figure 3), or data derived from RNA fluorescent in situ hybridization (FISH) or the NanoString nCounter® system. In fact, some popular bias correction tools (Cufflinks [41], Mix 2 [52]) are effective when RT-PCR is used as a (faux) 'gold standard' but often perform worse than even simple linear TPM when benchmarked on RNA-fragmented data (e.g. TruSeq) [36].

Model-driven insights

Ideally, the models and tools employed will be able to correct biases and provide insight into how the latter occur. This will help to develop experiments to test these insights, enabling a better understanding of the biology and improvements to RNA sequencing protocols based on model predictions - a particularly pressing issue for resolving the zero-inflation controversy in scRNA-seq (see section Dropouts). Whilst most tools make little attempt to explain these technical issues, some do, which we will briefly touch upon. For example, the global bias is interpreted as RNA degradation in Wan et al. [49]. This makes the prediction that increased degradation will increase 3′ bias, and whilst this is true for the poly-(A)+ selected samples, it is not for the ribodepleted samples (Figure 3). This suggests that degradation does not occur only from the 5′ end. The approach taken by Archer et al. [23] showed that careful modeling of the sample preparation (in this case enzyme processivities) makes the model testable; modulating reaction conditions in one part of the protocol (altering reaction temperatures for first- or second-strand synthesis) changed the resulting global bias in line with predictions of the underlying model. Exploiting these insights allowed improvement of the experimental protocols, in this case by boosting enzyme processivity via decreased reaction temperatures. This in turn improves conversion of RNA into cDNA, along with a reduction of global bias, and was adopted by subsequent studies on full-length RNA sequencing protocols (e.g. RamDA-seq [53]).

UMIs

UMIs are random oligonucleotide tags which are designed to label each individual mRNA molecule. As the length of the random oligo increases, there is an exponential increase in the number of possible distinct UMIs (4^λ, where λ is the UMI length), meaning that two RNA molecules are very unlikely to be tagged with the same sequence. When sequenced, reads bearing the same UMI are counted only once, thus removing the potential bias of unequal PCR amplification [54]. Ideally this allows absolute molecule numbers to be inferred [54], although in reality molecule numbers are often under- or over-estimated [55]. The process of using UMIs and their effect on the data can be illustrated using simple simulations of the library preparation process [55], as shown in Figure 5. This notion has to be treated with some caution, though, as rates of reaction are what determine biochemical processes, and these are defined by concentrations and not numbers of transcripts [56].
Thus, differences in transcript numbers between cells could be due to a cell size difference (an interesting phenotype in its own right) with no changes in rates of reactions. Ideally this would be accounted for, and whilst it is possible on the scale of a few genes with RNA FISH, through microscopy [57] or flow cytometry [58], and even transcriptome-wide with barcoding [59], this is laborious and expensive. Recently, scRNA-sequencing combined with cell imaging measurements using microfluidic devices has been demonstrated [60,61].

To prevent 'over-counting,' multi-labeling of single transcripts with different UMIs must be avoided. This makes the poly-A tail a preferred target for labeling an individual mRNA uniquely using oligo-(dT)-UMI concatenates, thus restricting detection to poly-(A)+ transcripts. There is also a trade-off with sensitivity; UMI usage restricts quantification of a transcript to the single fragment (usually from the RNA's 3′ end) bearing the UMI, whereas other fragments are lost. Therefore, for studying any processes away from the 3′ end, such as alternative splicing, UMI-based methods are not useful. This problem may be solved in the future by greatly increasing sequencing lengths, ideally resulting in full-length, UMI-labeled transcripts being sequenced. For now, coverage-based methods (or combining coverage-based methods with UMIs [35]) still have to be used to answer these types of questions. Restricting quantification to the single fragment at the 3′ end means that the global length-dependent biases of degradation and processivity are irrelevant. However, whilst we have said that local biases might not have a strong effect on quantitation for coverage-based protocols, the opposite may be true for UMI-based ones. This is because restricting quantitation to only a single fragment means that local peaks or valleys in that fragment will not be averaged with reads from the rest of the transcript (unlike coverage-based protocols). Worse still, it will be an invisible type of bias.

(Figure 5 caption fragment: (E) The reads-to-mRNA-counts relationship is noisy due to stochastic effects during amplification and sequencing, as well as PCR efficiency variation between genes; color brightness indicates PCR amplification efficiency, with darker colors indicating lower efficiency. (F) Sequenced UMIs are used to remove duplicate reads, improving the estimation of the initial RNA molecules.)

Single cell

Sequencing at the single cell level has recently gained huge traction in a variety of fields (see [62] for a review). The major advantage of scRNA-seq over bulk RNA sequencing is that the identity of the individual cells in a population is preserved. This allows a heterogeneous population, such as those found in biomedical samples, to be dissected into its constituent sub-populations after sequencing, which can be used to detect diseases at early stages [63-65] or to track the progression of differentiation and development [66-68]. Similarly, differences in the expression levels between cells of homogeneous populations can be measured, which can be useful for interpreting the underlying stochastic mechanisms of gene expression. Therefore, the mRNA distributions that can be obtained with scRNA-seq are a much richer source of information than the average RNA expression provided by bulk RNA sequencing. UMIs are particularly useful in scRNA-seq, where PCR amplification efficiency varies between single cells as well as between genes.
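A toy Python sketch of the UMI logic (reads and gene names are invented; real pipelines also collapse UMIs that differ only by sequencing errors) shows both the 4^λ tag diversity and how counting unique (gene, UMI) pairs instead of raw reads removes PCR duplicates:

umi_length = 10
print("possible distinct UMIs:", 4 ** umi_length)   # 4**lambda tag combinations

# each read is a (gene, UMI) pair; PCR duplicates share both fields
reads = [
    ("GeneA", "ACGTACGTAA"), ("GeneA", "ACGTACGTAA"), ("GeneA", "ACGTACGTAA"),
    ("GeneA", "TTGGCCAATT"),
    ("GeneB", "GGGTTTAAAC"), ("GeneB", "GGGTTTAAAC"),
]

raw_counts = {}
for gene, _ in reads:
    raw_counts[gene] = raw_counts.get(gene, 0) + 1

umi_counts = {gene: len({u for g, u in reads if g == gene}) for gene, _ in reads}

print("raw read counts:", raw_counts)   # inflated by amplification
print("UMI counts:     ", umi_counts)   # closer to original molecule numbers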
For this reason, the next section focuses on single cell sequencing methods using UMIs. However, it is important to note that the same technical effects also apply to coverage-based single cell studies.

Technical noise in scRNA-seq

scRNA-seq suffers from several additional sources of technical noise, which contribute to the observed variation between single cells [69,70]. The first relates to the sampling or 'counting' error associated with the number of RNA molecules captured by the library preparation process. This results in an intrinsic source of technical noise which poses a limit to the precision of scRNA-seq. While sampling error is present in bulk RNA-seq too, it is negligible in practice due to the much higher amount of input material. In contrast, sampling error gains significance in scRNA-seq due to the low number of RNA molecules per cell. Modeling techniques can be used to account for this source of technical noise [70,71].

In addition to sampling error, scRNA-seq suffers from variation in the library preparation efficiency between cells, resulting in a variable fraction of RNA molecules per cell being converted to cDNA [46]. This can be due to subtle differences in the concentration of primers and library preparation enzymes between cells, as well as variation in cell lysis efficiency [47]. As a consequence, a technical source of variation is introduced into the total size of single cell cDNA libraries. This is another type of error which is well known from bulk RNA-seq, where it is accounted for by expressing RNA counts in terms of reads per million (RPM) [72]. In scRNA-seq, the total RNA per cell is often assumed to be constant, and the libraries are scaled based on a group of genes which are assumed to be stably expressed [73,74]. However, relying on such normalization methods ignores the likely variation in transcriptome size between biological groups, or between single cells in the case of scRNA-seq, which is a natural source of library size variation. As highlighted by others [72,75], doing so can lead to vastly different interpretations of the data. Ref. [76] showed that by accounting for changes in transcriptome size, more than 6000 genes were found to be induced during yeast aging, as opposed to only the few hundred identified previously [77]. The implications are even greater for scRNA-seq, where often the aim is to compare the expression between cells in a heterogeneous population, consisting of cells of different type, volume and cell-cycle stage, all of which are expected to affect the natural size of the transcriptome [78].

Correcting for technical variation in library size is therefore a crucial step in the pre-processing of scRNA-seq data. Similar to bulk RNA-seq, this can be achieved by using an internal [79] (e.g. housekeeping genes) or external (RNA spike-ins) reference point [69], with respect to which the individual libraries can be scaled. In the former case, a certain group of genes which is assumed to be non-differentially expressed between cells is defined, and any variation in the number of counts for these genes is ascribed to technical sources, allowing the libraries to be scaled accordingly. The validity of this assumption, however, is not always easy to ascertain, especially in single cells, where gene expression stochasticity can lead to a variable degree of expression even for stably expressed genes [80,81]. This method is preferred in droplet-based methods, where the application of spike-ins is unfavorable (see below and [81]).
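A bare-bones version of this internal-reference scaling, with invented gene names and counts, is shown below; the critical (and, as noted above, not always verifiable) assumption is that the chosen reference genes are not differentially expressed between cells.

import numpy as np

genes = ["Actb", "Gapdh", "GeneX", "GeneY"]
reference_set = {"Actb", "Gapdh"}               # genes assumed to be stably expressed
counts = np.array([[120,  80,  10, 300],        # cell 1
                   [ 60,  40,  50, 150],        # cell 2 (roughly half the depth)
                   [240, 160,   5, 600]],       # cell 3 (roughly double the depth)
                  dtype=float)

ref_idx = [i for i, g in enumerate(genes) if g in reference_set]
ref_totals = counts[:, ref_idx].sum(axis=1)
size_factors = ref_totals / ref_totals.mean()   # per-cell technical scaling factors

normalised = counts / size_factors[:, None]
print("size factors:", np.round(size_factors, 2))
print(np.round(normalised, 1))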
Using RNA spike-ins, the library sizes can be normalized between cells without requiring any assumptions about their gene-expression profile. The assumption underlying the use of spike-ins is that the same amount is added to each cell, and that the variation in capture efficiency between cells is similar for endogenous and spike-in RNA. While the use of spike-ins in bulk RNA-seq has been criticized [82], systematic analysis in plate-based scRNA-seq has shown that they are a reliable method for normalization [81]. Using RNA spike-ins for normalization also has certain limitations. For example, as spike-ins can only be added once the cells have been lysed, they do not reflect the error arising from variation in lysis efficiency between cells [71]. Furthermore, a pilot experiment is advisable to determine the optimal amount of spike-in RNA to be added (5-10% of library size) [81]. Criticisms regarding the commonly used ERCC spike-ins [37], whose gene and poly-A tail lengths are shorter than those of many endogenous transcripts, have also been raised [83]. These, however, are mostly relevant to absolute quantification of mRNAs rather than library normalization between cells. In either case, spike-ins currently remain the only way to normalize between single cell libraries without making strong assumptions about gene expression variation between cells, and they have therefore been strongly recommended for this purpose [84].

Perhaps the biggest limitation of spike-ins is that they cannot easily be used with current droplet-based scRNA-seq, thus limiting their use to plate-based scRNA-seq [81]. This is in part because the highly diluted cell suspension required to minimize the number of doublet encapsulations results in a high fraction of empty droplets. In the absence of spike-ins, the empty droplets do not contribute to the sequencing cost. When spike-ins are used, however, spike-in cDNA is produced for every single empty droplet, which can double the cost of sequencing [85]. Recently, a drop-seq device which enables ordering of the cells into a line prior to encapsulation has been shown to achieve much higher working concentrations of cell suspensions and thus fewer empty droplets [86]. Improvements in this area are likely to make the use of spike-ins in droplet-based sequencing more cost-effective.

Due to the popularity of droplet-based scRNA-seq methods (see [87] for a review), driven in part by their lower cost compared to plate-based scRNA-seq, several spike-in-free normalization methods have been proposed which account for the natural variation in transcriptome size. By modeling the capture of RNA molecules by UMIs and by randomly assigning plausible capture efficiencies to each cell, Ye et al. [88] produced estimates of the molecule counts per cell without spike-ins, which were comparable to results from spike-in-based normalization. However, a requirement of this method is that genes are assumed to have a zero-inflated (see below) negative binomial distribution, which may not hold for all genes, conditions and systems. Instead, Wang et al. [78] use a more flexible prior for the gene-expression distribution, allowing the shape of the biological distribution to be inferred while accounting for changes in the transcriptome size. Systematic comparisons between these and other methods are required to establish which situations each is best suited for.
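Returning to the spike-in case, the core of spike-in-based scaling can be reduced to a few lines (numbers are invented; real workflows use many ERCC species, model technical noise explicitly, and verify the spike-in dilution): the per-cell ratio of observed spike-in counts to the known input gives a capture efficiency by which the endogenous counts can be rescaled.

import numpy as np

spikein_input_molecules = 10000.0                    # known spike-in molecules added per cell
spikein_counts = np.array([2000.0, 1000.0, 4000.0])  # observed spike-in counts per cell

capture_efficiency = spikein_counts / spikein_input_molecules   # per-cell efficiency
endogenous = np.array([[ 50.0, 10.0],                # cells x genes, observed counts
                       [ 25.0,  5.0],
                       [100.0, 20.0]])

estimated_molecules = endogenous / capture_efficiency[:, None]
print("capture efficiency per cell:", capture_efficiency)
print(estimated_molecules)   # the three cells now agree, since their true input was identical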
Other sources of technical noise affecting scRNA-seq include batch effects, the presence of doublets and multiplets, ambient gene expression and gene dropouts, the latter of which is discussed in more depth in the next section. Accounting for these effects is part of the standard pre-processing and quality control of most scRNA-seq experiments, an overview of which can be found in [89].

Batch effects, also known from bulk RNA-seq, occur when cells from different biological groups are processed separately. In such cases, technical variation during each step of the process (cell culture, capture and sequencing) introduces biases which, if not accounted for, can confound data analysis [90]. The best way to deal with batch effects is to design the experiment in a way that avoids them altogether. For example, batch effects can be avoided when using plate-based scRNA-seq by ensuring that cells from each biological group are equally represented on each plate [91], something that can be easily achieved using fluorescence-activated cell sorting. This is not possible for all scRNA-seq protocols, however. Specifically, in droplet-based sequencing, the standard balanced experimental design cannot be easily achieved, as cells need to be encapsulated and sequenced separately for each sample in order to retain each sample's identity [90]. Of note are recent developments in cell-tagging methods (cell 'hashing'), which allow cells from different samples to be pooled together prior to encapsulation and sequencing [92]. The reads from cells belonging to different groups can subsequently be demultiplexed, thus avoiding batch effects altogether. In cases where this is not possible, correction needs to be performed at the analysis stage (see [93] for a comparison of existing methods, also [94]).

Doublets result when two cells are co-encapsulated in the same droplet or land in the same well of a multi-well plate in the case of plate-based scRNA-seq. The result is that reads from these cells cannot be de-multiplexed, leading to artificial transcriptomes in the data. While these can often be identified by their unusually high number of associated transcripts, the inherent variation in transcriptome size often found in cell populations means that arbitrarily setting a threshold also introduces a bias [95]. Recent computational models developed to account for the presence of doublets in an unbiased way are compared in [96].

Ambient gene expression refers to extracellular RNA which becomes encapsulated into the same droplet as a cell or accompanies a cell in the same well, leading to contamination of the cell's resulting transcriptome. The presence of extracellular RNA in the sample results from RNA leaking from damaged cells during the sample preparation process and, unless it is accounted for, leads to biases in the interpretation of the data. While it is difficult to completely remove non-endogenous RNA from the sample prior to library preparation [9], novel methods exist which can filter these RNA molecules at the analysis stage. Methods such as SoupX [97] use the existence of empty droplets (or wells) to calibrate a cell-free RNA model, which can be used to correct the data. In the case of plate-based scRNA-seq, this can easily be achieved by sequencing several wells into which only cell suspension buffer has been dispensed. For droplet-based sequencing, on the other hand, distinguishing empty droplets is not a trivial task.
While a common approach is to set a threshold on the minimum number of RNA counts [9,98], this introduces another bias. Specifically, certain types of cells (usually smaller cells) will also have fewer counts than the average cell and can thus be mistakenly excluded from the analysis. The opposite problem also exists: empty droplets containing cell-free RNA can be mistaken for a distinct cell type. This has motivated the development of methods which model the profile of empty cells to efficiently exclude them from analysis [99,100] without removing genuine cells with low RNA counts. Combining such an approach with existing models for cell-free RNA such as SoupX could be a powerful way to tackle both the filtering of empty droplets and of cell-free RNA from the data.

Dropouts

Because of the finite amount of single cell starting material, genes with moderate or low expression levels will often not be detected, which leads to an over-representation of zero counts of gene expression in the final scRNA-seq datasets. This 'dropout' phenomenon has been an actively discussed topic since the first emergence of scRNA-seq itself [70,101,102] and is commonly ascribed to technical reasons (e.g. capture efficiency, sampling noise or PCR bias) and deemed an obstacle for quantitative analysis [103]. (While this section has a focus on UMI methods, low capture efficiency can affect coverage-based single cell RNA sequencing datasets too. For example, in a particular cell, a gene with two isoforms, both expressed at the same level, can wrongly appear as though only one isoform is being expressed in scRNA-seq experiments with low capture efficiency. This is especially important when the expression levels are low [104].)

It is thus common to process scRNA-seq data using zero-inflated distribution models for deconvoluting meaningful biological variance from high counts of zeros due to technical noise. Therefore, many 'imputation' algorithms have been proposed to 'rescue' scRNA-seq data from the inflated zero counts for downstream applications [78,101]. For instance, dimension reduction based on zero-inflated distributions has been used on scRNA-seq datasets to extract informative variables for further analysis (e.g. ZIFA [105], ZINB-WaVE [106] or scVI [107]). Other imputation methods such as MAGIC [108], SAVER [102] or scImpute [109] 'fill in' the undetected RNA counts by exploiting gene-gene expression relationships and information from neighboring cells sharing similar expression profiles. Although these approaches are potentially powerful tools to address the dropout problem, their proclivity to introduce artifacts [109] and/or to erase existing differences by over-smoothing the data is known [102].

Recently, it was argued by Svensson [110] that, at least for a range of droplet-based UMI scRNA sequencing methods (e.g. 10x Chromium [9], Drop-seq [10] or inDrop [11]), complex zero-inflated model-based analysis might not be necessary. According to the author, the dropout events (zero counts) can be explained well enough by rather simple Poissonian distributions (a gamma-Poisson mixture, i.e. the negative binomial distribution) and correspond to biologically meaningful information rather than technical noise. Based on this simpler distribution and taking a Bayesian approach, Tang et al. [111] developed bayNorm, an integrated package for processing scRNA-sequencing data, and showed accurate reconstruction of experimental data in their simulations.
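The argument can be checked with a few lines of simulation (parameters are arbitrary round numbers, not fitted to any dataset): drawing per-cell expression from a gamma distribution and then sampling counts through a low capture efficiency with a Poisson step, i.e. a plain gamma-Poisson (negative binomial) model with no zero-inflation term, already produces a large fraction of zero counts for a modestly expressed gene.

import numpy as np

rng = np.random.default_rng(2)
n_cells = 10000
mean_molecules = 5.0        # mean true molecules of the gene per cell
capture_efficiency = 0.10   # fraction of molecules that end up being counted

# cell-to-cell biological variation (gamma) followed by capture + counting (Poisson)
biological = rng.gamma(shape=2.0, scale=mean_molecules / 2.0, size=n_cells)
counts = rng.poisson(biological * capture_efficiency)

print("fraction of zero counts:", round((counts == 0).mean(), 3))
print("mean observed count:    ", round(counts.mean(), 3))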
Such simple Poissonian descriptions, however, might hold true only for the latest generation of UMI-based low-volume emulsion techniques, as opposed to earlier methods, owing to improvements in capture efficiencies. All the same, high dropout rates are still observed in modern methodologies, and compatibility with Poissonian models does not prove genuinely biological origins. Moreover, the evidence collected from pure RNA control solutions might not convincingly explain sequencing data derived from cellular RNA samples, which exist in more complex environments. The assumption that at least some of the zero counts reflect biological variance is supported by recent work from Qiu [112], who presented a cell type classifier based on dropout co-occurrence patterns alone. Furthermore, the author demonstrated that such a classifier is as powerful as classification algorithms based on high-count mRNA molecules. This further supports the argument that dropout events contain meaningful biological information, rather than being purely an artifact.

In conjunction with other techniques

A further issue with scRNA-seq datasets is illustrated well by the differing conclusions obtained from these compared to alternatives, such as single molecule RNA FISH (smFISH) [57] or live cell imaging [80]; whereas scRNA-seq data suggest that the vast majority of genes feature relatively little variability, consistent with a Poisson distribution of transcript numbers [11], imaging data usually show variability higher than Poissonian (e.g. [58,80,113]).

One field of note here is spatial transcriptomics, where researchers combine the measurement of RNAs with their positional contexts. Here, imaging data such as smFISH appear well suited to the task (e.g. [114-116]); however, as mentioned previously, it is difficult to measure more than a few genes simultaneously with this method. To combat this challenge, some methods use known positional information from 'marker genes' to calibrate results from scRNA-seq and define the spatial location of each cell [117,118]. A problem with this calibration approach is that it requires prior knowledge of marker genes and their locations and can be biased by the choice of these genes. Technologies such as Slide-seq [119] and Visium [120] allow the spatial transcriptome to be measured without using 'marker genes' by adding spatially barcoded RNA capture probes to a slide, upon which fresh-frozen tissue samples are placed. This results in cDNA containing the spatial barcode, which can then be used to assign the spatial location of the original mRNAs.

Concluding remarks

(sc)RNA-seq is a rapidly maturing technology, with technical improvements continuing to increase the output in terms of numbers of samples/cells sequenced at an exponential rate [121]; it is now commonplace, with many easily accessible (commercial) implementations and bioinformatic tools to support data processing and analysis. While for many biological questions high sensitivity, precision or absolute quantification might not be necessary, biases are still present and underappreciated, which can skew the estimation of transcript abundances and influence the conclusions that are made. Furthermore, for more complex aims such as analyses of non-genetic heterogeneity [6], gene regulatory network inference, or even quantitative descriptions of a whole cell [122], the best possible measurement of expression levels in all respects is required.
Even in situations where relative expression levels only are of interest, fold changes can be meaningless if they concern very low expression levels, requiring estimation of the latter. By highlighting them here, we hope researchers will consider these issues when designing experiments and continue to develop methods for dealing with them. These methods can be experimental, such as using spike-ins, employing UMIs to combat the PCR amplification bias and fragmenting RNA to combat RT bias, or computational, such as improving expression level estimation by modeling the sample preparation process.

Key points
• RNA-seq is subject to trade-offs between sensitivity and single transcript labeling
• Coverage bias can be local or global, visible or invisible
• Global bias can cause systematic and length-dependent over- or underestimation of transcripts, with protocol-dependent patterns
• scRNA-seq in particular is affected by technical noise and dropouts
• These biases can be explained, understood, and partially corrected by novel analysis approaches
Exploration of Exosomal miRNAs from Serum and Synovial Fluid in Arthritis Patients

Arthritis is caused by inflammation, infection, degeneration, trauma, or other factors and affects approximately 250 million people all over the world. Early diagnosis and prediction are essential for treatment. Exosomes are nanoscale vesicles that participate in the processes of joint disease. Serum is the main source used in studies of arthritis-related exosomes, but whether serum exosomes can reflect the contents of synovial fluid exosomes is still unknown. In this work, we separated exosomes from the serum and synovial fluid of osteoarthritis patients and compared their miRNA expression using miRNA sequencing. The results revealed 31 upregulated and 33 downregulated miRNAs in synovial fluid compared to serum. Transcriptome analysis showed that these differentially expressed miRNAs were mainly associated with intercellular processes and metabolic pathways. Our results show that serum-derived exosomes cannot fully represent the exosomes of synovial fluid, which may be helpful for the study of joint diseases and the discovery of early diagnostic biomarkers of arthritis.

Introduction

Arthritis is mainly related to autoimmune reactions, infection, metabolic disorders, trauma, degenerative lesions, and other factors [1]. With the combined impact of global population aging and increasing obesity, this already cumbersome syndrome is becoming increasingly common, affecting 250 million people worldwide [2]. Drugs for osteoarthritis patients can only relieve pain without actually preventing progression of the disease [3]. Early diagnosis and prediction are therefore very important for treatment. Exosomes are one kind of extracellular vesicle (EV) with a size range from 40 to 160 nm in diameter [4]. All kinds of cells can release exosomes, and they are found in almost all kinds of body fluids such as blood [5-7], urine [8,9], sweat [10], saliva [11], milk [12], and synovial fluid. They play an important role in information transmission, metabolism, disease processes, and immune regulation in the human body. In addition, they have been used as important biomarkers in liquid biopsies for the early clinical diagnosis of cancer, evaluation of drug efficacy, and monitoring of disease progress [13,14]. In recent years, exosomes have been used in research on arthritis-related diseases, especially in the study of exosomal miRNAs. miRNAs are a class of highly conserved, endogenous, non-coding small RNAs with a length of 18-24 nucleotides [15]; they have been used as potential biomarkers and are the most widely studied molecules in exosomes [16]. As the most commonly used body fluid for testing, serum has been reported as a source for analyzing exosomal miRNAs in patients with arthritis [17,18]. At the same time, synovial fluid (the fluid in the joint cavity) has also been used for exosomal miRNA analysis [19]. However, the differences and correlations between miRNAs in serum exosomes and synovial fluid exosomes, and whether serum exosomes can reflect the situation in the joint cavity, are still unknown. In this work, we separated exosomes from the serum and synovial fluid of osteoarthritis patients and explored their miRNA expression by miRNA sequencing.

Fabrication of the Exosome Separation Device

The exosome separation device was composed of four parts: chitosan scaffolds, reaction tubes, working buffer, and a shaker (NuoMi, Taizhou, China).
The chitosan scaffold provided a chitosan substrate with a large specific surface area. The working buffer was used to create an acidic reaction environment and to wash away unabsorbed impurities. The reaction tubes were used to combine the samples with the chitosan scaffolds, and the shaker was used to enhance the reaction between the samples and the chitosan scaffolds. The working buffer was 10 mM MES (2-(N-morpholino)ethanesulfonic acid, Aladdin, Shanghai, China) with sodium hydroxide added to adjust the pH to 6.0. The chitosan scaffolds were synthesized by the classic freeze-drying method. Two percent chitosan (Aladdin, 100-200 mPa·s, Shanghai, China) was first mixed with 5% DMSO (dimethyl sulfoxide) in 1% acetic acid. Then, an equal volume of 0.3% glutaraldehyde was added to the solution to crosslink the chitosan. After the reaction was completed at −20 °C for over 16 h, 1% NaBH4 solution was used three times to remove the unreacted reagent, followed by washing with deionized water three times. Finally, the products were put into a lyophilizer to obtain dried chitosan scaffolds.

Sources and Storage Conditions of Human Serum and Synovial Fluid

Clinical serum samples and synovial fluid samples of osteoarthritis patients were obtained from the First Affiliated Hospital of Dalian Medical University based on the protocols authorized by the institutional review committee of the First Affiliated Hospital of Dalian Medical University (PJ-KY-2019-96(X), 29 October 2019). After collection from the donors, all samples were transferred to a −80 °C freezer as soon as possible and kept there until the experiment started.

Exosome Collection from Cell Culture Medium

The medium used in this experiment was from C2C12 cells (a mouse muscle cell line). The culture medium for this cell line is high-glucose DMEM combined with 1% penicillin-streptomycin and 10% FBS (v/v). When the bottom of the Petri dish was full of cells, the medium was collected for exosome separation. Cell culture-related reagents were all purchased from Gibco, New York, NY, USA. For the preparation of pretreated medium, the collected medium was centrifuged at 1000× g for 10 min and then at 10,000× g for 30 min at 4 °C to remove cells, cell fragments, and large vesicles. Then, a commercial 220 nm PVDF filter membrane (Millex, Atlanta, GA, USA) was used to filter the supernatant and remove substances larger than 220 nm. For the preparation of standard samples by ultracentrifugation, the filtrate was ultracentrifuged at 120,000× g for 75 min at 4 °C to precipitate exosomes. The exosomes were then resuspended in working buffer and ultracentrifuged again at 120,000× g for 75 min at 4 °C. The precipitate was resuspended in 200 µL MES and stored in a −80 °C refrigerator.

Pretreatment of Clinical Samples

The pretreatment method for the clinical samples was approximately the same as for the culture medium. The thawed serum and synovial fluid were centrifuged at 4 °C at 1000× g for 10 min followed by 10,000× g for 30 min. Then, in order to reduce losses during filtration due to the scarcity of clinical samples, working buffer was used to dilute the samples and adjust their pH at a volume ratio of 1:4 (sample:working buffer). The mixture was subsequently filtered through a 220 nm filter.
Isolation of Exosomes Using the Exosome Separation Device

Samples and chitosan scaffolds were added to the reaction tube and then mixed on a shaker at 4 °C for 20 min to capture exosomes via electrostatic adsorption. Unbound exosomes and impurities were then washed away three times with working buffer, leaving the captured exosomes absorbed on the chitosan scaffolds. The shaker had a fixed swing angle of 15° and was run at a speed of 30 rpm.

Quantitative Analysis of Exosomes

A fluorometer (Qubit 3.0, Waltham, MA, USA) was used to quantify the concentration of exosomes. The capture efficiency was calculated as the ratio of the concentration difference before and after capture to the initial concentration.

Particle Size Analysis

A nano-laser particle detector (Zetasizer Nano, Malvern, UK) was used to characterize the particle size distribution of the exosomes. The temperature was 25 °C, the material RI was 1.59, the duration was 500 s, and the measurement position was 4.65.

Transmission Electron Microscopy Imaging

Exosomes were purified from cell culture medium by ultracentrifugation, as described above, and resuspended in PBS. Before staining, the exosomes were fixed in 4% paraformaldehyde for 30 min at 4 °C. Afterwards, the exosomes were loaded on a copper grid for 5 min, and the excess liquid was removed with filter paper. Finally, the exosomes were stained with uranyl acetate (Damao, Tianjin, China) for 2 min before examination under a transmission electron microscope.

Western Blot Analysis

To extract proteins, chitosan scaffolds with exosomes were lysed on ice in RIPA lysis buffer containing 1% PMSF and 1% protease inhibitor. Protein lysates were separated by SDS-PAGE (10% gel, 100 V) and transferred onto an NC membrane (Millipore, Burlington, MA, USA). After blocking for 1 h, the membrane was incubated with antibodies against CD63 and CD9 (Abcam, Cambridge, UK) in PBS at 4 °C overnight. The membrane was then washed three times with PBS to remove unbound antibodies and incubated with horseradish peroxidase-conjugated secondary antibodies for 1 h. Chemiluminescence was detected with a multifunctional imager (FUSION FX7, Paris, France). The reagents used for the Western blot were mostly purchased from Beyotime, Shanghai, China, except for the antibodies.

RNA Extraction and Quantification

The RNA in the exosomes was extracted using the TRIzol method. In brief, 1 mL of TRIzol (Invitrogen, Carlsbad, CA, USA) was added to the samples on ice to lyse the exosomes and release the RNA. Then, chloroform was used to extract the phenol, and the mixture was centrifuged at 12,000× g for 15 min; the RNA remained in the upper aqueous phase. The upper liquid was carefully transferred into a new tube, and an equal volume of isopropyl alcohol was added to precipitate the RNA. After centrifugation at 12,000× g for 10 min, 75% methanol was used to wash the precipitate. Afterwards, the RNA was pelleted by centrifugation at 12,000× g for 5 min. The precipitate was resuspended in 10 µL DEPC-treated water, and the concentration of RNA was determined by means of a spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA).

miRNA Sequencing

The obtained exosomal RNA was delivered on dry ice to a sequencing company (Wuhan SeqHealth Technology Corporation, Wuhan, China), where a transcriptome sequencing project was completed on an Illumina paired-end sequencing platform, and the transcriptome data were analyzed by bioinformatic methods.
Data Analysis

All measurements were performed at least three times, and the data are shown as the mean ± SD. Student's t-test was used to analyze the statistical significance between the two groups. A p-value of less than 0.05 was considered statistically significant. For the miRNA analysis, the sequencing company conducted the quality control of the original data, and all analyses between samples were based on the sequencing results.

Design of the Exosome Separation Device

Exosomes are nanoscale microvesicles secreted by most types of cells, and they play important roles in various pathological processes. Separating exosomes from biological samples is challenging because of their small size (i.e., 40-160 nm) and their density, which is similar to that of body fluids. In this work, we used a simple strategy that we fabricated and optimized previously [20] to separate exosomes from serum and synovial fluid. This exosome separation device cooperatively integrates scaffold substrates, electrostatic adsorption, and shuttle flow to enable efficient isolation of exosomes from biological samples. As shown in Figure 1a, the main principle of this device is the electrostatic interaction between positive and negative charges. In an acidic reaction environment, the -NH2 groups on the chitosan scaffolds are protonated into -NH3+ groups; the negatively charged phosphate groups of the exosome membrane can then bind to the surface of the chitosan scaffolds. The exosomes absorbed onto the chitosan scaffolds can be released with Tris buffer (pH = 8.0) for particle size analysis, as we reported in our previous work [21]. This is mainly because the -NH3+ groups are deprotonated back into -NH2 groups in the alkaline solution, and the exosomes are desorbed from the surfaces of the chitosan scaffolds. Furthermore, exosomes can also be lysed in situ with RIPA for protein analysis or with TRIzol for RNA analysis (Figure 1b).

(Figure 1 caption fragment: (a) The device is composed of a chitosan scaffold, reaction tube, working buffer, and a shaker; in an acidic environment, -NH3+ groups on chitosan combine with the anionic phosphate groups of the exosome phospholipid bilayer, thus absorbing exosomes on its surface. (b) Analysis of exosomes absorbed on a chitosan scaffold: captured exosomes can be released with alkaline buffer for particle size detection or lysed online for nucleic acid and protein analysis.)

Performance of the Exosome Separation Device

In order to test the performance of the exosome separation device for exosome isolation, we used exosomes acquired from cell culture medium by ultracentrifugation as a standard sample (Figure 2a). The morphology of a chitosan scaffold is shown in Figure 2b. It can be seen that there was an obvious through-hole structure in the chitosan scaffold, which is conducive to liquid exchange. To determine the capture efficiency of this device, we used it to separate exosomes from the standard samples and measured the concentration of the samples before and after capture. We normalized the concentration of the samples after capture to that of the original standard samples, because the concentration of the standard samples could not be kept completely consistent between experiments. The result is shown in Figure 2c. The relative content of exosomes in the solution after capture was 0.18 ± 0.04, which means that the capture efficiency of the device was 82 ± 4%. This result is consistent with our previous work, which indicates that the exosome separation device has a stable and reliable performance.
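Written out as code, the capture-efficiency calculation described above is simply one minus the normalized residual exosome content; the replicate values below are invented so as to reproduce the reported 82 ± 4% and are not the actual measurements.

residual_fraction = [0.18, 0.22, 0.14]   # post-capture / pre-capture concentration, per replicate
efficiencies = [1.0 - r for r in residual_fraction]

mean_eff = sum(efficiencies) / len(efficiencies)
sd_eff = (sum((e - mean_eff) ** 2 for e in efficiencies) / (len(efficiencies) - 1)) ** 0.5
print(f"capture efficiency: {100 * mean_eff:.0f}% +/- {100 * sd_eff:.0f}% (mean +/- SD, n = {len(efficiencies)})")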
We also used SEM to observe the absorption of exosomes on the surface of the chitosan scaffolds. As shown in Figure 2d, the surface of the chitosan scaffold was smooth before capture, while a large number of small particles could be observed on the surface of the chitosan scaffold after exosome adsorption (medium). The size of the particles was approximately 100 nm, and they had the typical bowl-like structure of exosomes. After the exosomes were released using Tris buffer, only a few white particles could be observed on the surface of the chitosan scaffold. These results reveal that the exosome separation device can capture and release exosomes in a controllable manner.

(Figure 2 caption fragment: (c) The relative content of exosomes before and after separation by the exosome separation device, normalized to the original concentration; data are the mean ± SD, n = 3; **** p < 0.0001 by two-sided paired Student's t-test. (d) SEM images of the chitosan scaffold surface before exosome capture, after exosome capture, and after exosome release; red arrows indicate exosomes absorbed on the scaffolds. (e) Particle size analysis of pretreated cell culture medium samples; mean ± SD, n = 3. (f) Particle size distribution of exosomes after isolation by the exosome separation device; mean ± SD, n = 3.)
Furthermore, we also used particle size distribution analysis to examine the size of the exosomes separated by this device. The results are shown in Figure 2e,f. It can be seen that before capture, the particle size of the pretreated medium was particularly inhomogeneous, with a large peak around 10 nm and a smaller peak around 100 nm, which indicates that there were a large number of free proteins in the culture medium. After isolation by the device, the size distribution of the exosomes had one large peak at approximately 100 nm and another small peak at approximately 20 nm. This result further verifies that exosomes can be isolated using this device, but a small number of protein impurities remain in the isolated exosomes. This is mainly because the principle of electrostatic adsorption makes it inevitable that some proteins are also adsorbed on the surface of the chitosan, which may affect the morphology and protein characterization of the exosomes. We are now trying to further optimize our device, for example by adding size exclusion chromatography to pre-remove the proteins in the samples.

Characterization of Exosomes Isolated from Serum and Synovial Fluid Using the Exosome Separation Device

In order to explore the differences and correlations between the exosomes of serum and synovial fluid, we used the exosome separation device to isolate exosomes from the serum and synovial fluid of six osteoarthritis patients and analyzed their proteins and RNA (Figure 3a). For the protein analysis, we selected CD9 and CD63, two main protein markers that are usually used to prove the existence of exosomes, to verify the capture of exosomes from serum and synovial fluid by Western blot. As shown in Figure 3b, the exosomes separated by the device from the serum and synovial fluid samples all expressed CD9 and CD63, which indicates that this device can successfully isolate exosomes from serum and synovial fluid. During the experiment, we controlled the initial sample volume of serum and synovial fluid to be 1 mL and ensured that the operation processes for these two body fluids were completely consistent. From the result, there was no significant difference in protein expression between serum and synovial fluid, indicating that there was no significant difference in exosome concentration between serum and synovial fluid.
We also assessed the performance of the device for the extraction of exosomal RNA from serum and synovial fluid. The results are shown in Figure 3c,d. The concentration of RNA obtained by the device was normalized relative to the pretreated samples because of the differences among patients. For serum, the relative content of RNA was 0.74 ± 0.20, and for synovial fluid, the relative content of RNA was 0.94 ± 0.25. These results show that exosomes can be isolated from serum and synovial fluid using this device and that RNA can be obtained by in situ lysis.

(Figure 3 caption fragment: (c,d) The relative content of total RNA in exosomes from serum (c) and synovial fluid (d) isolated using the exosome separation device (EV-sep device); results normalized to the corresponding pretreated samples; data are the mean ± SD, n = 3.)

In order to verify the differences between exosomal miRNAs from serum and synovial fluid, we used this device to isolate exosomes from the serum and synovial fluid of 12 osteoarthritis patients, extracting the exosomal nucleic acids in situ and sequencing the miRNA (Figure 4a). These 12 patients were all 50-60 years old with no other underlying diseases. In this work, a total of 921 miRNAs were identified in the serum and synovial fluid exosomes from the arthritis patients. The volcano plot shows that there were 31 upregulated and 33 downregulated miRNAs in the synovial fluid compared to the serum of the arthritis patients (Figure 4b). The heat map in Figure 4c shows the differentially expressed miRNAs. Among these 64 differentially expressed miRNAs, 59 have already been reported, and five are identified for the first time in this work (Table 1). It can be seen from the results that there are differences between exosomal miRNAs from serum and synovial fluid. In addition, we are now trying to analyze more samples to discover candidate miRNAs in our next work.
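For readers unfamiliar with how the up/down calls behind a volcano plot are typically made, the sketch below shows the usual thresholding on fold change and adjusted p-value; the miRNA names and values are placeholders and are not the data from Table 1.

results = {
    # name: (log2 fold change, synovial fluid vs serum; adjusted p-value)
    "miR-placeholder-1": ( 2.3, 0.001),
    "miR-placeholder-2": (-1.8, 0.004),
    "miR-placeholder-3": ( 0.4, 0.600),
}

LOG2FC_CUTOFF, PADJ_CUTOFF = 1.0, 0.05

for name, (log2fc, padj) in results.items():
    if padj < PADJ_CUTOFF and log2fc >= LOG2FC_CUTOFF:
        call = "upregulated in synovial fluid"
    elif padj < PADJ_CUTOFF and log2fc <= -LOG2FC_CUTOFF:
        call = "downregulated in synovial fluid"
    else:
        call = "not significant"
    print(f"{name}: log2FC = {log2fc:+.1f}, padj = {padj:.3f} -> {call}")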
Transcriptome Analysis of Differentially Expressed Exosomal miRNAs in Serum and Synovial Fluid

To analyze the differentially expressed miRNAs, the target genes of the 64 exosomal miRNAs were predicted using the miRanda and RNAhybrid software, and the final set of potential miRNA target genes is presented as a Venn diagram in Figure 5a. Gene network diagrams of the upregulated and downregulated genes are shown in Figure 5b,c. The biological processes, molecular functions, and cellular components of these up/downregulated potential target genes in synovial fluid were determined using the GO (Gene Ontology) database (Figure 5d,e). The 20 most significant GO terms by p-value were selected for plotting. The results show that the genes differentially expressed between serum and synovial fluid are mainly related to terms such as cellular process, binding, and cell part. In addition, the high enrichment factor for membrane is consistent with the expression of genes related to the formation of extracellular vesicles. We also used the KEGG (Kyoto Encyclopedia of Genes and Genomes) database to determine the main biochemical metabolic pathways and signal transduction pathways of these potential target genes (Figure 5f,g); the top 20 pathways are shown. KEGG pathway enrichment reveals that the differentially expressed genes are mostly involved in metabolic pathways, which reflects the fundamental difference between the two body fluids. In addition, blood-related pathways such as vascular smooth muscle contraction and platelet activation were downregulated in synovial fluid. Interestingly, although arthritis is an immune-associated disease, cancer-related pathways such as pathways in cancer, proteoglycans in cancer, and basal cell carcinoma were upregulated in synovial fluid, suggesting that the mechanisms linking synovial fluid to immune function remain to be explored.
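As an aside on how the enrichment statistics referred to above (p-values and enrichment factors for GO/KEGG terms) are commonly computed, the following is a minimal sketch based on a one-sided hypergeometric test. The gene counts are invented for illustration, and the paper does not specify which enrichment tool or background gene set was actually used.

```python
from scipy.stats import hypergeom

def term_enrichment(k, n_term, n_de, n_background):
    """One-sided hypergeometric enrichment for a single GO/KEGG term.

    k            -- differentially expressed (DE) target genes annotated to the term
    n_term       -- background genes annotated to the term
    n_de         -- total DE target genes
    n_background -- total background genes
    Returns (p_value, enrichment_factor)."""
    # P(X >= k) when drawing n_de genes from a background of n_background,
    # of which n_term belong to the term.
    p = hypergeom.sf(k - 1, n_background, n_term, n_de)
    rich_factor = k / n_term          # fraction of the term's genes that are DE targets
    return p, rich_factor

# Toy example (made-up counts): 40 of 600 "membrane" genes among 800 DE targets
# drawn from a 20,000-gene background.
p, rf = term_enrichment(k=40, n_term=600, n_de=800, n_background=20000)
print(f"p = {p:.3g}, enrichment factor = {rf:.3f}")
# Terms would then be ranked by p-value and, e.g., the top 20 plotted,
# as described in the text above.
```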
Conclusions

In this work, we made the first attempt to compare exosomal miRNAs from serum and synovial fluid in arthritis patients. We observed 31 upregulated and 33 downregulated miRNAs in synovial fluid compared with serum, indicating that serum-derived exosomes cannot fully represent the exosomes of synovial fluid. Transcriptome analysis showed that these differentially expressed miRNAs were mainly associated with metabolic and cancer-related pathways. These results may be helpful for the study of joint diseases and the discovery of early diagnostic biomarkers of arthritis.

Informed Consent Statement: This is a retrospective study that collects only the nucleic acid and protein data of exosomes in the collected blood samples, without any clinical intervention. We applied to the Ethics Committee of the First Affiliated Hospital of Dalian Medical University for exemption from informed consent of the subjects and obtained approval. We will make every effort to protect the privacy of the subjects' personal medical data and will not display any subject's identity information in research documents, reports, or published articles.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to continued analysis by the corresponding author's research team.

Conflicts of Interest: The authors declare no conflict of interest.
2022-01-21T16:24:48.641Z
2022-01-19T00:00:00.000
{ "year": 2022, "sha1": "5d68cd3056fafe29356acb4c7ce388eb8a2f729e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/12/2/239/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cdcba859a0fe1f2c14f99539f803c0af1ab4950c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237330723
pes2o/s2orc
v3-fos-license
Collaborative Social-Epidemiology: A Co-analysis of the Cultural and Structural Determinants of Health for Aboriginal Youth in Victorian Schools Social-epidemiology that excludes Aboriginal voices often fails to capture the full and complex social worlds of Aboriginal people. Using data from an existing co-designed Victorian government Adolescent Health and Wellbeing Survey (2008/9), we worked with Aboriginal organizations to identify data priorities, select measures, interpret data, and contextualize findings. Using this participatory co-analysis approach, we selected “cultural” and “structural” determinants identified by Aboriginal organizations as important and modelled these using principal component analysis. Resulting components were then modelled using logistic regression to investigate associations with “likely being well” (Kessler-10 score < 20) for 88 Aboriginal adolescents aged 11–17 years. Principal component analysis grouped 11 structural variables into four components and 11 cultural variables into three components. Of these, “grew up in Aboriginal family/community and connected” associated with significantly higher odds of “likely being well” (OR = 2.26 (1.01–5.06), p = 0.046). Conversely, “institutionally imposed family displacement” had significantly lower odds (OR = 0.49 (0.24–0.97), p = 0.040) and “negative police contact and poverty” non-significantly lower odds (OR = 0.53 (0.26–1.06), p = 0.073) for “likely being well”. Using a co-analysis participatory approach, the voices of Aboriginal researchers and Aboriginal organizations were able to construct a social world that aligned with their ways of knowing, doing, and being. Findings highlighted institutionally imposed family displacement, policing, and poverty as social sites for health intervention and emphasized the importance of strong Aboriginal families for adolescents. Introduction Epidemiology, translating to "the study of what is upon the people", is often criticized for its failure to capture the often complicated relationships between multiple and interacting determinants impacting "upon" people's lives [1]. Although epidemiology relating to Aboriginal populations has in recent years increasingly taken a social approach to measuring health and its broader social determinants, including increasing focus on strengths-based approaches, much social-epidemiology continues to be limited by a failure to centre Aboriginal people, organizations, and knowledges in "the study of what is upon Aboriginal people". This absence of Aboriginal voices has meant that many important social and cultural dimensions that make up the realities of Aboriginal people's lives and prioritized as important by Aboriginal people are not attended to in epidemiology. Epidemiology and the Individual Focus Epidemiology has been defined as "the study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to control of health problems" [2] (p. 3). We know that the origins of epidemiology are rooted in the ecological, in the understanding that social factors determine the health of populations [3]. The formative public health efforts of John Snow, William Farr, Edwin Chadwick, and Friedrich Engels in the mid-1800s all demonstrated disease as a consequence of social class injustices, as determined by factors external to the individual [4,5]. 
But with the development of infectious disease epidemiology in the 1800s and chronic disease modelling in the mid-1900s, explanatory modelling shifted away from a social to a biological frame [6]. For epidemiology relating to Aboriginal populations, this shift took hold in the twentieth century, when there was a vigour for collecting anthropological data about individual bodies and minds [7]. Epidemiology relating to Aboriginal people started at this point of knowing the individual. Scholarship by Brough (2013) and Walter (2010) has challenged the utility of descriptive bioepidemiology that maintains a narrow focus on the individual or the Aboriginal community [1,8]. These critics highlight that framing health as a consequence of self-perpetuating deficits of individual- or group-level biology and behaviour, without consideration of the environments with which biology interacts and where communities live, is problematic, particularly when those individual Aboriginal bodies, minds, and cultures are exposed to social, political, and economic structuring that comes from colonization and being raced [9]. This narrative of problematic individuals is most pronounced for Aboriginal adolescent health. Epidemiology involving Aboriginal adolescents to date has largely adopted a victim-blaming narrative that sees young Aboriginal people engaged in "risky behaviours" and making "poor life choices", particularly when it comes to sexual health. A systematic review by Azzopardi et al. (2013) that examined health research nationally identified that research relating to Aboriginal adolescent health largely maintained a focus on the 11 risks of tobacco, alcohol, illicit drugs, high body mass, physical inactivity, low fruit and vegetable intake, high blood pressure, high cholesterol, unsafe sex, child sexual abuse, and intimate partner violence, with limited engagement with the contexts and mechanisms that give rise to these risks. Social-epidemiology allows us to engage with some of these important contexts and mechanisms. This absence of a comprehensive picture of Aboriginal adolescent health is a real barrier to enabling effective policy [10]. Data on the health of Aboriginal adolescents and its social determinants are instrumental to health planning, policy, resourcing, and evaluation.

Social Epidemiology

Recent decades have seen a re-emergence of the "social determinants approach", and for epidemiology, this has seen the development of ecosocial (systems) theory, where conceptualizations of health are turned to "who and what is responsible for population patterns of health, disease, and wellbeing, as manifested in present, past, and changing social inequalities in health" [11] (p. 694). Aboriginal populations have strong understandings of who and what is responsible for health [12]. Aboriginal people and communities have long asserted that health is socioculturally determined by Australian-society-level structural determinants and by community-level cultural determinants, and that the application of this knowledge to social-epidemiology should be straightforward [12][13][14]. However, most research has taken a dominant understanding of social determinants that has assumed a universality of Australian dominant social contexts, with limited attention to measures or indicators that capture the unique social experiences of Aboriginal populations.
Aboriginal People in Victoria-Local Ways of Knowing the Social Determinants There is over 50 years of research and policy by Aboriginal communities in Victoria that speaks to Aboriginal understandings of health and its unique social determinants [13,15,16]. This includes numerous research projects that have challenged the transposition of dominant social indicators onto Aboriginal populations, highlighting that Aboriginal people participate in their own societies (i.e., communities) and participate differently in dominant societies (i.e., are marginalized). For example, Tynan et al. (2005) and colleagues from Aboriginal communities across the Goulburn-Murray Rivers region highlighted how the social lives of Aboriginal people are different: "Not only are there distinct cultural values contributing to different social processes in Koori communities, but the ongoing social marginalization of Koori people from mainstream society contributes to a substantially different social domain" [16] (p. 2). Similarly, Aunty Joan Vickery, a Gunditjmara Elder and long-time health advocate, highlighted that dominant social determinants are unlikely to capture the full and complex social realities of Aboriginal lives, especially when they include those that are imposed and of a colonizing or assimilative nature [15]. There are existing conceptual research frameworks that identify and describe social determinants from the "insider" perspective of Aboriginal people living in Victoria [15,[17][18][19][20][21][22]. These have largely taken an ecological perspective that links family, community, and wider social systems to Aboriginal health. The most widely recognized of these, by Aboriginal academic Dr. Graham Gee et al. (2010), largely frames Aboriginal cultural determinants as health assets and the structural determinants of Australian society as contributing to poorer health. By structural determinants, these models refer to the sociocultural, historical, and political contexts as well as the structural mechanisms that give rise to health inequities [23]. For Aboriginal people, dominant institutions, systems of governance, state and federal policies, welfare states, and dominant value systems are examples of sociocultural, historical, and political contexts, while ongoing racism and colonization form fundamental structural mechanisms that stratify and determine group access to resources and positions of power. By cultural determinants, these models refer to group-level determinants specific to Aboriginal peoples, often defined by them as the "aspects of culture which foster resilience that are protective of health, and that contribute to our identity and unique place within the Australian polity" [24] (p. 3). Although structural and cultural determinants feature prominently in Aboriginal health policy and research frameworks, they have long been poorly conceptualized and measured in epidemiological analysis. Recently, there have been increasing calls to strengthen the evidence that cultural determinants are connected to health outcomes [24] as well as calls for research to critically engage with the broader social and political dimensions of health [25]. Although we recognize increasing work in this area to at least measure cultural determinants, we remain acutely aware of the lack of appropriate data, measures, and methodologies for quantifying the social dimensions of Aboriginal lives [26,27]. 
Here we drew on one of the very few government surveys in Victoria that was codesigned by local Aboriginal organizations and included community-identified social measures of structural and cultural determinants specific to Aboriginal people-the Victorian Adolescent Health and Wellbeing survey, also known as the HOWRU Survey. The Victorian Adolescent Health and Wellbeing (HOWRU) Survey The HOWRU survey was designed in 2008 by the Centre for Adolescent Health at the Royal Children's Hospital in Melbourne on behalf of the Victorian government Department of Education and Early Childhood Development (DEECD). This survey included an Aboriginal module that was designed by the Onemda Koori Health Unit (Onemda) at the University of Melbourne and the Institute of Koorie Education at Deakin University [28]. It was developed under the governance of an Aboriginal steering committee, comprising Aboriginal organizations including the Victorian Aboriginal Child Care Agency (VACCA) and the Victorian Aboriginal Community Controlled Health Organisation (VACCHO), and two staff at the Royal Children's Hospital. The resulting survey instrument had 32 items relating to the areas of Aboriginal identity, aspirations, success, family and community, service use, connection, participation in cultural activities, contact with dominant institutions, school environments, and discrimination and racism [28]. In addition to the Aboriginal module, the wider HOWRU survey included items relevant to all Victorian populations relating to demographics, school experiences, family structure and relationships, health and wellbeing, personal experiences, neighbourhood amenities, access to services, and safety. The Aboriginal module did not include a specific construct measure for Aboriginal health and wellbeing, but the wider HOWRU survey included standardized measures including the Kessler-10 psychological distress score. In total, 10,424 adolescents from years 7, 9, and 11 across Victorian government, Catholic, and independent schools in metropolitan Melbourne and regional Victoria consented and participated in the 2008-2009 survey. Detailed methods for the wider HOWRU survey have previously been reported [29] and data have been extensively published for the broader cohort [30][31][32][33][34][35]. However, to date, only basic descriptive data have been reported for Aboriginal participants within a government document, "The state of Victoria's children 2009: Aboriginal children and young people in Victoria". In this paper, we describe a co-analysis of the HOWRU survey. In doing so, we reframe the traditional power relations in epidemiology through a participatory co-analysis wherein Aboriginal organizations were active in identifying the research question, designing the analysis, identifying the variables, and framing and reporting the findings. This co-design process was consistent with an Indigenous rights based and data sovereignty agenda, and in line with key recommendation that the HOWRU survey be analysed by Aboriginal people and their representative bodies [28,36]. Ethical Considerations The University of Melbourne Ethics Committee approved the project (ID:1443502.1). Permissions to access and analyse data were provided by the Department of Education and Early Childhood Development. In line with local protocols, an application for research partnership was submitted to VACCHO and approved. At project commencement, Terms of Reference were devised and agreed to by all coauthors to guide the partnership and co-analysis. 
These terms outlined expectations relating to ethical conduct, community protocols, roles and responsibilities, reporting and knowledge translation, authorship, intellectual property, and student involvement. Representatives from Aboriginal organizations were engaged so that local understandings of sociocultural realities and lived experiences in the Victorian community could guide this research.

Engagement, Collaboration, and Co-analysis Process

Co-analysis was conducted between 2015 and 2021 and involved three broad meetings with the Aboriginal researchers and staff from Aboriginal organizations. In these meetings, representatives from Aboriginal organizations (1) identified organizational priorities and data needs; (2) designed and framed the analysis by selecting variables and outcomes that aligned best with local sociocultural understandings of health and wellbeing; and (3) interpreted data, contextualized findings, and decided on processes for dissemination of findings. The priorities of both the Aboriginal researchers and the staff from Aboriginal-community-controlled organizations are reflected in this work.

Study Population

In total, 88 adolescents (0.8% of those sampled) self-identified as Aboriginal, completed the Aboriginal module, and have been included in our co-analysis. In recognizing the small sample size, we acknowledge that our analysis is compromised by limited statistical power. However, we emphasize here the collaborative process for data analysis rather than the findings alone.

Demographic Variables

Age was measured as age at last birthday (in years). Gender was collected as a self-reported binary (male/female). A binary location variable was created using postcode coded to the Australian Bureau of Statistics 2011 Australian Statistical Geography Standard (ASGS) remoteness classification. Because of the small sample size, the inner regional and outer regional classifications were merged, and a dichotomous variable, "major city" or "regional", was created.

Explanatory Variables

The collaborative team identified which cultural and structural determinants fit with the work they did and with their understanding of social determinants. Because of the novel nature of these measures, we include how they were collected and measured as appendices. Measures were dichotomized to present the descriptive characteristics of the population (see Appendix A Tables A1 and A2), but for principal component analysis, categorical variables with all responses were used where possible.

Outcome Measure

An inverted version of the Kessler-10 psychological distress questionnaire (K-10) was used [37]. Kessler-10 scores were dichotomized into "likely to be well", corresponding to a score of 10-19, and "likely to have psychological distress", corresponding to a score of 20-50. "Likely being well" was the outcome of interest and was selected to focus on youth doing well, a salutogenic approach to disrupt narratives of problemed adolescents. This approach has been used for Aboriginal populations elsewhere to describe subsets of Aboriginal populations doing well [38][39][40].

Statistical Analysis

Age has been summarized in terms of mean and standard deviation. All categorical data were expressed as frequencies (percentages) with 95% CIs. For categorical data, the χ2 test was used to assess differences between the groups "likely being well" and "likely to have psychological distress".
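As a concrete illustration of the outcome coding and the bivariate comparison described above, the sketch below scores a set of Kessler-10 responses, dichotomizes them at the 10-19 versus 20-50 cut-points, and runs a chi-square test against a binary exposure. The item responses and exposure values are invented placeholders (the study used SPSS and MedCalc, not this code), and the study describes an inverted version of the instrument, whereas this sketch scores it in the conventional direction for simplicity.

```python
import numpy as np
from scipy.stats import chi2_contingency

def k10_likely_well(item_scores):
    """Sum ten K-10 items (each scored 1-5, total 10-50) and return True
    if the respondent falls in the 'likely to be well' band (10-19)."""
    total = sum(item_scores)
    if not 10 <= total <= 50:
        raise ValueError("K-10 total must lie between 10 and 50")
    return total <= 19

# Invented example data: K-10 item responses and a binary exposure
# (e.g., 'felt close to dad', 1 = yes, 0 = no) for a handful of respondents.
k10_items = [
    [1, 1, 2, 1, 1, 2, 1, 1, 1, 1],   # total 12 -> likely well
    [3, 4, 3, 2, 3, 4, 3, 2, 3, 3],   # total 30 -> likely distressed
    [2, 1, 1, 2, 1, 1, 2, 1, 1, 1],   # total 13 -> likely well
    [4, 3, 4, 4, 3, 3, 4, 4, 3, 4],   # total 36 -> likely distressed
]
exposure = [1, 0, 1, 0]

likely_well = [k10_likely_well(items) for items in k10_items]

# 2 x 2 contingency table: exposure (rows) by likely-well status (columns).
table = np.zeros((2, 2), dtype=int)
for e, w in zip(exposure, likely_well):
    table[e, int(w)] += 1

chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```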
Rate ratios (RR) were calculated as "likely being well"/"likely to have psychological distress", and confidence intervals for the RR were calculated using MedCalc (MedCalc Software, Ostend, Belgium). All other analyses were completed using SPSS Statistics version 27 (SPSS Inc., Chicago, IL, USA). PCA was used to group highly correlated variables into a smaller number of "components" and to expose latent clustering of variables in the population. Correlations between variables were determined using a correlation matrix. We ran two PCA models, one for cultural determinants and one for structural determinants, because of the small sample size (n = 88), as model stability would be compromised by the inclusion of too many variables [41,42]. Because of constraints on the number of variables able to be included, we used a correlation matrix to assess the strength of intercorrelations among the cultural and structural determinants. Where variables had poor correlation (no correlations > 0.3), they were excluded from PCA; these included neighbourhood attachment, high housing transition, access to basic services, health access, sees members of extended family, having Aboriginal friends, speaking language, participation in Aboriginal ceremonies, going to Aboriginal funerals or sorry business, and going to Aboriginal organizations, which were used only in univariate analysis. The suitability of the data for PCA was assessed by testing the appropriateness of the sample size using the Kaiser-Meyer-Olkin (KMO) value and the Bartlett test of sphericity [43]. The PCA used an Oblimin rotation given that the correlation matrices revealed that variables were highly correlated. Components were extracted from the PCA on the basis of an eigenvalue > 1. To interpret the PCA models, loadings (correlations) greater than 0.4 were considered meaningful. In addition, components arising from the PCA modelling were culturally validated through the co-analysis process, where the collaborative members identified whether the components produced made sense and captured the social experiences of the communities they worked with. Components produced using PCA were used as continuous variables. Linear regression was used to assess associations between the cultural and structural PCA components, with scatter plots used to assess suitability for linear regression. Components produced by the structural and cultural PCA models were included in logistic regression models with the outcome variable "likely to be well", with adjustment for gender only. Age was excluded, as the survey included only a small age range.

Results

Characteristics of the population are presented in Table 1. The mean age of adolescents was 14 years (SD 1.7; range 11-17 years), with similar proportions of adolescents across school years 7, 9, and 11. There were more females than males and similar numbers of participants from the major city and other locations. Adolescents reported strong family connectedness, with most talking to or seeing extended family or feeling close to their mum or dad. In terms of community, one in three participants reported growing up in an Aboriginal community, and just over half had contact with the Aboriginal community. However, only a minority of the population spoke an Aboriginal language (22%) or recognized an area as their homeland or traditional country (29%).
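For readers unfamiliar with the PCA extraction rules described in the Statistical Analysis section above (components retained at eigenvalue > 1, loadings above 0.4 treated as meaningful), the following is a minimal numpy sketch of that logic on a toy data matrix. It deliberately omits the Oblimin rotation and the KMO/Bartlett checks performed in SPSS, and the data are placeholders rather than the actual survey items.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 88 respondents x 5 hypothetical survey items (stand-ins only).
X = rng.normal(size=(88, 5))
X[:, 1] += 0.8 * X[:, 0]          # induce some correlation so components emerge
X[:, 3] += 0.7 * X[:, 2]

# Correlation-matrix PCA: standardize, then eigendecompose the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: keep components with eigenvalue > 1.
keep = eigvals > 1
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])

print("eigenvalues:", np.round(eigvals, 2))
print("retained components:", keep.sum())
# Loadings with absolute value > 0.4 would be read as the items defining a component.
print(np.round(loadings, 2))

# Component scores (used later as continuous predictors) are the projections.
scores = Z @ eigvecs[:, keep]
```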
Although participation in individual cultural activities was not high in the prior 12 months, 62.5% of adolescents had participated in at least one contemporary cultural activity, either NAIDOC, ceremonies, sporting carnivals, festivals, sorry business, or a traditional activity such as hunting, fishing, or gathering bush foods. For structural determinants, the historical and political contexts of ongoing government intervention were revealed in the proportions reporting being Stolen Generations (17.6%); having a parent forced from their homeland (12.0%); having a parent taken by the government, a mission, or welfare (10.5%); harassment or negative contact with police (9.3%); and removal from family by community services and/or an Aboriginal Child Care Agency (6.8%). Social contexts of negative school environments were revealed, with just over half reporting experiences of racism at school (52.0%). Not all schools gave help for adolescents to feel comfortable, identified goals with students, or set up plans with students so they could achieve their goals. The acknowledgment and inclusion of Aboriginal culture in the curriculum and school involvement in activities such as NAIDOC was low (28.4%). For the socioeconomic determinants, just under half of adolescents reported high housing transition (40.9%) and just over half had a medium or low family affluence score (52.3%). Almost all youth reported geographic access to basic services, and the vast majority felt they could access a GP if needed.

Health Outcomes

Characteristics of adolescents and bivariate associations with "likely being well" are reported in Table 2. Half (51.2%) of adolescents were coded as "likely being well", while 14.3% were coded as likely having "mild", 17.9% "moderate", and 16.7% "severe" psychological distress. For demographic variables there were no statistically significant associations with "likely being well". For cultural determinants, bivariate analysis revealed a statistically significant association between "likely being well" and feeling close to dad (p < 0.01). Bivariate associations for structural determinants revealed statistically significant associations between "likely being well" and not having parents forced from their country (p = 0.03) and feeling they could access healthcare (p = 0.04). Associations were revealed between "likely being well" and schools giving help for adolescents to feel comfortable, adolescents not experiencing racism at school, and either parent not being taken by the government, a mission, or welfare, but these associations were not statistically significant. The cultural PCA produced three components from the 11 cultural variables modelled (Table 3), and the structural PCA produced four components from the 11 structural variables modelled. In looking at associations between each of the three cultural components and the four structural components, "growing up in an Aboriginal family/community and connected" associated with higher "institutionally imposed family displacement" (p-value = 0.011) and higher "removed from parent with positive school environment" (p-value = 0.012), while "disconnected from community, country, and culture" associated with lower "institutionally imposed family displacement" (p-value < 0.001), indicating that those disconnected still had their family together.
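The gender-adjusted logistic models whose results are reported next can be sketched roughly as below, regressing the binary "likely being well" outcome on a PCA component score plus gender and converting coefficients to odds ratios. The arrays here are simulated placeholders, not the survey data, and the paper's models were fitted in SPSS rather than with this code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 88

# Placeholder predictors: one PCA component score and a binary gender indicator.
component = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Simulate a binary 'likely being well' outcome loosely related to the component.
logit_p = 0.6 * component - 0.2 * gender
likely_well = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(np.column_stack([component, gender]))
model = sm.Logit(likely_well.astype(int), X).fit(disp=False)

odds_ratios = np.exp(model.params)          # OR per unit of the component / for gender
ci = np.exp(model.conf_int())               # 95% confidence intervals on the OR scale
print(np.round(odds_ratios, 2))
print(np.round(ci, 2))
```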
Logistic regression modelling of the PCA components revealed that, for adolescents, "institutionally imposed family displacement" associated with significantly lower odds of "likely being well" (p-value < 0.05) (Table 5). A similar association was also revealed between "negative contact with police and poverty" and "likely being well", but this was not statistically significant (p = 0.07). Conversely, adolescents who "grew up in Aboriginal family/community and connected" had significantly higher odds of "likely being well".

Discussion

This collaborative co-analysis approach to social-epidemiology identified structural and cultural determinants that are "upon the people" from the insider perspective of Aboriginal people and organizations in Victoria. Modelling social survey items identified as important and unique to Aboriginal populations revealed that the relational cultural determinants of growing up in an Aboriginal family and community associated with "likely being well", as did closeness to dad in bivariate analysis. Structural environments characterized by generational child removal and family displacement were associated negatively with "likely being well". Bivariate analysis also revealed an association between health access and "likely being well". In terms of structural determinants, the trauma caused by generational child removal and experiences with the justice system is well established and has been acknowledged by the Australian government with the seminal "Bringing Them Home" report and the report of the Royal Commission into Aboriginal Deaths in Custody [44,45]. Here our data demonstrate that these structural issues continue to have ongoing impacts on the lives of school-aged adolescents as young as 11 years in Victoria in 2009, some 10-20 years after these reports were published. The Australian Institute of Health and Welfare (2018) reported on the multiple adverse health, cultural, and socioeconomic outcomes that associate with being a member of the Stolen Generations (removed before 1972) or a descendant of the Stolen Generations [46]. In Australia, official policies from 1869 to the early 1970s saw over 100,000 Aboriginal children removed from their families and placed in government and church institutions or with white families. In present times, Noongar academic Jacynta Krakouer has described ongoing high Aboriginal child removal rates as a continuum of the Stolen Generations [47]. In our study, 17.6% of adolescents connected to the identity of being Stolen Generations, and 7% reported being in the care of community services or an Aboriginal Child Care Agency. These high rates of family displacement are understood against Victorian state-wide child protection data that showed 16% of Aboriginal youth under the monitoring of child protection services, eight times the 2% of non-Aboriginal people [46]. However, despite historic and ongoing disruptions to family units, the vast majority of Aboriginal adolescents reported strong and close relationships with family, including seeing extended family and reporting closeness with either mum or dad. The maintenance of these relationships is revealing of the resilience of Aboriginal family units.
The key finding that growing up in and being connected to Aboriginal family and community was associated with far greater odds (126% more likely) of "likely being well" speaks to the important role Aboriginal family has in contributing to health, especially to adolescents. Shockingly, half of all Aboriginal adolescents in Victorian schools reported experiencing racism at school. This proportion is higher than the 34.6% of 12-17 year olds reporting racism in any setting 20 years earlier in a Victorian Aboriginal Health Service survey [48]. The interpersonal accounts of racism reported in schools, alongside a low proportion of schools including Aboriginal culture or activities in the curriculum, highlighted the problematic nature of the Victorian school system. Although our findings were inconclusive regarding the association between self-reported racism and "likely being well", they were broadly consistent with the influence of racism in schools. Bangerang/Wiradjuri Elder and cochair of the First People Assembly of Victoria Aunty Geraldine Atkinson framed changes needed in schools as "Aboriginal staff, Aboriginal culture and Aboriginal languages taught in these schools so we can make them acceptable to all our children" [49], while research elsewhere in Australia highlighted the importance of a social justice perspective, culturally inclusive curricula, culturally differentiated quality teaching, and a primary focus on students' wellbeing for school environments to support social and academic outcomes for Aboriginal adolescents [50]. Success in these areas is likely achieved when Victorian schools work in partnership with Aboriginal organizations that can provide leadership on culturally safe learning environments for all students. We recognize that a 2021 announcement by the Victorian government to adapt the Australian curriculum and incorporate First Nations histories, knowledges, and experiences of colonization is a step in the right direction. Further, recent work led by Distinguished Professor Marcia Langton under the Indigenous Knowledge Resources for Australian School Curricula Project is leading the way in providing resources for teachers so they are empowered to teach to Aboriginal histories. Absolute or relative socioeconomic disadvantage in terms of employment, education, and housing are widely promoted and accepted as social determinants in Australian and global settings [51]. These determinants largely relate to everyone and have been an emphasized focus of social policy relating to Aboriginal people for decades. Here we saw 52% of Aboriginal participants as having a high family affluence score compared to 70% of participants from the wider HOWRU survey [32]. For Aboriginal populations, responding to poverty is not straightforward. It is important to understand that pathways to affluence and socioeconomic opportunities are not equal for Aboriginal populations, owing to the processes of colonization and the racialized structuring embedded within Australian society. These processes see Aboriginal people more likely to be excluded from social privilege and employment, marginalized from resources (including own lands) and opportunities, and inheriting socioeconomic deprivation [52]. 
Although family affluence is a measure of material poverty for both dominant and Aboriginal populations, it is important to recognize that the concept of "high affluence", though it at some level measures the ability to meet basic human needs, it also is reflective of the aspirations of the dominant capitalist culture [53]. Particularly in the HOWRU survey, where the components of family affluence included the familial ownership of multiple cars and computers, frequent holidays, and house size, this metric speaks less to the ability to meet basic needs than social measures of adequate housing and access to foods or services would. These are imposed aspirations of Western capitalism, and there is a need for more appropriate indicators to be used in surveys relating to Aboriginal populations. We suggest future surveys consider material poverty in terms of human need rather than dominant aspirations; such a metric would have greater utility across all populations. As we report here on the quantifiable associations between psychological distress and child removal, negative school environments, police harassment, and material poverty, we are reminded by Johnson et al. (2013) and Doyle et al. (2016) and their work with Aboriginal communities in the Goulburn-Murray Rivers region that structural determinants are real and legitimate points for health intervention [54,55]. Interventions and policy need to move beyond just responding to Aboriginal disadvantage through setting targets for individuals and communities to examining and measuring change in dominant institutions including schools, police, and social services. For cultural determinants, expressions of Aboriginal culture are well and alive in Victoria. Here we saw that despite the continuing impacts of colonization, more than half of Aboriginal adolescents engaged in an Aboriginal cultural activity. We also saw Aboriginal adolescents as connected to friends, community Elders, and culture. Many Aboriginal adolescents in the study also participated in traditional cultural activities of hunting, fishing, or gathering bush foods, while others participated in more contemporary expressions of Aboriginal social life including sports carnivals and NAIDOC. These contemporary activities are all revealing of Aboriginal culture as dynamic and evolving. These local understandings of culture in Victoria are contemporary constructs that move beyond the limiting stereotypes of the "traditional", the "dysfunctional", and the "pathogenic" that have plagued epidemiology [56]. In terms of culture, our data spoke to the importance of the relational aspect of belonging to and participating in a living social system. We found that growing up in and being connected to Aboriginal family/community was associated with greater odds of "likely being well". The high loading of Aboriginal family (0.828) for this component revealed its contribution to the association with "likely being well". These findings highlighted the resilience of Aboriginal families and communities that have maintained identity and connection to country in a region that has a brutal colonial history. They also evidence the important work of Aboriginal organizations in Victoria, which are known to connect Aboriginal people to community and place [57]. These descriptions of culture are also consistent with those generated by Victorian Aboriginal researcher Dr Cammi Murrup-Stewart (2020), whose yarning work with young people spoke to the relational nature of culture for Aboriginal young people [58]. 
We found those "disconnected from community, country, and culture" had higher odds of "likely being well", but this association was not statistically significant. We also found that "Disconnected from community, country, and culture" associated with lower "institutionally imposed family displacement" (p-value < 0.001), so low rates of generational child removal may reflect in "likely being well" for those "disconnected from community, country, and culture". Although this group stated that they were not connected to Aboriginal community, they may have misinterpreted the question due to a lack of explanation. In all likelihood, many of these participants would be connected to an Aboriginal family (we know that most Aboriginal children are raised by an Aboriginal parent or kin), and this may be why they are "likely to be well". However, we are also mindful that the small sample size makes it difficult to determine if this is true beyond the study. The interpretation of Aboriginal organizations was integral to understanding why "disconnected from community, country, and culture" associated (although not significantly) with "likely being well". The coauthors from Aboriginal organizations spoke of the process of colonization in Victoria and how it worked to disperse, dislocate, and make less visible Aboriginal people. Individuals who are disconnected from community are less visible, and with that can come greater access to opportunities (school, employment) and less exposure to racism. For example, participation in community activities such as marches (i.e., NAIDOC) or in Aboriginal sporting teams, although acknowledged as important from the relational aspect of connecting people and building Aboriginal identity, made individuals more visible as Aboriginal and exposed to interpersonal racism [59]. For cultural determinants, speaking language and knowing homeland were not widely reported by adolescents, reflective of the aggressive nature of colonization in Victoria a history of forced assimilation and mass displacement. However, this will be an important indicator to monitor given the proliferation of Aboriginal language revitalization programs in Victoria and the development of other cultural heritage programs and activities since this survey was administered. Our analysis was only possible because the HOWRU Survey was designed to include an Aboriginal module. We emphasize the important need for social surveys to have measures that capture the social realities of Aboriginal people as identified by them. We also propose that an ecosocial theory and a multilevel framework that focuses on "who" and "what" drives social inequities, rather than focusing on individuals or communities, is a useful approach for social epidemiology involving Aboriginal populations [60]. Work by the Victorian Aboriginal Health Service two decades ago spoke to the importance of ecological frameworks as alternatives to the linear frameworks traditionally used in non-communicable-disease epidemiology. called for ecological approaches that recognized structural issues when they wrote, "The advantage of applying this [ecological] approach to epidemiological models is that it enables a more sophisticated and comprehensive understanding of risk and allows for the identification of factors that reflect people's own constructions of their social worlds through naturalistic observation" [21] (p. 1461). 
A maintained focus on dominant social determinants in social-epidemiology has come from a scientific methodology that has centred and normalized Western epistemologies (knowledges) and axiologies (ideologies) [61]. By not engaging with Aboriginal-defined social determinants, the discipline of epidemiology applies a casualness to the construction of Aboriginal social worlds that is at odds with a scientific discipline that focuses on statistical precision and rigor [62]. Further, by overlooking confounding (the basic problem of comparability between Aboriginal and non-Aboriginal groups), questions are raised about the internal and external validity of these analyses. As highlighted by Larkins (2006), research relating to Aboriginal people, in general, is methodologically strengthened through the inclusion of Aboriginal people and their knowledges and the centring of these voices [61]. Rigney describes the privileging of Aboriginal voice as a fundamental of Indigenist research [63]. Here, Aboriginal voice is reflected not only in the authorship by Aboriginal academics, but also in the participation of Aboriginal organizations that speak firsthand about their social world and that of the people they work in proximity with. Another fundamental of Indigenist research is resistance: developing alternative discourses to those created without Aboriginal people [63]. We deliberately sought to resist the normalization of dominant sociocultural experiences and lives and to highlight that sociocultural determinants are different, as are health causality pathways. We know and reveal here that health has never been a consequence of Aboriginal people's inability to meet parity for a series of dominant social concepts [64].

Strengths and Limitations

A key strength of this paper is its process, which deliberately sought to centre Aboriginal ways of knowing, doing, and being within social-epidemiology. Co-analysis meant that rather than relying on assumptions in the literature around social determinants, the community the data related to was able to articulate the social contexts and social processes that happen in its social world. This acknowledged the constructionist nature of epidemiological knowledge, but also the expert voices of Aboriginal people and organizations. As highlighted by Brough et al., epidemiology is not a value-free science, and embedded within it are the understandings, values, and positioning of those who create it [1]. Aboriginal identities and sociocultural conditions can be only partially known through dominant frameworks that provide one worldview and interpretation of the world. Aboriginal people too have legitimate understandings and interpretations of their identities and sociocultural realities [65]. Here, Aboriginal voices have asked fundamentally different social questions of the data than researchers of dominant Victorian institutions have asked in the past [66,67]. Rather than asking how we are different to non-Aboriginal people by examining the social determinants of dominant populations, it was asked: what are the unique social determinants specific to Aboriginal health? This frame was only possible because the dataset was available. Regrettably, when the Victorian Department of Education and Training conducted subsequent health and wellbeing surveys in Victorian schools in 2014 and 2018, they did not include an Aboriginal module. This means that health and social dimensions unique to the lives of Aboriginal adolescents were not captured.
We make the strong recommendation that future health and social surveys that collect an Aboriginal identifier must also collect health and social data that most accurately describe the social worlds of Aboriginal participants. These measures are best developed by or with Aboriginal organizations [28]. In recognizing key strengths of this research, we are also mindful of its limitations, in particular the poor statistical power that came from a small sample size. In general, PCA components for small datasets are not very generalizable to the wider population, but the high loadings on components (>0.800) we saw here suggest that the components were likely revealing of the latent structure in the wider community [42]. This limited sample size was a consequence of the sampling method, which selected for equal proportions of youth by area and school type and overlooked how this would impact Aboriginal participation. We recommend all future surveys aim to achieve greater power through targeted sampling of schools with Aboriginal students or, conversely, through Aboriginal networks or organizations. With a larger sample size, a more nuanced exploration of health and its structural and cultural determinants for subpopulations can be conducted. Time has also elapsed since the study; however, we believe the findings are still relevant, as many of the structural issues, particularly those relating to child removal and policing, have in fact intensified. In interpreting the findings, we realize that the HOWRU Aboriginal population is unlikely to be representative of all Aboriginal adolescents in Victoria. Firstly, there are adolescents who may not be comfortable reporting Aboriginal status in a government survey. Secondly, the survey relates to adolescents attending school, so it does not capture all Aboriginal adolescents. Data for 2010, the year after this survey was conducted, reveal that the retention rate for Aboriginal students in years 10 to 12 was 50.9%, and it is likely that those not in school have different health and social profiles [68]. For example, prevalence estimates of experiences of racism in school are likely to be underestimated, as racism is associated with school disengagement and nonattendance [69]. Quantitative survey data are also unlikely to capture Aboriginal culture or the structures and mechanisms of Australian society in their full complexity. In assigning sociocultural phenomena to a series of questions and numbers, we may not be measuring what we think we are [70]. We also realize there is a tendency of quantitative research to essentialize culture. We know that the Victorian population is heterogeneous and that there are many expressions of Aboriginal culture and many encounters with the dominant culture, for these adolescents are all cultural beings. We know that the HOWRU Survey data did not include measures for Aboriginal cultural constructs of family, identity, and self-determination, which were identified as cultural determinants in the Victorian literature and discussed in early meetings with the collaborative. Future design of an Aboriginal module should consider indicators that capture these cultural concepts. We were also limited by the measure we used to capture "Aboriginal health and wellbeing". Although the K-10 has been used by others and has excellent agreement with a culturally modified K-5, it is a universal rather than an Aboriginal-specific measure of health [71]. It is also a measure of distress rather than wellbeing.
Since 2009, much work has been done to develop Aboriginal measures of health, and there are now measures such as the Aboriginal Risk and Resilience Questionnaire developed by Dr Graham Gee and the Victorian Aboriginal Health Service that could be used in present-day surveys to measure health and wellbeing [72]. We could not ascertain from the data whether observed relationships were causal because of the cross-sectional nature of the HOWRU Survey. However, there is already strong evidence by way of art, music, protests, biography, and qualitative accounts that colonization characterized by generational displacement, child removal, policing, and racism is bad for Aboriginal health [12]. All our study did was quantify this association in a palatable, empirical form that Western policymakers will see as "evidence". We are also mindful that expressions of health and cultural or structural determinants can change across the life course, and the associations reported here would need to be tested via longitudinal study to empirically demonstrate that structural and cultural determinants impact on health. We also cannot determine the intensity of social-cultural phenomena. Our conceptual framework is also one-dimensional and excludes the effects of time and space on health, aside from the inclusion of history as a contextual structural determinant.

Conclusions

Here Aboriginal voices contributed to epidemiology through a co-analysis process that determined "who" and "what" is upon Aboriginal people. Using a co-analysis approach, the voices of Aboriginal researchers and Aboriginal organizations were able to construct a social world that aligned more closely with their ways of knowing, doing, and being. The resulting co-analyses revealed that growing up in an Aboriginal family is likely the biggest cultural support for Aboriginal adolescents "likely being well", while past and present structural issues of ongoing institutionalized family dislocation and removal, negative contact with police, material poverty, exclusionary school programs and curriculums, and high rates of racism in schools are likely "what is upon" Aboriginal adolescents aged 11-17 years. These structural factors are all legitimate points for health intervention to improve health outcomes for Aboriginal adolescents, but they require Victorian systems to change rather than Aboriginal people or communities.

Appendix A (measure definitions, fragment):
- School sets plans to achieve goals: "Have you set up a plan to achieve your future goals and aspirations, with the help of your teacher?" Coded yes/no (no or don't know).
- Family affluence scale (HBSC), 4 items: "Does your family own a car, van or truck?" (no/yes/two); "Do you have your own bedroom for yourself?" (yes/no); "During the past 12 months, how many times did you travel away on holiday with your family?" (none, one, 2+); "How many computers does your family own?" (none/one/two/more than 2). Scored on a 9-point scale from the 4 questions following Boyce et al. (2006), classified as low-med (0, 1, 2, 3, 4, 5) or high (6, 7, 8, 9) [73].
- Feel they can access general practitioner: Does the adolescent feel that they can access GP services if needed? Yes/no.
- Access to basic services: "There is access to basic services such as banks and medical clinics in my neighbourhood." Strongly agree/agree/disagree/strongly disagree; coded yes (strongly agree or agree)/no (disagree or strongly disagree).
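As a worked example of the family affluence coding described in the appendix items above, the sketch below sums the four HBSC items using the response options given there and applies the low-medium (0-5) versus high (6-9) split. The point values follow the wording of the appendix fragment; anything not stated there (such as how missing answers were handled) is an illustrative assumption.

```python
def family_affluence(car, own_bedroom, holidays, computers):
    """Sum the four HBSC family affluence items and classify the total.

    car         -- "no" (0), "yes" (1), or "two" (2)
    own_bedroom -- "no" (0) or "yes" (1)
    holidays    -- "none" (0), "one" (1), or "2+" (2)
    computers   -- "none" (0), "one" (1), "two" (2), or "more than 2" (3)
    Returns (score, "low-med" if score <= 5 else "high").
    Missing responses are not handled here (an illustrative simplification)."""
    car_pts = {"no": 0, "yes": 1, "two": 2}[car]
    bed_pts = {"no": 0, "yes": 1}[own_bedroom]
    hol_pts = {"none": 0, "one": 1, "2+": 2}[holidays]
    comp_pts = {"none": 0, "one": 1, "two": 2, "more than 2": 3}[computers]
    score = car_pts + bed_pts + hol_pts + comp_pts
    return score, ("low-med" if score <= 5 else "high")

print(family_affluence("yes", "yes", "one", "two"))         # (5, 'low-med')
print(family_affluence("two", "yes", "2+", "more than 2"))  # (8, 'high')
```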
2021-08-28T06:17:21.210Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "968ad95579e38b1545556612f1a5efcf06f11f71", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/16/8674/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c9a529e4979fd0faa8318f661c832399d5a37a29", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
254738851
pes2o/s2orc
v3-fos-license
A ‘Fool’ and His Sugar-Sweetened Beverage are Soon Taxed Tax policy informed by Libertarian paternalism suggests that taxes should be levied on non-‘rational’ choice (i.e., where a person makes a ‘foolish’ decision by their own internal standards). In respect of excise taxes on sugar sweetened beverages, the regressivity of such policies can then be justified by reference to a progressive health effect, since the poor are more sensitive to changes in price and disproportionately tend to consume sugar sweetened beverages. However, as it currently stands, that conclusion is based merely on a presumption of irrationality of the poor as a class and neither the relative price of goods subject to such taxes, nor the associated ‘welfare loss’ from the levy of the tax, have been systematically measured. Such a presumption of non-‘rationality’ in food choice only holds with respect to persons who are not bound by relative prices of food, namely the wealthy. Accordingly, it is reasonable for scholars to consider the levy of excise taxes on unhealthy food consumed primarily by the wealthy (e.g., foie gras) as a ‘nudge’ toward a healthier food choice. Furthermore, the poor are rational agents capable of analysing and comparing relative prices of food products taking into account the health effects. As various scholars have now proposed in medical journals, any incremental tax levied on the poor in respect of sugar-sweetened beverages should be offset, for example, with a credit for healthy foods including fruits and vegetables. Introduction Obesity is the disease and taxes levied on sugar-sweetened beverages are the treatment for it-so says Kelly Brownell. 1 Such tax policy 'therapy' for obesity is not so much a 'nudge' as the Libertarian paternalists Richard Thaler and Cass Sunstein have in mind, 2 but a more direct form of paternalism where consumer choices are presumed to be non-'rational' in many cases. Brownell accordingly wrote at one point that thirsty people should be encouraged to just drink water. 3 The New York Times printed a similarly blunt assessment of food choice: 'It is evident that some people just aren't responsible enough to feed themselves.' 4 And, this assessment builds on the purportedly 'libertarian' view of Thaler and Sunstein who wrote: 'The presumption that individual choices should be free from interference is usually based on the assumption that people do a good job of making choices, or at least that they do a far better job than third parties could do. As far as we can tell, there is little empirical support for this claim.' 5 The result is that consumer food choices can then be described as broadly non-'rational' meaning not in accordance with an individual's own preferences, which for the reasons explained in detail here has major implications for tax policy design. The regressivity of sin taxes is often presumed to have a progressive health effect sufficient to offset the negative aspects of regressivity. 6 Some empirical evidence has been given to support the claim by reference to the price elasticity of food demand where socioeconomic status is found to be significant in similar contexts of consumer choice 7 ; albeit no significant evidence has ever been developed on the underlying consumer preferences, or what is referred to in economic terms as consumer 'welfare'. This is important because taxes, especially regressive taxes, are presumed by standard economic theory to reduce the 'welfare' of the payor. 
If taxes are instead presumed to be beneficial to the payor, then the tax should be maximized thus resulting in a de facto prohibition by taxation. One immediate problem is that the presumption of non-'rationality' in food choice only holds with respect to persons who are not bound by the relative prices of food. Indeed, if one truly believes that sin taxes are broadly beneficial for classes of persons engaged in non-'rational' eating behaviour, the only persons we know for sure are behaving in such a non-'rational' manner are the wealthy (who are not bound by price of foods, by definition). To say anything about the rationality of the poor, who are bound by the relative price of foods, we would require empirical data on both relative prices and consumer food preferences. 8 Accordingly, in the absence of empirical data on consumer welfare, what we can coherently say in scientific terms is that sin taxes should instead be levied on the food choices of the wealthy which are not decisions subject to bounds in respect of price and could benefit by definition from a 'nudge' toward a healthier option. Nonetheless, current tax policy seems to be moving in the opposite direction toward the taxation of foods and other products used predominantly by the poor as a class. The key difference between tax policy derived by Brownell, et al. in respect of sugar sweetened beverages and prior tax policy is the presumption that taxes are beneficial to the persons required to pay those taxes. Simply put, Brownell, et al., have switched the philosophical premise that taxes, especially regressive taxes, are harmful to consumers. If taxes have such a beneficial and progressive health effect, then sin-taxes should be maximized to increase the welfare of such non-'rational' persons. Obviously, in the case that taxes increase individual welfare, such taxes should be increased especially on 'fools' and other irrational persons who stand to gain from incremental taxation, particularly poor persons as a class. And, as public health experts insist, there must be a progressive health effect of taxation for those persons who have the most to gain, namely the poor. Of course, such a claim hinges on the coherency and workability of the definition of non-'rational' consumer choice as proposed by Thaler and Sunstein. Libertarian Paternalism and Tax Policy Libertarian paternalism suggests that taxes should be levied on non-rational behaviour, where the term 'rational' typically is defined by economic theory (which technically relates to each person's own individuals preferences). The tax is thus ostensibly levied based on a non-rational choice, but crucially not levied simply for being obese. Muireann Quigley applied the economic rationale to health policy policies, as follows: There are… situations in which preventing people from making 'stupid decisions' could be seen as a legitimate sphere of action for government and regulators… Take for example unhealthy eating habits; say a person with a continuous diet high in refined sugar and saturated fats. These may impact on those around them; their family and friends if they become ill and, more importantly in the context of the discussion here, the state in terms of the cost of health care or work days lost. 9 Taxes so designed to offset economic 'externalities' (i.e., costs imposed on other persons) are referred to as Pigouvian taxes, 10 where gasoline taxes are often held out as the prime example. 
11 However, with respect to obesity and taxes, the tax policy issue relates in significant part to 'internalities' (i.e., not externalities), where the consumer fails to take into account health costs to their own person from a consumption choice. 12 The discourse over tax policy is accordingly given along the lines of Pigouvian-type taxes except insofar as the harm is here an internal harm to the self. 13 The distinction between internal versus external harm is important because in the case of taxation premised on an internalized harm to personal health, the taxed person potentially ends up both paying the Pigouvian-type tax and also suffering the personal harm that the tax was designed to address. 14 In the analogous situation of a Pigouvian tax on an externality, the taxed person is obviously not the person that also suffers the external harm. The regulation of 'internalities' is thus rightly taken as paternalistic where the subject of the regulation essentially fails to properly assess personal costs in the decision to consume unhealthy food, such as sugar-sweetened beverages. 15 Riccardo Rebonato addresses the issue in Coasean terms as negotiation between one's current and future self over health internalities: '[F]or many individuals going about their lives consistently listening to what their rational selves suggest entails psychological costs they would not be prepared to pay.' 16 Libertarian paternalism proposes a justification for government regulation of an adult who fails to behave properly by their own standards. Camerer,et al.,pejoratively refers to this as 'idiotic' economic behaviour. 17 This paper proposes that consumers do make rational decisions despite significant barriers and limited bounds of possible decisions for the poor. The Limits of Libertarian Paternalism Robert Baldwin has optimistically described Thaler and Sunstein's proposal of 'Libertarian paternalism' as a new type of 'philosophy'. 18 Several scholars also predictably conclude that Thaler and Sunstein did not give a proposal reflective of Libertarian philosophy. 19 The words 'libertarian' and 'paternalism' so combined is akin to saying 'altruistic utilitarian' or 'social Nietzschean'. Notably, the latter oxymoron (by citing Nietzsche for social values) spawned an extensive responsive 11 See Mankiw (2009), 15; but see Masur and Posner, 108. 12 Herrnstein, et al. (1993). 13 See Masur and Posner (2015), but see Fleischer, (2015). 14 Strnad (2005Strnad ( , 1254 ('Imposing a tax in that amount means that the consumer will pay the same costs twice: the first time as a tax and the second time in the form of actual internal costs such as ill health'); but see Pomeranz (2012/13), 1004 ('[S]uch a tax would persistently and negatively impact those of low economic status. They would be doubly penalized for being poor: first because they do not have the resources to eat as well as the wealthy and second because a person of wealth could afford to be overweight or have diabetes and pay the tax'). 15 Rizzo and Whitman,707. 16 Rebonato, 376. 17 Camerer (2003). 18 Baldwin (2014). 19 Quigley, 606 ('First, nudges are neither libertarian nor paternalistic as is claimed'). literature in law journals, which now refers to the patently wrong application of philosophical ideas as 'intellectual voyeurism'. 
20 So, the more critical response to the idea of 'Libertarian Paternalism' is simply to say that it is a wrong usage of words; for example, the analysis is based on several possible mistakes of facts in respect of describing economic choices. Another possible response to Libertarian paternalism is to orient it within philosophy. In that respect, the 'ought' versus 'is' framework of personal choice is psychologism (as extensively debated by German philosophers in the late Nineteenth century). 21 The 'ought' versus 'is' framework was also famously applied to the law by Hans Kelsen in what he referred to as 'normative' legal analysis. 22 The link between economic theory and forgotten lines of philosophical thought is discussed in more detail below. In economic terms, Libertarian paternalism is an attempted extension of the field of Behavioural Economics in a 'softer' form with some deference given to individual preferences in respect of choice. 23 Behavioural economics as a field of study arose in in the 1950s in response to objections to 'rational choice' of neoclassical economic theory as not corresponding very well to actual human behaviour. 24 Herbert Simon proposed using traditional economic analysis to develop 'approximate' ideas of rationality to better reflect the actual decisions of human beings. 25 The origin of behavioural economics reflects to some extent an internal dispute within the field of economics over the key issue of whether all economic behavior is rational enough to be studied as if it were rational 26 ; or, to put it differently, whether all economic behaviour is economic-enough to be studied as if it were economic. 27 Camerer, et al., accordingly defined behavioural economics as extending the normal bounds of 'rationality' in the eyes of economists 28 while still 'maintaining the emphasis on rigor and field applications that sets economics 20 Leiter (1992). 21 See Green (1999). 22 See Kelsen (1962), 79 ('The rule of law remains objective description; it does not become prescription'). 23 Yeung, 134 ('Soft paternalism involves intervention to prevent an agent from doing X, where the paternalist judges that, relative to the agent's own views of his or her self-interest, the doing of X is not in the agent's interests'). 24 See Becker (1962), Becker (1990), Thaler (1991), see also Yeung, 128 ('Behavioural law and economics. The findings of experimental cognitive psychologists identifying these and other systematic decision-making flaws have been seized upon by economists, generating a body of work which has become known as 'behavioural economics' and its offspring 'behavioural law and economics' (or the 'new law and economics'). Unlike orthodox law and economics methodology, new law and economics seeks to challenge the standard economic model by pointing to systematic divergences from the premise of the rational self-interested decision-maker that orthodox economic modelling takes as its starting point'). 25 Simon (1983), see also Sen (1977). 26 Rostain,978. 27 Cooter and Ulen (2012), 3 ('Economics generally provides a behavioral theory to predict how people respond to laws. This theory surpasses intuition just as science surpasses common sense'). 28 Rizzo and Whitman (2009), 686 ('The new paternalism is supported by a growing body of research in behavioral economics showing that individuals are not fully 'rational,' as economists understand that term, but instead are subject to a variety of cognitive errors and biases'). apart from other social sciences.' 
29 Hence, the approach amounts to 'economic behaviourism' (i.e., distinguishable from the inverse: Behavioural Economics) reflecting the use of 'rigorous' economic methods where 'behaviourism' is generally taken as comprising the social sciences and psychology. The compromise accordingly gave rise to new hybrid fields of economics combined with something else, such as economic psychology. Such 'economic-behaviourism' acknowledges the possibility of non-economic frameworks of decision as opposed to the denial of the validity of 'non-rational' methods of choice. In any case, the word 'rational' refers to the neoclassical economic study of human decisions in economic terms; thus, 'non-rational' refers to evaluating human decisions under any other framework. This explains why hybrid fields including economic psychology are seen by some economists as the 'rigorous' study of non-rational modes of decisionmaking. As illustrated in the prior paragraph, the term 'rational' is defined by economists in a positivistic sense as economics itself. So, where a decision is inconsistent with economic theory, it can be described as irrational. Thomas Ulen defines 'economics' in positive terms as the rational analysis of human behavior. 30 Ulen's positive definition can be restated as follows: Economics is the study of human behavior that is admittedly not always ['rational'], but is thought to be [rational] enough in the aggregate that it can be studied as if it were [rational]. The word-order can be further re-arranged to yield the following helpful result: Economics is the study of human behavior that is admittedly not always ['economic']… This type of definitional positivism is what Richard Posner meant in his reference to a 'rational frog', 31 defined as simply a frog that acts consistently with what economists would expect for frogs and not as a claim as to the rationality of the frog itself. Posner writes: The basic assumption [of economics], that human behavior is rational, seems contradicted by the experiences and observations of everyday life. The contradiction is less acute when one understands that the concept of rationality used by the economist is objective rather than subjective, so that it would not be a solecism to speak of a rational frog. 32 29 Camerer et al. 1215. 30 Ulen (1999, 790; see also Posner (2003), 17 ('[R]ationality is the ability and inclination to use instrumental reasoning to get on in life'); Cooter and Ulen, 18 ('Indeed, some economists believe that the conditions they impose on the ordering or ranking of consumer preferences constitute what an economist means by the term rational'). 31 Posner (2003). Hence, it is not coherent to say that some persons are economic 'idiots' as a matter of neoclassical economic theory without exploring differences in individual preferences. 33 For this reason, Camerer, et al., qualify their position that the Libertarian paternalist approach ought to apply to situations rather than persons. However, Camerer, et al. then paradoxically proceeds to speak in respect of 'idiotic' persons. If the objective was to regulate situations, the economic proposals ought to be along the lines of reducing difficult or confusing financial situations that low-income persons face on a daily basis, such as arcane tax rules (the quintessential example of which is the earned income tax credit eligibility test in the United States). Yet, Camerer, et al., propose nothing of the sort. 
Their focus is instead on the 'bad' decisions of poor persons as a class, thus distinguishable from the wealthy who are able to make 'good' economic choices that are by definition rational. The economic proposal to tax 'fools' for purportedly 'irrational' behaviour (such as in the purchase of sugar-sweetened beverages) is partly a proposal to tax persons who do not think as economists think or choose as economists choose. In the practical terms of Libertarian paternalism, 'irrational' behavior becomes all non-neoclassical economic versions of choice. Any proposal to levy a tax on irrational 'idiots' who choose to drink sugar-sweetened beverages thus represents a proposal for a tax on persons who did not make the 'correct' economic choice. The idea is to levy a tax on persons that do not conform to the economic ideology of rational choice! 34 And, if Thaler and Sunstein are to be believed, this failure applies to most persons most of the time. However, among other problems, the expectations of behavioural economics are based on a probabilistic assessment of what economic behavior is expected to be in the population as a whole, which may not apply to any one individual. 35 Libertarian paternalism can also be described as a simple 'ought'-based claim for social engineering toward homo economicus. 36 For example, no attempt has been made to quantify the 'welfare loss' to persons from the taxation of sugar-sweetened beverages. 37 This explains why economists do not proceed to draw utility functions in respect of sugar-sweetened-beverage tax proposals. Such functions would tend to disprove the normative economic argument that persons are not making a rational consumption choice with respect to sugar-sweetened beverages. 38 33 Baldwin, 850 ('[P]ro-nudgers are too quick to portray some preferences as irrationalities. Thus, some of the 'biases and blunders' that Thaler and Sunstein cite as causes of poor decisions can be said to be preferences that deserve to be respected rather than cognitive or volitional failings that need to be reacted to with a nudge'). 34 Yeung, 128 ('[Libertarian paternalism identifies] fallible individuals who have inescapable difficulties in making decisions that conform to the rational actor model'). 35 Veetil (2011), 332 ('Also, if individuals sometimes make inoptimal decisions for themselves in the 'private sphere' then they are even more likely to make inoptimal decisions in the 'public sphere', ie, electing the government (planners). Buchanan (1954) in a paper titled ''Individual Choice in Voting and Market'' discusses the sources of deficiencies in the ''process of electing planners'' as compared to the 'process of choice in a market with monetary prices'). 36 Rostain (2000). 37 See Dolgin and Dieterich (2011), 1126. In lay terms, one might just say the normative economic claim regarding sugar-sweetened beverages is not a very good explanation of human behavior. The explanation does not take into account why or how much the regulated person values the sugar-sweetened beverage, because losing the beverage is to lose that welfare. 39 Likewise, the tax policy proposals in respect of sugar-sweetened beverages are incomplete because the Pigouvian tax does not offset an externality. The tax potentially increases the harm to the affected person. As an illustration, Brownell, et al., propose to first levy a tax, and then to use the tax proceeds to achieve the desired causal result.
The desired policy outcome depends on the expenditure made possible by the tax and not the tax itself. A similar tax policy result could be achieved by levying a tax on the income of the wealthy and using it for nutritional expenditure programs. 40 The levy of incremental tax on low income persons will partly tend toward making an economic situation worse for those low income persons who are already subject to extraordinarily high effective tax rates. 41 As an example, for women in the United States where excellent data is available, socioeconomic status is highly correlated with obesity. 42 Yet, it is solely in the circumstances where the wealthy are taxed that economists acknowledge, perhaps even set out to quantify, the various welfare losses of taxation. The relative worsening of socioeconomic status by incremental taxation will likely yield a host of nasty results including to public health outcomes and obesity rates. 43 Any policy analysis of a regressive tax proposal requires consideration of the costs and benefits, not just the benefits. 44 The common description of regressive taxation as a 'fairness' issue, as opposed to a causal issue is incomplete. 45 The approach depends on misapplying economic theory as a normative argument rather than as a scientific method (i.e., identifying only one-half the policy analysis relating to either costs or benefits). An extensive literature already exists on the unfairness of regressive excise taxes designed to limit consumer choice; however, 38 Baldwin, 846 ('A further concern of nudge's critics may be that the banner of libertarian paternalism may be used as a cover for the pursuit of social objectives (such as lowering hospitals' administration costs) rather than the welfare of the nudged individuals'). 39 Pratt, 128 ('If consumers avoid the tax by buying less-preferred, untaxed goods, instead of the taxed goods that they prefer, and no revenue is raised, this substitution could cause a welfare loss'). 40 Strnad, 1225 ('Tying junk food taxes to health-initiative expenditures may create political appeal, but from a normative standpoint the justification for connecting the tax and the expenditures is not clear. If nutrition education has high public value, the government should be willing to fund these activities through revenues raised from the most efficient source'). 41 For a calculation of effective tax rates by income level see : Bogenschneider, (2014). 42 Ogden, et al. (2010), 2 ('Among women, obesity prevalence increases as income decreases. Overall, 29.0% of women who live in households with income at or above 350% of the poverty level are obese and 42.0% of those with income below 130% of the poverty level are obese'). 43 See Bogenschneider (2016) (explaining that high rates of wage taxation should be expected to cause negative health outcomes in society). 44 Pratt, 129 ('Public health advocates who have proposed SSB taxes have ignored such welfare losses that their proposals might cause. They assume that the consequences of enacting an SSB tax would be entirely positive, especially with respect to low-income consumers'). 45 See e.g., Efrat and Efrat (2012), 250. what might be called the 'fairness issue' is beyond the scope of this paper which is concerned with the theoretical coherency of Libertarian Paternalism applied as tax policy. 
Behavioural Economics and Public Health The vast majority of the public health literature on the taxation of sugar-sweetened beverages presumes that people are not able to make good choices in respect to food. Scholars setting out to apply economic theory to health via the tax lever have referred to persons as 'idiotic', 46 'stupid', 47 'child-like', 'naïf's' 48 or 'lowcapacity'. 49 Notably, several major law review articles likewise address the issue as analogous to dealing with an 'unruly child'. 50 However, in terms of the history of taxation, Leona Helmsley may have said it best by coining the term 'little people' to describe persons that are required to pay taxes. 51 In economic terms, a decision regarding food consumption may accordingly be described as 'irrational', meaning non-rational decisions made by Helmsley's 'little people'. The literature goes on to assess whether tax policy can be effective in fostering a better and more 'rational' decision by the 'low-capacity' persons. The core thesis of this paper is that such name-calling is entirely a rhetorical and non-substantive methodology that does not reflect a coherent view of 'rational' choice. Empirical studies are mixed on the potential effectiveness of applying the 'lever' of tax policy in respect to sugar-sweetened beverages. Yet, several studies conclude that the tax incentive would only have an effect if the tax were very large in amount and the proceeds earmarked for nutritional literacy programs in low-income communities. 52 The standard policy recommendation now applied to taxation is thus essentially the same as the first proposal of Thaler and Sunstein that consumers, namely poor people, are not able to make good decisions about food and beverages. In other words, those 'irrational' fools should first be taxed, and then, the proceeds of the tax should be earmarked for re-education programs. A parallel line of research focuses on the 'innumerate' poor who are unable to read nutritional labels and ostensibly to determine prices of food. 53 The idea of re-education programs to 46 Camerer et al., 1211. 47 Quigley, 618. 48 Pratt, 130 (coining the term 'naif's'). Note that similar to this article Pratt offers the term to highlight an opposing policy position and not in a pejorative manner. 49 Baldwin, 840, 842 'Capacity' refers to the ability of that person to gain, receive absorb and act on information… Such individuals will possess a high ability to 'unearth' nudges, such as defaults, and to resist these. Low capacity individuals will struggle to absorb and act on even simple messages, even when disposed so to act'). 50 See e.g., Yeung (2012), Efrat and Efrat at 246. 51 Hammer (1990). 52 See Alemanno and Carreno (2013). 53 Brownell et al. ( ), 1603; see also Pratt (2012), 138 ('In addition to being mindful of consumers' enjoyment of their food, public health advocates should design food tax/subsidy systems so that consumers can understand them easily and quickly. In particular, the food classification system should be understandable to the ninety million Americans who do not read above a basic level or are innumerate'). reduce innumeracy in food choice are sometimes referred to as nutritional 'literacy' programs. The second-guessing of consumer choice does not seem on its face to be a 'Libertarian' approach at all. Thaler and Sunstein have an answer to this observation. 
They argue that libertarian paternalism is distinguishable from raw paternalism insofar as it is concerned only with nudging, not commanding, persons toward the decision that any rational person would have made anyway. 54 Of course, that choice happens to be the choice the 'parent' also recommends and where tax is levied on the wrong choice. In respect to obesity, this means that the 'rational' person would surely choose a more optimum diet if only given the opportunity to rethink their foolish beverage decision, and if confronted with better evidence and perhaps more time to mull it over. 55 Thaler and Sunstein further wrote: There is overwhelming evidence that obesity causes serious health risks, frequently leading to premature death. It is quite fantastic to suggest that everyone is choosing the optimal diet, or a diet that is preferable to what might be produced with third party interference. Of course rational people care about the taste of food, not simply about health, but the claim that Americans are choosing diets optimally would be hard to support. 56 Hence, Libertarian paternalism, at least the version as proposed by Thaler and Sunstein, is essentially equivalent to the non-Libertarian raw paternalism in respect to obesity policy. And, as long as we are talking about wealthy persons that can afford to buy fresh, healthy, foods and beverages without respect to price such an approach has the potential to be internally coherent in deriving tax policy. Such wealthy persons, for some reason other than relative price, still choose not to buy such foods. This 'bad' decision obviously is one not premised on a comparison of price with the health costs. Then, and only then, is the unbound food decision presumptively 'irrational' in economic terms without further investigation of relative price and consumption preferences. However, for a person that can afford to buy only canned, unhealthy, processed, foods, then the decision-matrix is bounded by the limits of possible food purchases by prices. 57 A decision made subject to the bounds of price is not properly described as economically 'irrational' without further data on relative prices to the consumer. Rizzo and Whitman refer to the Libertarian paternalist approach as a 'non-sequitor' for this reason. 58 As Katherine Pratt wrote in more conciliatory terms: 54 Galizzi (2012); Galle (2013); see also Barton and Grune-Yanoff (2015). 55 See Rizzo and Whitman, 712 ('The new paternalists claim to have found policy interventions that will make targeted agents better off according to the target agents' own preferences. What they have in fact found is evidence of internal conflict in the target agents' preferences, and then they have resolved the conflict in favor of the experts' preferences'). 56 Thaler and Sunstein (2003a, b), 1168. 57 Baldwin, 832 ('The proponents of nudge build on the well-established insights of cognitive psychology and behavioural economics to contend that control systems need to take on board the bounded rationality of citizens when they make daily decisions'). 58 Rizzo and Whitman,711. Public health paternalism is a much more controversial policy justification for food taxes, however. To date, public health advocates have focused only on the assumed benefits of food and soda taxes and have ignored inefficiencies and welfare losses that such taxes might cause, leaving their proposals vulnerable to economic counterarguments. 
59 Furthermore, a person who does not behave as an economist might expect likely does not agree that he has made an 'irrational' choice to consume a sweetened beverage as opposed to water (as Brownell has in mind). As Pratt wrote: '[M]ost naive consumers may not understand that they need help structuring their dietary decisions.' 60 Indeed, such persons may even affirmatively substitute from the taxed sugar beverages to high-fructose fruit juice, for example. In such an unwelcome case of disobedient behaviour, the person seems to be intentionally choosing the 'irrational' choice. 61 Where behavioural economic analysis is based on non-quantitative or empirically based expectations about optimal choice without reference to either consumer preferences or relative price; this amounts to what might be called 'conjecturaleconomics'. 62 Such an approach is surely not 'science'. Furthermore, there is nothing empirically 'rigorous' about conjectural economic methods applied in this fashion. The purportedly 'economic' analysis applied is intentionally designed so as not to account for 'welfare losses'. Accounting for these losses would reduce the normative impact of the proposed storyline, which otherwise is a key element of economic theory. The formal argument that consumers are just 'idiots' is, in the best case, a rhetorical argument. 63 As Camerer, et al., argue: 'In a sense, behavioural economics extends the paternalistically protected category of 'idiots' to include most people, at predictable times.' 64 A contrary description of human choice is the following: The poor are making rational decisions albeit with significant barriers to choice and behavioural economists and others are simply misinformed (or naïve) 65 about the limited bounds of decision faced by the poor in respect of food choice. 59 Pratt, 139-40. 60 Ibid, 132. 61 Baldwin (2014), 842 ('Low capacity individuals who are ill-intentioned, will, moreover, have very limited ability to adjust their behaviour so as to reject messages that they disagree with and to act in ways that are inconsistent with such messages. They will, in turn, possess poor abilities to 'unearth' nudges such as defaults, and resist these'). 62 See Finkelstein, Ruhm and Kosa (2005), 244 ('Economists' first law of demand implies that a decrease in the price of food will cause consumption to increase. Moreover, if the price of calorie-dense, prepackaged, and/or prepared foods (e.g., fast food) falls faster than for less calorie dense foods (e.g., vegetables), then individuals will shift their consumption toward these cheaper alternatives'). 63 Rebonato (2014), 378 ('Of course, just observing what the individual chooses is no longer a viable option, as the preference that is satisfied by the observed action may be the uninformed or irrational one'). 64 Camerer et al., 1218. 65 Rizzo and Whitman, 711 ('The experts themselves have, at best, only a tenuous grip on the values of the targeted agents, which limits the direct applicability of their paternalistic theories to policy'). 'Symmetry' in Tax Policy 'Symmetry' is a key element of paternalistic tax policy proposals. Camerer, et al., propose that the tax system is asymmetric: 'A regulation is asymmetrically paternalistic if it creates large benefits for those who make errors, while imposing little or no harm on those who are fully rational.' 66 The logic is that the tax system implicitly redistributes from the 'rational' to the 'non-rational' members of society implying an asymmetry. 
But, this description of the tax system based on marginal statutory tax rates is flawed. The tax system in most Organisation for Economic Co-operation and Development (OECD) countries taxes primarily labor, even when the statutory rates of income taxation might be at least ostensibly progressive. In most OECD countries, roughly 85% of tax collections are derived from direct or indirect labor taxation. 67 Notably, wherever and whenever Pigouvian-type taxes on 'internalities' are proposed as a matter of tax policy, the tax is always directed against the poor. In contrast, an alternative view is that the predominant 'irrationality' in society is the stockpiling of wealth by persons with no plans to reinvest the stockpiled hoards of money into living persons as a type of resource-hoarding behaviour. 68 Furthermore, such capital accumulation for the sake of accumulation is not utility-maximizing behaviour except under very strained assumptions; in fact, Charles Dickens is thought to have written his novels in response to the writings of Thomas Malthus on this point exactly. 69 Dickens clearly explains why Scrooge is not and ought not to be the norm of human economic behavior. The targeting of the poor using Libertarian paternalist argumentation is the primary form of 'asymmetry' in tax analyses. Prior economic discussions of sugar-sweetened beverage taxes focus exclusively on proposals for taxing the poor, who are taken to be cognitively biased, irrational, and inferior. As Jennifer Pomeranz wrote in respect to prior sugar-sweetened beverage tax policy proposals: 'This tax would thus not address the poor eating practices of wealthier individuals even though it is equally unhealthy for them to consume an excess amount of food.' 70 A revision to tax paternalism is proposed here as 'symmetric' paternalism, where the word 'symmetric' refers to applying the same logic of paternalism normally reserved for the poor to the wealthy as well. The emphasis of Libertarian paternalism on rational choice is accordingly far different in its policy implications from the proposition of Thaler and Sunstein to nudge toward rational choice. The benefits of Pigouvian taxes are optimized when the tax on externalities can be targeted to each person, 71 which also is true in respect to offsetting internality-type costs. The optimal amount of 'sin' taxes should be adjusted proportionately to take into account the internality cost to each individual person. 72 As an illustration, a billionaire might be expected to pay £100,000 in tax for a six-pack of soda or a pack of cigarettes to reflect the internality cost to the health of such a valuable personage. 66 Ibid, 1212. 67 See OECD Revenue Statistics, 29 available at www.oecd.org/ctp/tax-policy/revenue-statistics-19963726.htm; Office of Management and Budget (OMB), The Budget for Fiscal Year 2015, Historical Tables, 32-33 www.whitehouse.gov/sites/default/files/omb/budget/fy2015/assets/hist.pdf. 68 Bogenschneider and Kasper (2016). 69 , 19 ('One can also view many of the books of Charles Dickens as protests against Malthus' point of view'). 70 Pomeranz, 1004. Problems in 'Rational' Choice Theory Thaler and Sunstein wrote: 'As far as we can tell, there is little empirical support for [the claim that people make rational decisions].' 73 This represents the base claim of economics as a normative ideology where economics purports to be the study of 'rational' behaviour as defined by economic theory.
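As a parenthetical illustration of the 'symmetric', internality-proportional schedule proposed above, the following sketch scales a per-unit sin tax with the consumer's ability to pay. The valuation rule and every figure, including the £100,000 six-pack, are assumptions made for exposition, not a calibrated policy proposal.

    # Hypothetical sketch of an internality-proportional ("symmetric") sin tax.
    # Assumption: the internal health cost of one unit of the good (read: one
    # six-pack) is valued in proportion to the consumer's income, so the
    # offsetting levy rises with means.
    def internality_tax(base_cost_per_unit, income, reference_income=30_000):
        """Per-unit levy that scales the assumed internality cost by relative income."""
        return base_cost_per_unit * (income / reference_income)

    base_cost = 3.0  # assumed per-unit internal health cost at the reference income (GBP)
    for income in (15_000, 30_000, 90_000, 1_000_000_000):
        print(f"income {income:>13,}: per-unit levy {internality_tax(base_cost, income):>12,.2f} GBP")

On this assumed rule the billionaire of the example above faces a levy of £100,000 on the six-pack, while the levy on low-income consumers falls below the flat excise they would face under the proposals criticised in this paper.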
However, economic behavior is in at least some cases not 'rational enough' to be studied as if it were 'rational'. Another possible conclusion is that the field of economics is flawed in its understanding of what rational human choice means or entails. In technical economic terms, the stubbornly irrational person that insists on consuming a sugar beverage in spite of the tax levy seems to be avoiding what is referred to as a 'welfare loss' of switching to water, for example. Presumptively, at least as a matter of utility theory, the person derives a sufficient utility from the product that exceeds the health cost, thus rendering the seemingly 'bad' decision rational in economic terms. As Katherine Pratt wrote: 'Public health advocates… should acknowledge the enjoyment that people derive from eating their favourite foods. The prohibitionist, absolutist, killjoy rhetoric of some advocates is too severe for most laypeople and plays into the hands of opponents of antiobesity taxes.' 74 Another problem in application of Libertarian paternalism to tax policy is that the direction of causation from marginal tax levies may be counterintuitive, such as with a 'Giffen good' referring to a product where in some situations an increase in price increases the demand. 75 For example, a tax levied on sugar-sweetened 71 Strnad, 1244 ('A Pigouvian tax schedule may be very complicated if the relationship of external cost to consumption is nonlinear'); see also Williams (2013/14), 164 ('In theory, we could craft millions of tiny little taxes to compensate for every 'market failure' we manage to uncover. But that's impractical, so instead we pick and choose a few sin taxes that we find especially appealing.') citing Thorndike, Tax.com (2012). 72 See Doucett (2015), 397-8 ('The most common attempt at implementing a tax policy is through a soda or junk food tax. This is a tax on an isolated, specific category of food or drink. These taxes are ''modeled after the 'sin taxes' already implemented on cigarettes and alcohol'' and seek to increase the price of these unhealthy foods and drinks enough to reduce consumption'). 73 Thaler and Sunstein, 1168. 74 Pratt,137. 75 See Cornelsen et al. (2014) ('[C]onsumers may still continue buying the now higher priced food but reduce the quantity of other foods they consume to continue to afford it, including healthy foods. This is known as the income effect and it is more likely to affect lower income earners as they spend a relatively greater share of their incomes on food'). Note also the possibility of the 'cross price effect'. Cornelsen, et al. at beverages could reduce the amount of disposable income available to persons and thereby increase (not decrease) the demand for sugar. The Supposed 'Non-rationality' of the Poor in Respect to Food In the real world, low-income persons are, by necessity, highly price-numerate in respect to consumer goods-often more so than the wealthy! The idea that poor people as a class are price-innumerate is accordingly best-viewed as naïve; many of the poor are masters of price in respect of consumer goods including food and always have been. The poor must out of necessity carefully evaluate relative prices in the supermarket. It may come as a surprise to many researchers that one of the most popular and long-running daytime television programs in the United States (The Price is Right) deals solely with guessing prices on consumer goods, a favourite pastime of the poor! 
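The Giffen-type possibility raised above, namely that taxing a cheap staple can raise rather than lower the quantity of it demanded through the income effect, can be reproduced with a toy budget model. The calorie floor, prices and budget below are hypothetical assumptions chosen only to exhibit the mechanism, not data about any actual market.

    # Toy "Giffen" mechanism: a consumer on a fixed budget must meet a calorie floor
    # from a cheap staple plus a preferred, more expensive food. All figures assumed.
    def demands(budget, staple_price, other_price, calorie_floor):
        """Buy as much of the preferred food as the calorie floor allows;
        spend the remaining budget on the staple (one calorie per unit of each)."""
        other = max(0.0, (budget - calorie_floor * staple_price) / (other_price - staple_price))
        staple = (budget - other * other_price) / staple_price
        return staple, other

    budget, other_price, calorie_floor = 30.0, 4.0, 20.0
    for staple_price in (1.00, 1.20):  # 1.20 = staple price after a 20% excise (assumed)
        staple, other = demands(budget, staple_price, other_price, calorie_floor)
        print(f"staple price {staple_price:.2f}: staple {staple:.1f} units, preferred food {other:.1f} units")

With these assumed numbers the tax pushes staple consumption up from about 16.7 to about 17.9 units, because the poorer diet becomes the only way left to meet the calorie floor; the same logic underlies the footnoted warning that a levy can crowd out healthier foods for low-income buyers.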
In fact, the wealthy are not as knowledgeable, entertained-by, or as sensitive to price on consumer products where the cost is de minimus in respect of income. The wealthy are thus potentially more susceptible to poor decisions in respect of consumer goods. Decades of economic literature suggest taxes should be neutral to economic choice. In an ironic twist, Griffith and O'Connell argue that regressive taxes levied on food are most likely to change the behavior of the poor. So, this is the formal statement of a non-'neutral' tax policy by design. The non-neutrality of tax policy appears to be okay in the minds of researchers as long as the tax policy is levied on persons with less money. Griffith and O'Connell wrote: Often lower income households are the most price sensitive, which would mean they are likely to change their behaviour most as a consequence of a price rise. To assess the overall effect of the tax and whether low or high income consumers would be affected more, we would need to set the costs imposed by the tax on consumer through higher prices against the potential health benefits arising through their changed behaviour. 76 As discussed in further detail below, even if we accept Griffith and O'Connell's methodology, a sugar-sweetened beverage tax could likewise be levied on a wealthy person at a low rate without changing consumption behavior up to a relatively high threshold. By the very same logic, the amount of the sin tax policy for the wealthy should then be increased until the wealthy person responds to the incentive and decides to behave in a healthy manner for his own benefit. A discussion of whether higher tax rates on the wealthy is a good policy idea bleeds into traditional areas of tax incidence analyses; however, at least by the logic of Brownell, et al., the tax levies from the wealthy could be earmarked for re-education programs for lowerincome persons. 77 76 Griffith and O'Connell (2011). 77 See Andreyeva et al. (2011), 413 ('A modest tax on sugar-sweetened beverages could both raise significant revenues and improve public health by reducing obesity. To the extent that at least some of the tax revenues get invested in obesity prevention programs, the public health benefits could be even more pronounced'). Other Explanations for Ostensibly Irrational Choices 'Facts' in economics or anything else are a function of theory. Facts are not observed or known objectively or separate from the theory. And, this is partly what we call 'science', which explains why there is more to science than gathering and summarizing empirical observations. As Karl Popper explained, even such evidentiary gathering in the laboratory is a theory-driven endeavour. 78 Economics is premised upon a theory of choice regarding how human beings make decisions. In most scholarly discussions of food choice, rational choice theory is immediately abandoned as it relates to the food-purchase decisions by the poor. For example, in respect of the choices of the poor, scholars often attribute causation to something within the decision-making process that leads to a miscalculation of the respective cost-benefit analysis inherent to choosing healthy foods. So, a scientist evaluating this approach ought to ask: What is the underlying theory? Is the theory consistently applied? Where is the causative element for testing? The answers are respectively: 'undisclosed', 'no' and 'missing'. 
All of these indicate that the economic ideas of Libertarian paternalism are not science, so there is nothing whatsoever to test in the first place. With no possibility of falsification, indeed, there is nothing that might remotely be called 'science' or 'scientific inquiry' within the methodology of Libertarian Paternalism. In fact, there are other possible scientific explanations for an observation by economists of what they think is an 'irrational' choice in respect to food, such as the following: (1) Flawed theory of causation; (2) Mistake of 'fact' criteria within the theory; (3) Mistake of observation/counting (particularly in the social sciences). For example, a flawed theory seems to be the pertinent description of the state of affairs where economics does not seem to explain actual human choice in the world. Astonishingly, Thaler and Sunstein begin with the purportedly flawed description of empirical economics, thus indicating the entire undertaking is non-scientific ab initio. A mistake of 'fact' criteria might also be thought to occur where the 'welfare loss' criterion of economics is misapplied. A mistake of observation could be thought to occur with the econometric evaluation of what economists conclude 'probably is' in the world. 79 In other words, the assessment by economists of what they think 'is' may then be used to derive the homo economicus claim of what 'ought' to be. However, none of these are developed by economic researchers in the context of rational choice theory. The question, then, is: if economists are not engaged in 'science', then what is the 'ought'-based choice framework of Libertarian paternalism? This issue is taken up in the next section. 78 Popper (1935/2002) ('[T]he theoretician must long before [experimentation] have done his work, or at least what is the most important part of his work: he must have formulated his question as sharply as possible. Thus it is he who shows the experimenter the way. But even the experimenter is not in the main engaged in making exact observations; his work, too, is largely of a theoretical kind. Theory dominates the experimental work from its initial planning up to the finishing touches in the laboratory'). 79 See Rostain, 979 ('Evidence that people are not perfect utility maximizers, itself, does not create a problem for the traditional economics model. So long as human errors are random, get canceled in an aggregative analysis'). Problems in the Theory of Behavioural Economics Economic Claims of 'Is' Versus 'Ought' on Food Choice The use of the term 'irrational' to describe suboptimal 'rational' choice means that the behavioural economist is necessarily referring to a standard held in his own mind (the 'Ought'), and not the mind of the person engaged in economic choice (the 'Is'). Economic theory is accordingly the study of the economic idea of 'ought' 80 ; whereas econometrics is the study of the economic consensus of what 'probably is'. The study of variances from expected results may be 'psychological' insofar as the expectation exists solely in the mind of the economic researcher. The renowned economic genius, Frank Ramsey, identified a similar problem in respect to probability theory (such as expected results in dice throws). 81 Since public choice theory deals with human choices in the public sphere that are unexpected, this means the given 'logic' of economics does not correspond to what economists expect to find in the world.
The study of 'variances' from an idealized standard accordingly relates solely to the differences between economic theory and actual decisions in the world. Economic theory accordingly falls within the field of 'psychologism' (as defined by German philosophers in the late-nineteenth century). Economic theory is accordingly not the study of objective economics relations actually existing in the world, but is instead the study of economists themselves, or what economists think rationality entails. As Ramsey identified, statistics (and by extension, econometrics) is not the study of what 'is' in the world; rather, statistical analysis is the study of what economists reasonably agree 'probably is' given to an acceptable degree of likelihood. The main goal of econometrics is to arrive at a consensus view of whether a set of probabilistic expectations are to be deemed reasonable. A form of certainty can thereby be achieved by proscribing back the horizons of choice to the positive version that economics creates (i.e., describing the choices of Posner's 'rational frog'). Yet, this scaling back amounts to a simplifying assumption by the researcher to achieve research results (qua certainty) in statistical analysis. Hence, via simplification, certainty is achieved by intentionally limiting the possibilities of choice; consequently, critics will then inevitably set out to reverse simplifying assumptions and to re-complicate matters. Accordingly, it is inevitable that the bounds of 'rational' choice will shift outward again at some future point as economists develop critiques of the rationality of the 'frog' and thereby further develop what is subsequently agreed to be 'rational'. Hence, economic theory represents an unapologetic form of 'psychologism' or the psychology of what economists think logical or rational decisions represent at any point in time. A common simplifying assumption in economics is to proceed at the margin (or solely with respect to incremental changes), and thus, not by the average. This can lead to counter-intuitive tax policy recommendations, however. As David Madden explained in the context of the 'fat tax' in Ireland: 'The difficulties associated with non-marginal tax reforms have led a number of analysts to concentrate on marginal tax reforms. This approach has the advantage of not requiring estimates of individual demand and utility functions.' 82 Thus, any researcher who is concerned with decisions not at the margin will be automatically dissatisfied with economic research performed solely at the margin. However, in rejecting such arbitrary limitations, this thereby eliminates the possibility of certainty, and with it, the broader 'rationalized' choice. Perhaps the foremost example is where Pigouvian tax effects are described by economists as based on marginal effects only. This is akin to asking whether it is rational to push a pawn forward in the game of chess based solely on the marginal effects of that move. If that question can be answered, then the researcher should always take a step back to the prior move and evaluate that prior move. This has the practical effect of the re-introduction of uncertainty in determining the chess move or economic analysis of consumer behaviour as the case may be. Recycling 'Psychologism' as Behavioural Economics Libertarian paternalism is a type of psychologistic reasoning developed in part by Wilhelm Wundt. 
83 Much of the analysis of Libertarian paternalism matches closely to Wundt's description of psychologism now manifested in economic theory. First, Libertarian paternalism is an 'ought'-based claim; second, economics purports to describe the 'rational' in human thinking; third, behavioural economics purports to be the only way to measure how persons engage in economic choice. Therefore, rational choice theory is the logic of human choice (i.e., economic psychology). Wundt's framework has been summarized as follows: 1. Normative-prescriptive disciplines-disciplines that tell us what we ought to do-must be based upon descriptive-explanatory sciences. 2. Logic a normative-prescriptive discipline concerning human thinking. 3. There is only one science which qualifies as constituting the descriptiveexplanatory foundation for logic: empirical psychology. Ergo, logic must be based upon psychology. 84 If economics were not a form of 'psychologism' (i.e., the study of 'ought' vs. 'is') and was rather the study of what actually 'is', then public choice theory could be understood as a series of proofs that the 'rational' methodology of economics is flawed when applied to actual human behavior in the world. 85 Hence, the description of Behavioural Economics as psychologism is not pejorative; rather, it should be taken as a compliment with internal validity derived from the methods of 82 Madden (2015), 106; see also Vallgarda et al. (2015). 83 Kusch (2015) citing Wundt (1910). 84 Kusch (2015), Part III ('Examples of Psychologistic Reasoning'). 85 Rostain, psychology as opposed to economics. The alternative conclusion is that economic methods so applied to choice that is agreed to be not-'economic' is methodologically incoherent or a non-sequitor. 86 Hyperbolic Discounting Economists also set out to study the differences in individual preferences in respect of health (and money). This is the attempt to explain health outcomes by measuring the differences in preferences between groups or class of persons for future health. 87 Such a Malthusian revivalist 88 approach is referred to as 'hyperbolic discounting' premised on the assumption that rationality entails exponential (i.e., hyperbolic) discounting in human preferences over time. 89 In general terms, hyperbolic discounting thus refers to the discount rate applied by persons on the valuation of future utility; or, the measurement of the rational tendency to value current rewards more than future rewards. 90 Libertarian paternalism can then be combined with hyperbolic discounting theory to advocate the 'nudge' of persons to correct suboptimal discounting preferences in respect of health. For example, as applied to obesity, the idea is that some persons apply a higher discount rate to future health than is optimal. The higher discount rate causes the person to overvalue current food consumption at the expense of future health outcomes. The hyperbolic discounting approach thus relates to the study of the preferences of the obese/poor (as opposed to their rationality); however, the preferences of the test group are presumed to be suboptimal in relation to some other group, namely the fit or the wealthy. The advantage of such an empirical approach to testing economic ideas not (i.e., not premised on re-defining the 'rational') is that empirical studies can be undertaken to test the theory. And, the empirical results are in! 91 In terms of health choice most empirical studies find that differences in discounting health do not explain health outcomes. 
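The two discount functions at issue here can be written out explicitly. In the standard textbook formulation (supplied for orientation, not taken from the works cited above), an exponential discounter values a reward t periods away at delta^t, while the simplest hyperbolic specification values it at 1/(1 + k t). The short Python comparison below, with arbitrarily chosen parameters, shows the feature that matters for the nudge literature: the hyperbolic discounter's implied one-period discount factor is low for the near future and drifts toward one for the far future, which is what produces time-inconsistent, 'present-biased' choices.

    # Textbook discount functions with arbitrary illustrative parameters.
    def exponential(t, delta=0.9):
        return delta ** t

    def hyperbolic(t, k=0.25):
        return 1.0 / (1.0 + k * t)

    for t in (0, 1, 5, 10, 20):
        # One-period-ahead factor D(t+1)/D(t): constant under exponential
        # discounting, rising toward 1 under hyperbolic discounting.
        exp_step = exponential(t + 1) / exponential(t)
        hyp_step = hyperbolic(t + 1) / hyperbolic(t)
        print(f"t={t:>2}: exponential {exponential(t):5.3f} (step {exp_step:.3f})"
              f" | hyperbolic {hyperbolic(t):5.3f} (step {hyp_step:.3f})")

Nothing in this arithmetic, of course, settles the empirical question discussed in the text of whether such discounting differences actually explain health outcomes.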
In scientific terms, we might simply say that this hypothesis of hyperbolic discounting in respect of health outcomes has been falsified as the causal variable. Economists next turn to the auxiliary hypothesis that poor persons may have different discounting preferences for money, if not health. The underlying idea is that the poor may not save money properly by undervaluing the need to save, thus explaining why they are poor. And, here, finally, we get a victory for economic theory. The poor do value future money less than the rich value future money. 92 If we assume that saved money also generates hyperbolic returns higher than the discount rate for consumption, then any person that saved money would end up with exponentially more money over time. So, what does this auxiliary result mean exactly for health policy? The economic analysis of health choice begins with the presumption that human preferences are subject to hyperbolic discounting; likewise, compound interest generates hyperbolic returns to saving. The decision to save money would then depend on the relative rate of discount to future consumption versus the rate of return on money toward future consumption. One plausible conclusion would simply be that the rich have better options to invest money and expect to or actually achieve a higher rate of return than the poor. But, the other path so far unexplored as a matter of economic theory, is whether the decisions of the wealthy to save and not consume are rational. Thomas Stanley, in the Millionaire Next Door series, has given telling accounts of the extreme saving behaviour of the wealthy 93 -query then, is this Scroogeantype behaviour rational? Or, stated in terms of hyperbolic discounting theory, what are the preferences of an ultra-wealthy person engaged in extreme saving behaviour? One explanation for extreme saving behaviour is that, whether rational or not in economic terms, the person greatly discounts current consumption thus enabling hyperbolic returns to capital in some cases. 94 Thus, in terms of health policy, Libertarian paternalist theory applied to the wealthy (not just the poor) so engaged in extreme-saving would suggest that nudges or other regulation could be applied to encourage current personal health consumption for these wealthy persons who systematically underestimate the value of their own health. Several of Stanley's anecdotal examples of ultrawealthy food consumption behaviour (including Warren Buffet with very poor eating habits) 95 indicate that such persons improperly discount future health. In terms of Pigouvian tax policy (i.e. taxes levied to the extent of health internalities) the taxation of the wealthy with these types of preferences is the obvious place to start since we know the wealthy have sufficient income to make better choices if nudged by the tax system into doing so. Conclusion The Libertarian paternalist thesis that poor persons are 'idiotic', 'stupid', 'childlike', 'naïf's' or 'low-capacity', and therefore unable to make valid choices in respect of food is internally incoherent. The 'rational choice' framework of economics actually means that food decisions are presumptively valid even when the economic researcher might prefer a different alternative; moreover, when a researcher further claims that his or her methodology fails to predict human 93 Stanley and Danko (1996). 
94 Note that Stanley only describes and measures successful outcomes, and not persons with failed investment outcomes that for whatever reason engaged in extreme saving and did not achieve a hyperbolic return from saving. 95 See Hahm, 'Warren Buffett will Not Apologize for His Junk Food Addiction', http://finance.yahoo.com/ news/warren-buffett-berkshire-hathaway-sweet-tooth-dairy-queen-coca-cola-see-s-candies-201539716.html. behaviour most of the time, then the theory is flawed and should be abandoned in favour of a better theory. And, where the Libertarian paternalist (qua economist) readily admits the theory so applied is not predictive of human behaviour most of the time, then a better theory should be readily available. We call that sort of approach to inquiry 'science'. We call a name-calling approach to inquiry directed against classes of persons 'rhetoric'. 96 Tax policy based on 'rhetoric' generally results in justifications for taxes levied predominantly on workers (or the poor) as a class of persons. Of course, the vast majority of the tax base is today levied on workers at least in OECD countries. Accordingly, the claim by Camerer, et al., that tax policy is 'asymmetric' because the poor make bad choices and pass the burden to the wealthy who make good choices is plainly factual error based on misunderstanding of the composition of the tax base. The predominant form of 'asymmetry' in respect of Libertarian paternalism is the application of the method solely to change the food choices of the poor. Without the collection of detailed data on relative prices or consumer preferences, Libertarian paternalism represents a coherent approach to tax policy only when directed against unhealthy food choices of the wealthy who are presumptively not subject to boundaries of food price. In the case of wealthy consumers, tax policy could be used as a signal to 'nudge' the person toward a better food choice based on their own preferences 97 ; however, in respect to low-income consumers, the tax-'nudge' may interfere with preferences and create a 'welfare loss'. Such a tax policy could also create perverse results with a 'Giffen' good, or as described by Pratt: The combination of a food tax and a healthy food subsidy conceivably could, in theory, lead weight-conscious, physically active people to gain weight, however (as a result of spending more time on healthy meal preparation and less time exercising), so we should try to determine whether this prediction is accurate regarding active consumers. 98 If the purpose of Pigouvian-type tax policy is to raise revenue and not necessarily to change behaviour, then it is conceivable that wealthy persons could be charged a high rate of excise tax to offset the internality health cost up to the price that would not change consumption behaviour. Finally, as a matter of tax policy design, the current tax system encompasses existing forms of taxation that are already serving public policy goals. For example, wage taxes are thought to fall on immobile workers as opposed to 'mobile' capital. Many economists say this tax policy should increase economic growth (albeit without empirical evidence for the claim). 99 However, since workers can only pay so many regressive taxes at once (wage taxes, gasoline taxes, fat taxes, council taxes, etc.), a coherent tax policy proposal must compare the costs and benefits of 96 See McCloskey (1983). 
97 Rizzo and Whitman, 700 ('New paternalists claim that they are evaluating the observed behavior of the individual in terms of his own normative standard. This appears attractive until we realize that the individual has no unambiguous standard for the appropriate level of time discounting'). 98 Pratt,125. 99 Clausing,460,480. one regressive tax against another regressive tax as exclusive to each other; and not to make comparisons of a proposed tax against a status quo situation in an imaginary alien civilization where the poor do not pay any taxes. Hence, in any honest intellectual debate over economic policy, we should expect to observe preeminent economists debating the relative merit of wage taxes in comparison to sugar-sweetened beverages as possible alternatives. The absence of any inkling of such a debate in economic circles indicates the economic discourse in respect to tax policy is normative. The potential tax policy implications of switching the underlying philosophical premise of all tax policy are also far more profound than Brownell, et al., and other public health experts have thus far acknowledged. If taxes are to be maximized based on a presumption of non-'rational' choice associated with the product without data on relative price or consumer preferences, then what other consumer choices might also be thought to be irrational and therefore subject to tax? For example, are jet-skis also a presumptively non-'rational' purchase decision under this framework? What about 12-cylinder sports cars, or small-engine aircraft? The answer to each of these questions seems to be 'yes'. In terms of particular food products, what about gout-causing foie gras consumed primarily by the wealthy? Under the approach of Brownell, et al., the taxation should be maximized to stop the wealthy from harming themselves by consuming the foie gras product. The bottom line is that the link between Libertarian Paternalism and any particular tax policy and the specific consumer items upon which the policy is to be applied, at least thus far seems to be solely the class of the person making the purchase decision. Furthermore, if taxes are to be maximized under the Brownell, et al., framework for tax policy, there is as yet no coherent reason not to set the tax at a prohibitively high level on the respective product, thus resulting in de facto prohibition with an even greater public health benefit. 100 And, furthermore, if sin taxes are to be levied, is the objective of tax policy to design the tax so as to raise revenue or to deter consumption of the underlying product? Obviously, the producers of foie gras might be disappointed if the sin tax was set at a very high level (say £100,000 per serving) and this led to a decline in sales of the product. A coherent application of Libertarian paternalist theory to tax policy indicates that sin taxes (e.g., sugar sweetened beverage taxes) are a better fit when applied against the wealthy and not the poor. This is because the wealthy are the social group that have the greater flexibility in economic decision-making (i.e., the greatest 'bounds' of choice) and most readily admit a 'bad' decision was made in respect of food choice. Absent data on the relative price of healthy foods available to the poor, or evidence related to consumer 'welfare', behavioural economics can be applied to establish only that the wealthy could be 'nudged' toward better dietary decisions. 101 100 See Anderson (1997). 
101 Frazao and Golan (2005), 106 ('If cost-per-calorie comparisons were useful measures of barriers to healthy eating, we would expect higher income individuals (for whom food costs should not be a barrier) to have more healthful diets than low income households. Although diet quality does increase with income levels, the improvement is slight. Basiotis et al. found that in 1999-2000, higher income households had a Healthy Eating Index of 65 (out of 100), compared with 61.7 for households below the poverty line'). Finally, in terms of moving toward a coherent tax policy, several medical scholars have sagely proposed that any incremental tax levied on the poor in respect of sugar-sweetened beverages should be offset with a credit for healthy foods, such as fruits and vegetables. 102 Nnoaham, et al., wrote: 'Targeted food-related taxes could be optimized by combining them with a subsidy on fruits and vegetables.' 103 Of course, such proposals do not directly address the reduction in 'welfare' to low-income consumers from the loss of beverage choice in comparison to vegetables. However, such proposals do respect poor persons as rational agents able to analyse and compare relative prices. Ironically, recognizing rationality yields an analytically superior version of tax policy analysis in comparison to the Libertarian paternalist version of behavioural economics that starts with the presumption that consumers are 'idiots', 'fools', 'unruly children', etc. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
BULKY HAMILTONIAN ISOTOPIES OF LAGRANGIAN TORI WITH APPLICATIONS

We exhibit an example of a monotone Lagrangian torus inside the standard symplectic four-dimensional unit ball which becomes Hamiltonian isotopic to a standard product torus only when considered inside a strictly larger ball (it is not even symplectomorphic to a standard torus inside the unit ball). These tori are then used to construct new examples of symplectic embeddings of toric domains into the unit ball which are symplectically knotted in the sense of J. Gutt and M. Usher. In contrast to this, we establish a certain condition on the Gromov width of the complement of a Lagrangian torus inside the unit ball which ensures that it is a standard product torus.

1. Introduction and results

Our focus here is on the Liouville manifold (C^2, ω_0 = dx_1 ∧ dy_1 + dx_2 ∧ dy_2) equipped with the standard Liouville form λ_0 = (1/2)(x_1 dy_1 − y_1 dx_1 + x_2 dy_2 − y_2 dx_2). Denote by ζ_0 the corresponding Liouville vector field, which generates the flow φ^t_{λ_0} : (C^2, dλ_0) → (C^2, e^{−t} dλ_0), φ^t_{λ_0}(z) = e^{t/2} z. To set up notation, we will use B^{2n}_x(r) and D^{2n}_x(r) to denote the open and closed balls inside C^n of radius r > 0 centred at the point x ∈ C^n, and S^{2n−1}(r) ⊂ C^n for the sphere of radius r > 0. Recall that a Lagrangian submanifold in this case is a half-dimensional submanifold to which λ_0 pulls back as a closed form. A Lagrangian isotopy is a smooth isotopy through Lagrangian embeddings; recall the standard fact that such an isotopy can be generated by a global Hamiltonian isotopy of the ambient symplectic manifold if and only if the pullbacks of λ_0 are constant in cohomology; see e.g. [27] by A. Weinstein. In general, we will call a smooth isotopy of a subset of a symplectic manifold Hamiltonian if it can be realised by an ambient Hamiltonian isotopy. The symplectic action class of a Lagrangian L is the cohomology class σ_L := [λ_0|_{TL}] ∈ H^1(L; R) obtained by pulling λ_0 back to L; by Stokes' theorem, its value on a class is the symplectic area of any two-chain inside C^2 with boundary in that class. A torus is monotone if the symplectic area of a two-dimensional chain with boundary on it is proportional to the so-called Maslov class of the same chain; see V. Arnold [1] for the definition of the latter characteristic class. In particular, it follows that the Lagrangian product tori S^1(a) × S^1(b) ⊂ C^2 are monotone if and only if a = b. Note that the symplectic action class of the standard monotone product tori S^1(r) × S^1(r) ⊂ (C^2, ω_0) takes values that are integer multiples of r^2 π > 0. These tori are usually called Clifford tori.

R. Vianna [26] has shown that the classes of monotone Lagrangian tori inside (CP^2, ω_FS) exhibit a very rich and interesting structure. In particular, they consist of infinitely many different Hamiltonian isotopy classes. The result [12, Theorem C] by the author together with E. Goodman and A. Ivrii implies that all of Vianna's tori can be placed inside the open unit ball (B^4, ω_0) = (CP^2 \ ℓ_∞, ω_FS) and thus, a fortiori, also give rise to infinitely many different Hamiltonian isotopy classes of monotone Lagrangian tori when considered inside B^4. In contrast to this, the only known Hamiltonian isotopy classes of Lagrangian tori inside C^2 are the product tori, together with linear rescalings of the "exotic" monotone torus [6] constructed by Y. Chekanov, which goes under the name of the Chekanov torus. We refer to the work [14] by A. Gadbled for the presentation that we will use here.
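Returning to the product tori above, the monotonicity condition can be made concrete by the following short computation (our own summary in LaTeX notation; the flat discs used are the obvious ones and are chosen purely for illustration):

\[
\sigma_{S^1(a)\times S^1(b)}\big(H_1\big) \;=\; \big\{\, m\,\pi a^2 + n\,\pi b^2 \;:\; m,n \in \mathbb{Z} \,\big\},
\]

since the two factor circles bound flat discs of symplectic areas \(\pi a^2\) and \(\pi b^2\), each of Maslov index two. Monotonicity asks for a single constant \(c > 0\) with \(\omega_0(u) = c\,\mu(u)\) for every disc \(u\) with boundary on the torus, so that

\[
\pi a^2 \;=\; 2c \;=\; \pi b^2 \quad\Longleftrightarrow\quad a = b .
\]

In particular the Clifford torus \(S^1(r)\times S^1(r)\subset S^3(\sqrt{2}\,r)\) has action values \(\mathbb{Z}\,r^2\pi\), and for \(r = 1/\sqrt{3}\), the threshold appearing in Theorem 1.1 below, it lies on the sphere \(S^3(\sqrt{2/3})\) bounding the ball of capacity \(\pi\cdot 2/3\).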
We expect that Vianna's tori all become Hamiltonian isotopic to standard tori inside a ball which is strictly larger than the unit ball. (This can be confirmed by hand for, e.g., certain particular Hamiltonian isotopies that take Vianna's first exotic torus constructed in [25] into the unit ball.)

Remark 1.1. Even though all Lagrangian tori are Lagrangian isotopic inside the ball by [12], there is still no classification of Lagrangian tori inside the plane up to Hamiltonian isotopy. Under additional assumptions concerning a certain linking behaviour with a conic, the author established a Hamiltonian classification in [11].

We begin by presenting a criterion for when a monotone Lagrangian torus inside the unit ball is Hamiltonian isotopic to a standard torus inside the ball itself. The characterisation can be reformulated in terms of the so-called Gromov width of the complement of the Lagrangian. This is a symplectic capacity that was introduced by M. Gromov in [15], which for a symplectic manifold (X^4, ω) is equal to the supremum sup{ π·r^2 : ∃ φ : (B^4(r), ω_0) → (X, ω) } taken over all symplectic embeddings.

Theorem 1.1. Let L ⊂ (B^4, ω_0) be a Lagrangian torus inside the unit ball whose symplectic action class takes the values Z·r^2π on H_1(L) for some fixed r ≥ 1/√3. There exists a Hamiltonian isotopy inside the ball which takes L to the standard monotone product torus S^1(r) × S^1(r) if and only if one can find a symplectic embedding of (B^4(√(2/3)), ω_0) into (B^4, ω_0) in the complement of L, with the closure of the image contained inside the open unit ball.

Remark 1.2. In particular, this implies that the Gromov width of the complement of a Chekanov torus inside (B^4, ω_0) is strictly less than π·2/3 whenever its symplectic action assumes the values Z·r^2π for some fixed r ≥ 1/√3. For interesting previous results about the Gromov width of the complement of Lagrangian submanifolds we refer to [3] by P. Biran as well as [4] by P. Biran and O. Cornea.

We then show that the above theorem is sharp in the following sense: even under the stronger assumption that L ⊂ B^4 \ B^4(√(2/3) − ε) is a Lagrangian torus that is Hamiltonian isotopic to S^1(r) × S^1(r) inside the full plane (C^2, ω_0), there are cases when any such Hamiltonian isotopy must intersect S^3 = ∂D^4 at some moment in time. (In other words, the Hamiltonian isotopy cannot be confined to the unit ball that contains the original Lagrangian.) More precisely, we establish:

Theorem 1.2. There exists a Lagrangian torus L ⊂ (B^4, ω_0) which is Hamiltonian isotopic inside (C^2, ω_0) to the standard product torus, but where every such Hamiltonian isotopy necessarily satisfies φ^{t_0}_{H_t}(L) ∩ S^3 ≠ ∅ for at least one value t_0 ∈ [0, 1]. In addition, we may assume that one of the following holds: (…)

The torus L is constructed in Section 3 by using a probe; see Figures 3 and 4. In view of the above theorem, the following definition is natural. Consider two subsets A_0, A_1 ⊂ (X^{2n}, dλ) of a Liouville domain with smooth boundary, and denote by (X̂^{2n}, dλ̂) the completion of (X, dλ) to a noncompact Liouville manifold with a convex cylindrical end. In particular, X̂ = X ∪_{∂X} ([0, +∞) × ∂X, d(e^t λ|_{∂X})), where the latter exact symplectic manifold is (the positive half of) the symplectisation of the boundary of (X, dλ).

Definition 1.1. A Hamiltonian isotopy from A_0 to A_1 inside the completion of (X, dλ), i.e.
a Hamiltonian isotopy φ t Ht : (X, dλ) → (X, dλ) which satisfies φ 0 Ht = Id, and φ 1 Ht (A 0 ) = A 1 , is said to be a bulky Hamiltonian isotopy from A 0 to A 1 relative X if there exists no smooth one-parameter family φ t,s of Hamiltonian isotopies of the same kind that satisfies φ t,0 = φ t Ht as well as φ t,1 (A 0 ) ⊂ X for all t ∈ [0, 1]. In other words, we can rephrase the above theorem as the statement that "any Hamiltonian from L to a standard torus inside C 2 is bulky relative the unit ball." The torus L that we construct in order to prove Theorem 1.2 is much more elementary than the tori constructed by Vianna. In fact, the example that we consider can be identified with the monotone Chekanov torus inside CP 2 , but where the embedding of B 4 → CP 2 that contains the torus is obtained by removing a line in the complement of the torus which is different from the "standard line at infinity"; see Figures 3 and 4. In order to distinguish L from a product torus inside the unit ball it suffices to compactify the ball to CP 2 , and then to use the classical result by Y. Chekanov and F. Schlenk [7] that the monotone Chekanov torus is not Hamiltonian isotopic to a product torus inside CP 2 . Remark 1.3. Another way to distinguish L and the product torus up to Hamiltonian isotopy inside the ball is to consider the superpotential that counts families of pseudoholomorphic Maslov-two discs with boundary on L; see e.g. the work [2] by D. Auroux. Here it is important to not only consider the count of pseudoholomorphic discs inside the ball. (In this case, that count is the same as for a product torus, by invariance of the potential under Hamiltonian isotopy.) More precisely, it is the terms in the superpotential that count the discs that pass through the line at infinity that distinguish L from the product torus. The important property here is that, for an almost complex structure on CP 2 = B 4 which makes the line at infinity holomorphic, the class of pseudoholomorphic discs of Maslov index two that pass through the line at infinity are a priori of minimal symplectic area, given the symplectic action properties of L. Hence, the count of these discs is invariant under deformations of the almost complex structure that keeps the line at infinity holomorphic. However, for a ball which is larger than the unit ball, these discs are no longer of minimal area. For that reason one should not expect them to be invariant. Additionally, in conjunction with Theorem 1.1, we can conclude that L is exotic also in the following sense which (at least a priori) is stronger: 1.1. Application to knotted symplectic embeddings. A typical symplectic embedding problem concerns the question whether there exists an embedding (Y 2n , dλ Y ) → (X 2n , Cdλ X ) of a symplectic manifold into e.g. an open Liouville domain (X, Cλ X ) for some C > 0. Here we assume that X is the interior of a compact Liouville domain with smooth boundary, while Y is compact subset of a symplectic domain with a sufficiently well-behaved boundary. Typically one is interested in the case when an obvious, or even canonical, such embedding exists for all C 0 sufficiently large. The natural question is then: how small can C > 0 be taken for a symplectic embedding to exist? The first nontrivial result about symplectic embeddings was Gromov's famous non-squeezing result [15], which showed that there are interesting symplectic obstructions beyond the obvious volume obstruction. 
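For the reader's convenience, the non-squeezing theorem just referred to can be stated as follows (a standard formulation, reproduced here from general knowledge rather than quoted from this paper):

\[
\exists\ \text{symplectic embedding}\ \big(B^{2n}(r), \omega_0\big) \hookrightarrow \big(B^2(R)\times \mathbb{R}^{2n-2}, \omega_0\big)
\quad\Longleftrightarrow\quad r \le R .
\]

In other words, a ball of capacity \(\pi r^2\) cannot be symplectically squeezed into a cylinder of strictly smaller capacity \(\pi R^2\), even though the cylinder has infinite volume; this is precisely the sense in which the Gromov width introduced above records an obstruction beyond volume.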
Since then symplectic embedding problems have received a great amount of attention, and especially in dimension four the situation is rather well understood for some particular cases of Y and X. Notably, see the seminal work [22] by D. McDuff, which answers the question of when an ellipsoid can be embedded into a ball.

Many of the natural examples of domains of the form Y ⊂ (C^n, ω_0) that have been studied in the literature have the feature that ∂Y is foliated by (possibly degenerate) Lagrangian standard product tori. Most attention has been given to domains for which the standard Liouville vector field ζ_0 moreover is transverse to ∂Y. Such domains include closed balls D^{2n}(r), closed ellipsoids E(a, b) := {π|z_1|^2/a + π|z_2|^2/b ≤ 1}, as well as polydiscs D^2(a) × D^2(b) (the latter has a smooth boundary with corner equal to a Lagrangian product torus). Domains of this type are typically depicted by their image under the standard momentum map μ(z_1, z_2) = (π|z_1|^2, π|z_2|^2). In this manner we obtain a direct connection between symplectic embedding problems and embedding problems for families of Lagrangian tori. This direction was taken in the work [18].

In the case when there exists a symplectic embedding (Y^{2n}, dλ_Y) → (X^{2n}, dλ_X), one can further ask the question whether two different such embeddings have images that can be made to coincide after a symplectomorphism of the ambient space (X, dλ_X). It was shown by J. Gutt and M. Usher [17] that this is not necessarily the case in a number of examples, even if such a symplectomorphism exists for the completion of (X, dλ_X) to a Liouville manifold (X̂, dλ̂_X). The same authors call an embedding symplectically knotted (relative to some other embedding) if there exists a symplectomorphism of the completion that takes the image of one embedding to the other, but no such symplectomorphism of the original Liouville domain exists. We now show that, in view of Corollary 1.3, the embedding of a domain can be shown to be symplectically knotted by considering Lagrangian tori contained inside its boundary.

Theorem 1.4. Let Y be any closed symplectic domain that satisfies (…). There exists a symplectic embedding for which the monotone Lagrangian torus ∂Y is mapped to a torus L as in Theorem 1.2, and such that for any ε > 0, one can find a symplectomorphism Φ (…). In view of Corollary 1.3 we conclude that (…) by mere considerations of symplectic action.

Example 1.4. The above method in particular yields a symplectically knotted embedding of the polydisc (…, ω_0) into the unit ball, which was not covered in [17], and which seems to be of a rather different nature than the examples therein. It is unclear to the author whether this embedding remains symplectically knotted inside B^2(1) × B^2(1); if this is the case, it would answer Question 1.9 in the aforementioned paper.

Proof of Theorem 1.4. The key point is that the Hamiltonian isotopy from the Clifford torus to L inside C^2 that was constructed in Section 3 can be taken to fix the hypersurface {|z_1|^2 + 2|z_2|^2 = 1} ⊂ D^4 setwise; this is the hypersurface that contains the "probe" P_2 as well as the Lagrangian torus. To see this, it is convenient to extend the embedding ψ_2 of the probe constructed in the same section to a symplectic embedding Ψ_2 defined using polar coordinates, for some small δ > 0. Note that ω_0 pulls back to the product symplectic form ω_0 + s ds ∧ dϕ on B^2(√((1 − δ^2)/2)) × ((−δ, δ) × S^1), while the restriction Ψ_2|_{s=0} = ψ_2 is the original embedding of the probe P_2 from Section 3.
One can then realise the Hamiltonian isotopy of the torus in the probe by a suitable lift of a Hamiltonian isotopy of (B^2(√((1 − δ^2)/2)), ω_0) that is generated by a compactly supported Hamiltonian, to yield a Hamiltonian isotopy of the product of symplectic manifolds. Finally, it suffices to cut off the Hamiltonian by a suitable smooth bump function.

2. The proof of Theorem 1.1

After applying the positive Liouville flow φ^{4ε}_{λ_0} to both L and φ(B^4(√(2/3))), for some small ε > 0 (recall that φ^{4ε}_{λ_0} is a conformal symplectomorphism), we may in addition assume that the new Lagrangian torus has the symplectic actions Z·e^{4ε}r^2π with r ≥ 1/√3, while being disjoint from the rescaled image e^{2ε}·φ(B^4(√(2/3))) of a symplectic ball of the slightly larger radius e^{2ε}√(2/3). (Here we have made use of the assumption in Theorem 1.1 that the closure of the image of φ is contained inside the open unit ball.) If we manage to construct the sought Hamiltonian isotopy in this case, the general case will then also follow immediately. Indeed, it suffices to rescale the produced Hamiltonian isotopy by the negative-time Liouville flow φ^{−4ε}_{λ_0}. In view of the above, we will in the following restrict attention to the case when r > 1/√3 and φ is a symplectic embedding of a ball of radius strictly greater than √(2/3) into the complement of L.

2.1. A neck-stretching sequence.

Symplectic reduction applied to the boundary ∂D^4 = S^3 → CP^1 produces a compactification of B^4 equal to CP^2, where the latter is equipped with the Fubini-Study symplectic form ω_FS, in which a line has symplectic area equal to ∫_ℓ ω_FS = π. In particular, using ℓ_∞ to denote the line at infinity, we have (B^4, ω_0) = (CP^2 \ ℓ_∞, ω_FS). The main technical ingredient that we will need is neck-stretching around a hypersurface of contact type that can be identified with a small unit normal bundle around L. Neck-stretching first appeared in the work [13] by Y. Eliashberg, A. Givental, and H. Hofer, and was later made precise in the SFT compactness theorem [5] by F. Bourgeois, Y. Eliashberg, H. Hofer, C. Wysocki, and E. Zehnder, and independently in [8] by K. Cieliebak and K. Mohnke. Roughly speaking, neck-stretching is a conformal limit in which the symplectic manifold splits into several pieces, along with its pseudoholomorphic curves. For us it will be crucial to consider the neck-stretching limits of the foliation of pseudoholomorphic lines of CP^2, which persists for arbitrary compatible almost complex structures by Gromov's classical result [15]. We follow the same strategy and work in the same setting as in [12, Section 3]; we direct the reader to that article for most technical points concerning the method. The neck is stretched along a hypersurface of contact type that can be seen as an embedded spherical normal bundle of L. Stretching the neck amounts to choosing a sequence J_τ, τ ≥ 0, of compatible almost complex structures on CP^2. More precisely, all J_τ are fixed outside of the aforementioned concave end near L, and are all equal to the standard integrable complex structure i near the divisor ℓ_∞. The limit compatible almost complex structure on CP^2 \ L will be denoted by J_∞. In the cylindrical part of the concave end, the sequence of complex structures becomes cylindrical on a larger and larger subset of the noncompact end as τ → +∞; in the limit, the almost complex structure J_∞ is cylindrical on the entire end. We refer to [12, Sections 3 and 4] for more details and for the precise choices of almost complex structures. For the analysis that we conduct it is crucial that the cylindrical almost complex structure is chosen with respect to the contact form on UT*L induced by the flat metric on L.
The reason is that, for instance, the nonexistence of contractible geodesics makes the breaking analysis of pseudoholomorphic curves significantly simpler. Recall that SFT compactness theorem implies that a sequence of finite energy J τ -holomorphic curves has a subsequence that converges to a pseudoholomorphic building which consists of several levels of punctured finite-energy pseudoholomorphic curves. These finite energy curves are asymptotic to Reeb chords on U T * L, i.e. lifted geodesics for the flat metric in the case under consideration. We will only be interested in the case of a sequence of J τ -holomorphic degree one curves in CP 2 , which are usually called lines. Recall that there exists a unique line through every two points, or through a point with a given complex tangency, by Gromov's classical result [15]; any lines is moreover automatically an embedding. In the case of a limit of lines the corresponding building will a priori consist of: • a non-empty top level consisting of punctured J ∞ -holomorphic spheres in CP 2 \ L; • a (possibly zero) number of middle levels consisting of punctured pseudoholomorphic spheres in R × U T * L for the cylindrical almost complex structure J cyl ; and • a (possibly empty) bottom level consisting of punctured pseudoholomorphic spheres in T * L for the almost complex structure J std defined in [12,Section 4], such that the spheres moreover can be glued along the punctures to form a continuous map from a sphere into CP 2 of degree one. Of course, it is also possible that the limit just consists of a component in the top level; in this case the sphere has no punctures (it is a compact J ∞ -holomorphic sphere of degree one in the usual sense). By positivity of intersection (see [20] by D. McDuff) one can deduce that any component arising in the limit is a (possibly trivial) branched cover of an embedded punctured sphere. Note that the almost complex structures J std and J cyl used here have the feature that the canonical and consider the J τ -holomorphic lines that pass through pt as well as some second fixed point on L. (By Gromov's result [15] there is always a unique such line.) A sequence of such lines for which τ → +∞ has a convergent subsequence by the SFT compactness theorem [5]. Due to the point constraint on L the limit is a pseudoholomorphic building in the class of a "broken" line that passes through both pt ∈ B 4 \ L as well as some point on the torus L. Using the monotonicity property for the symplectic area of a pseudoholomorphic curve (see [24] by J.-C. Sikorav) applied to the ball ϕ(B 4 pt ( 2/3)) ⊂ ϕ(B 4 (e 2/3)) and while using ( †), we deduce that for the unique top level component A pt ⊂ CP 2 \ L of the limit building that passes through the point pt. (This uniqueness is a consequence of positivity of intersection; again see [20].) From this we are able to conclude that: Lemma 2.1. For a generic point pt ∈ ϕ(B 4 ((e − 1) 2/3)) and a generic perturbation of the almost complex structure J ∞ in a unit normal bundle of L ∪ ∞ , we can assume that the component A pt is • disjoint from ∞ , • of symplectic area π2r 2 , where r < 1/ √ 2, and • of Maslov index four and embedded (thus in particular it is not a branched cover). Proof. Every broken pseudoholomorphic line must consist of a plane that is disjoint from ∞ by the flatness of the metric on L used in the construction of the neck-stretching sequence; see [12,Section 3]. 
Since these punctured spheres inside CP 2 \( ∞ ∪L) are of symplectic area equal to kr 2 π > kπ/3 for some k = 1, 2, 3, . . . , by our assumptions, and since A pt is of symplectic area at least π2/3 by the above argument based upon monotonicity, we conclude that A pt has area 2r 2 π (i.e. k = 2) and that r < 1/ √ 2. Furthermore, there can be no other punctured spheres in the top level that are disjoint from ∞ except A pt . (Here we use the property that a sphere of degree one is of total symplectic area equal to π.) To compute the Maslov index of A pt , we first observe that it can be at most four for a generic almost complex structure (it is sufficient to perturb near L) and positivity of the index; again see [12,Section 3]. Finally, since the point pt was chosen to be generic, we can assume that A pt is of Maslov index at least four, and moreover not a branched cover of a plane of Maslov index two. To that end, recall that the moduli space of planes of Maslov index two evaluates to a three-dimensional chain, and thus so does the multiply covered planes of Maslov index two. (The moduli space of simply covered planes of Maslov index four, on the other hand, evaluates to a five-dimensional chain.) Recall that the plane A pt must be embedded by positivity of intersection [20], since it is not a branched cover. Lemma 2.2. The J ∞ -holomorphic plane A pt ⊂ CP 2 \ ( ∞ ∪ L) of Maslov index four produced by the above lemma has a simply covered asymptotic Reeb orbit. Proof. Consider a sequence of J τ -holomorphic lines which satisfy a generic tangency condition at a generic point pt ∈ A pt as τ → +∞. Using the SFT compactness theorem, we can extract a limit holomorphic building from a convergent subsequence. We first claim that the limit component is smooth at the point where the tangency is taken. Indeed, positivity of intersection implies that in some neighbourhood of the point pt , the underlying simply covered curve must be smooth; see [20]. There is still the possibility that the building contains a branched cover of the component A pt with a branch point precisely at pt (such a curve satisfies any prescribed tangency condition). This scenario can be excluded by a symplectic area argument as in the proof of the previous lemma, using the fact that the symplectic area of a line is equal to π. (The hypothetical building would otherwise contain a component of symplectic area at least 2 · Apt ω FS = 4r 2 π > π.) To conclude, we have shown that we can find a limit that satisfies any generic tangency condition at any generic point in A pt , satisfying the additional property that the underlying point of the curve is smooth. A dimension analysis as in [12,Section 3] then implies that we can find an unbroken J ∞ -holomorphic line ⊂ CP 2 \ L (i.e. a pseudoholomorphic curve without punctures) that satisfies the tangency. (Any component in the top level of a broken line comes in a family of dimension strictly less than four.) Using the existence of the unbroken line that passes through A pt , together with fact that the connecting morphism H 2 (B 4 , L) δ − → H 1 (L) is an isomorphism, positivity of intersection [20] allows us to conclude that A pt • ≥ 2 if the asymptotic is multiply covered. 
Since two curves of degree one have algebraic intersection number [ ∞ ] • [ ∞ ] = 1, we finally arrive at the sought contradiction by yet another positivity of intersection argument The numbers indicate the dimension of the moduli space of the respective component (without any asymptotic constraint in the Bott manifolds of periodic Reeb orbits). The asymptotic orbits are lifts of the geodesics on L in the homology classes ±η ∈ H 1 (L). Without loss of generality we may assume that the bottom component is a cylinder that intersects L cleanly in the corresponding geodesic. 2.3. A condition for Hamiltonian unknottedness. The monotonicity combined with Lemma 2.1 now implies that the broken line produced in the previous subsection consists of precisely two components in its top level: the embedded plane A pt together with an embedded plane A ∞ that passes through ∞ ; both are simply covered and have simply covered asymptotics. Further, by the classification of pseudoholomorphic cylinders in [12,Section 4] implies that the component in the bottom level is a standard cylinder, which roughly speaking is the complexification of the geodesic in class ±η ∈ H 1 (L) to which the planes are asymptotic. Even if the original broken line does not pass through L, we can replace the cylinder in the bottom level with a cylinder that intersects L cleanly precisely in the corresponding geodesic; such a configuration is shown in Figure 1. Since the involved asymptotic orbits are simply covered by Lemma 2.2, the smoothing technique from [12,Section 5] can then be used to produce a smoothing of the above building to an embedded symplectic sphere that intersects L cleanly along the simply covered closed geodesic in class ±η ∈ H 1 (L) to which the planes A pt and A ∞ are asymptotic. In other words, the assumptions of the below proposition is met, from which the existence of the sought Hamiltonian isotopy then follows. Proposition 2.3. Assume that we can find a tame almost complex structure J on CP 2 which is standard near ∞ and for which there is a J-holomorphic line whose intersection with L is a simple closed curve of Maslov index four (computed using the trivialisation of T B 4 ). Then L is Hamiltonian isotopic to a product torus inside B 4 by a Hamiltonian isotopy supported inside the same ball. 2.4. The proof of Proposition 2.3. By the "refined" version of the nearby Lagrangian conjecture for the cotangent bundle (T * T 2 , d(p 1 dθ 1 +p 2 dθ 2 )) of a torus established in [11,Theorem B] it suffices to find a Hamiltonian isotopy of L supported inside B 4 that places the torus inside the subset (CP 2 \ ( ∞ ∪ {z 1 z 2 = 0}), ω FS ) ∼ = (T 2 × U, d(p 1 dθ 1 + p 2 dθ 2 )), and so that the torus moreover becomes homologically essential inside the same neighbourhood. Here z i denote the standard affine coordinates on CP 2 \ ∞ ∼ = C 2 , and is an open convex subset. To do this we will rely on the techniques from [11,Section 4.2], by which it suffices to find two J-holomorphic lines i ⊂ CP 2 , i = 1, 2, which intersects ∞ in two distinct points, and for which L ⊂ CP 2 \ ( ∞ ∪ 1 ∪ 2 ) is homologically essential. Namely, after a deformation near the nodes of ∞ ∪ 1 ∪ 2 that can be performed by hand, there then exists a Hamiltonian isotopy that fixes ∞ setwise and takes the three lines ∞ ∪ {z 1 z 2 = 0} to the three lines in standard position. In order to construct the J-holomorphic lines ∞ we need to again consider a neck-stretching sequence J τ induced by a flat metric on L. 
It will furthermore be crucial that: • J τ = i near ∞ , and • the line whose existence was assumed remains J τ -holomorphic for all τ ≥ 0. In other words, we want to converge to a building as shown in Figure 1 when taking the limit τ → +∞. We use ±η ∈ H 1 (L) to denote the homology class of the (unoriented) closed geodesic on L to which the planes involved in the limit are asymptotic. To ensure that J τ can be made to satisfy the latter bullet point above, we argue as follows. Lemma 2.4. After a Hamiltonian isotopy, can be made to coincide with a "complexified geodesic" (i.e. a J std -holomorphic cylinder explicitly described in [12,Section 4]) for the flat metric on L inside some Weinstein neighbourhood D ≤δ T * L → B 4 of L. Proof. Recall the standard fact that any smooth isotopy of L can be generated by an ambient Hamiltonian isotopy of its Weinstein neighbourhood D ≤δ T * L. In this manner we can thus deform in order to make it intersect L in a closed geodesic that represents [ ∩L] ∈ H 1 (L). The normal form for a symplectic neighbourhood can then readily be used to, first, make tangent to the complexified geodesic along L and, second, to make it coincide with the complexified geodesic in a neighbourhood. The existence of the broken pseudoholomorphic line arising from the limit of has the following strong, and for us crucial, implications. Lemma 2.5. For any neck-stretching sequence J τ as above for which remains pseudoholomorphic, a pseudoholomorphic building that arises as a limit of lines can contain only simply covered components in the bottom and middle levels, together with possibly branched covered cylinders asymptotic to the geodesic in the homology class ±η ∈ H 1 (L). In addition, if one component is a cylinder, then all components are simply covered cylinders with asymptotics to geodesics in the same homology class (not necessarily the class η). For a generic J τ , it follows that the building consists of at most one component in these levels, which moreover is simply covered. Proof. Recall that the bottom and middle levels are foliated by standard cylinders asymptotic to the geodesics in the homology class ±η ∈ H 1 (L); see [12,Section 4]. As a consequence, positivity of intersection [20] together with [ ∞ ]•[ ∞ ] = 1 implies that there can be no nontrivial branched cover of a component different from the aforementioned cylinders. (The nature of the SFT-convergence [5] implies that the J τ -holomorphic lines that converge to the broken configuration have an intersection number with which is strictly greater than one.) For the same reasons, under the assumption that the limit building contains a cylinder in its bottom or middle level, it follows that all components are cylinders in the same class. The consequence under the stronger assumption that J τ is generic can then be shown as follows. An index argument readily implies that there can be at most one cylinder asymptotic to a geodesics in the homology class ±η ∈ H 1 (L) arising in the limit, and that this cylinder moreover is simply covered. Indeed, otherwise one can readily extract a broken plane of Maslov index µ satisfying either µ < 2 or µ > 4 that arises as a sub-building of the limit. Following the construction of K. Cieliebak and K. Mohnke [9], we now consider the limit of lines that satisfy a generic tangency condition at a point pt ∈ L; also c.f. the author's work [10] which also is based upon Cieliebak and Mohnke's technique. 
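As an aside, the reason a point together with a generic complex tangency singles out a line can be seen from an elementary dimension count (our own summary of a standard fact, not a statement taken from [9] or [10]):

\[
\dim_{\mathbb{R}} \{\text{lines in } \mathbb{CP}^2\} = \dim_{\mathbb{R}} (\mathbb{CP}^2)^{\vee} = 4, \qquad
\text{a point constraint cuts } 2, \qquad
\text{a prescribed complex tangent direction cuts } 2\ \text{more},
\]

leaving a zero-dimensional, and by the uniqueness recalled above in fact unique, line through a given point with a given tangency. This is what makes such constraints effective for pinning down the limits of lines in the neck-stretching arguments that follow.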
The limit building then necessarily consists of precisely three components in its top level: the planes A 1 , A 2 , A ∞ ⊂ CP 2 \L of Maslov index two, where A ∞ intersects ∞ precisely once transversely and where A 1 , A 2 ⊂ B 4 \ L. The remaining component lives in the bottom level and is a three-punctured sphere C 0 ⊂ T * L which passes trough pt ∈ L, where it satisfies the prescribed generic tangency condition; see Figure 2 for a schematic picture. There are two possibilities for the three-punctured sphere C 0 ; either it is a twofold branched cover of a cylinder, or it is embedded; see [10] for more details. In this case C 0 is necessarily embedded. Here we have made heavy use of Lemma 2.5 (also in order to conclude that the planes A i themselves are not broken buildings). The goal is then to find a nonbroken pseudoholomorphic line that passes through each of the two planes A 1 , The sought J-holomorphic lines are finally constructed by following an idea due to K. Mohnke [23], in which Lemma 2.5 is the crucial ingredient that we need in order to rule out the appearance of branched covers. Proposition 2.6. One can find an unbroken J ∞ -holomorphic line i ⊂ CP 2 \L, i = 1, 2, that passes through the plane A i (but which is disjoint from the other plane A j with j = i). It follows that A 1 and A 2 are asymptotic to geodesics in different (primitive) homology classes in H 1 (L). Remark 2.1. It can also be shown that the lines exist under the mere assumption that A i are asymptotic to different (primitive) homology classes, but we do not need this fact here. Proof. Take a complex tangency transverse to a point p 1 ∈ A 1 and consider the sequence of J τ -holomorphic lines that satisfy this tangency. Since Lemma 2.5 implies that the limit line is smooth at p 1 , it must still be transverse to A 1 . For a generic p 1 and tangency, the limit line is not broken. Positivity of intersection and [ ∞ ] • [ ∞ ] = 1 implies that the limit line is disjoint from A 2 . An elementary topological consideration then implies that A 1 and A 2 indeed are asymptotic to geodesics in different homology classes. By symmetry we also obtain the sought line 2 . Since the lines i produced by the above proposition can be perturbed inside CP 2 \L through J-holomorphic lines, so that their unique intersection point is contained inside in the complement of ∞ , we have thus finally managed to produce the lines in the sought position. (The linking properties established in the above proposition implies that L is homologically essential in the complement of the lines. 3) × S 1 (1/ √ 3) can be isotoped to L inside the probe P 2 ⊂ D 4 . We can moreover place L inside B 4 ( √ 1 − r) for any r ∈ (0, 1/6). For example, the monotone Clifford torus of symplectic action π/3 is given by L 0 := ψ 2 (S 1 (1/ √ 3) × S 1 ) ⊂ S 3 ( 2/3). Considering a suitable smooth family of simple closed curves that all bound the area π/3 inside B 2 (1/ √ 3) we obtain a Hamiltonian isotopy L t := ψ 2 (γ t × S 1 ) ⊂ P 2 ⊂ (C 2 , ω 0 ) of Lagrangian tori. We will take L 0 to be the Clifford torus while L 1 the torus obtained from the curve γ 1 ⊂ B 2 (1/ √ 2) shown in Figures 3 and 4. Proof. The representative of the Chekanov torus described in [14] differs from L 1 simple by the linear change of coordinates In particular, the torus is clearly Hamiltonian isotopic to the Chekanov torus.
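Returning to the family L_t = ψ_2(γ_t × S^1) constructed above, the claim that it is generated by an ambient Hamiltonian isotopy (and not merely a Lagrangian isotopy) can be checked against the criterion recalled in the introduction; the following bookkeeping is our own sketch, written in the product coordinates from the proof of Theorem 1.4, with V_t denoting the velocity field of the isotopy (notation ours):

\[
\mathrm{Flux}_t\big([\gamma_t \times \{\mathrm{pt}\}]\big) \;=\; \frac{d}{dt}\,\big(\text{area enclosed by } \gamma_t\big) \;=\; 0,
\qquad
\mathrm{Flux}_t\big([\{\mathrm{pt}\}\times S^1]\big) \;=\; \int_{S^1} \big(\omega_0 \oplus s\,ds\wedge d\varphi\big)\big(V_t, \partial_\varphi\big)\, d\varphi \;=\; 0,
\]

since all the curves γ_t enclose the same area π/3 and the isotopy stays inside the level {s = 0} while moving only in the disc factor. Hence the action class [λ_0|_{T L_t}] ∈ H^1(L_t; R) is constant in t, and the Lagrangian isotopy is indeed realised by an ambient Hamiltonian isotopy by the criterion from the introduction.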