Benefits of Human Resource Information Systems in a South African Construction Organisation

This study assessed the impact of a human resource information system (HRIS) within a large construction organisation in South Africa. The study adopted a quantitative research approach and used descriptive statistics to analyse the data gathered. Research respondents consisted of twenty-seven persons from the human resource department of an identified construction company in Gauteng, South Africa. The study revealed that HRIS is a management system in accordance with the legislation governing labour relations in the country, that it provides a clear vision of the business, and that it saves time. It also minimises errors caused by the human factor. This paper offers an overview of HRIS in human resource management and highlights its benefits.

Introduction

The dynamic, project-based, and labour-intensive nature of the construction industry has a major impact on human resource management (HRM) [1]. The type of construction work required may differ significantly from project to project [2], which means that the necessary skills and knowledge may change from one project to another. Hence, the industry relies on labour outsourcing [3], joint ventures, subcontracting, alliances, and the creation of new organisations to deliver projects [4]. All of these make the sector distinct; moreover, they have made relationships between companies and their employees momentary and fluid, unlike in other industrial sectors. According to Loosemore et al. [5], the construction industry pays little attention to HRM issues, making planning for employee requirements a vague exercise which may result in decreased employee productivity, increased labour turnover, and reduced employee morale, thus making it hard to plan for the future [6]. Moreover, the HRM focus has been centralised as a head-office function, and employee information becomes spread between interdependent departments within the organisation, making it increasingly difficult to access [6]. Tetz [7] observed that a wider range of employee information needs to be incorporated to ensure an effective HRM decision-making process. One school of thought noted the increase in the adoption of information technology (IT), with applications that provide storage for the required data without increasing the cost of administering the HRM function [8]. Hence, the use of computers for the administration of employees has steadily increased in the past few years [9-13]. According to Raiden et al. [8], human resource information systems (HRIS) have been developed to aid the HRM function with comprehensive expert or decision-support systems. With rapidly changing human resource requirements and the casual nature of employment within the construction sector, HRIS is a means for organisations to overcome these problems through adequate, reliable, and accurate access to personnel information [6]. HRIS includes the systems and processes that integrate HRM and IT; hence, it has become a vital tool for many organisations. HRIS is an instrument that uses IT to improve the effectiveness of HRM practices and applications, and it is rapidly becoming its own IT field [14; 15]. IT development has improved the manner in which employee information is gathered [16]. IT has enabled the comprehensive adoption of HRIS applications and assisted companies to increase productivity by enhancing the proficiency of HRM [17].
Hence, the study by Panayotopoulou et al. [18] highlighted the importance of HRM investment in IT and of educating and training employees about the benefits of using HRIS. Gill and Johnson [19] defined HRIS as a computerised system comprising a database, or similar, that tracks employees and their employment records. It can thus be seen as an integrated system used to gather, store, and analyse information regarding an organisation's human resources [20]. According to Dessler [21], issues pertaining to HRM have been a major concern to managers. This is because organisational objectives are met through the efforts of employees within the company; hence, it is imperative that all personnel are effectively managed. The study by Lengnick-Hall and Moritz [22] highlighted that HRIS enables HRM personnel to contribute effectively to organisational objectives. This can be achieved by systematising and decentralising routine HRM tasks. HRIS also gives HRM personnel the time needed to direct their attention to other pressing matters within the organisation, such as talent management and leadership development [22]. Previous studies have revealed how the use of HRIS has increased within organisations [23-25]. Furthermore, Khera and Gulati [26] noted that the use of HRIS is not a new concept, but that it keeps evolving with a changing environment. Their study identified the major role of HRIS as human resource planning (HRP), a major element in any company: it helps organisations keep updated information on current employees as well as on future workforce demand and supply. Correspondingly, HRIS applications such as performance management, training and development, compensation management, and corporate communication are being utilised within organisations [27; 28]. Studies by Hendriks (2003), Beadles et al. [15], and Kovach et al. [29] observed the advantages of HRIS from a three-dimensional viewpoint: the advantages for management, for the HR department, and for employees. For management, HRIS enables efficient decision-making; reduces costs and improves budget control; and provides transparency within the organisation, a clear business vision, and transparency in the process of hiring and firing employees. The HRIS advantages for the HR department are: a centralised database with all employee information; an up-to-date database, especially for regionally diversified organisations; paperless work, thus reducing human error; a system in accordance with the legislation; a decrease in, or elimination of, redundancy within the system; assurance that company processes conform to standards; highly reliable data; enhanced employee satisfaction with the HR department because of the department's efficiency; and control over the internal migration of employees, the management of their talents, and the ability to take preventive measures to avoid disputes within the organisation. Meanwhile, the benefits for employees include: the likelihood of independent access to data; time savings; automated tracking of, and reminders about, organisational obligations and events; encouragement for employees to take initiative and make decisions based on the information obtained in the HRIS; data that are always available and readily accessible on the system; and internal web training courses for employee development, thus enhancing staff knowledge, skills, and morale [22].
Additionally, Lengnick-Hall and Moritz [22] mentioned other benefits of implementing HRIS: the creation of HRM policies and programs; facilitating decision-making regarding employee transfers, promotions, nominations, retirement plans, provident funds, leave, and travel allowances; providing information and submitting returns to governmental statutory bodies; gathering suitable data and converting them into information and knowledge for improved decision-making; enhancing competitiveness through the re-evaluation of HRM practices; creating multiple HRM reports that are accurate and up-to-date; and improving employee satisfaction by delivering accurate HRM services promptly. For organisations to have an efficient HR department that will help them compete successfully in global markets and thus obtain competitive advantage, adequate, up-to-date information on their current employees, and on those they wish to employ in the future, should always be conveniently accessible. It is on this basis that this paper assessed the benefits of HRIS in a South African construction organisation.

Research Methodology

The study employed quantitative methods in assessing the benefits of HRIS in a South African construction organisation. The organisation is a large contracting firm situated in Johannesburg, Gauteng. The approach adopted enabled the researcher to examine relationships among variables using descriptive statistics. The study used a structured questionnaire as the instrument of data collection. The questionnaire comprised closed-ended questions on a five-point Likert scale, measuring the views of the participants by means of factors ranging from 'Strongly Disagree' to 'Strongly Agree'. Twenty-seven questionnaires were distributed to the HR department within the company, and a 100% response rate was achieved. Cronbach's alpha was used to measure the internal consistency of the variables and yielded a value of 0.972. There are different reports on the acceptable alpha value, ranging from 0.70 to 0.95 [30-32]; George and Mallery [33] report that any value above 0.7 is acceptable. Therefore, the questionnaire used for this study is reliable. The biographical data were analysed using percentages, while the mean item score (MIS) and standard deviation (SD) were used to rank the identified benefits as rated by the respondents. The factor with the highest MIS was ranked first, followed by the next in descending order. Factors with an MIS of 3.00 and above were considered significant benefits of HRIS on HRM within a South African construction organisation.
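To make the reliability and ranking procedure concrete, the following Python sketch computes Cronbach's alpha and the MIS/SD ranking from a respondents-by-items matrix. It is an illustration only: the DataFrame layout, column names, and random stand-in scores are assumptions, not the study's actual questionnaire data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def rank_benefits(items: pd.DataFrame) -> pd.DataFrame:
    """Rank items by mean item score (MIS); MIS >= 3.00 counts as significant."""
    summary = pd.DataFrame({
        "MIS": items.mean(axis=0),
        "SD": items.std(axis=0, ddof=1),
    }).sort_values("MIS", ascending=False)
    summary["rank"] = summary["MIS"].rank(ascending=False, method="min").astype(int)
    summary["significant"] = summary["MIS"] >= 3.00
    return summary

# Stand-in data: 27 respondents, 16 benefit items, 5-point Likert scale.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(27, 16)),
                  columns=[f"benefit_{i + 1}" for i in range(16)])
print(f"Cronbach's alpha = {cronbach_alpha(df):.3f}")
print(rank_benefits(df).head())
```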
Background Information

The data gathered on the respondents' background information showed that the largest age group was 41-45 years (22.2%) and the smallest was 20-25 years (3.7%). The average work experience among the respondents was 13.5 years. Most respondents held either a matric certificate (37%) or a diploma (37%), with the fewest holding a master's degree (3.7%). Based on the data analysis shown in Table 1, it can be deduced that the participants had sufficient work experience and an adequate HR educational background to give reasonable answers to the questions of this research study.

Table 2 reveals how the participants ranked the benefits of HRIS. An improved management system in accordance with the legislation ranked first (MIS=3.70; SD=1.07); standardisation of business processes ranked second (MIS=3.67; SD=1.00); HRIS as an extensive database for a wide range of employee information ranked third (MIS=3.60; SD=1.05); HRIS compiling employee profiles in compliance with the Employment Equity Act ranked fourth (MIS=3.60; SD=1.12); an increase in overall decision-making efficiency ranked fifth (MIS=3.56; SD=1.05); HRIS being user friendly ranked sixth (MIS=3.52; SD=1.09); providing business transparency ranked seventh (MIS=3.52; SD=1.22); and providing a clear vision of the business ranked eighth (MIS=3.52; SD=1.19). HRIS compiling the organisation's equity plan ranked ninth (MIS=3.48; SD=1.09), and HRIS saving time ranked tenth. Likewise, HRIS compiling a report on the skills development of employees ranked eleventh (MIS=3.44; SD=1.15); elimination of paper forms and automatic tracking of/reminders about business obligations ranked joint twelfth (MIS=3.41; SD=1.12 each). Furthermore, minimising errors caused by the human factor ranked thirteenth (MIS=3.37; SD=1.11), increasing staff morale ranked fourteenth (MIS=3.37; SD=1.24), and, lastly, an insightful process of hiring and firing employees ranked fifteenth (MIS=3.11; SD=1.28). The data from the respondents are congruent with [20], [15], and [29], who noted the three-dimensional advantages of HRIS. The data also concur with [22], which found the benefits of HRIS to include: the creation of HRM policies and programs; facilitating decision-making regarding employee transfers, promotions, nominations, retirement plans, provident funds, leave, and travel allowances; providing information and submitting returns to governmental statutory bodies; gathering suitable data and converting them into information and knowledge for improved decision-making; enhancing competitiveness through the re-evaluation of HRM practices; creating multiple accurate and up-to-date HRM reports; and improving employee satisfaction by delivering accurate HRM services promptly. Moreover, [18] added that HRIS gives HRM personnel the time needed to direct their attention towards more business-critical and strategic tasks, such as leadership development and talent management.

Conclusion

The conclusion is based on the review of literature and the analysed data from the respondents on the benefits of HRIS. The data revealed that an improved management system in accordance with legislation, the standardisation of business processes, HRIS as an extensive database for a wide range of employee information, HRIS compiling employee profiles in compliance with the Employment Equity Act, and an increase in overall decision-making efficiency are the leading benefits. The fact that this study focused only on the HR department of a single construction organisation poses a limitation, so the results cannot be generalised. The study can therefore be extended to other companies for a broader view; likewise, the types of HRIS employed within different organisations may be examined.
Fed-Batch Synthesis of Poly(3-Hydroxybutyrate) and Poly(3-Hydroxybutyrate-co-4-Hydroxybutyrate) from Sucrose and 4-Hydroxybutyrate Precursors by Burkholderia sacchari Strain DSM 17165

Based on direct sucrose conversion, the bacterium Burkholderia sacchari is an excellent producer of the microbial homopolyester poly(3-hydroxybutyrate) (PHB). Restrictions of the strain's wild type in metabolizing structurally related 3-hydroxyvalerate (3HV) precursors towards 3HV-containing polyhydroxyalkanoate (PHA) copolyesters call for alternatives. We demonstrate the highly productive biosynthesis of PHA copolyesters consisting of 3-hydroxybutyrate (3HB) and 4-hydroxybutyrate (4HB) monomers. Controlled bioreactor cultivations were carried out using saccharose from the Brazilian sugarcane industry as the main carbon source, with and without co-feeding of the 4HB-related precursor γ-butyrolactone (GBL). Without GBL co-feeding, the homopolyester PHB was produced at a volumetric productivity of 1.29 g/(L·h), a mass fraction of 0.52 g PHB per g biomass, and a final PHB concentration of 36.5 g/L; the maximum specific growth rate µmax amounted to 0.15 1/h. Adding GBL, we obtained 3HB and 4HB monomers in the polyester at a volumetric productivity of 1.87 g/(L·h), a mass fraction of 0.72 g PHA per g biomass, a final PHA concentration of 53.7 g/L, and a µmax of 0.18 1/h. Thermoanalysis revealed improved material properties of the second polyester in terms of a reduced melting temperature Tm (161 °C vs. 178 °C) and a decreased degree of crystallinity Xc (24% vs. 71%), indicating its enhanced suitability for polymer processing.

Introduction

Polyhydroxyalkanoates (PHA) are a versatile group of microbial biopolyesters with properties mimicking those of petrol-based plastics. A growing number of described bacterial and archaeal prokaryotic species accumulate PHA as refractive granular inclusion bodies in the cell's cytoplasm. PHA granules are surrounded by a complex membrane of proteins and lipids; these functional "carbonosomes" are typically accumulated under conditions of an excess exogenous carbon source [...] which are used in the bioprocesses and the distillation for ethanol recovery. Moreover, distillative ethanol recovery generates a mixture of medium-chain-length alcohols (butanol, pentanol, etc.), which are used by the company for extractive PHA recovery from microbial biomass. This strategy saves the expense of the typically applied, often halogenated extraction solvents, which contribute considerably to the overall PHA production costs [48]. Currently, PHA production at PHBISA is carried out using the well-known production strain Cupriavidus necator, a eubacterial organism lacking the enzymatic activity for sucrose cleavage; hence, hydrolysis of sucrose to its monomeric sugars (glucose and fructose) is a necessary, laborious operation step during upstream processing. For further optimization of this sucrose-based PHA production process, the assessment of alternative production strains appears reasonable. Such new whole-cell biocatalysts should fulfill some requirements: a growth rate and volumetric PHA productivity competitive with the data known for C. necator; direct sucrose conversion without the need for hydrolysis; a temperature optimum in the slightly thermophilic range (in order to save cooling costs, a decisive cost factor under the climatic conditions prevailing in São Paulo); and, last but not least, the ability to produce copolyesters with advanced material properties.
A strain that appears promising in all these criteria is Burkholderia sacchari IPT 101 (DSM 17165), originally isolated from the soil of Brazilian sugarcane fields and investigated by Brämer and colleagues [49]. The strain is reported to accumulate high amounts of PHA inter alia from glucose [39,50], sucrose [49,50], glycerol [39,50], organic acids [51], pentose-rich substrate cocktails mimicking hydrolysates of bagasse [52], and hydrolyzed straw [53]. Aiming at the optimized utilization of lignocellulose hydrolysate, efforts are currently devoted to further improving the strain's substrate conversion ability in terms of xylose uptake [54]. PHA production by this organism and its mutant strains was demonstrated both in mechanically stirred tank bioreactors [52,53,55,56] and in airlift bioreactors [57]. As a drawback, the wild type strain displays an insufficient ability for 3HV formation from structurally related precursors such as propionic acid, in contrast to the pronounced 3HV formation by its mutant strain B. sacchari IPT 189 [54,56,58,59]. Formation of copolyesters consisting of 3HB and 4HB, hence P(3HB-co-4HB), was successfully demonstrated by co-feeding glucose or wheat straw hydrolysate (WSH) and the 4HB-related precursor compound γ-butyrolactone (GBL) [49]. Only recently has the production of copolyesters of 3HB and 3-hydroxyhexanoate (3HHx) by genetically engineered B. sacchari been reported [60]. In the present study, we demonstrate for the first time the feasibility of high-cell-density production of PHB and P(3HB-co-4HB) by B. sacchari based on saccharose from PHBISA and the 4HB-precursor GBL and, for the first time, GBL's saponified form, 4-hydroxybutyrate sodium salt (Na-4HB). Furthermore, addressing the contradictory literature information on the optimum temperature at which this organism thrives [50-54], we adapted the strain to an elevated cultivation temperature of 37 °C according to the requirements at the Brazilian production site [48,61]. Detailed kinetic data under controlled conditions in laboratory bioreactors, and an in-depth comparison of the polymer data of PHB and P(3HB-co-4HB), respectively, are provided.

Strain Maintenance and Adaptation to Elevated Temperature

Burkholderia sacchari DSM 17165 was purchased from DSMZ, Germany, and was grown on solid media plates (medium according to Küng [62] with 10 g/L sucrose as the carbon source and 2 g/L ammonium sulfate as the nitrogen source). At two-week intervals, single colonies were transferred to new plates and incubated at 37 °C. All mineral components of the medium were purchased in p.a. quality (Roth, Graz, Austria), whereas sugarcane sucrose was obtained as unrefined saccharose directly from PHBISA.

Shaking Flask Cultivation to Assess Production of 4HB-Containing PHA

For the preparation of pre-cultures, fresh single colonies from solid media were transferred to 100 mL of a liquid mineral medium containing the following components (g/L): KH2PO4, 9.0; Na2HPO4·2H2O, 3.0; (NH4)2SO4, 2.0; MgSO4·7H2O, 0.2; CaCl2·2H2O, 0.02; NH4Fe(III)citrate, 0.03; SL6 trace element solution, 1.0 (mL/L); sucrose, 15.0. These pre-cultures were incubated at 37 °C under continuous shaking; after 24 h, 5 mL of these pre-cultures were used to inoculate four flasks, each containing 100 mL of the minimal medium. The pH-value was adjusted to 7.0.
After 8 h of incubation at 37 °C, 4HB-precursors were added to the cultures as follows: two of the flasks were supplied with a solution of GBL, and two with a solution of Na-4HB. Both solutions were added in quantities achieving a final precursor (GBL or the 4HB anion, respectively) concentration of 1.5 g/L each. 15 h later, the 4HB precursors were re-fed in the same quantity (1.5 g/L). After 47 h of cultivation, the experiment was stopped and the fermentation broth was analyzed for cell dry mass (CDM), PHA mass fraction in CDM, and PHA composition (fractions of 3HB and 4HB) (analytical methods vide infra).

PHB Production

Single colonies of B. sacchari were used to inoculate 100 mL pre-cultures of the medium according to Küng as described above. These pre-cultures were incubated (37 °C) for 36 h; then, 5 mL each of these pre-cultures were used to inoculate seven shaking flasks, each containing 250 mL of the minimal medium. These cultures were incubated under continuous shaking at 37 °C for 36 h, until high cell densities (8-9 g/L) were reached, and two of them were used to inoculate a Labfors 3 bioreactor (Infors, CH) with an initial working volume of 1.5 L (1.0 L fresh medium with compounds calculated for 1.5 L plus 0.5 L inoculum). At the start of the cultivation, sucrose and (NH4)2SO4 amounted to 15 g/L and 2.5 g/L, respectively. The set point for the dissolved oxygen concentration (DOC) was 40% of air saturation during the growth phase, and 20% under nitrogen-limited conditions; DOC was controlled by automatic adjustment of the stirrer speed and aeration rate. The pH-value was set to 7.0 and controlled automatically by the addition of H2SO4 (10%) to decrease the pH-value, and of ammonia solution (25%) during the growth phase or NaOH (10%) during the accumulation phase to increase it. Hence, during the growth phase, the addition of the nitrogen source was coupled to pH-value correction. The cultivation was carried out at 37 °C. The time points of sugar addition (50% w/w aqueous solution of Brazilian sugarcane saccharose) are indicated in Figure 2 by arrows; the total amount of sucrose solution re-fed amounted to 360 g.

P(3HB-co-4HB) Production

This process was based on inoculum preparation according to the previous experiment. Cultivation in the bioreactor was performed using a minimal medium identical to the process at the company PHBISA (g/L): KH2PO4, 5.0; (NH4)2SO4, 2.5; MgSO4·7H2O, 0.8; NaCl, 1.0; CaCl2·2H2O, 0.02; NH4Fe(III)citrate, 0.05; SL6 trace element solution, 2.5 mL/L; sucrose, 30. The 4HB-precursor GBL was provided by dropwise addition during the accumulation phase (total GBL addition 15.5 g/L). Also in this case, a Labfors 3 bioreactor with an initial working volume of 1.5 L (1.0 L fresh medium with compounds calculated for 1.5 L plus 0.5 L inoculum) was used with the same basic parameters (DOC, T, pH-value) as described for the previous fermentation. The time points of sugar addition are indicated in Figure 7 by arrows; the total amount of sucrose solution re-fed amounted to 207 g.

Cell Dry Mass (CDM) Determination

A gravimetric method was used to determine CDM in the fermentation samples. Five mL of culture broth were centrifuged in pre-weighed glass screw-cap tubes for 10 min at 10 °C and 4000 rpm in a Heraeus Megafuge 1.0 R refrigerated centrifuge (Heraeus, Hanau, Germany). The supernatant was decanted and subsequently used for substrate analysis.
The cell pellets were washed with distilled water, re-centrifuged, frozen, and lyophilized (freeze-dryer Christ Alpha 1-4 B, Martin Christ Gefriertrocknungsanlagen GmbH, Osterode am Harz, Germany) to constant mass. CDM was expressed as the mass difference between the tubes containing cell pellets and the mass of the empty tubes. The determination was done in duplicate. The lyophilized pellets were subsequently used for the determination of intracellular PHA as described in the next paragraph.

Analysis of PHA Content in Biomass and Monomeric PHA Composition

For the analysis of PHA, standards of P(3HB-co-5.0%-3HV) (Biopol™, ICI, London, UK) were used for the determination of the 3HB content; for the determination of 4HB, "self-made" Na-4HB (next paragraph) was used as the reference material. Intracellular PHA in lyophilized biomass samples was transesterified to volatile methyl esters of hydroxyalkanoic acids via Braunegg's acidic methanolysis method [63]. Analyses were carried out with an Agilent Technologies 6850 gas chromatograph (30-m HP5 column, Hewlett-Packard, Palo Alto, CA, USA; Agilent 6850 Series Autosampler). The compounds were detected by a flame ionization detector; the split ratio was 1:10.

Preparation of Na-4HB

Na-4HB was synthesized by manually dropping a defined quantity of GBL into an equimolar aqueous solution of NaOH under continuous stirring and cooling. The obtained solution of Na-4HB was frozen and lyophilized (freeze-dryer Christ Alpha 1-4 B) to obtain Na-4HB as a white powder. This powder was used as a reference material for the analysis and as a co-substrate.

Substrate Analysis

The determination of the carbon sources (sucrose and its hydrolysis products glucose and fructose, Na-4HB, and GBL) was accomplished by HPLC-RI using an Aminex HPX 87H column (thermostated at 75 °C, Biorad, Hercules, CA, USA), an LC-20AD pump, a SIC-20AC autosampler, an RID-10A refractive index detector, and a CTO-20AC column oven. Pure sucrose, glucose, fructose, Na-4HB, and GBL were used as standards for external calibration. Isocratic elution was carried out with 0.005 M H2SO4 at a flow rate of 0.6 mL/min.

Analysis of the Nitrogen Source (NH4+)

The determination of the nitrogen source was done using an ammonium electrode (Orion) with ammonium sulfate solution standards (300-3000 ppm) as described previously [39].

PHA Recovery

After the end of the experiments, the fermentation broth was pasteurized in situ (80 °C, 30 min). Afterwards, the biomass was separated from the liquid supernatant via centrifugation (12,000 g; Sorvall® RC-5B Refrigerated Superspeed centrifuge, DuPont Instruments, Wilmington, DE, USA), frozen, and lyophilized (freeze-dryer Christ Alpha 1-4 B). The dry biomass was degreased by overnight stirring with a 10-fold mass of ethanol; after drying, PHA was extracted from the degreased, dried biomass by continuous overnight stirring in a 25-fold mass of chloroform in light-protected glass vessels. The solution containing the PHA was separated by vacuum-assisted filtration and concentrated by evaporation of the major part of the solvent (Büchi Rotavapor® R-300). This concentrated PHA solution was dropped into permanently stirred, ice-cooled ethanol. Precipitated PHA filaments of high purity were obtained by vacuum-assisted filtration, dried, and subjected to polymer characterization (vide infra).
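As a minimal numerical illustration of the gravimetric CDM determination and the GC-based PHA quantification described above, the short Python sketch below turns tube masses into a CDM concentration and a PHA mass into a mass fraction; all input numbers are invented placeholders, not measured values from this study.

```python
def cdm_g_per_l(tube_with_pellet_g: float, empty_tube_g: float,
                sample_volume_ml: float = 5.0) -> float:
    """CDM concentration from the mass gain of a lyophilized pellet in a pre-weighed tube."""
    return (tube_with_pellet_g - empty_tube_g) / (sample_volume_ml / 1000.0)

def pha_mass_fraction(pha_g: float, cdm_g: float) -> float:
    """PHA mass fraction in CDM (g PHA per g biomass), e.g. from GC calibration."""
    return pha_g / cdm_g

# Duplicate determinations averaged, as in the protocol (placeholder weighings):
weighings = [(10.355, 10.005), (10.349, 10.003)]
cdm = sum(cdm_g_per_l(full, empty) for full, empty in weighings) / len(weighings)
print(f"CDM ≈ {cdm:.1f} g/L")                       # ≈ 69.6 g/L for these numbers
print(f"PHA fraction ≈ {pha_mass_fraction(0.52, 1.00):.2f} g/g")
```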
Molecular Mass Distribution

Gel permeation chromatography (GPC) analysis was carried out on a Waters 600 model (Waters Corporation, Milford, MA, USA) equipped with a Waters 410 differential refractometer and two PLgel 5 µm mixed-C columns (7.8 × 300 mm). The mobile phase, chloroform (CHROMASOLV® for HPLC, amylene-stabilized, Sigma-Aldrich, Milan, Italy), was eluted at a flow rate of 1 mL/min. Monodisperse polystyrene standards were used for calibration (range 500-1,800,000 g/mol). Samples were prepared at a concentration of ca. 0.5% (w/v).

Thermoanalysis

Differential scanning calorimetry (DSC) analysis was performed using a Mettler DSC-822E instrument (Mettler Toledo, Novate Milanese, Italy) under a nitrogen flow rate of 80 mL/min. The analysis was carried out in the range from −20 to 200 °C at a heating and cooling rate of 10 °C/min. Considering the second heating cycles in the thermograms, the glass transition temperature (Tg) was evaluated from the inflection point, while the melting temperature (Tm) and the degree of crystallinity (Xc) were evaluated from the endothermic peak. Xc was determined using a melting enthalpy of 146 J/g for 100% crystalline PHB. Both characterization tests were carried out on five replicates for each kind of sample, and the data are presented as mean ± standard deviation. Statistical differences were analyzed using one-way analysis of variance (ANOVA), and a Tukey test was used for post hoc analysis. A p-value < 0.05 was considered statistically significant.
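The crystallinity estimate described above reduces to a one-line calculation: Xc is the measured melting enthalpy divided by the reference enthalpy of 146 J/g for 100% crystalline PHB. The sketch below illustrates this; the enthalpy inputs are placeholders chosen only to reproduce the order of magnitude of the values reported later.

```python
DELTA_H_100_PHB = 146.0  # J/g, melting enthalpy of fully crystalline PHB

def crystallinity_percent(delta_h_melt_j_g: float,
                          reference_j_g: float = DELTA_H_100_PHB) -> float:
    """Degree of crystallinity Xc (%) from the DSC endothermic peak area."""
    return 100.0 * delta_h_melt_j_g / reference_j_g

print(f"{crystallinity_percent(103.5):.1f} %")  # ~70.9%, the order of the PHB sample
print(f"{crystallinity_percent(35.0):.1f} %")   # ~24.0%, the order of the copolyester
```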
Results

Impact of the 4HB-Precursors GBL and Na-4HB on Poly(3-hydroxybutyrate-co-4-hydroxybutyrate) (P(3HB-co-4HB)) Biosynthesis by Burkholderia sacchari DSM 17165 on Sucrose

Figure 1 illustrates the outcomes of the shaking flask experiment comparing the effect of adding the 4HB-precursors GBL and Na-4HB to B. sacchari cultivated on sucrose as the main carbon source. After 47 h of incubation, the CDM concentration was in the range of 5 g/L in all experimental setups. Final PHA concentrations amounted to 1-2 g/L without significant differences between the individual cultivation setups. Using GBL as the 4HB-related precursor, PHA fractions in CDM were slightly lower than when using Na-4HB, but almost identical to the setups without precursor addition (ca. 30% vs. ca. 35%, respectively). The 4HB fractions in PHA (4HB/PHA) differed depending on the applied precursor; using GBL, this value amounted to 20.8%, while it was only 14.1% when using Na-4HB. As expected, the setups cultivated on sucrose as the sole carbon source (no addition of 4HB-related precursors) resulted in the generation of the PHB homopolyester. It has to be emphasized that it is not clear from the available data whether the generated polyester is definitely a P(3HB-co-4HB) copolyester with a random distribution of the individual building blocks, a blend of homopolymers consisting of 3HB or 4HB, respectively, or a blend of different P(3HB-co-4HB) copolyesters with different 4HB fractions.

Bioprocess

This experiment aimed to test a medium similar to the one used at the industrial company PHBISA for sucrose-based PHA production by C. necator, and to study its influence on the kinetic data and on polymer production (cf. Materials and Methods section). Of major importance, it was intended to considerably increase the concentration of the residual biomass and to achieve higher PHA productivities. This was accomplished using an advanced strategy for adding the nitrogen source (NH4+) during the microbial growth phase by coupling its addition to the correction of the pH-value. Instead of periodically re-feeding (NH4)2SO4 solution to maintain the nitrogen concentration at the desired level, NH4OH was used as the base for pH-value correction and, at the same time, provided the nitrogen needed by the strain to grow. Hence, the addition of the nitrogen source was directly coupled to the excretion of acidic metabolites during the growth phase. After 12.5 h of fermentation, the NH4OH solution as the pH-correction agent was replaced by NaOH solution (20%) in order to provoke nutritional stress by nitrogen limitation, stopping biomass formation and enhancing PHA production; this time point is marked by a full line in Figure 3.
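The feeding strategy described above can be summarized as a simple piece of control logic: every pH correction below the setpoint doses base, and the identity of that base (NH4OH during growth, NaOH after the switch) decides whether nitrogen is co-supplied. The following Python sketch captures only this decision logic; the dosing step size and the example pH readings are invented for illustration, not taken from the actual controller.

```python
def base_dose(t_h: float, ph: float, ph_setpoint: float = 7.0,
              switch_time_h: float = 12.5, step_ml: float = 1.0):
    """Return (reagent, volume_ml) for one pH-control cycle.

    Before switch_time_h, pH correction uses NH4OH and thereby feeds nitrogen
    in proportion to the excretion of acidic metabolites; afterwards NaOH is
    used, so nitrogen depletes and PHA accumulation is triggered.
    """
    if ph >= ph_setpoint:
        return None, 0.0                      # no correction needed
    reagent = "NH4OH 25%" if t_h < switch_time_h else "NaOH 20%"
    return reagent, step_ml

# Hypothetical pH readings over the course of the fermentation:
for t_h, ph in [(5.0, 6.80), (11.0, 6.90), (14.0, 6.85), (20.0, 7.05)]:
    print(t_h, base_dose(t_h, ph))
```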
The depletion of the nitrogen source occurred after 19 h of cultivation. Figure 2 illustrates the time curves of the sugar concentrations (sucrose, glucose, and fructose). It is easily seen that the strain possesses the metabolic ability to rapidly hydrolyze the disaccharide sucrose to its monomeric sugars by excreting an extracellular invertase. Hydrolysis started immediately after inoculation, resulting in about 9 g/L sucrose and 6 g/L of monomers (glucose plus fructose) already present in the first sample taken at t = 0 h. The time points of sucrose addition are marked by arrows in Figure 2. Remarkably, the concentrations of the two monosaccharides do not follow the same trend over time, which might be due to the changing conversion rates of the individual monomers (glucose or fructose, respectively) under the changing environmental (nutritional) conditions during the cultivation. Mathematical modelling of the data to elucidate the underlying metabolic processes should therefore be performed in follow-up experiments by specialists in metabolic flux analysis. A total quantity of 360 g of sucrose solution was added during the process. A total sugar consumption of 29.14 g/(L·h) was observed, and a conversion yield of sugar to CDM of 0.18 g/g (calculated for the entire sugar addition, also encompassing the unutilized sugar in the spent fermentation broth) (Table 1). Limitation of the carbon source was avoided during the entire cultivation period by permanent monitoring (HPLC) and re-feeding (Figure 2).
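The key process parameters quoted here and in Table 1 follow from simple endpoint definitions: volumetric productivity is product formed per volume and time, and the conversion yield is mass formed per mass of substrate supplied. The sketch below applies these definitions to the reported endpoint values; it ignores the volume changes inherent to fed-batch operation, which is why it only approximates the published figures.

```python
def volumetric_productivity(p_end_g_l: float, p_start_g_l: float, t_h: float) -> float:
    """Volumetric productivity in g/(L·h) from endpoint concentrations."""
    return (p_end_g_l - p_start_g_l) / t_h

def conversion_yield(formed_g: float, substrate_g: float) -> float:
    """Yield Y (g/g): mass formed per mass of substrate supplied/consumed."""
    return formed_g / substrate_g

# PHB process endpoints from the text (fed-batch volume effects neglected):
print(volumetric_productivity(36.5, 0.0, 27.5))  # ≈ 1.33 g/(L·h), vs. 1.29 reported
```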
Figure 3 illustrates the time curves of CDM, residual biomass, and PHA during the process. After the onset of nitrogen limitation at 19 h (indicated by a dashed line in Figure 3), the concentration of the residual biomass remained constant (35 g/L), whereas the PHA concentration increased, reaching a maximum of 36.5 g/L at the end of the fermentation. This corresponds to a final CDM concentration of 70 g/L. Since no 4HB-related precursors were supplied, the homopolyester PHB was accumulated. The volumetric productivity for PHB, calculated for the entire process, amounted to 1.29 g/(L·h). For the entire process (t = 0 to 27.5 h), the yield for the conversion of sugars to CDM amounted to 0.18 g/g, whereas during the nitrogen-limited phase of the cultivation, a conversion yield of sugars to PHB of 0.08 g/g was evidenced (Table 1). Figure 4 illustrates the time curves of the specific growth rate µ and the specific product (PHB) formation rate qP for the entire process. The maximum specific growth rate (µmax = 0.41 1/h) was monitored at around 5 h of cultivation. For the entire growth phase (t = 3.75-13 h), µmax was determined as 0.15 1/h by plotting the natural logarithm of the residual biomass concentration vs. time. After the exchange of NH4OH for the NaOH solution and the resulting depletion of the nitrogen source, the specific growth rate decreased tremendously, and a slight decrease of the residual biomass concentration, indicated by the negative values for µ in Figure 4, was observed.
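The µmax determination described above is a log-linear regression: over the exponential phase, ln(residual biomass) grows linearly in time and the slope is µmax. A minimal sketch follows; the sampling times mirror the growth phase quoted in the text (t = 3.75-13 h), but the biomass values are synthetic, generated to mimic µ ≈ 0.15 1/h.

```python
import numpy as np

def mu_max(t_h: np.ndarray, biomass_g_l: np.ndarray) -> float:
    """Slope of ln(residual biomass) vs. time, i.e. µmax in 1/h."""
    slope, _intercept = np.polyfit(t_h, np.log(biomass_g_l), 1)
    return slope

t = np.array([3.75, 6.0, 8.5, 11.0, 13.0])       # sampling times (h)
x = 2.0 * np.exp(0.15 * (t - 3.75))              # synthetic exponential growth data
print(f"µmax ≈ {mu_max(t, x):.2f} 1/h")          # recovers 0.15 1/h
```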
The highest specific PHB production was observed from the onset of the exponential growth phase (t = 5 h) until the start of nitrogen depletion at t = 12 h; a qP of about 0.19 g/(g·h) was measured for the period between the two subsequent samplings at t = 6 and 8.5 h. In later periods of the process, only a slight increase in PHB production, manifested by low values for qP, was observed.

Polymer Characterization: After the end of the experiment, the biomass was separated from the liquid supernatant via centrifugation, frozen, and lyophilized. The dry biomass was degreased with ethanol and the polymer was extracted using chloroform. The weight average molecular mass (Mw) and polydispersity index (Pi = Mw/Mn) values of the extracted homopolymer were determined by gel permeation chromatography (GPC). Mw was 627 ± 13 kDa and Pi was 2.66 ± 0.13 (Table 1). Differential scanning calorimetry (DSC) analysis was carried out to determine the glass transition temperature (Tg), melting temperature (Tm), and crystallinity (Xc) of the PHB samples. The Tg of the produced PHB was 1.0 ± 0.6 °C and the Tm was 177.6 ± 0.6 °C, while Xc was 70.9% ± 0.9%.

Bioprocess

Based on the results from the shaking flask scale reported in this study and on previous findings confirming B. sacchari's potential to produce PHA containing 4HB by co-feeding sucrose and 4HB-related precursor compounds, this material was produced under controlled conditions at the bioreactor scale.
It was aimed at generating a residual biomass concentration of about 20 g/L and a PHA mass fraction in CDM exceeding 60% in order to be competitive with the C. necator-mediated, sucrose-based PHA production process at PHBISA. Figure 5 shows the time curves of CDM, PHA, and residual biomass, whereas Figure 6 illustrates the corresponding time curves of the sugar concentrations; again, arrows mark the time points of sucrose addition. Also in this cultivation, the nitrogen source (NH4+) served as the growth-limiting factor. NH4+ was added continuously during the growth phase as aqueous NH4OH solution (25%) according to the response of the pH-electrode. The maximum specific growth rate measured between two subsequent samplings (t = 6-8 h) amounted to 0.23 1/h; for the entire exponential growth phase (t = 0-10 h), µmax was determined to be 0.18 1/h. About 21 g/L of catalytically active residual biomass was produced until the onset of nitrogen depletion. Figure 7 shows the time curve of the main carbon source sucrose and its hydrolysis products glucose and fructose, which are produced by the extracellular invertase excreted by the organism; again, the rapid hydrolysis of sucrose is evident.
After 10 h of fermentation, the nitrogen source supply was stopped by exchanging NH4OH for NaOH as the pH-value correction agent; now, the second phase of the process (the accumulation phase) was initiated. During this phase, the time curve of the residual biomass was constant, and the increase in CDM until the end of the experiment was due only to the increasing intracellular concentration of PHA (see Figure 5). It is visible that considerable amounts of PHA were produced already during the exponential phase of microbial growth (t = 7-10 h) ("growth-associated product formation"). During the phase of product formation, GBL was added dropwise in order not to reach inhibiting concentration ranges. The actual GBL concentration was always below the detection limit when analyzing the samples; hence, GBL was completely converted by the cells. During the process, a total of 15.5 g/L GBL was added to the culture, distributed over ten pulses of the substrate feed. At the end of the process, final concentrations of CDM and PHA of 75.1 g/L and 53.7 g/L, respectively, were achieved, corresponding to a PHA mass fraction in CDM of 71.5%. The total PHA concentration remained constant from t = 27.5 h. The volumetric productivity of PHA for the entire process and the conversion yield of sugar to CDM were calculated as 1.87 g/(L·h) and 0.38 g/g, respectively, which signifies an enormous enhancement in comparison to the previous experiment (Table 1). Figure 7 illustrates the time curves of the specific growth rate µ, the specific PHA production rate qP, and the specific 4HB production rate for the entire process. Again, starting with nitrogen limitation at about t = 12 h, the values for µ drastically decreased, whereas the specific PHA productivity qP reached its highest values under nitrogen-limited conditions; the maximum value for qP was reached between t = 16.5 and 19 h and amounted to 0.17 g/(g·h). Maximum specific 4HB production occurred between t = 20 and 35 h and was calculated as about 0.003 g/(g·h). Co-feeding of GBL started after 20 h; until this time, the PHB homopolyester was produced (Figures 7 and 8). Starting with the sample taken at t = 23.5 h, 4HB building blocks were detected in the polymer. The achieved 4HB fraction in PHA at the end of the fermentation was determined as 1.6% (mol/mol). The time curve of the polyester composition is illustrated in Figure 8. The essential process results are collected in Table 1 and directly compared with the outcomes of the previous process for PHB production.
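Since 3HB and 4HB are structural isomers, their monomers have identical molar mass, and the molar 4HB fraction reported above equals the mass fraction determined by GC. The sketch below makes this explicit; the monomer masses are placeholders, not chromatogram data from this study.

```python
M_HB = 104.1  # g/mol for the free acids; identical for the 3HB and 4HB isomers

def mol_percent_4hb(mass_3hb: float, mass_4hb: float) -> float:
    """Molar 4HB fraction in PHA; equals the mass fraction for isomeric monomers."""
    n_3hb, n_4hb = mass_3hb / M_HB, mass_4hb / M_HB
    return 100.0 * n_4hb / (n_3hb + n_4hb)

print(f"{mol_percent_4hb(98.4, 1.6):.1f} mol% 4HB")  # -> 1.6 mol%, as reported
```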
Polymer Characterization: After the end of the experiment, the biomass was separated from the liquid supernatant via centrifugation, frozen, and lyophilized. The dry biomass was degreased with ethanol and the polymer was extracted using chloroform. The Mw and Pi values of the extracted copolymer, determined by GPC, were 315 ± 24 kDa and 2.51 ± 0.15, respectively (Table 1). Statistical analysis showed that the Mw of P(3HB-co-4HB) was significantly lower than that of PHB. In addition, analysis of the DSC data showed that P(3HB-co-4HB) had significantly lower Xc (24.0% ± 3.6%) and Tm (160.9 ± 0.8 °C) than PHB, while Tg was in the same range (1.8 ± 0.2 °C). Table 1 compares both the kinetic data and the polymer characterization data of the two bioprocesses at the bioreactor scale.

Bioprocess

The organism B. sacchari DSM 17165 possesses the desired ability to produce 4HB-containing PHA from sucrose plus both investigated 4HB precursors, GBL and Na-4HB. The successful conversion of GBL into 4HB building blocks agrees with previous findings reported by Cesário et al., who used glucose or WSH plus GBL for P(3HB-co-4HB) biosynthesis by this strain. These authors also tested P(3HB-co-4HB) production using 1,4-butanediol as the 4HB-related precursor, revealing the incorporation of 4HB upon GBL supplementation and the strain's inability to utilize 1,4-butanediol. No reports were previously available on the utilization of Na-4HB by this strain. The results reported by Cesário et al. show varying PHA fractions in CDM for the fed-batch cultivation of B. sacchari on glucose/GBL mixtures, depending on the glucose/GBL ratio. Cultivation on pure glucose resulted in 49.2% PHB in CDM; this value decreased with increasing GBL portions in the feed stream to only 7.1% with GBL as the sole carbon source [28]. In our shaking flask setups, the rather modest precursor supplementation of 1.5 g/L significantly impacted neither the CDM production nor the PHA fraction in CDM compared to the precursor-free setups (sucrose as the sole carbon source). Remarkably, the application of GBL's saponified form, Na-4HB, resulted in considerably lower 4HB fractions in PHA than observed when using the annular lactone GBL (14% vs. 21%). As assumed for C. necator [64] and Hydrogenophaga pseudoflava [65], GBL is imported into the cells as an intact lactone ring, which is opened only intracellularly. According to Valentin et al., only a part of the 4HB is converted to 4-hydroxybutyryl-CoA (4HB-CoA) in the cells, whereas the major share of 4HB is converted to succinic acid semialdehyde and succinic acid, which finally undergo conversion to acetyl-CoA, the precursor of 3-hydroxybutyryl-CoA (3HB-CoA). PHA synthase in turn polymerizes 3HB-CoA and 4HB-CoA to P(3HB-co-4HB) [66]. As shown previously [39,52,53,55,57] and confirmed by the present work, nitrogen limitation is a suitable approach to boost PHA biosynthesis by B. sacchari.
Generally, the strategy of constantly supplying the nitrogen source by coupling the NH4OH supply to microbial growth via automatic response to the signal of the pH-electrode successfully and rapidly generated a high concentration of catalytically active biomass at a high specific growth rate. Only about 9 h (PHB production) or 12 h (production of 4HB-containing PHA) were needed to boost the concentration of the residual biomass above 20 g/L. This constitutes significant progress over comparable experiments carried out by Rocha and colleagues, who used the same strategy and achieved a maximum residual biomass of about 16 g/L after 24 h of cultivation using the mutant B. sacchari IPT 189 [55]. The maximum specific growth rates µmax obtained in our experiments (0.15 and 0.18 1/h, respectively) are comparable to the value reported for B. sacchari IPT 189 [55]; a similar value was also obtained by da Cruz Pradella with B. sacchari IPT 189 using a fed-batch feeding regime in an airlift reactor [57]. Reliable µmax values from bioreactor-scale cultivations of our production strain B. sacchari IPT 101 (DSM 17165) are available for xylose-based experiments, where µmax amounted to 0.07-0.21 1/h depending on the initial xylose concentration [52]. Using glucose during the growth phase, Rodriguez-Contreras obtained a µmax of 0.42 1/h [39]. Testing the effect of GBL on the growth of B. sacchari in shaking flask setups, Cesário et al. noticed a continued decrease of µmax from 0.32 to 0.19 1/h with GBL concentrations increasing from 5 to 40 g/L, with 40 g/L glucose as the main carbon source; µmax was unfortunately not reported for the fed-batch cultivations in the bioreactors for the production of PHB and P(3HB-co-4HB) [53]. Furthermore, we demonstrated that the organism can successfully be cultivated at an elevated temperature of 37 °C, which is beneficial for large-scale operation in reactors integrated into the production facilities of the Brazilian sugarcane industry [48,61]. This cultivation temperature of 37 °C is in contrast to previous literature reports for this organism and its close relatives. Generally, 30 °C is reported as the optimum temperature at which most B. sacchari strains thrive efficiently [50]. In a mechanically stirred tank bioreactor, Raposo and colleagues cultivated the same strain for the production of PHB, xylitol, and xylonic acid at a temperature of 32 °C [61], whereas 30-32 °C was used by da Cruz Pradella et al. to culture the mutant strain B. sacchari IPT 189 for PHB biosynthesis in an airlift reactor [57], and by Rocha and colleagues in continuously operated bioreactor cultivations [55]. B. sacchari LFM 101, a strain most likely closely related to our production strain, was only recently tested by Nascimento et al. for PHA production on sucrose, glucose, and glycerol at both 30 and 35 °C. These authors report higher volumetric productivities and PHA fractions in CDM, and unaltered specific growth rates, for cultivations carried out on glucose or sucrose at 35 °C or 30 °C, respectively. When glycerol was used as the carbon source, no biomass formation or significant substrate consumption was observed, probably due to the lack of energy needed to convert the glycerol molecules [50]. As demonstrated by Rodriguez-Contreras et al., who operated a B. sacchari-mediated PHB production process at 37 °C, this problem can be overcome by feeding the cells with energy-rich carbohydrates like glucose or sucrose in the first stage (growth phase) and subsequently switching to glycerol feeding in the second phase (PHA accumulation) [39].
Values of 1.29 g/(L·h) (PHB) and 1.87 g/(L·h) (4HB-containing PHA) were achieved for the volumetric PHA productivity in the two conducted bioreactor experiments. These values are considerably higher than those reported for comparable experiments by Rodriguez-Contreras et al., who reported a volumetric productivity of 0.08 g/(L·h) for a two-stage process based on the co-feeding of B. sacchari with glucose and glycerol [39], and by Cesário and colleagues, who obtained 0.7 g/(L·h) for fed-batch cultures supplied with glucose and GBL, and 0.5 g/(L·h) when using WSH plus GBL for fed-batch P(3HB-co-4HB) production [53]. Here, it has to be emphasized that Cesário et al. [53] used a considerably higher GBL dosage than we did in the study at hand; on the one hand, this tripled the molar fraction of 4HB in PHA in comparison to our results, but, on the other hand, it negatively influenced the overall volumetric PHA productivity, the fundamental economic parameter in PHA production. Regarding the obtained PHA contents in the biomass, our results show final PHA fractions in CDM of 52.4% for PHB and 71.5% for P(3HB-co-4HB), respectively. Cesário and colleagues report 73% PHB in CDM in fed-batch cultures with glucose as the sole carbon source, and 45% P(3HB-co-4HB) in CDM when pulse-feeding 8 g/L GBL in the accumulation phase followed by continuous GBL feeding at a rate of 2.3 g/h. Fed-batch cultures of B. sacchari on WSH plus GBL reported in the same study resulted in a P(3HB-co-4HB) fraction in CDM of 27%. Interestingly, the authors found that in B. sacchari, the conversion yield of GBL towards 4HB can be considerably improved by supplementing acetate or propionate as additional "stimulants" for 4HB biosynthesis [53]. Based on the work carried out by Lee et al. with C. necator, it was known previously that an increased acetyl-CoA pool from acetate conversion or from propionate ketolysis, respectively, inhibits the conversion of 4HB-CoA to acetyl-CoA, thus preserving a high 4HB-CoA pool available for P(3HB-co-4HB) biosynthesis [67]. Using the mutant strain B. sacchari IPT 189, PHA copolyesters consisting of 3HB and 3HV were produced by Rocha et al. by co-feeding sucrose and propionic acid in two-stage bioreactor setups at a volumetric productivity of 1 g/(L·h); in these experiments, the biomass contained a PHA mass fraction of up to 60%, which is higher than in our PHB production process (52.4%), but lower than the value obtained in the present study for P(3HB-co-4HB) production (71.5%) [55]. The two-stage co-feeding experiments with B. sacchari carried out by Rodriguez-Contreras et al. on glucose and glycerol generated a PHA fraction in CDM that hardly exceeded 10% [39]. Using mixtures of xylose and glucose to mimic differently composed lignocellulosic hydrolysates, Raposo and associates produced PHB by fed-batch cultivation of B. sacchari in laboratory bioreactors. Changing the pulse size, feeding rate, and glucose/xylose ratio, the volumetric productivity decreased from 2.7 g/(L·h) (73% PHB in CDM) for pure glucose feeding to 0.07 g/(L·h) (11% PHB in CDM) for xylose as the sole carbon source, indicating the inhibitory effect of this pentose sugar [52].
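To make the two key performance indicators compared above explicit, here is a minimal sketch with illustrative (not measured) input values: the PHA fraction in CDM is the PHA titer divided by the cell dry mass, and the volumetric productivity is the PHA titer divided by the total process time.

# Minimal sketch: the two key performance indicators of a PHA bioprocess.
cdm = 100.0          # hypothetical cell dry mass at harvest, g/L
pha = 70.0           # hypothetical PHA concentration at harvest, g/L
process_time = 37.5  # hypothetical total process time, h

pha_fraction = 100.0 * pha / cdm     # PHA fraction in CDM, wt.-%
residual_biomass = cdm - pha         # catalytically active (non-PHA) biomass, g/L
productivity = pha / process_time    # volumetric productivity, g/(L·h)

print(f"PHA fraction in CDM:     {pha_fraction:.1f} %")
print(f"Residual biomass:        {residual_biomass:.1f} g/L")
print(f"Volumetric productivity: {productivity:.2f} g/(L·h)")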
Polymer Characterization: The obtained data for polymer characterization were in the same range as the results provided by Cesário and colleagues, who extracted PHB and P(3HB-co-4HB) from B. sacchari biomass, cultivated on WSH, via the same method used in the present study. These authors describe a Mw for PHB of 790 kDa, and between 450 and 590 kDa for P(3HB-co-4HB); higher 4HB fractions gradually decreased the Mw values [29]. Our results report a Mw of 627 kDa for PHB, and 315 kDa for P(3HB-co-4HB). The Pi of our sucrose-based polyester samples was higher than the values reported for WSH-based PHA. For PHB, we obtained a Pi of 2.66, which is similar to the value obtained for the P(3HB-co-4HB) sample (2.51). For comparison, the PHB and P(3HB-co-4HB) samples produced by Cesário and colleagues had significantly lower Pi, ranging from 1.4 to 1.7 [29]. Other comparable results were provided by Rosengart et al., who reported a Pi of 2.33 for a B. sacchari-based PHB [68]. A considerably lower Mw (200 kDa) was described by Rodriguez-Contreras et al. for PHB obtained by co-feeding B. sacchari with glucose and glycerol; in this study, a Pi of 2.5 was reported [39]. Here, it should be noted that glycerol feeding generally results in PHA of low molecular mass compared to sugar-based PHA production, as reported elsewhere [37,68]. This is due to the "end-capping effect", a phenomenon describing the termination of in vivo PHA chain propagation in the presence of glycerol and other polyols [69]. The melting temperature Tm reported by Cesário and colleagues amounted to 171.7 °C for PHB, and to 158.8 and 164.3 °C for P(3HB-co-4HB) with 7.6 or 4.6 mol% of 4HB, respectively [29]. In our case, the Tm for PHB amounted to 177.6 °C, whereas that for P(3HB-co-4HB) was only 160.9 °C, which matches well with the cited literature data. Our PHB displayed an Xc of 70.9%, which is slightly higher than that reported for the WSH-based material (64.8%) [29]. A remarkably low Xc of 24.0% was measured for our P(3HB-co-4HB), which is considerably lower than the values reported for P(3HB-co-4HB) based on WSH (between 47.2% and 52.3%) [29]. The PHB produced by Rodriguez-Contreras et al. on glucose plus glycerol displayed an Xc of 72.8% and a Tm of 163.3 °C [39]. Using PHB-rich biomass from a cultivation of B. sacchari on glucose, Rosengart et al. [21] compared the extraction performance of unusual extraction solvents (anisole, phenetole, and cyclohexanone) with the performance of classical chloroform extraction as used in our study and by Cesário and colleagues [53]. As an outcome, the thermal properties (Tm, Tg, Xc) and the molecular mass were fully comparable to the values obtained via chloroform extraction, thus demonstrating the feasibility of switching to sustainable, non-chlorinated alternatives to chloroform [21].
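For readers unfamiliar with these characterization parameters, the following minimal sketch shows how they are derived from GPC and DSC raw data. The reference melting enthalpy of 146 J/g for 100% crystalline PHB is a commonly used literature value; all other numbers below are invented for illustration and are not the measured data.

# Minimal sketch: deriving Pi and Xc from GPC and DSC raw values.
mw = 315.0    # weight-average molecular mass from GPC, kDa (illustrative)
mn = 125.0    # number-average molecular mass from GPC, kDa (illustrative)
pi = mw / mn  # polydispersity index Pi = Mw/Mn, dimensionless
print(f"Pi = {pi:.2f}")  # ~2.5 for these sample values

delta_h_m = 35.0       # measured melting enthalpy from DSC, J/g (illustrative)
delta_h_m_100 = 146.0  # literature melting enthalpy of 100% crystalline PHB, J/g
xc = 100.0 * delta_h_m / delta_h_m_100  # degree of crystallinity Xc, %
print(f"Xc = {xc:.1f} %")  # ~24% for these sample values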
Conclusions

The highest (up to now) reported productivity for B. sacchari-mediated biosynthesis of PHA with building blocks differing from 3HB is described in the present work. Adaptation of the production strain to an elevated temperature optimum of 37 °C makes it a feasible candidate for cost-efficient, on-site PHB and P(3HB-co-4HB) production starting from cane sugar on the industrial scale. In any case, PHA production facilities should also in the future be integrated into the existing production lines for sucrose-based bioethanol production, in order to profit from reduced transportation costs, energetic autarky, and the in-house availability of extraction solvents for PHA recovery from the biomass. Further efforts should be devoted to high-throughput continuous PHA production by this organism in a chemostat ("chemical environment is static") process regime. Similar to results recently obtained with other production strains [70], the application of multistep-continuous production in a bioreactor cascade represents a viable process-engineering tool to further increase volumetric productivity and to control the distribution of 3HB and 4HB monomers in tailor-made copolyesters. Moreover, the highly effective invertase enzyme excreted by this strain deserves in-depth characterization and might be of interest for applications in food technology. Together with PHA production and the other metabolites generated by this strain, such as xylitol or xylonic acid [52], this might open the door to implementing B. sacchari as a versatile platform organism for a biorefinery plant starting from inexpensive feedstocks.
2017-05-15T19:17:51.377Z
2017-04-20T00:00:00.000
{ "year": 2017, "sha1": "94be1af0e56484d9d65bc744099e166c3a6d4328", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2306-5354/4/2/36/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2fd6f981eea325903291d48cd8834eebc341a534", "s2fieldsofstudy": [ "Biology", "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
73659692
pes2o/s2orc
v3-fos-license
THE STRATEGY DEVELOPMENT OF SMALL AND MEDIUM ENTERPRISES (SMEs) OF ACCOMMODATION AND FOOD SERVICE IN PHUKET

This research aims to study the problematic conditions of small and medium accommodation and food service enterprises and to provide a strategy for their development in Phuket, using mixed-methods research divided into two phases. The first phase is a study of the current problems of small and medium accommodation and food service enterprises. The second phase develops a strategy for these enterprises in Phuket using qualitative research. Data were collected through in-depth interviews with experts: representatives from all sectors, and local and national academics, to summarize information about strengths, weaknesses, opportunities and barriers in a SWOT analysis, to find the right strategy, and to stage a forum. Participants were involved in presenting a draft strategy for the development of small and medium enterprises in Phuket's accommodation and food service industry. The data in the first phase, from the questionnaire, were analysed with a statistical program to compute frequencies, percentages, means and standard deviations; the data in phase 2 were analysed by content analysis.

Introduction

Small and medium-sized enterprises (SMEs) play an important role in the economic growth of different countries, for example as a source of employment, by earning money for the country, and by creating good living conditions for people. In ASEAN, SMEs account for 96 percent of the total number of enterprises, contribute about 42 percent of the region's total economic value, account for 25 percent of total exports, and provide 73 percent of all employment in the economy. Considering the proportional statistics in the context of each member country, Indonesia has the highest number (196.9), followed by Thailand (43.94), Singapore (35.15), Brunei (23.99), and Malaysia (22.89) (Wongkhajohnkiat, 2017).

Small and medium enterprises (SMEs) in ASEAN are important for job creation and income generation, and are an important economic pillar of ASEAN. In Thailand, SMEs are a main mechanism for strengthening the country's economic progress by creating income for the country; by inducing employment and tackling poverty, they play a foundational role in economic development. They are also linked to large businesses, manufacturing, trade and the service sectors. The Thai government recognizes the importance of SMEs and supports and develops them continuously, in order to increase their capacity so that they can compete effectively: through enhancing knowledge to improve the efficiency of product development processes and management within the organization, through restructuring supporting mechanisms, and through driving SMEs forward as a system with clear unity.
In terms of access to finance and investment services for SMEs, development of products and services, marketing and investment opportunities have taken place abroad. According to the Office of Small and Medium Enterprises Promotion (OSMEP), in 2014 Thailand had more than 2.74 million SMEs across the country, with SMEs accounting for 99.74 percent of total enterprises; 99.28 percent were small enterprises (SE), while medium enterprises (ME) numbered only 12,778, or 0.47 percent. Therefore, for major economic development, SMEs must be developed to grow from small to medium size and from medium to large. This increases competitiveness, allowing the country's economic potential to grow towards becoming a high-income country. Thailand is the 51st largest country in the world, with an area of 513,115 square kilometers, and has the 20th largest population in the world, of about 67 million people. The main income is from the tourism industry and services. Thailand has world-famous tourist attractions that generate income for the country, as well as exports, which play an important role in economic development, with a GDP of around US$334,026 million. Thailand's economy is the 32nd largest in the world. Thailand is also strategically located as the gateway to the heart of Asia, an important center of today's growing market economy (Siripattarasopon, 2016).

Phuket is a province of Thailand with natural attractions, cultural attractions and other attractions. The province is a major tourist destination of the country and is world famous; each year, many tourists visit. Phuket's importance lies in its tourism potential: it is full of tourism resources and is a source of income for Thailand. Phuket also creates jobs for the people because of tourism growth. As a result, the number of small and medium-sized enterprises, especially food service and accommodation businesses, has increased. In Phuket's economic overview, the service sector, of the accommodation and food service type, accounts for 45 percent of the provincial value, and 31 percent of Phuket's enterprises operate in food and accommodation as part of the service sector. The economy of the Andaman area as a whole has in the past grown at an average rate of about 5.5 percent per year, higher than the national average of 4.6 percent per year and the southern average of 3.3 percent per year. For the accommodation and food service category, the country's average growth rate was 6.6 percent per year, while the southern provinces on the western coast grew similarly at an average of 11.8 percent per year, with annual averages of up to 7 percent. Accommodation and food services account for nearly 50 percent of the provincial value, or 54 percent of food products and services, and 80 percent of the value of products in the accommodation and food service category of the South (Office of Strategy Management Andaman, 2016).
It is found that small and medium accommodation and food service enterprises in Phuket tend to grow further, and the benefits to the various provinces will be reflected in implementation. However, problems arising from the operation of small and medium accommodation and food service enterprises are still present, in areas such as marketing management, finance, accounting, government support, the development of knowledge, and many more. Therefore, this research focuses on studying the problems of small and medium accommodation and food service enterprises in Phuket. It also provides strategies for their development, to strengthen these enterprises and support their future extension; conducting such research is highly recommended.

Management Strategy

Strategy is the way of doing things that is expected to lead to organizational goals (Certo & Peter, 1991). Strategy means a plan, approach, or method that will lead the organization to a result consistent with the mission and overall purpose of the organization. The researchers have identified a development strategy for small and medium accommodation and food service enterprises (SMEs) in Phuket; this is a guideline or development method for SMEs in Phuket to be effective. According to Weihrich (cited in Suwan, 2011), a strategy is derived from analyzing the strengths, weaknesses, opportunities and threats: make a list of the internal environment items that are primary strengths, using the initial 'S'; make a list of the internal environment items that are primary weaknesses, using the initial 'W'; write the external environment items that are primary opportunities, using the initial 'O'; and write the external environment items that are barriers or primary threats, using the initial 'T'. These are then paired in a matrix: strengths-opportunities (SO), strengths-threats (ST), weaknesses-opportunities (WO), and weaknesses-threats (WT); in each field of the matrix, it is decided which strategy will be the most effective.

Small and Medium Enterprises

The Office of Small and Medium Enterprises Promotion (2015a) gave a definition and importance of SMEs in Thailand, as legalized in the ministerial regulations on the number of employees and fixed assets of small and medium enterprises of 2002, authorized under the Small and Medium Enterprise Promotion Act of 2000; the definition of small and medium enterprises for each type of business is shown in Figure 1. The characteristics of small and medium enterprises (SMEs) vary depending on the nature of the business, and include the following features (Manchaiarn, 2014):

1. Getting into business is easy: not much capital or many facilities are needed, and when losses occur, the chances of recovery are much better than for large enterprises.
2. Flexibility in management: operators can control the company thoroughly and closely.
3. Business conduct, whether production, distribution or service, is highly flexible, in line with an era of production and trade that requires rapid response, and with production and trade aimed at a variety of forms of products or services rather than quantitative targets.
4. Specialized skills can be created to achieve efficiency.
Research Method

This research uses a mixed methodology (Mixed Method), divided into a quantitative and a qualitative phase. In the second phase, Step 2 was to organize a forum for public and private stakeholders and Phuket hotels and restaurants to confirm the draft strategy for the development of small and medium accommodation and food service enterprises in Phuket, which was also presented to national experts for the final draft.

Result and Discussion

Small and medium accommodation and food service enterprises (SMEs) in Phuket have high potential in the tourism industry; the destination has been known and popular with tourists around the world for a long time. Considering the small and medium accommodation and food service enterprises registered as juristic persons, it was found that most, about 47.0 percent, were run by the business owners; the number of regular employees was mostly fewer than 10 persons (34.0 percent); and 85.8 percent have a business logo and their own website. Most have fixed assets (including land) of more than 10,000,000 Baht (27.0 percent), and average annual sales of more than 2,000,000 Baht (27.0 percent).

When considering the business management situation, it was found that businesses were most interested in financial planning and accounting (60.80 percent), followed by management (46.8 percent); human resources management is planned before the opening of the business in 70.5 percent of cases. The criteria for selecting employees are based on knowledge (79.5 percent) and personality (75.0 percent), respectively. The factor most used to determine compensation is measured ability (75.0 percent), with monthly payment in 67.1 percent of cases. Every entrepreneur places importance on financial/accounting control (86.6 percent) and on performance or production control (56.3 percent). When considering marketing, most survey the needs or preferences of customers regarding the services of the business (67.78 percent). Most businesses have cost-based pricing methods (55.8 percent), while competitive pricing accounts for 23.5 percent; however, prices are cut sharply for sale, especially in the off season (low season). Most entrepreneurs focus on marketing promotion activities; the most commonly used promotion tool was advertising (72.0 percent), followed by sales staff (38.3 percent). In terms of production, the average number of customers per day was 21-50 (42.1 percent). Most investment comes from three sources, including business owners and partners; most capital comes from the owners themselves (67.0 percent), with loans from financial institutions accounting for 27.5 percent. If entrepreneurs need a loan, most consider the interest rate (79.5 percent) and the cost of borrowing (66.8 percent). Most entrepreneurs have debts of more than 1,000,000 Baht (22.0 percent). As far as the preparation of financial statements is concerned, it was found that 62.0 percent of operators prepare a profit and loss statement, and accounting and financial matters are recorded by financial and accounting staff in 64.8 percent of cases. In terms of liquidity, all businesses have adequate cash reserves, and the daily cash reserve used in the business is mostly 50,000-100,000 Baht (29.8 percent).
The strategic plan for small and medium enterprise development in Phuket hotel and food service businesses is, for the purposes of this research, divided into four strategies and 19 measures, comprising Strategy 1: Strengthen differentiation to attract consumers as a global travel destination; Strategy 2: Promote cluster integration of accommodation and food service SMEs; Strategy 3: Adjust funding measures to support economic growth; and Strategy 4: Promote SME entrepreneurs into the global market.

According to the current study, small and medium accommodation and food service enterprises (SMEs) in Phuket have high potential in the tourism industry and have been known and popular with tourists around the world for a long time. Considering the small and medium accommodation and food service enterprises registered as juristic persons, it was found that most are privately owned with fewer than 10 employees, and annual net profits are no more than 5-10 percent. Most enterprises have a business logo and their own website. Most businesses have fixed assets (including land) of more than 10,000,000 Baht (27.0 percent), and average annual sales of more than 2,000,000 Baht (27.0 percent).

This is in line with Sawangchai, Prasarnkarn, Chanrawang, Somwaythee, and Tasanon (2015), who studied the status of small and medium enterprises (SMEs) in the Andaman provinces (Phuket, Phang Nga, Krabi). The results showed that SMEs in the Andaman triangle provinces were a very diverse (heterogeneous) group composed of various types of business. Most of the businesses were owned by an individual and involved service business. The average number of employees in each business was fewer than 10 people, earning about 1-5 million Baht per year. The average permanent assets of each business were found to be 1-5 million Baht and over, with an average net profit of 5-10 percent per year.

In terms of financial management status, it was found that most of the enterprises had financial plans. Regarding their investment, most of the enterprises used their own money and borrowed some from funding units. In terms of human resource management, the SMEs had human resource plans and followed regulations on the salary payment of their staff. The staff of SMEs were given clear explanations before being assigned to their jobs. Regarding production and services, the SMEs surveyed the customers' demand before production and providing services.

However, the most important internal factor affecting their businesses was found to be the ability of the administrators and the staff, and the most important external factor was found to be the number of competitors in the market. When considering the state of business management, it was found that businesses were interested mostly in financial planning and accounting, and in human resource management planned before the opening of the business. The criteria for selecting employees are based on knowledge and personality. The factor used to determine compensation is the measure of competence, with monthly payments, and every entrepreneur exercises control and places importance on finance/accounting.
When considering marketing, most survey the needs or preferences of customers regarding the services of the business. Most businesses have cost-based, competitive pricing methods, and prices are cut sharply for sale, especially in the off season (low season). When it comes to marketing promotion, most entrepreneurs focus on promotion activities, and the most commonly used promotional tool is advertising.

Considering the production aspect, the number of clients is 21-50 per day on average. Most investment comes from three sources, including business owners and partners; most capital comes from the owners themselves, together with loans from financial institutions for investment. If entrepreneurs need a loan, it is considered in terms of the interest rate and the cost of borrowing. Most entrepreneurs have liabilities of more than 1,000,000 Baht. As for the preparation of financial statements, it was found that all operators prepare a profit and loss statement through financial and accounting officers who act as accountants. In terms of liquidity, all businesses have adequate cash reserves, and the daily cash reserve used in the business is mostly 50,000-100,000 Baht.

This accords with Rangsungnern (2013), whose study indicated that today small businesses face the following challenges: 1) market management issues, because the small size of the market in Nongkhai limits the sustainability of the business; nevertheless, businesses have tried to plan annual sales and have improved sales plans to fit the situation; 2) financial management, which has been handled through financial plans, though problems remain with debt to financial institutions regarding interest, repayment periods and loan approval; 3) personnel management, where businesses have personnel management plans and recruit employees from both internal and external sources, but lack training to enhance their workforce and consideration of promotion based on employee competence; and 4) manufacturing and/or service management, where businesses try to produce products and deliver services on time.

The small and medium enterprise development strategy for Phuket accommodation and food service businesses is a strategic development for SMEs in hospitality and food services in Phuket. It is in line with Certo and Peter (1991), for whom strategy is the way of doing things that is expected to lead to organizational objectives, and consistent with Weihrich (cited in Suwan, 2011), for whom a strategy is derived from analyzing strengths, weaknesses, opportunities and threats: make a list of key internal strengths, using the initial 'S'; make a list of key internal weaknesses, using 'W'; write the list of key external opportunities, using 'O'; and write the list of external environment items that are barriers or key external threats, using 'T'. These are written into a matrix and paired: main strengths-main opportunities (SO), main strengths-main obstacles (ST), main weaknesses-main opportunities (WO), and main weaknesses-main obstacles (WT).

Based on all the strategies in each square of the matrix, the most effective strategies were chosen as the core strategies with supporting measures. Strategy 1: "Strengthen the distinction to attract consumers to serve as a global tourist destination." Strategy 2: "Encourage cluster integration of SME entrepreneurship in accommodation and food services."
This is consistent with Siriyong's (2012) study on the models and strategies of small and medium-sized businesses in Thailand for enhancing their capacity and competitiveness in a sustainable manner. The study indicated that the four SME finalist businesses of 2011 were formally engaged in wholesale and retail trade, production and services, and implemented strategies comprising cost leadership, differentiation, market development, product development, focused pricing, maintaining the customer base, specific services, fast response, networking or partnering with long-standing business allies, increasing distribution channels, additional advertising, and penetrating the furniture market. The problems and obstacles found in these businesses were floods, labor shortages, the impact of the opening of the ASEAN Community, product shortages at certain times, and a lack of government promotion.

Strategy 3: "Adjust the sources of funds in the system to support economic growth." According to Srisom (2010), funding problems may hinder the progress of a business as well as weaken the entrepreneurial spirit, and can arise from all directions: from the external environment that affects the business, and from the potential limits of the business itself. Small and medium-sized businesses in Thailand experience many problems, the most frequent being a shortage of funds. Small and medium-sized enterprises often have problems borrowing money from financial institutions to invest, expand their investment, or obtain working capital, because they lack systematic accounting and loan collateral; they have to rely on informal loans and pay high interest rates.

Strategy 4: "Promote SME entrepreneurs into the international market." SME entrepreneurship strategies for going abroad focus on the strategy of entering the international market, marketing program development, and the concept of entering the international market. Given the development of the Thai economy and the dynamics of the global economy, SMEs in Thailand have had to adapt themselves to compete in the global market by having strategies for entering the international market efficiently. The focus is on strategies to keep pace with the global market, drawing on Porter's competitive strategies, which enable Thai SMEs to compete in the global market.

Conclusion

The SMEs of accommodation and food services in Phuket have high potential in the tourism industry; the destination has been known and popular with tourists around the world for a long time.

Recommendation

Local organizations should play a role in laying down rules and regulating the balance between resource users and local people, for example at tourist attractions. The government should address domestic political problems and security issues to give confidence to both investors and tourists. Government support should be seriously integrated, with clear data and policy and easy access. Cultural heritage should be promoted so that knowledge from the older generation is passed on and creates value for businesses in the community and for SMEs.
Limitation

This research was limited by the data obtained from the questionnaire: general managers accounted for 54.3 percent of respondents, and only 29.0 percent of those approached responded to the survey. It is nevertheless believed that the research can serve as a strategic proposal for the purpose of the research. In organizing the forum, stakeholder comments were still lacking in interest from the management of each agency; some agencies merely sent delegates without the authority to make policy decisions, and the resulting policy criticism was not clear either.

Figure 1. Definition of small and medium enterprises (SMEs) in Thailand. Source: Office of Small and Medium Enterprises Promotion (2015a).

The research, combining quantitative and qualitative approaches, is divided into two phases. Phase 2: Strategies for small and medium enterprise development, accommodation and food services in Phuket. Step 1: Strategic analysis. 1. The research population consists of experts: representatives from all sectors and local and national academics, comprising 10 interviewees (representatives from the Office of the National Economic and Social Development Board, the Office of Small and Medium Enterprises Promotion (OSMEP), the Department of Industrial Promotion, Ministry of Industry, the Department of Business Development, Ministry of Commerce, Chulalongkorn University, the University of the Thai Chamber of Commerce, Khonkaen University, Phuket hotel and restaurant owners, the Southern Thai Hotels Association, and the Phuket Tourism Association). 2. The tools used in the study were in-depth interviews and SWOT analysis to identify strengths, weaknesses, opportunities and threats. 3. The research instrument was a structured interview; tool quality was improved by having experts inspect the tool. 4. Data collection: the researcher contacted the informants and sent a letter requesting a personal interview, with a lead letter. 5. Checking and analyzing strategies: the researcher investigated the accuracy and reliability of the data by triangulation across data sources, time and persons. Strategic analysis was performed by SWOT analysis and matrix development following Weihrich (1982) to derive the strategy for small and medium enterprise development, accommodation and food services in Phuket.
2018-12-29T14:18:18.229Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "7bca7434906d15a44349fdcb528ad9c3ff866daf", "oa_license": "CCBY", "oa_url": "http://jurnalmanajemen.petra.ac.id/index.php/man/article/download/21106/19497", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "7bca7434906d15a44349fdcb528ad9c3ff866daf", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
219477831
pes2o/s2orc
v3-fos-license
On the Boundaries of Digital Markets

The purpose of this chapter is to investigate a fundamental, but highly opaque, concept that is integral to current policy debates about European audiovisual industries. The concept in question is the "digital market." This chapter asks: What exactly is a digital market? Just as importantly, where is a digital market? How are digital markets defined and bounded? What does it mean to describe a market as digital, as European, or as both? Can a digital market have a defined geography? I have written this chapter with a specific policy context in mind—namely, the European Union's Digital Single Market (DSM) strategy. The DSM is a major policy intervention that raises fascinating questions for consumers, audiences, cultural producers, technology companies and regulators inside and outside Europe. The issues in play within the DSM are enormously varied, ranging from telecommunications standards to geoblocking. However, it is worth stressing that virtually all the issues being debated relate in some way to a central problematic: the boundaries of digital markets. Hence, it is an appropriate time to critically reflect on these three concepts (boundaries, the digital and markets) and the relations between them.

Most of the literature on market boundaries is produced by economists and lawyers for the purposes of regulatory analysis. In the context of antitrust and competition law, market definition is a technical exercise to delineate what regulators call the "relevant market" and then to assess market shares, power and competition within that market (Evans 2012). A vast law and economics literature attends to the nuances of this topic, with the effect that market definition techniques have acquired a quasi-scientific character through the use of formalist quantitative methods. However, the act of defining markets is always partly discursive. Market boundaries are inevitably malleable and open to contestation, because "the rigid and linear market boundaries envisioned by the law simply do not have parallels in the real world: border zones are invariably wide, and blurry" (Christophers 2013: 129).

Fortunately, there are other ways to approach this problem. Scholars in the social sciences have been thinking about, and around, the general problem of market boundaries for some time. Recent work in sociology, geography and political theory provides useful concepts for understanding how market boundaries are drawn—politically, discursively and institutionally (Aspers 2011; Christophers 2013; Keat 1999). There is also a tradition of critical inquiry into the geography of markets within various strands of communication research (de Sola Pool 1990; Morley and Robins 1995; Berland 2009). Drawing on the analytical frameworks established in these fields can deepen our understanding of media policy debates, providing new ways to think about old problems. This chapter offers a selective summary of relevant ideas from these fields. By putting these ideas into dialogue with more familiar ways of thinking about markets, such as those inherited from economic and legal analysis, I hope to provide a conceptual frame through which we can see the DSM differently. My intended audience here is media and culture scholars and other readers who are interested not only in the DSM itself but also in what it might mean for debates about the geography of cultural markets.
It should be emphasized that my aim is not to get into the weeds of policy detail but rather to interrogate some of the metaphors and concepts used within policy.

What Is a Market?

The market is a spatial metaphor, in the sense that it evokes a space of exchange whose boundaries are defined in such a way as to produce an inside and an outside. Hence, the question of market boundaries becomes essential. "To distinguish between markets, we must look at the boundaries of markets", writes the economic sociologist Patrik Aspers. "When does a market begin, and when does it end?" (Aspers 2011: 100). Defining markets is always a discursive exercise, because it involves asking questions about demand, value, comparability and commensurability. In other words, it involves asking questions about culture. Legal and statistical techniques for defining markets are still, by and large, based on an assumption of substitution (i.e., that a consumer will switch from one brand of sugar to another brand if the price rises beyond a certain amount). If substitution occurs, then it is assumed that a market of some kind exists, and the boundary is drawn accordingly. While this principle may work well enough for raw commodities, cultural goods—including audiovisual content—are not straightforwardly substitutable because they are subject to the vagaries of individual and collective taste, language, identity and comprehensibility. Their value is always uncertain. From the perspective of economics, this is "imperfect substitution"; for those in the arts and humanities, it is more likely to be understood as the fundamental irreducibility of human creativity.

Audiovisual markets are paradigmatic cultural markets in the sense that they involve trade in images, affects and ideas, as well as physical products. It is often unclear where the audiovisual market begins; what dynamics of competition and substitution are at play within its boundaries; and how it interfaces with other kinds of markets, including leisure, entertainment and information markets. Is the consumer lining up at the cinema box office "in the market for" a movie only, or would they be just as happy watching something on television, going to a restaurant or reading a book? How substitutable are these experiences?

Economic sociology provides some useful tools for thinking through these problems (Aspers 2011; Callon 1998). To summarize one strand of a complex literature, we can simply say that markets are a way of seeing and structuring economic activity. Markets exist to the extent that they are rendered visible through discourse and measurement, or formalized through regulation. However, transactions that occur within and define a given market are not always understood through the market frame. Many filmmakers simply do not accept the argument that they are operating within a market, preferring instead to see themselves as driven by creative, social and critical motivations. Even though markets become a necessary consideration at the point of distribution, the language of "the market" is not part of their professional identity, or exists as something that they define their work against. This is similar to what the political theorist Keat (2000) writes about in his work on market boundaries, which is concerned with delineating what he describes as the "limits of the market."
For Keat, the critical study of market boundaries is a philosophical project addressing the following question: "where, and on what grounds, are the lines to be drawn between those social practices that properly belong to the market domain, and those that do not?" (Keat 2000: 70). This point about the limited acceptance of market discourse within audiovisual industry practice applies even to professions that are explicitly commercial. For example, many YouTube influencers would struggle to respond if asked about the boundaries of their market. However, they would certainly have a clear idea of who their audience is, what those individuals are looking for, how much they will pay, what forms such payment might take (attention, subscription, advertising, direct purchase) and where else they may go for similar content. In other words, "market" is a way of narrating a network of socioeconomic relations, rather than a thing that exists in its own right. Once defined or enumerated, markets also create their own social realities, hence the emphasis on the performativity of markets in recent economic sociology (Callon 1998).

Market Space and Social Space

A vital insight from the social science of markets is to always consider the messiness and friction built into markets. In other words, we must think of markets as social spaces constituted by history, politics and culture rather than abstract spaces constituted by exchange (Lefebvre 2000; Shields 2013: 74). This involves, among other things, paying attention to how official market boundaries come into contact with—and inevitably conflict with—consumer preferences, practices, activities and institutions that may or may not respect those boundaries. The effect of this thought experiment is to pluralize the idea of "the market" by acknowledging that there are, in fact, many different kinds of markets. More to the point, each market comprises a palimpsest of layers that interact in complex ways. In the case of audiovisual markets, these layers include:

• The boundaries of territorial markets in copyright licensing, which usually but do not always align with the borders of nation states;
• The preferred market boundaries of local distributors as expressed through their established sales practices (i.e., territorially anchored industry practice);
• The geography of taste and demand (what people want to watch and where), which never aligns neatly with national borders or territorial market boundaries;
• The linguistic geography of language competency and preference, and the availability of subtitles and dubbing;
• The socioeconomic geography of consumers' ability and willingness to pay for such content;
• The differential pricing and availability of that content in both formal and informal markets, and so on.

In other words, the market as a regulatory construct is only one layer in the larger assemblage of preferences, transactions and dispositions that comprise this market. The official market, then, is one layer in a stack. The next logical step in this line of analysis is to consider the relations between layers. The most appropriate way to understand these relations, in my view, would be to describe them as disjunctive, following Arjun Appadurai's use of the term. Appadurai, in his canonical essay "Disjuncture and Difference in the Global Cultural Economy" (Appadurai 1996), famously envisaged the cultural economy as a series of "scapes" encompassing flows of finance, technology, media, migration and ideology.
Appadurai's basic point—often overlooked in subsequent commentary—is that the relations between these scapes are disjunctive: Flows in each "scape" are connected but not determinant. Each domain "is subject to its own constraints and incentives […] at the same time as each acts as a constraint and a parameter for movements in the others" (Appadurai 1996: 35). Hence, there is a degree of autonomy between ideas and technology, for example, or between human migration and technological development. Of course, there are also structural connections. This notion of disjuncture is a useful starting point for understanding the relations between audiovisual market layers.

Extending this idea, we could say two things. First, relations between market layers are neither aligned nor coordinated, and often function out of kilter with one another. The geography of European languages, for example, often bears little resemblance to the territorial market boundaries established in copyright law, because linguistic communities are often spread across multiple national borders, and because many European nations contain multiple linguistic communities within their borders. Hence, the "language" layer and the "copyright territory" layer are not always aligned. Language is shaped by the contingency of history rather than the formal geometry of market boundaries. A second observation about market layers is that conflict tends to arise where there is an obvious disparity between two or more layers. This is the case, for example, when there is pent-up demand for a particular work but no formal availability to satisfy that demand (i.e., when the consumer demand layer is out of sync with the copyright licensing and/or availability layers and/or the socioeconomic layer). The result in this case would be market failure or piracy. Too much demand and too little formal availability means that activity tends to spill over into informal markets (Lobato and Thomas 2015).

The analytical approach I have suggested allows us to see official market boundaries as one layer in a larger structure. This, I would argue, is an appropriate starting point for thinking about audiovisual markets. It is especially helpful for understanding controversies surrounding the DSM—a grand policy exercise that brings into focus the tension between market layers.

The Digital Single Market and Its Discontents

Let us now consider what this multi-layered approach to markets can reveal about the DSM. For those unfamiliar with the DSM, I begin with a summary of its key components and some well-known areas of controversy relating specifically to the audiovisual sector. By necessity, this is a brief overview of a complex topic. More detailed analysis can be found in subsequent chapters in this book and in other expert studies (Bondebjerg et al. 2015; Gomez Herrera and Martens 2018; Ibrus 2016; Trimble 2019).

The DSM is a set of interlocked EU reforms designed to reduce barriers to trade in digital services and goods within the EU. It involves a raft of changes to EU law, standards and regulations across several policy areas, including copyright, consumer protection, telecommunications, e-commerce, media regulation and data policy. Many of the measures have already been passed by the European Parliament, though others, at the time of writing, are yet to be legislated.
A key objective of the overall DSM package is to create seamless access to European digital services across the continent by reducing "unjustified" geoblocking and eliminating other technological barriers. A discussion of EU-wide copyright licensing was also part of the initial DSM agenda, but this was scaled back in the face of strong objections from filmmakers, producers and audiovisual distributors. The elements of the DSM that are most relevant for our analysis include new rules mandating the portability of digital media content for users moving between EU countries (so that Europeans can take their online subscription services with them when they travel within Europe) and measures designed to reduce geoblocking in e-commerce. Another important change is the 2018 revision of the Audiovisual Media Services Directive (AVMSD), which introduces a 30% European content quota and rules about the promotion and recommendation of this content within video streaming services.

The DSM has been highly controversial, and it is helpful to reflect on why this is so. One reason is that the DSM embodies a number of connected, but philosophically incompatible, principles. These principles can be summarized as follows:

• A rights-based principle of free movement and access to European culture, transposed into the realm of digital goods and services;
• A vision of economic liberalization, competition and innovation;
• A contemporary discourse of Internet freedom and a "borderless Internet" (especially evident in the comments of Andrus Ansip, EU Commissioner for the DSM);
• An efficiency argument about solving market failures and blockages and improving consumer welfare;
• A protectionist policy agenda (visible especially within the revised AVMSD) focused on supporting European producers through legislated protections and quotas;
• A commitment to cultural diversity in line with EU principles.

Clearly, these goals exist in a somewhat tense relationship. For example, the protectionism of content quotas pulls, ideologically, in a different direction from the DSM initiatives that prioritize liberalization. A second reason why the DSM has been divisive, especially among European filmmakers, is because the DSM is not strictly audiovisual policy. Instead, it is economic policy that encompasses audiovisual markets alongside other kinds of markets—including telecommunications and e-commerce. In this sense, the DSM's controversial nature can be partly attributed to the fact that it appears, at least discursively, as an attempt to redraw market boundaries around film and television and to reframe these media as part of a larger, non-medium-specific "digital" market. This was more a rhetorical move than a substantive policy shift. Despite the rhetoric of a single digital market, the DSM is really a collection of distinct initiatives designed to boost pan-EU trade within existing markets. Nonetheless, the rhetorical force of DSM—as a catchphrase—has been powerful, and somewhat counterproductive in terms of managing expectations for industry-specific solutions. Part of the problem here is the term "digital," itself a highly contentious category. Trade in digital goods frequently expands the geographic boundaries of the market, because it lowers transaction costs (Elzinga 1981). To add further complexity, digital markets also blur boundaries between formally distinct media such as cinema and television (media convergence), which means that product categories within a market may shift.
In other words, the digital both pushes the geographic boundaries of the market outward while redrawing boundaries on the inside of the market. However, the digital is always bounded and territorialized through social practice, language and taste. It can only exist on the rough terrain of culture. Hence, we return, once more, to the disjuncture between market boundaries and social space.

The Geoblocking Problem

Let us turn to a specific issue within the DSM—geoblocking—that neatly illustrates this problem of disjuncture. Notwithstanding the many inherent tensions within the DSM project, it was the European Commission's proposals on geoblocking that generated the strongest opposition on the part of AV industry stakeholders and their representatives. Geoblocking is a technique of digital rights management in which IP geolocation is used as the basis for granting or restricting access to digital content, in order to conform to licensing agreements organized on a territorial/national basis. In this sense, geoblocking is the technical solution for extending the principle of territoriality online. Many consumers, of course, see geoblocking as a cause of great frustration. The European Commission's stated objective with the DSM was to reduce unjustified geoblocking, and by extension, to reduce digital market segmentation, geographic price differentiation and uneven availability of services within the EU. However, the implications of an end to geoblocking for long-established business models within the EU audiovisual sector were underestimated by EU policymakers. Anna Herold of the European Commission, writing in a personal capacity, has described this tension over geoblocking as "a fundamental controversy" that the DSM "stumbled upon" (Herold 2018: 255).

Zahrádka (2018) has studied in detail how geoblocking became such a controversy within the DSM discussions, dividing stakeholders along different lines. On the one hand, consumer groups and Internet advocates strongly supported the aspects of the DSM that promoted cross-border access. On the other hand, European filmmakers, producers and distributors saw in the DSM's anti-geoblocking agenda a weakening of territorial copyright and pointed to unforeseen consequences. A key concern of these objectors was the destabilization of the presale financing model, which is premised on territorial exclusivity and market segmentation. For example, John McVay of the Producers Alliance for Cinema and Television (UK) warned that "Any intervention that undermines the ability to license on an exclusive territorial basis will lead to less investment in new productions and reduce the quality and range of content available to consumers" (Roxborough 2015). Other objectors were concerned that the DSM might reorganize the market around the needs of Netflix, Amazon, Google and Apple, who were arguably in the best position to benefit from the increased efficiencies of a single market. Indeed, several Silicon Valley companies expressed strong approval of the DSM. Google's Chairman Eric Schmidt stated that "To succeed globally, Europe needs a single digital market" (Mizroch and Jervell 2015), while Netflix CEO Reed Hastings went further, noting that "We really want the world to be a single market" (Stupp 2015). Comments such as these did not allay suspicions among smaller European players that they may have little to gain from redrawing market boundaries. Critics of the DSM also warned of harmful knock-on effects that would result from any weakening of territoriality.
These included everything from the crippling of the theatrical exhibition sector and the financial ruin of smaller distributors who could not afford to license films on an EU-wide basis to the equalization of pricing across the EU (so that Romanians would have to pay the same as Germans for their digital movie rentals) and even the rise of dubbing in subtitle nations (because dubbing could be used as a de facto market separation measure) (Trimble 2019). As an example, consider the following remarks from Jelmer Hofkamp, secretary of the International Federation of Film Distributors' Associations (FIAD), offered in 2015 when the DSM territoriality debate was at its peak:

If we look at the kind of tools they [the European Commission] want to use to create this DSM with a strong focus on availability, that is where they [the EC] go wrong because availability is not the same thing as building bigger audiences or having better circulation of European works. […] Is it the principle of availability of the single market they want, or is it actually a flourishing European production market and more circulation within the EU of European product? (Macnab 2015)

Hofkamp's point here is that the stated aims of the DSM—including pan-European availability and circulation of EU works, and a flourishing European screen production culture—may be incompatible. In other words, the dream of unfettered digital availability cannot always be reconciled with the political economy of film and television production. This was not a consensus position, as those on the other side hotly disputed the claim. Pirate Party MEP Julia Reda argued that "Europe's 'natural' cultural and linguistic barriers are much more effective and unintrusive in achieving some market segmentation than discriminating viewers based on the country they are currently in." Reda added that "Nobody's going to stop going to the cinema in Portugal because a film is already viewable online on an Estonian website" (Reda 2016). As we can see from these various interventions, the key issue here for both sides was the relationship between the ideal boundaries of the market and the actual functioning of those markets in practice. Rightsholders, producers, audiovisual industry associations and some filmmakers argued that the dream of a pan-European audiovisual space was blind to the realpolitik of production, including the complex relationship between distribution guarantees and production financing. Meanwhile, consumer and Internet advocates argued that the existing system of territoriality simply could not be reconciled with the everyday practices of consumers, nor with the circulatory logics of data in the Internet age.

Boundary Trouble

How do we make sense of all this complex position-taking? One analytical possibility is to see the whole DSM/geoblocking affair as symptomatic of a wider conflict between the regulatory, industrial, social and cultural layers of the market, as defined earlier. For European filmmakers, producers and distributors, the vision of borderless consumption within the EU was problematic because it clashed with the existing institutional arrangements in the audiovisual sector. In other words, it was a conflict between the existing social space of the market and its new (proposed) abstract form. Internet and consumer advocates saw the situation through a different lens, but using similar categories.
They argued that the artificial constraints of an outdated copyright system were being imposed on the "natural" geography of consumer demand. In both cases, the core conflict involved a perceived clash between real-world markets and official market boundaries.

A second vector of conflict was whether market boundaries in the DSM would reflect consumers' needs or rightsholders' needs. Markets designed for rightsholders will logically have different boundaries from markets designed for consumers. Copyright is a system designed for rightsholders, in the sense that its incentives rely on protections and rights granted on a territorial basis, following the publishing industries' historical business model. In the DSM debate, the most vocal industry stakeholders (many of whom felt they had little to gain from an EU-wide market, because demand for their work was concentrated in a handful of key territories) wanted to retain the licensing cost structures that come with territory-by-territory licensing. In other words, they wanted small markets with tightly defined, enforceable boundaries. On the other hand, consumers (to the extent that they care about market boundaries at all) are likely to prefer larger markets, for increased choice and to avoid the hassle of geoblocking, with permeable boundaries and no enforcement, except in those instances when drawing the boundary in such a way results in price differences. In other words, there is a clash not only between the material interests of producers and consumers, but also in how these interests map onto the abstract form of the market. The idea of a single audiovisual market, as articulated in the DSM discussions, proved to be divisive rather than unifying, because it brought into focus the many conflicts that arise when a boundary is moved.

Conclusion

In this chapter, I have argued that the Digital Single Market controversy is emblematic of a wider set of tensions about market boundaries, which are reconfigured but also reinforced by digitization. In this sense, the DSM draws our attention to the contradictions between spatial free flow and spatial restriction that characterize digital media. Far from creating a flat space of friction-free commerce, the DSM reminds us that digital distribution has the potential to create new kinds of borders, new enclosures and new territorialities, as well as new mobilities. Scholarship and policy in this area need to retain a spatialized understanding of markets that sees them not as flat spaces of flow and exchange but as social spaces of friction.

The coming years are likely to be marked by further disjuncture between the market layers I described earlier. While these problems are not unique to Europe, they will play out there in an intensified form because of Europe's dense patchwork of media markets, institutions, languages and histories. The DSM is the most developed policy response to the problem of geoblocking that has been tried anywhere in the world. As such, it represents a major intervention into a hitherto unregulated aspect of everyday media usage affecting millions of users. Any scholar interested in questions of distribution, copyright, piracy and so on should be paying attention to the DSM, and especially to its evolution over time as the early proposals hit the hard ground of stakeholder self-interest.
However, a key lesson from the DSM case is that territoriality in copyright cannot be reduced to a Manichean conflict between two sides, such as corporations versus the consumer, capital versus creativity or the powerful versus the powerless. The DSM has shown us, among other things, that moving a market boundary can have many unforeseen effects. As I have argued throughout this chapter, each stakeholder's position in the DSM debate has relied on a particular set of claims about the proper boundaries of the market. None of these claims is pure in its own right, but all of them reveal something about the disjuncture that characterizes cultural markets.

Ramon Lobato is Senior Research Fellow at the School of Media and Communication, RMIT University, Melbourne. A screen industry scholar, he is known for his work on informal distribution, streaming and piracy. He is the author of Shadow Economies of Cinema: Mapping Informal Film Distribution (2012), The Informal Media Economy (2015) and Netflix Nations: The Geography of Digital Distribution (2019). He is a member of the editorial collective for the journal Media Industries.
Erythropoietin Stimulates Endothelial Progenitor Cells to Induce Endothelialization in an Aneurysm Neck After Coil Embolization by Modulating Vascular Endothelial Growth Factor

Endovascular coil embolization is an attractive therapy for cerebral aneurysms (ANs), but recurrence is a main problem affecting long-term outcomes. In this study, we explored a new approach to enhance AN neck endothelialization via erythropoietin (EPO)-induced endothelial progenitor cell (EPC) stimulation. Ninety adult male Sprague-Dawley rats were selected for an in vivo assay, and 60 of them underwent microsurgery to create a coiled embolization AN model. The animals were treated with EPO, and endothelial repair was assessed via flow cytometry, immunofluorescence, electron microscopy, cytokine detection, and routine blood work. EPO improved the viability, migration, cytokine modulation, and gene expression of bone marrow-derived EPCs, and EPO-treated rats showed an increased number of circulating EPCs and improved endothelialization compared with untreated rats (p < .05). EPO had no significant effect on the routine blood work parameters. In addition, the immunofluorescence analysis showed that the number of KDR+ cells in the AN neck was elevated in the EPO-treated group (p < .05). Further study demonstrated that EPO promoted EPC viability and migration in vitro. The effects of EPO may be attributed to the modulation of vascular endothelial growth factor (VEGF). In particular, EPO enhanced the endothelialization of a coiled embolization AN neck by stimulating EPCs via VEGF modulation. Thus, the promotion of endothelialization with EPO provides an additional therapeutic option for preventing the recurrence of ANs.

INTRODUCTION

Compared with surgical management, endovascular coil embolization is an attractive approach for the treatment of unruptured, saccular cerebral aneurysms (ANs) because it is minimally invasive and efficient [1]. However, endovascular coil embolization has a high rate of recurrence (6.1%-33.6%) and rebleeding (0.11%-1.6%) [2][3][4][5][6]. Because the lack of endothelialization plays a crucial role in AN recurrence, the promotion of endothelialization in the AN neck may help prevent both recurrence and rupture.

Endothelial progenitor cells (EPCs) were first identified from human peripheral blood and are derived from the bone marrow. Circulating EPCs promote endothelialization after coiled embolization treatments, and EPC therapy has been tested in many vascular diseases [7,8]. Studies have shown a correlation between a lack of EPCs and the incidence of cerebral ANs and have also demonstrated that bone marrow-derived EPCs are involved in the process of AN repair [9]. However, whether the increase in circulating EPCs after coil embolization contributes to AN neck endothelialization remains unverified.

Erythropoietin (EPO) is known for its function in erythropoiesis and has been reported to enhance the mobilization of stem cells from bone marrow and strengthen their viability and function.
EPO has shown therapeutic effects in the treatment of myocardial infarcts, cerebral ANs, brain ischemia, and traumatic injuries [10][11][12] and may also promote angiogenesis to improve microcirculation and neovascularization and protect neural tissue. EPO exhibits a strong capacity to promote EPC differentiation and maturation toward endothelial cell lineages. Increasingly, studies have strongly indicated that EPO can reduce the formation and progression of cerebral ANs by promoting EPC mobilization and targeting them to sites of vascular injury [13][14][15]. These findings support the hypothesis that EPO may be used to protect against the recurrence of cerebral AN by stimulating EPCs and promoting endothelialization. We showed that EPCs enhanced AN neck endothelialization after coil therapy and that the administration of EPO increased the number and function of EPCs. Because of the potency of EPO in promoting EPCs, we explored the possibility of using EPO to promote AN neck EPC-induced endothelialization in a coiled embolization AN model.

MATERIALS AND METHODS

All animal procedures were carried out according to a protocol that was approved by the Institutional Animal Care and Use Committee (IACUC), and the experimental protocol was reviewed and approved by the Ethics Committee of Huashan Hospital affiliated with Fudan University in Shanghai, China. Adult Sprague-Dawley rats (150-200 g) were used in the experiments (Slac Laboratory Animal, Shanghai, China, http://english.sibs.cas.cn/rs/fs/shanghailaboratoryanimalcentercas). The coiled AN model was prepared as previously reported [16]. The AN-EPO group was administered 1.5 mg/kg recombinant rat EPO (R&D Systems, Minneapolis, MN, https://www.rndsystems.com) injected subcutaneously, and the AN group was given an equal amount of saline subcutaneously. The mock surgery (MS) group was given inhalation anesthesia and was treated with saline similarly to the AN group but without surgery. On days 10, 20, and 30, peripheral blood was collected to examine circulating EPCs via flow cytometry and changes in serum concentrations of vascular endothelial growth factor (VEGF), tumor necrosis factor (TNF)-α, and interleukin (IL)-6 via the MILLIPLEX MAP (EMD Millipore, Billerica, MA, http://www.emdmillipore.com). On day 30, the AN tissue was obtained for scanning electron microscopy, hematoxylin and eosin (H&E) staining, and immunofluorescence.

EPCs were isolated from a healthy rat femur bone and cultured in EGM-2 medium (Lonza, Anaheim, CA, http://www.lonza.com). EPCs were identified via Dil-ac-LDL (Invitrogen, Carlsbad, CA, https://www.thermofisher.com)/FITC-UEA-I (Sigma, St. Louis, MO, http://www.sigmaaldrich.com) double staining and vascular endothelial growth factor receptor 2 (VEGFR2, also known as KDR; Abcam, Cambridge, MA, http://www.abcam.com/)/CD34 (R&D Systems, Minneapolis, MN, https://www.rndsystems.com) flow cytometry analyses. The viability of EPO-treated cells was tested via a Cell Counting Kit-8 (Dojindo Laboratories, Kumamoto, Japan, http://www.dojindo.com), and live and dead cells were distinguished using calcein acetoxymethyl (AM) (AAT Bioquest, Sunnyvale, CA, http://www.aatbio.com/) and propidium iodide (PI) (BD Pharmingen, San Diego, CA, http://www.bdbiosciences.com) staining. The secreted VEGF, TNF-α, and IL-6 from cultured cells were analyzed by MILLIPLEX MAP. Gene expression was evaluated by quantitative polymerase chain reaction (qPCR). The primer sequences are shown in supplemental online Table 1.
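The paper defers the qPCR details to the supplemental data, so the following is only a sketch of how relative gene expression is commonly quantified from qPCR cycle-threshold (Ct) values, namely the widely used 2^(−ΔΔCt) method of Livak and Schmittgen. The Ct values and the assumption of a single reference (housekeeping) gene are hypothetical, not taken from the study.

```python
# Minimal sketch of the 2^(-ΔΔCt) method for relative qPCR quantification.
# Hypothetical values; the paper's exact normalization is in its supplement.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Target-gene expression normalized to a reference gene and expressed
    relative to the control condition."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: a VEGF-like target in treated vs. control cultures.
print(fold_change(ct_target_treated=22.1, ct_ref_treated=17.0,
                  ct_target_control=24.3, ct_ref_control=17.1))  # ≈ 4.3-fold
```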
Detailed methods are described in the supplemental online data.

Statistical Analysis

The statistical analysis was performed using IBM SPSS Statistics (Armonk, NY, http://www.ibm.com), and graphs were generated by GraphPad Prism (GraphPad, La Jolla, CA, http://www.graphpad.com/company). Two-way ANOVA tests were used to analyze the percent of circulating EPCs identified by flow cytometry. Independent sample t tests were used to analyze the aneurysm repair score, von Willebrand factor (vWF)+ cell count, KDR+ cell count, and circulating cytokines [17]. One-way ANOVA tests were used to analyze the peripheral blood, the optical density values obtained from the cell viability test, the migration cell count, the levels of cytokines secreted from the cultured cells, and the gene expression levels obtained by qPCR. Differences with p < .05 were considered significant.

RESULTS

Experimental Design, Histological Assessment, and Scanning Electron Microscopy Observations

After the coil embolization aneurysm model was initiated, 90 rats survived until sacrifice. All coil embolization aneurysm specimens were acquired for subsequent use. H&E staining under low magnification demonstrated that a more integrated aneurysm neck was formed, and more spindle-like slender cells were observed, in the aneurysm necks of the AN-EPO-treated rats. In contrast, in the AN rats, fewer vascular endothelial cells and only sparse fibrous cells were observed. The aneurysm repair score was significantly higher in the AN-EPO-treated rats compared with the AN rats, p < .05 (Fig. 1B). Scanning electron microscopy (SEM) examination revealed the level of endothelialization in the aneurysm neck and found better endothelial coverage in the AN-EPO-treated rats compared with the AN rats. This layer primarily consisted of simple squamous epithelial cells at the bottom of the aneurysm neck. Overall, rats treated without EPO demonstrated a similar sealing effect compared with the EPO-treated rats. However, endothelial cells were rarely observed in the AN rats; instead, they displayed long, flat, and fusiform cell morphologies (Fig. 1C). Under confocal microscopy, consecutive and compact vWF+ cells were found on the surface of the AN neck in the AN-EPO-treated rats. It was noted that the vWF layer in the AN rats was not continuous (Fig. 1D). There were significantly more vWF+ and KDR+ cells on the inner surface of the AN neck in the AN-EPO-treated rats than in the AN rats, p < .05 (Fig. 1D).

Circulating EPC Detection and Peripheral Blood Changes

On day 10 after surgery, the circulating EPC count was significantly elevated in the AN-EPO-treated rats compared with the MS and AN rats (p < .05). On day 20, the circulating EPC count significantly increased in the AN and AN-EPO groups compared with the MS group; no superiority of the EPO treatment was indicated at this time point. On day 30, the AN and AN-EPO-treated rats continued to show increased EPC counts, and this increase was more obvious in the EPO-treated group (Fig. 2A). We performed a peripheral blood examination to exclude the possible side effect of EPO elevating the red blood cell (RBC) count. We found no significant differences between the AN-EPO group and the AN group for the RBC count, hemoglobin, red blood cell specific volume, mean corpuscular volume, mean corpuscular hemoglobin, and the mean corpuscular hemoglobin concentration on days 10, 20, and 30 (Fig. 2B).
EPC Identification and Changes in EPO-Induced Cell Death, Viability, and Migration

We isolated EPCs from the femur marrow and found that many cells had a round, cobblestone-like morphology in the primary adherent cell culture. Furthermore, most of these primary cells were able to take up both Dil-ac-LDL and FITC-UEA-I (Fig. 3A). Through flow cytometry, we demonstrated that 62.96 ± 4.48% of the cells were KDR+, 51.53 ± 3.65% of the cells were CD34+, and 33.91 ± 1.77% of the cells were KDR+/CD34+. The flow cytometry analyses indicated that these KDR+/CD34+ cells were EPCs (Fig. 3B). PI- and calcein AM-labeled cells were observed by confocal microscopy. No increase in the number of dead cells was observed in the EPO-treated cultures after 7 days of EPO treatment compared with the controls (Fig. 3C). We recorded the absorbance of EPCs in each well after their reaction with the reagents in a Cell Counting Kit-8. A significant increase in absorbance was observed on days 7 and 10 using high concentrations of EPO (150 mg/l and 15 mg/l), and an EPO concentration of 1.5 mg/l also produced a significant increase in absorbance on day 10. These findings indicate that EPCs presented improved cell viability when cultured with high concentrations of EPO for prolonged durations (Fig. 3D). We performed a scratch assay to assess the degree of EPC migration and found that treatments with 150 mg/l, 15 mg/l, and 1.5 mg/l EPO for 10 days significantly increased EPC migration, p < .05 (Fig. 3E).

Cerebrovascular Cytokine Levels and Gene Expression After EPO Treatment

We tested cytokines and chemokines in the peripheral blood in both the AN and AN-EPO groups, including VEGF, TNF-α, and IL-6. The peripheral blood VEGF concentration in the AN-EPO group was elevated on days 20 and 30 compared with the AN group. There were no significant differences in the TNF-α and IL-6 levels between the AN and AN-EPO-treated rats (Fig. 4A). However, significant changes in cytokine and chemokine levels were observed in vitro. On day 1, the levels of VEGF from EPO-treated cells were significantly increased, and the IL-6 level was decreased (Fig. 4B). On day 3, the VEGF levels of the EPCs treated with 150 mg/l, 15 mg/l, 0.15 mg/l, and 0.015 mg/l EPO were significantly higher than those observed in the control group. The reducing effect of EPO on the IL-6 levels was not obvious, and only 150 mg/l EPO induced inhibition. There were no significant changes in the TNF-α levels on either day 1 or day 3 (Fig. 4B). qPCR analysis was used to determine the VEGF, TNF-α, and IL-6 gene expression levels. After 1 and 3 days of 150 mg/l EPO treatment, the expression of VEGF showed evident elevation, and IL-6 showed a gradually decreasing trend. No significant differences were observed for the TNF-α and IL-6 gene expression levels (Fig. 4C).

DISCUSSION

In our study, EPO was administered after the induction of a coiled embolization AN to prevent recurrence. We successfully established a coiled AN rat model via vasotransplantation. Half of the AN rats were treated with EPO. This EPO-treated group showed a significant increase in the number of circulating EPCs after coiling, and these EPCs also participated in the aneurysm neck endothelialization. EPO promoted AN neck integration and endothelialization, as shown by SEM. We also found that bone marrow-derived EPCs could be enhanced in vitro after 7 days of EPO treatment. Our study showed that EPO increased VEGF levels in vivo and in vitro.
The safety of short-term EPO treatment was indicated by the absence of side effects such as increased RBC counts in the AN rat peripheral blood work or EPC death in vitro.

Endothelialization in the AN neck has been observed in clinical studies [18,19] and animal models [20]. Endothelial dysfunction is regarded as the main contributor to the recurrence of postembolization cerebral AN. The promotion of endothelialization is key to preventing AN recurrence. Since EPCs were initially identified from human peripheral blood and found to express CD34 and KDR [21], EPC therapy has been used to facilitate vascular repair and homeostasis [21,22]. The autologous transfusion of bone marrow-derived EPCs could be used for stroke treatment [23]. There is growing interest in using circulating EPCs in clinical trials for ischemia, stroke, and vascular injury [24][25][26][27]. Our previous study [16] also showed that bone marrow-derived EPCs play a crucial role in the closure and reconstruction of the aneurysm neck orifice after aneurysm coiling. Medications, including statins, angiotensin-converting enzyme inhibitors, and cancer drugs, have been studied to determine their effects on EPC mobilization. EPO has been shown to reduce the formation and progression of cerebral aneurysms in rats [28].

In this study, we found that the number of circulating EPCs was significantly increased at 10 days after EPO treatment in coiled AN rats compared with nontreated AN rats, which could be caused by EPO-promoted mobilization during the early stages. On day 20, both the AN and AN-EPO groups showed EPC mobilization and homing after vascular injury; during this stage, the effect of EPO did not appear to be significant. Late stages of endothelialization were evident on day 30. In these stages, the number of AN neck EPCs decreased, and the circulating EPC levels recovered to a relatively high level. During this period, the EPO-treated rats maintained an obviously higher number of circulating EPCs. Previous studies have provided strong evidence indicating that EPO can mobilize EPCs from bone marrow to increase the amount of circulating EPCs [28,29]. During AN neck endothelialization, the circulating EPCs of the AN-EPO group were significantly increased on day 10. With the completion of endothelialization, the circulating EPCs showed a decreasing trend on days 20 and 30. In contrast, the circulating EPCs of the rats belonging to the AN group showed a significant increase on day 20 compared with the EPCs of the MS group and showed a decreasing trend on day 30. These findings indicate that EPO may lead to earlier mobilization. However, it is possible that the time interval between EPO administration and blood collection may partially affect the circulating EPCs.

EPO is a hormone secreted by the kidney in response to hypoxia and plays a cardinal role in regulating plasma hemoglobin concentrations. According to routine blood tests, the short-term use of EPO did not show any side effects in the AN-EPO rats. In in vitro EPC cultures, EPO did not show significant cytotoxicity. In a previous study, EPO showed vascular protection and endothelium-promoting properties. These effects were primarily mediated by inhibiting apoptosis, mobilizing endothelial progenitor cells, inhibiting the migration of inflammatory cells, and promoting angiogenesis [12].
Previous studies on EPO-mediated endothelialization and its molecular pathways have focused on antiapoptotic and survival signals, including the phosphatidylinositol 3-kinase pathway and the endothelial nitric oxide synthase pathway [30][31][32]. In our study, we assessed the levels of VEGF, TNF-α, and IL-6. We found that VEGF increased after EPO treatment in vivo and in vitro. The modulation of VEGF by EPO is thought to be one of the most important factors in promoting AN neck endothelialization after coil embolization treatment. EPO modulation plays key roles via VEGF and VEGF receptors in many vascular diseases [32][33][34][35]. In EPC cultures treated with EPO for 1 and 3 days, the levels of IL-6 showed a decreasing trend. This may be attributed to the anti-inflammatory effect of EPO. However, the in vivo IL-6 levels of AN rat sera and the in vitro gene expression profile of IL-6 did not correlate with this observation. This phenomenon may be induced by complex cytokine regulation and would require further study. Furthermore, no strong evidence or discernible trend was observed linking EPO and TNF-α in coiled AN rat sera or cultured EPCs.

CONCLUSION

In its recombinant form, EPO has been tested in clinical trials and proven to be beneficial in cerebral vascular diseases by providing vascular protection, inhibiting inflammation, and promoting endothelialization [36][37][38]. This study represents the first use of recombinant rat EPO to promote coiled AN neck endothelialization. We showed that EPO enhanced the endothelialization of the coiled AN neck via VEGF modulation. EPO or its analogs may provide new therapeutic alternatives for preventing recurrence in coiled cerebral AN. However, limitations remain. The peripheral blood cytokine levels may fluctuate in part owing to the different intervals between administration and blood collection. No extensive dose-response studies were performed in animal models. The exact and detailed mechanism through which EPO affects endothelialization remains unclear. Other important factors in addition to VEGF may be involved, such as hypoxia inducible factor 1 and stromal-derived factor 1, which are also known to mobilize EPCs under conditions of vascular disease and injury [39][40][41]. Further studies are required to explore the mechanism responsible for the EPO-based endothelialization of aneurysm necks.

AUTHOR CONTRIBUTIONS

P.L.: conception and design of the study, provision of study materials and animal models, collection and/or assembly of data, data analysis and interpretation, manuscript writing, final approval of manuscript; Y.Z.: animal model creation, blood sample analysis, histological examination, discussion of manuscript; Q.A.: animal model analysis, histological examination; Y.S.: data analysis and interpretation, manuscript writing; X.C.: database input, data interpretation; G.-Y.Y.: provision of study material or patients, revision and final approval of manuscript; W.Z.: conception and design, revision and final approval of manuscript.

DISCLOSURE OF POTENTIAL CONFLICTS OF INTEREST

The authors indicated no potential conflicts of interest.
Unpacking the Relationship between Parental Migration and Child Well-Being: Evidence from Moldova and Georgia

Using household survey data collected between September 2011 and December 2012 from Moldova and Georgia, this paper measures and compares the multidimensional well-being of children with and without parents abroad. While a growing body of literature has addressed the effects of migration for children 'left behind', relatively few studies have empirically analysed if and to what extent migration implies different well-being outcomes for children, and fewer still have conducted comparisons across countries. To compare the outcomes of children in current- and non-migrant households, this paper defines a multidimensional well-being index comprised of six dimensions of wellness: education, physical health, housing conditions, protection, communication access, and emotional health. This paper challenges the conventional wisdom that parental migration is harmful for child well-being: while in Moldova migration does not appear to correspond to any positive or negative well-being outcomes, in Georgia migration was linked to higher probabilities of children attaining well-being in the domains of communication access and housing and on the combined well-being index. The different relationship between migration and child well-being in Moldova and Georgia likely reflects the different migration trajectories, mobility patterns, and levels of maturity of each migration stream.

Introduction

In many societies experiencing large-scale mobility transitions, migration is often framed in public discourse as either a blessing or a curse for those households and family members 'left behind' by migrating kin in the home country. This is true of both Moldova and Georgia, two post-Soviet countries that have experienced the emigration of large shares of their total populations (World Bank 2015). Such large-scale emigration has inspired growing concern about the potential costs and benefits of migration, particularly for those children who are 'left behind' by their migrant parents.

Migration and its consequences are difficult to quantify. Remittances are one of the best-explored outcomes of migration given the substantial financial flows they can represent: in Moldova, remittances accounted for over 24% of GDP in 2014, and in Georgia, 12% (World Bank 2015). Such remittance flows can play a key role in protecting recipient households from economic shocks and income vulnerability, yet it is unclear to what extent such transfers can replace the contributions that a migrant would make to the household if s/he were present. The impact of a migrant's absence is particularly pertinent to the context of child well-being, but relatively few empirical studies have attempted to define and measure multiple aspects of child well-being and its association with migration. Relatively little research has assessed potential trade-offs between increased material resources and the less easily quantified consequences of parental absence, such as the availability of child supervision (Kandel and Kao 2001); this is especially true of Moldova and Georgia, where limited research has explored the specific channels through which migration can affect the well-being of children.

As with other Eastern European and former Soviet states, Moldova and Georgia have experienced a rapid rise in emigration that has inspired concern among policy makers and civil society organisations regarding the potential impacts these growing migration flows may have on society.
While public discourse generally recognises the inflow of remittances as a positive outcome of migration, migration itself is otherwise generally regarded as deleterious for the societies and families involved in it. This paper bridges this gap by elaborating a multidimensional well-being index for children in Moldova and Georgia, which enables the well-being outcomes of children with and without migrant parents to be compared. The index builds on the Alkire and Foster (2011) methodology for the measurement of multidimensional poverty, the method underlying the multidimensional human development index that has been published annually in the Human Development Report by UNDP since 2010. The well-being index is constructed around six domains of wellness representing different facets of a child's life: education, physical health, housing conditions, protection, communication access, and emotional health. This more holistic conceptualisation of child well-being enables exploration of how migration can possibly influence child well-being beyond traditional income or material well-being measures. The index has also been constructed to enable cross-country comparison of outcomes, which provides important analytical power to the method, particularly as it allows for discussion of how deviations in country context correspond to different well-being outcomes.

The next section of this paper explores the theoretical relationship between migration and well-being and provides a brief overview of previous studies on the potential effects of migration on child well-being. The third section then reviews how child well-being should be defined and operationalised. Brief backgrounds of both Moldova and Georgia are provided before the data used in the analyses are described. The indicators and methodology for constructing and using the specified child well-being index are then explained, followed by a summary of results. This paper concludes with a discussion of the results.

Migration & Well-Being

By assessing the impacts of migration on child well-being, an implicit assumption is made that migration bears consequences for the individuals and households it affects. Migration and the well-being of children 'left behind' can be expected to be linked through several avenues, the most obvious of which is that migration may involve the withdrawal or addition of household-level resources that may be used to support child well-being. Neo-classical theories of migration, such as the new economics of labour migration (NELM) theory (Stark and Bloom 1985), suggested that the migration decision is made on a household level in response to anticipated costs and benefits of migration. Within this theory, migration is expected to be mutually beneficial for both the migrant and the sending household; the household will accept some of the costs associated with migration in return for remittances, which are a means of not only expanding household income but of diversifying its sources (Massey et al. 1993; Taylor 1999; Stark and Bloom 1985). As household members, children would be expected to benefit from the resources provided by migrants, particularly if used for expenditures such as healthcare and education. The resources a migrant can potentially share with the household in the country of origin can include not only financial capital but also human capital, through the transmission of knowledge, values, and ideas in the form of 'social remittances' (Levitt 1998; Acosta et al. 2007).
Several studies have explored the potential uses of both financial and social remittances for children 'left behind'. Yang (2008) in the Philippines and Mansuri (2006) in Pakistan both suggested that the receipt of remittances can loosen household economic constraints, enabling children to pursue education and reducing child labour rates. Other studies have found a positive relationship between migration and child health outcomes, as remittances can be spent on higher quality foods, vitamins, and medicines (Salah 2008) as well as on preventative and curative healthcare (Cortés 2007). Other studies in countries such as Guatemala (Moran-Taylor 2008), El Salvador (de la Garza 2010), the Philippines (Edillon 2008; Yang 2008), and Pakistan (Mansuri 2006) have found strong associations between the receipt of remittances and higher rates of educational attainment, greater rates of participation in extra-curricular activities, and better grades.

Many studies have noted that remittances are a main means through which migration can affect child well-being, but the act of migration is no guarantee that a migrant will send remittances. Particularly when migration is undertaken as a survival strategy and is funded through loans, children in migrant households may be placed in an even more tenuous economic situation than prior to migration, particularly if they shoulder the migration debt burden (van de Glind 2010). In some situations, as a study by Kandel (2003) in Mexico found, migration may increase child labour rates, particularly among male children who must work to support the household. While remittances may enable greater expenditure on healthcare inputs, positive outcomes may develop only over time (McKenzie 2007; Hildebrandt et al. 2005). Migration can also bear negative potential consequences for child educational outcomes, with studies in Albania (Giannelli and Mangiavacchi 2010), Ecuador (Cortés 2007), and Moldova (Salah 2008) finding a relationship between parental absence and higher rates of school absenteeism, declining school performance, and declining graduation rates.

The potential impacts of migration on child well-being can seldom be neatly designated as 'positive' or 'negative'; the relationship between migration and child well-being outcomes is dynamic and conditional on factors such as a child's age, post-migration caregiving arrangements, a household's socio-economic status, and the retained ties between a migrant and the household members remaining in the origin country (e.g., Cortés 2007; Moran-Taylor 2008; Mazzucato 2014). The generalizability of insights provided by past studies is generally limited, as few studies have used data on children specifically. Among those studies that have explicitly focused on children in migrant households, few have explored the situation of children remaining in the country of origin, and fewer still have engaged an appropriate control group against which the outcomes of children in migrant households can be compared (Graham and Jordan 2011).
The present study builds on the insights and suggestions of past research by defining and operationalising well-being in a holistic, multidimensional well-being framework that allows comparison of wellness across dimensions as well as across two so far understudied countries.

Defining Well-Being

One of the first challenges faced in the assessment of child well-being is in defining the concept. The components of child well-being, while shared to a certain extent with those of adults, differ according to the specific needs and vulnerabilities children face (White et al. 2003; Brooks-Gunn and Duncan 1997; Waddington 2004). In acknowledging that children are a unique population group with differentiated needs, children must be emphasised as the unit of observation. As for any population group, decomposing the components of child well-being or poverty requires a conceptual basis. One of the most important sources for defining child deprivation (and its end result, poverty) is international instruments such as the Convention on the Rights of the Child (CRC), which provides a rights-based framework for approaching well-being. The CRC, which was adopted by the UN General Assembly in 1989, is an instrument for the promotion and protection of children's rights that outlines minimum standards for "the treatment, care, survival, development, protection and participation that are due to every individual under age 18" (UNICEF 2009, pg. 2). Within the CRC children are envisioned as rights holders, yet this entitlement to rights is both challenged and complemented by dependence on families, communities, and societies to attain minimum standards of wellness. Within this rights-based framework, child well-being can be understood as the realization of children's rights and the fulfilment of opportunities for a child to reach his/her potential (Bradshaw et al. 2007).

Interpreted this way, well-being in the context of children's rights has strong parallels with the human development and capabilities approach championed by Amartya Sen. The capabilities approach envisions well-being as the product of an individual's effective opportunities or capabilities to attain a desired outcome; a lack of capabilities, or of the freedom to choose among them, limits the range of realizable functionings, leading to deprivation or poverty (Sen 1993; Robeyns 2005). Both the child's rights-based framework and the capability approach envision well-being as inherently multidimensional, comprised of opportunities and entitlements in multiple facets of life; deprivation in single dimensions can thus lead to failure to attain well-being in total (Alkire 2002; Sen 1993; Robeyns 2005; Alkire and Foster 2011).

To translate concepts of well-being into functional measurement instruments, a list of dimensions of well-being, and the indicators by which they can be measured, must be elaborated. A significant body of literature has addressed the multidimensional nature of child poverty (see Roelen and Gassmann 2008 for a review), much of which has adopted a rights-based perspective to define well-being domains (Alkire and Roche 2011).
Based on reviewed literature, functionality in a cross-cultural context, and availability of data, the following definition of child well-being is operationalized in this study:

Well-being is a multidimensional state of personal being comprised of both self-assessed (subjective) and externally-assessed (objective) positive outcomes across six realms of rights and opportunity: education, physical health, housing conditions, protection, access to communication, and emotional health.

This definition recognises the inherent complexity and multidimensionality of well-being. Individual components of well-being and their expression are the products of ongoing and dynamic processes that change the risk factors and resources within a child's immediate and more distant development environment (Bradshaw et al. 2007). Migration is one such process that alters the context in which individuals develop and function, but its effects are not universal and homogenous.

Country Backgrounds

Moldova and Georgia both provide rich contexts in which to explore the possible relationship between parental migration and child well-being given the rapid mobility transitions both countries have experienced. Despite some commonalities in terms of the origin of large-scale emigration flows, Moldova and Georgia differ in important ways in terms of contemporary migration flows.

Following the dissolution of the Soviet Union and subsequent independence in 1991, significant emigration from Moldova began in response to sharp economic declines. The loss of the separatist territory Transnistria and the downturn of the Russian economy at the end of the 1990s contributed to the dire economic situation in which Moldova found itself by 1999: gross domestic product was just 34% of the level experienced a decade earlier (Pantiru et al. 2007; CIVIS/IASCI 2010), and 71% of the population lived below the poverty line (IMF 2006). This extreme level of economic vulnerability provided the initial 'push' for large-scale emigration, which has continued relatively unabated since (CIVIS/IASCI 2010). As of 2013 over 859,400 people, equivalent to 24.2% of the total population, were estimated to live abroad, the majority of whom were in the Russian Federation, Ukraine, Italy, and Romania (World Bank 2015). In 2010 most migrants were thought to be of prime working age, with approximately 80% between the ages of 18 and 44 (CIVIS/IASCI 2010).

Mobility trends in Georgia bear some similarity to those of Moldova, but the origin of large-scale migration in the post-Soviet period differs. Immediately following independence, migration flows were strongly characterised by the ethnic return of non-Georgians to countries such as Russia, Greece, and Israel as well as by conflict-induced displacement that promoted both internal and international migration (CRRC 2007). Internal conflict and ethnic strife during the early 1990s resulted in several waves of migration, and the 2008 Russian-Georgian war over the territory of South Ossetia prompted some additional migration both within and beyond Georgia. As in Moldova, Georgia also experienced the deterioration of the economic system and state infrastructure, and despite reforms and political transitions in the early 2000s, wide-scale poverty and economic insecurity have remained a concern.

The different origins of migration flows from Moldova and Georgia correspond to different migration experiences for individuals from each country.
While the migration stream from Moldova can be considered relatively 'immature', with low rates of settlement and family reunification in destination countries (CIVIS/IASCI 2010), emigration from Georgia has included more significant levels of settlement in host countries and lower rates of return, particularly among those individuals and households that left during the conflict period (CRRC 2007). Moldovan emigration is characterized by high levels of circularity, facilitated by favourable visa regimes with the Russian Federation and by access to the European Union among dual Moldovan-Romanian passport holders. Many Georgian emigrants are in a more disadvantaged position, particularly those residing in the EU without the legal right to residency or work. Such differences in settlement patterns and legal regimes may translate into different interactions between migrants abroad and their households in the origin country, making Moldova and Georgia valuable comparative cases in understanding the relationship between migration and child well-being. While comparison of these specific countries may be telling of wider trends within the former Soviet space, understanding migration and family dynamics within this region can be instructive of how migration and child well-being intersect in other countries facing similar, overlapping transitions. Given that the previous literature assessing the potential impacts of migration on children remaining in the home countries has predominantly focused on the South-East Asian and Latin American regions, the assessment of how migration and child well-being are related in the post-Soviet context can provide an expanded basis for comparison of dynamics.

Data & Methodology

The multidimensional child well-being index proposed and explored in the following analyses makes use of nationally representative household data collected in Moldova and Georgia as part of a project that explored the potential consequences of migration for vulnerable populations 'left behind'. In Moldova the household survey was implemented between September 2011 and March 2012; data was collected on 3571 households, of which 1983 contained one or more children under the age of 18. In Georgia the household survey was conducted between March and December of 2012 and captured information on 4010 households, of which 2394 contained one or more children. The survey was conducted in all regions of both countries except for the breakaway territory of Transnistria in Moldova and the de facto independent regions of Abkhazia and South Ossetia in Georgia.

Within this survey, information was collected on specific aspects of children's lives. Caregivers of children in the household provided information about each child's physical and emotional health, educational behaviours, and time allocation. Household-level features such as quality of housing were assumed to apply to all household members equally. This information was collected from the primary respondent in each household. Information was collected on all children in the household aged 18 or below, but the following analysis focuses on school-aged children, those aged five to 17. The rationale for excluding children younger than five is driven by the definition and comparability of well-being indicators. Very young children have different needs that require different well-being indicators. Furthermore, data on emotional well-being through the Strength and Difficulties Questionnaire was not collected for the youngest child age cohorts.
The upper age limit reflects the definition of a child according to the CRC. The survey also collected data on the migration histories of all household members, including the years of first and last migration. A migrant was defined as a person who had lived abroad for three or more months consecutively at one time. Children living in households where the migrant was not a mother or father were excluded from the analysis, as well as children with a returned migrant parent (to enable clearer comparison between children with current-migrant and non-migrant parents). Table 1 below provides an overview of the characteristics of households used in the analysis.

Descriptively, the two survey samples differ from one another. The sample collected in Moldova was slightly larger than that collected in Georgia. A greater number of migrant households were sampled in Georgia, but when population weights are applied to accommodate oversampling, the share of parent-away households in the Georgian sample is significantly smaller than the corresponding share in the Moldovan sample. The features of migrants included in the analytical sample also differed between the two countries, which can be seen in Table 2. In Moldova over 67% of migrant parents were male, while in Georgia a larger proportion of migrant parents were female (51%). Georgian migrants also tended to be slightly older and to have a slightly higher level of education: while the average migrant in Moldova had attained upper secondary education, the average Georgian migrant had a secondary degree and incomplete tertiary education. A larger proportion of households in Georgia than in Moldova received remittances, which likely reflects differences in migration patterns such as the degree of circularity and duration of migration. These initial descriptive differences may suggest that the experiences of children with migrant family members differ between the two countries.

Indicators

To analyse and compare the rates of multidimensional well-being between children with and without migrant family members, a child-specific well-being index was constructed with six dimensions of child well-being: education, physical health, housing conditions, protection, communication access, and emotional health. The current analysis drew from measurement tools expressly designed for the particular population of interest (children aged 5-17). Both Moldova and Georgia adopted the Convention on the Rights of the Child (CRC), which provides some guidance for the selection of indicators related to the fundamental well-being standards that every child has the right to enjoy. The choice of indicators also allows for comparison of child well-being between the two countries. Table 3 contains the list of dimensions and indicators chosen for the measurement of child well-being.

The educational well-being dimension is measured by school enrolment; for children aged five and six, school enrolment is measured by pre-school attendance, and for children aged seven and older, this indicator measures enrolment in the appropriate grade for a child's age. Physical health is measured by a child's receipt of the full regime of required vaccinations, which includes BCG, DPT, measles, and hepatitis B. Housing conditions are measured by access to electricity, proper flooring (e.g., not dirt or concrete), and a safe source of drinking water (e.g., not surface water or water from tenuous sources like rainwater collection).
The dimension of protection is measured by whether a caregiver reports repeatedly beating a child as punishment. Communication well-being is measured by access to a modern source of communication, in this case a mobile phone. While this indicator is measured on the household level, it can be expected that children living in households with technologies that facilitate communication will benefit individually from the greater level of connectedness. Finally, emotional well-being is measured by the total difficulties score of the Strength and Difficulties Questionnaire (SDQ), a behavioural screening instrument that uses 25 questions on psychological attributes to identify potential cases of mental health disorder (Goodman 1997). In contrast to other child well-being indices that include indicators of material well-being such as household income or expenditure, the index proposed here consciously omitted such indicators because they are likely to influence the attainment of well-being across all dimensions. Household poverty status, measured as having an adult equivalent expenditure below 60% of the median, is included as a control variable in all subsequent analyses. The indicators included in this index were chosen because they were both relevant and available in both countries, which enables comparison of the same concepts across differing contexts. They were also chosen for their ease of interpretation, as each indicator has a clear threshold for when a child does and does not meet acceptable levels of well-being.

Methodology

Child well-being was calculated in two steps. A child is considered not deprived if s/he meets the established well-being threshold set for a given indicator. Indicator well-being rates (IWB) are calculated by counting the number of children who passed the defined threshold, expressed as a share of all children (Roelen et al. 2011):

$$IWB_x = \frac{1}{n}\sum_{i=1}^{n} I_{ix}$$

where n is the number of children for which the indicator is observable and I_ix is a binary variable taking the value 1 if child i has reached the threshold and 0 if the child has not with respect to indicator x.

A second step involved building a multidimensional well-being index inspired by the Alkire and Foster (2011) methodology for the measurement of multidimensional poverty. A child is considered to be multidimensionally well if the weighted combination of dimensions is equal to or exceeds 70% of the total; in this index, a child must be well in at least four of six indicators to be considered multidimensionally well. Each domain is assigned equal weight, which facilitates the interpretation of results (Atkinson 2003) but also asserts that each dimension is considered of equal importance. The decision to set the cut-off at 70% follows the cut-off used for multidimensional child well-being indices (Roelen and Gassmann 2014). Children with and without migrant parents can then be compared.

Multivariate analysis is applied to control for and identify other correlates that determine child well-being, such as personal characteristics of the child and regional or household characteristics. Separate binary outcome models are estimated for selected indicators using standard probit models, specified as:

$$\Pr(y_i = 1 \mid x_i) = \Phi(x_i'\beta)$$

where y_i is the binary outcome variable, Φ is the standard normal distribution function, x_i is a vector of explanatory variables, and β is a vector of coefficients to be estimated. In the following analysis, the dependent variable is the probability that an individual is well with respect to a specific indicator.
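As an illustration of the two-step procedure just described, the following minimal sketch computes indicator well-being rates and the counting-based multidimensional index with equal weights and a four-of-six cutoff. The data are simulated and the variable names are hypothetical; this is not the paper's actual estimation code.

```python
# Minimal sketch of the counting approach described above (after Alkire &
# Foster 2011): six equally weighted binary indicators, and a child is
# multidimensionally well when at least four of six thresholds are met.
import numpy as np

DIMENSIONS = ["education", "physical_health", "housing",
              "protection", "communication", "emotional_health"]

def iwb_rates(indicators: np.ndarray) -> np.ndarray:
    """Indicator well-being rate IWB_x: share of children passing threshold x.
    `indicators` is an (n_children, 6) array of 0/1 values."""
    return indicators.mean(axis=0)

def multidimensionally_well(indicators: np.ndarray, cutoff: int = 4) -> np.ndarray:
    """True where a child attains at least `cutoff` of the six indicators."""
    return indicators.sum(axis=1) >= cutoff

rng = np.random.default_rng(42)
children = rng.integers(0, 2, size=(1000, 6))   # toy sample of 1,000 children

print(dict(zip(DIMENSIONS, iwb_rates(children).round(2))))
print("share well on the index:", multidimensionally_well(children).mean())
```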
In order to assess whether the effect of migration is significantly different between countries, models for each country are estimated separately, and a Wald chi-square test is performed to establish whether the coefficients for migration status significantly differ from each other. The formula for this statistic is written as:

$$\chi^2 = \frac{(b_M - b_G)^2}{se(b_M)^2 + se(b_G)^2}$$

where b_M is the coefficient for Moldova and b_G is the coefficient for Georgia. Differences in the migration coefficients may not necessarily indicate true differences in causal effects, for example when the two models differ in the degree of residual variation (or unobserved heterogeneity). If this is the case, the test would report a misleading result, as the differences in the migration coefficient would be driven by other unobserved correlates that are not included in the model. To correct for potential deviation in residual variation, ordinal generalized linear models are used that estimate heterogeneous choice models allowing for heteroskedasticity across the specified variables (in this case, the country).

The following section describes the results of the multidimensional index. Descriptive statistics for indicator- and multidimensional well-being are presented first, in which group differences both within and between countries are tested. Results of these bivariate analyses are followed by the results of the multivariate analyses, which assess the relationship between migration and child well-being outcomes when accounting for other factors that can help predict child well-being.

Results

Table 4 provides an overview of well-being rates achieved by children in each study country for each indicator and on the total multidimensional well-being index. In Moldova, achieved rates of well-being ranged from 74% in the domain of housing well-being to 96% within the protection domain. With respect to the combined well-being index, 84% of children can be considered well, which reflects the overall high level of child well-being across the six dimensions. Children in Georgia expressed a similar level of overall well-being, with over 83% considered well on the total index level. Children in Georgia achieved the worst outcomes in the domain of physical health, with only 73.5% considered well, and the best outcomes in the domains of protection and emotional well-being, where 94% were considered well.

Children in Moldova in migrant and non-migrant households did not attain statistically different outcomes in any domain. In Georgia, children who had a parent abroad were better off in the dimensions of protection and communication and in the overall multidimensional index than children without migrant parents. Children with migrant parents were significantly worse off than children in non-migrant households in the dimension of emotional well-being, however. Such differences appear to be associated with the absence of particular individuals: a father's migration was associated with worse outcomes in the protection and emotional well-being domains, whereas more limited differences in outcomes were associated with a mother's migration or the migration of both parents. The absence of both parents corresponded to worse outcomes for overall well-being measured by the combined index. Based on the bivariate analysis, migration appears to be a more important factor in shaping the well-being outcomes of children in Georgia than in Moldova.
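To make the cross-country comparison concrete before turning to the multivariate results, here is a minimal sketch of the strategy described in the methodology: fit the same probit separately per country, then compare the migration coefficients with the Wald chi-square statistic given above. The data, variable layout, and effect sizes are simulated and purely illustrative; the paper's actual models contain many more controls and a correction for unequal residual variation.

```python
# Illustrative only: separate probits per country, then a Wald test of
# equality of the migration coefficients. Simulated data, not the paper's
# specification.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
fits = {}
for country in ("moldova", "georgia"):
    migrant = rng.integers(0, 2, 500)          # migrant-parent dummy
    control = rng.normal(size=500)             # one stand-in control variable
    X = sm.add_constant(np.column_stack([migrant, control]))
    latent = 0.4 * migrant + 0.2 * control + rng.normal(size=500)
    y = (latent > 0).astype(int)               # 1 = child well on an indicator
    fits[country] = sm.Probit(y, X).fit(disp=0)

b_m, se_m = fits["moldova"].params[1], fits["moldova"].bse[1]
b_g, se_g = fits["georgia"].params[1], fits["georgia"].bse[1]

wald = (b_m - b_g) ** 2 / (se_m ** 2 + se_g ** 2)   # 1 degree of freedom
print(f"Wald chi2 = {wald:.3f}, p = {chi2.sf(wald, df=1):.3f}")
```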
To determine the extent to which the migration of a parent corresponds to differences in child well-being outcomes when accounting for other relevant covariates, multivariate analyses utilising probit models were also conducted. The results are summarised in Table 5, which indicates the marginal effects of migration status for each aspect of well-being. The multivariate analysis confirms some of the results of the bivariate analysis, namely that migration appears to have a more significant effect on the well-being of children in Georgia than in Moldova. Children in migrant households in Georgia had higher probabilities of being well in the domains of housing and communication and on the total index level than children in non-migrant households. In Moldova, migration was significantly associated with higher probabilities of being considered well in the protection domain but was otherwise non-significant. Significant differences between countries can be observed in the housing dimension and with respect to the combined multidimensional well-being index; in both cases, migration in Georgia was positively correlated with well-being, whereas in Moldova the migration coefficients were not significant.

The extent to which parental migration is related to child well-being not only depends on whether there is a migrant parent in the household but also on who migrates and who adopts the role of the caregiver in the household. Tables 6 and 7 show the results of the extended models. The absence of either the mother or the father is positively correlated with well-being in the dimensions of housing and access to communication in Georgia, as well as with overall child well-being. The positive correlation between parental migration and well-being in the communication dimension may be a direct effect of parents' desire to stay in regular contact with their children. For Moldova, the absence of the father has a positive effect on well-being in the protection domain, indicating that these children less frequently experience physical abuse.

[Table notes: significance levels are based on chi-squared tests of independence comparing father abroad, mother abroad, both parents abroad, and no migrant; other control variables are not reported; robust standard errors in parentheses; differences between countries in the migration coefficient are assessed with a Wald chi-square test corrected for unequal residual variation or unobserved heterogeneity.]

The specific caregiver of a child is significant for some dimensions, and the association can be either positive or negative. In Georgia, having a grandparent as primary caregiver (as compared to the mother) increases the likelihood of well-being in education and protection but decreases the likelihood of attaining housing and overall multidimensional well-being. If a non-parent relative is the caregiver, the likelihood of being well-off is lower in the health and communication dimensions, as well as on the multidimensional well-being index. Similar effects can be observed in Moldova. Children with a non-relative as main caregiver fare worse in the domains of health, communication, and the overall well-being index, but they have higher probabilities of being well in the housing domain compared to children cared for by a mother.
Having a grandparent caregiver is positively associated with protection well-being, while a father as caregiver seems to negatively affect the likelihood of attending school at the appropriate grade. Beyond parental migration status and type of caregiver, variables like household composition, household educational attainment, location, and child age are important determinants of child well-being in both Moldova and Georgia. The likelihood of education well-being increases with age in both countries, although the relationship may not be linear. Higher educational attainment in the household is positively and significantly associated with most well-being dimensions in both countries. Living in an urban environment is negatively correlated with health well-being but is positively correlated with well-being in housing, access to communication, and protection. In Moldova, female children have a higher probability than male children of being well in the protection domain, and girls in both countries have higher probabilities of achieving emotional well-being than boys. The number of siblings is also important in Moldova for determining well-being, as a higher number of co-resident children corresponds to decreased chances of attaining housing, protection, and educational well-being. In Georgia, this variable is negatively correlated with housing well-being.

Discussion

In contrast to past studies that have found strong ties between parental absence through migration and differences in child well-being outcomes, these analyses of the relationship between parental migration and multidimensional child well-being suggest limited differences in the well-being outcomes of children with and without migrant parents. Three observations arise from these results: 1) in both Moldova and Georgia, rates of child well-being are generally high, but important differences in well-being attainment can be observed across dimensions; 2) the relationship between parental migration and child well-being varies not only by who specifically has migrated but also by the domain of well-being measured; and 3) the influence of parental migration on child well-being varies by country. Each of these observations calls for greater reflection on how child well-being is shaped by larger societal and family processes. While one of these processes is migration, it is important to recognise that migration may bear a more limited influence on child well-being than other factors, and migration may itself be a proxy or manifestation of other processes (such as economic transition) that affect child well-being through other channels.

Results highlight that well-being can (and should) be explored through its constituent parts, as children who may appear well on an aggregate level may nevertheless experience low levels of wellness in specific areas. The difference in rates of well-being attainment across dimensions also highlights the role of specific factors in contributing to well-being. For example, in both countries the physical location of a household (i.e., in an urban or rural area) corresponded to significant differences in outcomes across several domains of wellness, with children in rural areas expressing lower probabilities of being well in housing and communication in both countries. Parental migration corresponded to meaningful differences in specific aspects of well-being inconsistently, as only particular forms of parental migration related to differences in well-being outcomes.
Results signal that parental migration seldom bears a significant influence on the attainment of well-being, but when it does, that relationship is generally positive and low in magnitude. Such results may reflect household-level coping and coordination mechanisms. Who specifically has migrated may imply who acts as a child's primary caregiver, and it is likely a combination of these traits that explains differing patterns of well-being attainment across dimensions. For example, in mother-away households, another family member, most often a grandmother, may assume responsibility for daily child-care activities. In Georgia, the caregiving transition is likely to be smooth, as grandparents often play an intense role in child-rearing even before migration, and parents who do engage in migration are likely to be enabled to do so given support by members of the extended family (Hofmann and Buckley 2011).

Finally, the relationship between different forms of parental migration and child well-being differed widely between the study countries. Migration corresponded to limited differences in child well-being in Moldova yet corresponded to greater probabilities of children being considered well in several domains in Georgia. Differences between the countries may reflect differences in migration trajectories and the processes by which individuals are "selected" into migration in the first place. Greater shares of Moldovan migrants than Georgian migrants were destined for countries in the Commonwealth of Independent States, namely the Russian Federation, where many migrants work in insecure and volatile sectors such as construction and agriculture. Given the souring relationship between Georgia and Russia, recent migration flows have turned more towards countries in and beyond the European Union, where many migrants work in home-, child-, and elder-care functions. Differences in the industries in which Moldovan and Georgian migrants work may correspond to differences in job security and exposure to unemployment or wage withholding, which may carry over into the costs and benefits the origin-country household bears for the migration of an individual member.

Another difference between the countries is in who enters migration. Comparisons between the samples of migrants collected in Moldova and Georgia suggest that Georgian migrants are slightly better educated and older than their Moldovan counterparts. This may suggest that Georgian households that produce migrants are better off socio-economically than Moldovan households even before migration. Furthermore, Georgian migrants who are older may be less likely to have very young children, and again the predominance of multi-generational, extended households may ease caregiving transitions post-migration.

The limited yet generally positive relationship between different forms of parental migration and multidimensional child well-being in Moldova and Georgia suggests that public discourses about migration, which largely anticipate strong, negative relationships between parental absence and child well-being, may be misplaced. Particularly in Moldova, where a great deal of research has focused on the dire consequences of migration for the "left behind" (Pantiru et al. 2007), there is limited evidence to suggest that children with migrant parents suffer from that absence.
The results do not dismiss the possibility that parental absence through migration can erode child well-being, but they do emphasise the need to understand how migration, family systems, and societal processes intersect to bolster or undermine child well-being and its various expressions and domains. The need to understand child well-being (and well-becoming) in context can provide essential guidance to future research and policy designs. By decomposing child well-being into different components, and by observing how migration intersects with other resources that can feed into those components, policies aimed at enhancing or protecting child well-being may be able to better target interventions to domains of more or less acute deprivation. Rather than treating parental migration as a key distinguishing factor that determines deprivation, it is important to map how parental migration is accommodated in family life and to understand the common factors that influence both initial migration decisions and the opportunities a child has to achieve functional wellness in different domains. The multiple and overlapping interactions among migration and other family and societal processes suggest that context is essential in predicting how parental absence through migration may affect child well-being.
Identification of glycosylated marker proteins of epithelial polarity in MDCK cells by homology driven proteomics

Background: MDCK cells derived from canine kidney are an important experimental model system for investigating epithelial polarity in mammalian cells. Monoclonal antibodies against apical gp114 and basolateral p58 have served as important tools in these studies. However, the molecular identity of these membrane glycoproteins has not been known.

Results: We have identified the sialoglycoprotein gp114 as a dog homologue of the carcinoembryonic antigen-related cell adhesion molecule (CEACAM) family. Gp114 was enriched from tissue culture cells by subcellular fractionation and immunoaffinity chromatography. The identification was based on tandem mass spectrometry and homology based proteomics. In addition, the p58 basolateral marker glycoprotein was found to be the β subunit of Na+K+-ATPase.

Conclusion: Gp114 has been characterized previously regarding glycosylation dependent trafficking and lipid raft association. The identification as a member of the canine CEACAM family will enable synergy between the fields of epithelial cell biology and other research areas. Our approach exemplifies how membrane proteins can be identified from species with unsequenced genomes by homology based proteomics. This approach is applicable to any model system.

Background

Madin-Darby canine kidney (MDCK) cells are the best established mammalian model for studying epithelial cell biology. MDCK cells differentiate into polarized cells within a few days when grown on semi-permeable filter supports. The cells form an epithelial monolayer, with tight junctions separating an apical surface from a basolateral membrane facing the filter support and neighbouring cells. Both surfaces have a unique composition of proteins and lipids [1,2]. Newly synthesized secretory proteins are sorted in the trans-Golgi network and from there transported to the apical and basolateral surfaces. Sorting of proteins to the basolateral surface often relies on proteinaceous signals in cytoplasmically exposed domains of the protein. Association with lipid rafts and glycosylation have been proposed to be involved in apical targeting [3]. As marker proteins of the apical and basolateral plasma membrane of MDCK cells we have previously raised monoclonal antibodies recognizing two membrane glycoproteins.

[Figure 1. A: WGA lectin affinity chromatography of MDCK cell membrane proteins. Bound proteins were eluted with 0.3 M N-acetylglucosamine and stained by Coomassie blue after gel electrophoresis. L: aliquot of loaded protein preparation. E: eluted protein pattern (bracket indicates the 114 kDa region). B: Flow chart for the purification of gp114. MDCK cell membranes were recovered by high-speed centrifugation from a postmitochondrial supernatant and partially solubilized by treatment with the non-ionic detergent Triton X-100 on ice. Soluble proteins were applied to immunoaffinity columns, and the eluted fractions concentrated by methanol-chloroform extraction-precipitation. Gp114 did not accumulate at the "protein" interface between aqueous and lipid phase but stayed in the hydrophilic supernatant. C: Enrichment of gp114. Lane 1 (A) corresponds to the methanolic phase after chloroform-methanol extraction of eluted proteins from the gp114 immunoaffinity column; gp114 (arrowhead) is only weakly stained by Coomassie solution. Lane 2 (Ip) contains deglycosylated gp114 (arrowhead) after immunoprecipitation, which was confirmed by Western blotting (not shown). The heavy chain of gp114 IgG is indicated by a white arrowhead.]
The apical marker protein gp114 is a highly glycosylated integral membrane protein with an apparent molecular weight of 114 kDa [4,5]. The basolateral marker protein has been termed p58 according to apparent molecular weight. In subconfluent monolayers of MDCK cells, p58 localizes to both the basolateral and the apical surface, but later disappears from the apical surface, concomitantly with the development of a tight monolayer [4].

The proteomic identification of membrane proteins of MDCK cells, especially when highly glycosylated, still presents a considerable challenge. First, it is rather difficult to isolate these proteins in sufficient amounts. Second, the canine genome is only partially available and EST sequences do not adequately cover its proteome. Conventional methods of database searching rely heavily on matching masses of intact peptides (peptide mass mapping) or their fragments (tandem mass spectrometry) to the corresponding masses obtained by in silico processing of protein sequences from database entries [6]. Stringent matching of computed and measured masses increases the specificity and the speed of database searching considerably, yet restricts the reach of proteomics methodologies down to a handful of favourably covered model species [7]. Recently developed methods of mass spectrometry driven sequence similarity searches [8,9] utilize redundant, degenerate and partially inaccurate peptide sequences, produced by de novo interpretation of tandem mass spectra, and are capable of identifying distant homologues of known proteins from phylogenetically distant organisms [10]. In this work we applied immunoaffinity chromatography to enrich the heavily glycosylated membrane proteins gp114 and p58 and identify them by tandem mass spectrometry and homology driven proteomics.

Enrichment of gp114 by immunoaffinity chromatography

Our first approach was based on the glycoprotein properties of gp114 [5]. A membrane fraction of MDCK cells was enriched for glycoproteins by lectin affinity chromatography using wheat germ agglutinin. The 114 kDa region of the gel electrophoresis pattern (Figure 1a) was analyzed by mass spectrometry. Six peptides matched canine intercellular adhesion molecule 1 (ICAM-1). Other proteins identified were not apical proteins (α2-, β1-integrins, CD44, LAMP-2). A few peptides from low abundant spectra could not be assigned to any protein. Antibodies against dog ICAM-1 immunoprecipitated a 114 kDa protein, but this protein was not recognized by antibodies against gp114 (not shown). Therefore we concluded that gp114 is a protein different from ICAM-1.

Immunoaffinity columns established with mouse anti-gp114 IgG were used for a more efficient enrichment of gp114. Detergent soluble membrane protein fractions were applied to immunoaffinity columns, and the eluted fractions concentrated by methanol-chloroform extraction-precipitation. Surprisingly, gp114 did not partition with other proteins, but remained in the aqueous phase, probably due to its high glycosylation (see the flow chart of the purification of gp114, Figure 1b). Gel electrophoresis followed by Coomassie blue staining revealed a single faint band corresponding to gp114 (Figure 1c).
In parallel, gp114 was first immunoprecipitated and then enzymatically deglycosylated, since we anticipated that the high amount of glycosylation might affect the efficiency of tryptic digestion prior to the analysis by mass spectrometry (Figure 1c).

Identification of gp114 by mass spectrometry

The MALDI TOF spectrum of a tryptic digest of the gp114 band contained only two peptide signals, which is a surprisingly low number for a protein of this size (Figure 2, inset). Since MALDI and electrospray spectra acquired from the same digest usually demonstrate different peptide profiles [11], the digest was further investigated by nanoelectrospray tandem mass spectrometry (Figure 2). The Mascot database search with uninterpreted tandem mass spectra gave only three matches to immunoglobulin peptides, although 40 precursor ions were fragmented. The major peptide peaks in the spectrum remained unassigned. Therefore the unmatched tandem mass spectra were manually interpreted de novo by considering mass differences between adjacent peaks of fragment ions (Figure 3). This approach only rendered low confidence amino acid sequences, since it is not known if the considered ions indeed belong to the same fragment series [12]. Furthermore, spectra from multiply charged precursor ions contained non-overlapping fragment series with different charge states, which did not cover the complete peptide sequence. Therefore the interpretation of each spectrum produced several inaccurate, partially redundant and incomplete peptide sequence proposals (Table 1). We then merged peptide sequence candidates obtained by the interpretation of all good quality tandem mass spectra into a single search string and employed the mass spectrometry-driven BLAST (MS BLAST) protocol for the identification of proteins by sequence similarity searching [9,10]. The database search confidently hit proteins of the carcinoembryonic antigen (CEA) protein family (Table 1). The table shows only the first protein homologue of the database search, carcinoembryonic antigen-related cell adhesion molecule 8 (CEACAM8), but other CEA family proteins gave the same alignment with an identical score. Remarkably, not a single alignment covered the corresponding sequence completely. The conservation of gp114 across species is apparently not sufficient for an identification by cross-species matching of acquired tandem mass spectra using Mascot software [13].

In silico analysis of gp114

The CEA protein family consists of two separate branches, the membrane associated CEACAM proteins and the soluble pregnancy-specific glycoproteins (PSG). The CEACAM proteins are extensively spliced, yielding numerous isoforms. In addition, some CEACAM proteins are modified to include a glycophosphatidylinositol (GPI) anchor instead of a transmembrane domain (reviewed in [14,15]). Gp114 is an integral membrane protein ([16] and references therein) and belongs therefore to the CEACAM subgroup. MS BLAST searches could only be performed against a protein database. Once gp114 had been identified as a canine CEACAM protein, we used the human CEACAM1 nucleotide sequence to search for homologous genes. One genomic sequence (FE8, see Methods for details) contained an exon sequence homologous to human CEACAM1. Five of the sequenced peptides could be matched exactly to this translated exon sequence (Table 1). Other canine genomic sequences homologous to human CEACAM1 were either identical to FE8 or did not match the sequenced peptides. Thus identical peptides identified in the dog genome validated the sequence similarity identification by MS BLAST.
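To illustrate the merging step described above, the sketch below assembles redundant de novo sequence candidates into a single MS BLAST query. The candidate sequences, the helper name, and the dash separator convention are illustrative assumptions, not the exact query submitted in this study.

# Candidate sequences per fragmented precursor; X = unidentified residue,
# L covers both Leu and Ile (isobaric in tandem MS).
candidates = [
    ["VLPGDTASLTWF", "LVPGDTASLTWF"],   # ambiguous N-terminal order (VL/LV)
    ["TVLPXXK", "TVLPXXXK"],            # uncertain spacing before C-terminal K
]

def build_msblast_query(candidates_per_precursor, sep="-"):
    # MS BLAST tolerates redundant, degenerate and partially inaccurate
    # candidates, so all proposals are simply concatenated into one string.
    peptides = [seq for group in candidates_per_precursor for seq in group]
    return sep.join(peptides)

print(build_msblast_query(candidates))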
[Figure 2. Nanoelectrospray spectrum of an in-gel tryptic digest of gp114.]

One of the peptides (#6) had also been detected in our first analysis of lectin-bound 114 kDa proteins, but could not be assigned at that time. This confirmed that gp114 was indeed present in the lectin bound 114 kDa fraction, but could not be identified on the basis of a single peptide sequence. While this manuscript was under evaluation, the canine genome became publicly available [17]. A tentative amino acid sequence was obtained (see Methods for details) which was significantly similar to human CEACAM family proteins 1, 5, 8 and 6. CEACAM 5, 8 and 6 are GPI anchored proteins which have been reported to be expressed in humans only [14]. Furthermore, the predicted molecular weights of the mature proteins (without N-glycans) are 54 kDa for human CEACAM1, 71 kDa for CEACAM5 and 32 kDa for CEACAM8 and 6. Only the molecular weight of human CEACAM1 corresponds reasonably to the size of deglycosylated gp114 [18]. Deglycosylated gp114 (Figure 1c) gave the same two characteristic fragments as the untreated protein by MALDI TOF analysis (not shown). Other names for CEACAM1 are biliary glycoprotein, BGP1, TM-CEA and CD66a [19]. In summary, gp114 is a dog CEACAM protein, most likely CEACAM1.

Properties of canine CEACAM/gp114

Apical sorting of gp114/canine CEACAM occurs directly to the surface with a half time of 45 minutes [16]. The glycans are of the N-glycosylated complex type containing sialic acid, contributing about half of the apparent molecular weight of gp114 [5,18]. However, in MDCK-RCA cells deficient in terminal glycosylation due to an inactivated UDP-galactose transporter [20,21], gp114 was missorted to the basolateral surface, whereas targeting of other apical proteins was not affected. Furthermore, endocytosis of gp114 is also highly increased in these cells compared to a very slow internalization in MDCK wild type cells [18]. Independently, gp114 was identified as a major protein undergoing bidirectional transcytosis in MDCK-RCA cells [22]. Antibody crosslinking shows that gp114 coclusters with lipid raft associated proteins in the apical membrane of MDCK cells [23]. Lipid raft microdomain association and glycosylation dependent trafficking (basolateral missorting, endocytosis, transcytosis) have not been reported for CEACAM proteins so far. Reversible association with lipid microdomains has been put forward as a core mechanism in the regulation of signal transduction at the plasma membrane [24]. Our identification enables the integration of the data obtained for gp114 with the characterization of CEACAM proteins from other approaches.

Identification of p58

The p58 protein was enriched by immunoaffinity chromatography, similarly to gp114. The trypsin digested band of p58 was identified as the β-chain of canine Na+K+-ATPase by peptide mass fingerprinting. 15 peptides were matched to the masses of corresponding tryptic peptides with better than 100 ppm mass tolerance. The MOWSE score of 143 exceeded the significance threshold of 72 and thus the identification was considered confident. The β-subunit of Na+K+-ATPase contains three N-linked glycans, which is consistent with the apparent molecular weight of the expressed protein. The association of the β-subunit with the α-subunit is required for the enzyme complex to reach the plasma membrane (for a review, see [25]). The polarized expression of Na+K+-ATPase in epithelia depends on the association of β-subunits from neighbouring cells [26]. The molecular weight of the α-subunit corresponds to the protein coprecipitating with p58 antibodies under non-denaturing conditions (not shown). Na+K+-ATPase has also been used in other cell systems as a basolateral marker protein.
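A minimal sketch of the peptide mass fingerprinting criterion described above: measured tryptic peptide masses are matched against an in silico digest within 100 ppm. The mass values are placeholders, and the probability-based MOWSE scoring itself is more involved than this matching step.

def ppm_error(measured, theoretical):
    # Relative mass error in parts per million
    return (measured - theoretical) / theoretical * 1e6

def match_peptides(measured, theoretical, tol_ppm=100.0):
    # Pair each measured mass with the first theoretical tryptic peptide
    # mass that falls within the stated tolerance.
    matches = []
    for m in measured:
        for t in theoretical:
            if abs(ppm_error(m, t)) <= tol_ppm:
                matches.append((m, t))
                break
    return matches

# Placeholder masses (Da) for illustration
print(match_peptides([1234.567, 987.654], [1234.561, 1500.0, 987.742]))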
MS identification of proteins from organisms with unsequenced genomes

Mass spectrometry driven sequence similarity searches now make it possible to characterize proteins from model organisms with unsequenced genomes by their similarity to already available sequences. Computational simulations suggested that almost all proteins within the mammalian phylogenetic lineage could be identified by MS BLAST sequence similarity searches using 10 sequenced tryptic peptides, which is a rather frequent outcome of tandem mass spectrometric experiments [10]. Importantly, the method imposes rather loose requirements on the quality of peptide sequences and thus paves the way to complete automation of the analytical routine. Mass spectrometric characterization of unknown proteins can be performed in a layered approach [7,27], i.e. conventional proteomics methods could be applied first to identify highly conserved proteins that share identical peptide sequences with their known homologues, and sequence similarity searches would only be applied to a selection of non-conserved proteins once the conventional methods failed. Thus we might anticipate that the scope of proteomics methods will be able to support biochemical research in any vertebrate model.

Conclusion

The apical marker glycoprotein gp114 has been enriched from tissue culture cells and identified by tandem mass spectrometry as canine carcinoembryonic antigen-related cell adhesion molecule (CEACAM). We exemplify the difficulties associated with identifying glycoproteins from model systems without sequenced genomes, and how to overcome them. The general strategy provides a framework which should be useful for many related approaches. Known properties of gp114 such as glycosylation dependent transcytosis and association with lipid microdomains involved in signal transduction can now be integrated with the knowledge about CEACAM proteins obtained by different approaches.

IgG-protein G sepharose columns

A membrane fraction from dog intestine was used for generation of monoclonal antibodies 4.6.5a (gp114) and 6.23.3 (p58) [4]. Hybridoma cells 4.6.5a and 6.23.3 were grown in serum-free HyQ SFX-MAb medium (HyClone, Logan, Utah) for two weeks. Supernatants were clarified by sequential centrifugation at 200 × g and 10,000 × g.

[Figure 3. Tandem mass spectrum of a precursor ion with m/z 656.8 and charge +4 (the charge was determined from the mass difference between its isotopic peaks). The precursor ion is labeled with an asterisk. The spectrum was partially interpreted by considering precise mass differences between adjacent fragment ions. Doubly charged fragment series rendered the sequence PGDTASLTWF, which was further extended toward the N-terminus using very low abundant ions in the m/z range > 1100 (not shown), but the sequence of the two N-terminal amino acid residues remained ambiguous (VL or LV). It was possible to determine the C-terminal amino acid (K) and a short sequence stretch (TVLP) spaced from the C-terminal lysine by two or three unknown amino acid residues (X). Bridging between the sequence stretches PGDTASLTWF and TVLP could have been achieved by one of three isobaric combinations of amino acid residues, and the order of amino acid residues remained unknown. Hence the peptide might contain the sequence WF-QGET-VL, or WF-QEGT-VL, or WF-QADT-VL, or WF-QDAT-VL, etc.]
Membrane preparation

MDCK cells were grown on plastic dishes corresponding to a surface area of 0.9 m², equivalent to 3.6 × 10⁹ cells. A postmitochondrial supernatant was obtained by homogenizing cells in 0.25 M sucrose, 3 mM imidazole pH 7.4 (13× pushing through a 22-gauge needle) and centrifugation at 4,000 × g. Membranes were pelleted for 30 minutes at 100,000 × g, and treated on ice for 30 minutes with TNE1 (20 mM Tris pH 7.4, 150 mM NaCl, 5 mM EGTA) containing 1% w/v Triton X-100. Under these conditions, p58 and gp114 are efficiently solubilized. Non-solubilized membranes were removed by centrifugation at 100,000 × g for 30 minutes, and the supernatant (19 mg total protein) used for immunoaffinity chromatography.

Immunoaffinity chromatography

The solubilized membrane preparations were passed three times over the IgG-protein G sepharose columns. Columns were washed with 50 ml of TNE2 (10 mM Tris pH 7.4, 150 mM NaCl, 1 mM EDTA) containing 0.1% w/v Tx-100. Elution with 0.1 M glycine pH 2.6, 0.1% w/v Tx-100 was in 1 ml steps. Eluted fractions were neutralized, concentrated by spin columns (Centricon YM-30), and then desalted and precipitated by methanol/chloroform extraction [28]. Aqueous supernatants were lyophilized and found to contain high amounts of gp114, but no p58. Highly glycosylated proteins have been reported to partition into the aqueous phase under these conditions [29]. Immunoprecipitation followed by enzymatic deglycosylation with PNGaseF (Roche) was according to standard procedures.

[Table 1. Peptide sequences of gp114 derived from MS/MS spectra, with MS BLAST alignments to corresponding peptides. X denotes unidentified amino acid residues; L stands for both Leu and Ile residues; B stands for a generic trypsin cleavage site (Arg or Lys); sequences in brackets are isobaric combinations of amino acid residues that could not be distinguished because the corresponding fragment ions were absent from the mass spectrum. All peptide sequence candidates from all fragmented precursors were merged into a single MS BLAST search string, allowing multiple sequence candidates per fragmented precursor. Peptides 2, 4, 5, 3 and 6 (underlined) are contained in one putative exon derived from sequence FE8.]

Mass spectrometry

Proteins separated by polyacrylamide gel electrophoresis were visualized by Coomassie staining, and bands were excised and digested by trypsin (Promega) as described [30]. 1 µl aliquots of digests were analyzed by MALDI peptide mapping on a Reflex IV MALDI TOF mass spectrometer (Bruker Daltonics, Germany) using AnchorChip™ targets as described [31]. Tryptic peptides were extracted from the gel matrix by 5% formic acid and acetonitrile, pooled and lyophilized. Peptides were sequenced by nanoelectrospray tandem mass spectrometry on a QSTAR Pulsar i quadrupole time-of-flight mass spectrometer (MDS Sciex, Canada). 40 tandem mass spectra were acquired from the digest of the gp114 band.
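The de novo interpretation illustrated in the Figure 3 caption reads amino acid residues off the mass spacings between adjacent fragment-ion peaks. The sketch below shows that step for a few monoisotopic residue masses; the tolerance value is an assumption, and real spectra additionally require assembling consistent fragment series, as discussed above.

# Monoisotopic residue masses in Da (subset); Leu and Ile are isobaric.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L/I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259,
    "F": 147.06841, "W": 186.07931,
}

def residues_from_mass_difference(delta, tol=0.02):
    # Return all residues whose mass matches the spacing between two
    # adjacent fragment-ion peaks; ambiguous cases remain ambiguous,
    # which is why candidate sequences stay degenerate.
    hits = [aa for aa, m in RESIDUE_MASS.items() if abs(delta - m) <= tol]
    return hits or ["X"]  # X = unassigned residue

print(residues_from_mass_difference(113.084))            # ['L/I']
print(residues_from_mass_difference(128.06, tol=0.05))   # ['Q', 'K'], ambiguous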
Uninterpreted tandem mass spectra were first used to search the protein sequence database MSDB using Mascot software (Matrix Science Ltd, UK) v.1.8 installed on a local server. No restrictions on species of origin or protein molecular weight were imposed. All Mascot hits were further verified by manual inspection of matched tandem mass spectra. Spectra which were not matched by Mascot were manually interpreted de novo. The interpretation of each spectrum rendered a few degenerate, redundant and incomplete peptide sequence candidates, which were assembled into a single MS BLAST [9] query string as described previously [32]. MS BLAST searches against the non-redundant protein database nrdb95 were performed on a web server [33].

In silico analysis

The human CD66a sequence was blasted against a dog genomic database [34]. The best match was obtained with sequence G630P617675FE8.T0, which was then translated into an amino acid sequence. After the dog genome became available, we repeated our homology searches. Since the FE8 data showed that four of the sequenced peptides (#2-#4-#5-#3) form an almost continuous stretch, we could probe the new (nucleotide) databases with a 65 amino acid sequence. Eight significant alignments were found, all on chromosome I. The top three alignments were investigated more closely (exclusion limit: better than 90% over 60 amino acids, taking into account that MS cannot distinguish between Ile and Leu or isobaric amino acid combinations). Only one translated nucleotide sequence contained peptides #1 and #6. Using human CEACAM family proteins for guidance, a tentative amino acid sequence surfaced out of merging putative exons. This sequence was 58-61% identical and 67-71% similar to human CEACAM family proteins 1, 5, 8 and 6.
ThERESA: Three-Dimensional Eclipse Mapping with Application to Synthetic JWST Data

Spectroscopic eclipse observations, like those possible with the James Webb Space Telescope, should enable 3D mapping of exoplanet daysides. However, fully-flexible 3D planet models are overly complex for the data and computationally infeasible for data-fitting purposes. Here, we present ThERESA, a method to retrieve the 3D thermal structure of an exoplanet from eclipse observations by first retrieving 2D thermal maps at each wavelength and then placing them vertically in the atmosphere. This approach allows the 3D model to include complex thermal structures with a manageable number of parameters, hastening fit convergence and limiting overfitting. An analysis runs in a matter of days. We enforce consistency of the 3D model by comparing vertical placement of the 2D maps with their corresponding contribution functions. To test this approach, we generated a synthetic JWST NIRISS-like observation of a single hot-Jupiter eclipse using a global circulation model of WASP-76b and retrieved its 3D thermal structure. We find that a model which places the 2D maps at different depths depending on latitude and longitude is preferred over a model with a single pressure for each 2D map, indicating that ThERESA is able to retrieve 3D atmospheric structure from JWST observations. We successfully recover the temperatures of the planet's dayside, the eastward shift of its hotspot, and the thermal inversion. ThERESA is open-source and publicly available as a tool for the community.

INTRODUCTION

Exoplanet atmospheric retrieval is a method of inferring the temperature and composition of extrasolar planets based on the observed spectrum and photometry. Retrieval can be done in two ways: by modeling planetary emission, measured through direct observation or monitoring flux loss during eclipse, and by modeling the transmission of stellar light through the planet's atmosphere while the planet transits its host star (e.g., Deming & Seager 2017). Historically, due to the challenges of observing planets in the presence of much larger, brighter stars, exoplanet retrievals have been limited to measurement of bulk properties, using a single temperature-pressure profile and set of molecular abundance profiles for the entire planet (e.g., Kreidberg et al. 2015; Hardy et al. 2017; Garhart et al. 2018). Analyses like these can be biased, with the retrieved properties possibly not representative of any single location on the planet (Feng et al. 2016; Line & Parmentier 2016; Blecic et al. 2017; Lacy & Burrows 2020; MacDonald et al. 2020; Taylor et al. 2020).

Eclipse mapping is a technique for converting exoplanet light curves to brightness maps. During eclipse ingress and egress, the planet's host star blocks and uncovers different slices of the planet in time. Brightness variations across the planet result in changes in the morphology of the eclipse light curve (Williams et al. 2006; Rauscher et al. 2007; Cowan & Fujii 2018). HD 189733 b is the only planet successfully mapped from eclipse observations, by stacking many 8.0 µm light curves (de Wit et al. 2012; Majeau et al. 2012). However, with the advent of the James Webb Space Telescope (JWST) in the near future, we expect observations of many more planets to be of sufficient quality for eclipse mapping (Schlawin et al. 2018).

2D eclipse mapping, where a single light curve is inverted to a spatial brightness map, is a complex process.
Depending on assumptions about the planet's structure, retrieved maps can be strongly correlated with orbital and system parameters (de Wit et al. 2012). Rauscher et al. (2018) presented a mapping technique using an orthogonal basis of light curves, reducing parameter correlations and extracting the maximum information possible.

In principle, spectroscopic eclipse observations, like those possible with JWST, should allow 3D eclipse mapping of exoplanets. Every wavelength probes different pressures in the planet's atmosphere. These ranges depend on the wavelength-dependent opacity of the atmosphere, which in turn depends on the absorption, emission, and scattering properties of the atmosphere's constituents. Thus, eclipse observations are sensitive to both temperature and composition as functions of latitude and longitude. In practice, however, 3D eclipse mapping is complex. Each thermal map computed from a spectral light curve corresponds to a range or ranges of pressures, which can vary significantly across the planet (Dobbs-Dixon & Cowan 2017). Unlike 2D mapping at a single wavelength, 3D mapping requires computationally-expensive radiative transfer calculations, which can be prohibitive when exploring model parameter space. Also, the 3D model parameter space is extensive, so one must make simplifying considerations based on the quality of the data.

Mansfield et al. (2020) introduced a method to use clustering algorithms with a set of multi-wavelength maps to divide the planet into several regions with similar spectra. While not a fully-3D model, this method shows promise for distinguishing spatial regions with distinct thermal profiles or chemical compositions, determined from atmospheric retrieval of the individual regions.

In this work we build upon Rauscher et al. (2018), combining eclipse mapping techniques with 1D radiative transfer to present a mapping method that captures the full 3D nature of exoplanet atmospheres while being maximally informative, with constraints to enforce physical plausibility and considerations that improve runtime. In Section 2 we describe our 2D and 3D mapping approaches, in Section 3 we apply our methods to synthetic observations, in Section 4 we compare our retrieved 2D and 3D maps with the input planet model, and in Section 5 we summarize our conclusions.

METHODS

Here we present the Three-dimensional Exoplanet Retrieval from Eclipse Spectroscopy of Atmospheres code (ThERESA; https://github.com/rychallener/ThERESA). ThERESA combines the methods of Rauscher et al. (2018) with thermochemical equilibrium calculations, radiative transfer, and planet integration to simultaneously fit spectroscopic eclipse light curves, retrieving the three-dimensional thermal structure of exoplanets. The code operates in two modes, 2D and 3D, with the former as a pre-requisite for the latter. The code structure is shown in Figure 1, with further description in the following sections. The version of ThERESA used for this analysis can be found at https://doi.org/10.5281/zenodo.5773215.

2D Mapping

ThERESA's 2D mapping follows the methods of Rauscher et al. (2018). First, we calculate a basis of light curves, at the supplied observation times, from positive and negative spherical-harmonic maps Y_l^m, up to a user-supplied complexity l_max, using the starry package (Luger et al. 2019). These light curves are then run through a principal component analysis (PCA) to determine a new basis set of orthogonal light curves, ordered by total power, which are used in a linear combination to individually model the spectroscopic light curves.

In typical PCA, one subtracts the mean from each observation (spherical-harmonic light curve), computes the covariance matrix of the mean-subtracted set of observations, then calculates the eigenvalues and eigenvectors of this covariance matrix. The new set of observations ("eigencurves") is the dot product of the eigenvectors and the mean-subtracted light curves, and the eigenvalues are the contributions from each spherical harmonic map to generate each eigencurve. However, the mean-subtraction causes the initial light curve basis set to have non-zero values during eclipse, a physical impossibility that propagates forward to the new basis set of orthogonal light curves. Integrating a map (an "eigenmap") created from these eigenvalues will not generate a light curve that matches the eigencurves. Therefore, we use truncated singular-value decomposition (TSVD), provided by the scikit-learn package (Pedregosa et al. 2011). TSVD does not do mean-subtraction, so the resulting eigencurves have zero flux during eclipse, as expected, which is an improvement over Rauscher et al. (2018). Figure 2 shows an example of the transformation from spherical-harmonic maps and light curves to eigenmaps and eigencurves.
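The TSVD step above can be sketched in a few lines with scikit-learn. The light-curve array here is random placeholder data standing in for the starry spherical-harmonic light curves; in a real analysis each row would be a Y_l^m light curve evaluated at the observation times.

import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
lcs = rng.normal(size=(16, 500))   # placeholder: (n_harmonics, n_times)

# TruncatedSVD performs no mean-centering, so a basis built from curves
# that are zero during eclipse stays zero during eclipse.
svd = TruncatedSVD(n_components=5)
weights = svd.fit_transform(lcs)   # contribution of each input curve
eigencurves = svd.components_      # orthogonal light curves, ordered by power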
These light curves are then run through a principle component analysis (PCA) to determine a new basis set of orthogonal light curves, ordered by total power, which are used in a linear combination to individually model the spectroscopic light curves. In typical PCA, one subtracts the mean from each observation (spherical-harmonic light curve), computes the covariance matrix of the mean-subtracted set of observations, then calculates the eigenvalues and eigenvectors of this covariance matrix. The new set of observations ("eigencurves") is the dot product of the eigenvectors and the mean-subtracted light curves, and the eigenvalues are the contributions from each spherical harmonic map to generate each eigencurve. However, the meansubtraction causes the initial light curve basis set to have non-zero values during eclipse, a physical impossibility that propagates forward to the new basis set of orthogonal light curves. Integrating a map (an "eigenmap") created from these eigenvalues will not generate a light curve that matches the eigencurves. Therefore, we use truncated singular-value decomposition (TSVD), provided by the scikit-learn package (Pedregosa et al. 2011). TSVD does not do mean-subtraction so the resulting eigencurves have zero flux during eclipse, as expected, which is an improvement over Rauscher et al. (2018). Figure 2 shows an example of the transformation from spherical-harmonic maps and light curves to eigenmaps and eigencurves. We fit each star-normalized spectroscopic light curve, individually, as a linear combination of N eigencurves, the uniform-map light curve Y 0 0 , and a constant offset s corr to account for any stellar flux normalization errors. Each wavelength has its own set of fit values, and potentially its own set of eigencurves, since N and l max can be different for each light curve. Functionally, the model is where F sys is the system flux, c i are the light-curve weights, and E i are the eigencurves. The light-curve weights and s corr are the free parameters of the model. Although the synthetic data in this work should have s corr = 0, we still fit to this parameter to better represent an analysis of real data. We run a χ 2 minimization to determine the best fit and then run Markov-chain Monte Carlo (MCMC), through MC3 (Cubillos et al. 2017), to fully explore the parameter space. By construction, the eigenmaps and eigencurves are deviations from the uniform map and its corresponding light curve, respectively, so negative values are possible. If the parameter space is left unconstrained, some regions of the planet may be best fit with a negative flux. To avoid this non-physical scenario, we check for negative fluxes across the visible cells of the planet, based on the times of observation, and penalize the fit. The penalty scales with the magnitude of the negative flux, and we ensure that any negative fluxes result in a worse fit than any planet with all positive fluxes, which effectively guides the fit away from non-physical planets. While the eigenmaps seemingly provide information about the non-visible cells of the planet, these constraints are simply a consequence of the continuity of spherical harmonic maps. Given that we have no real information on those portions of the planet, we do not insist that they have positive fluxes. We then use the best-fitting parameters with the matching eigenmaps to construct a thermal flux map for the light curve observed at each wavelength (Equation 4 of Rauscher et al. 
We then use the best-fitting parameters with the matching eigenmaps to construct a thermal flux map for the light curve observed at each wavelength (Equation 4 of Rauscher et al. 2018):

Z_p(θ, φ) = c_0 Y_0^0(θ, φ) + Σ_{i=1}^{N} c_i Z_i(θ, φ),

where Z_p is the thermal flux map, θ is latitude, φ is longitude, and Z_i are the eigenmaps. These flux maps are converted to temperature maps using Equation 8 of Rauscher et al. (2018):

T(θ, φ) = (hc / λ k_B) [ln(1 + (e^{hc/(λ k_B T_s)} − 1) / (Z_p(θ, φ) (R_s/R_p)²))]⁻¹,

where λ is the band-averaged wavelength of the filter used to observe the corresponding light curve, R_p is the radius of the planet, R_s is the radius of the star, and T_s is the stellar temperature.

When performing 2D mapping, the primary user decisions are choosing l_max, the maximum order of the spherical harmonic light curves, and N, the number of eigencurves to include in the fit. Larger l_max (up to a limit) and N result in better fits, as more complex thermal structures become possible. However, these complex planets are often not justified by the quality of the data, so we compare fits with the Bayesian Information Criterion:

BIC = χ² + k ln(n_data),    (4)

where χ² is the traditional goodness-of-fit metric, k is the number of free parameters in the model (N + 2 per light curve, assuming a uniform map term c_0 and s_corr), and n_data is the number of data points being fit. Thus, the BIC penalizes fits which are overly complex. We choose the 2D fit which results in the lowest BIC as the best fit.

Application to Spitzer HD 189733b Observations

To test our implementation of the methods in Rauscher et al. (2018), and the effects of TSVD PCA, we performed the same analysis of Spitzer phase curve and eclipse observations of HD 189733b. The data are the same as those used by Majeau et al. (2012), which include eclipse observations (Agol et al. 2010) and approximately a quarter of a phase curve (Knutson et al. 2007, re-reduced by Agol et al. 2010). Like Rauscher et al. (2018), we tested a range of values for l_max and N, using the BIC to choose the model with the highest complexity justified by the data.

Table 1 compares our goodness-of-fit statistics with Rauscher et al. (2018). We also determine that the optimal fit uses l_max = 2 and N = 2 (see Figure 3), and our preference for N = 2 over N = 3 is stronger. As expected, models with higher N result in lower χ². For low N, we achieve better χ² and BIC values than Rauscher et al. (2018), although the difference is slight at N = 2. At higher N, differences between the results are statistically negligible. These differences are likely due to the PCA methods used and any differences in the spherical-harmonic light-curve calculation packages used, since Rauscher et al. (2018) employed SPIDERMAN (Louden & Kreidberg 2018).

[Figure 3. Results of a 2D fit to the HD 189733b Spitzer observations. Top: the light-curve data and best-fitting model. The inset shows the best-fitting thermal map; a black box covers the longitudes not visible during the observations, and the hotspot location is marked with a black dot. Bottom: the model residuals and uncertainties. Data are binned for visual clarity, to 10 data points per bin.]

Since our eigencurves differ from those used by Rauscher et al. (2018), there is no meaningful comparison between the best-fitting c_i parameters. However, the stellar correction s_corr has the same function in both works. Notably, we find s_corr = −14 ± 20 ppm, consistent with zero, whereas Rauscher et al. (2018) find a stellar correction of 452 (+39/−40) ppm. It is likely that our use of TSVD PCA to enforce zero flux during eigencurve eclipse means the stellar correction term is only correcting for normalization errors, and not also ensuring zero planet flux during eclipse.
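To make the flux-to-temperature conversion above concrete, here is a short sketch that inverts the Planck-ratio relation in that equation. It assumes Z_p is the planet-to-star flux-ratio map at the band-averaged wavelength; all quantities are in SI units and the input values are placeholders.

import numpy as np
import scipy.constants as sc

def brightness_temperature(Zp, lam, Rp, Rs, Ts):
    # T = (hc / lam k) / ln[1 + (e^{hc/(lam k Ts)} - 1) / (Zp (Rs/Rp)^2)]
    a = sc.h * sc.c / (lam * sc.k)
    return a / np.log1p(np.expm1(a / Ts) / (Zp * (Rs / Rp) ** 2))

# Placeholder values: a 1000 ppm flux ratio at 2 micron around a 5000 K star
print(brightness_temperature(Zp=1e-3, lam=2e-6, Rp=1.0e8, Rs=7.0e8, Ts=5000.0))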
To determine the location of the planet's hotspot, we use starry to find the location of maximum brightness of the best-fit 2D thermal map. Then, to calculate an uncertainty, we repeat this process on 10,000 maps sampled from the MCMC posterior distribution, taking the standard deviation of these locations to be the uncertainty. For HD 189733b, we find a hotspot longitude of 21.8 ± 1.5° to the east of the substellar point, which very closely matches the 21.6 ± 1.6° and 21.8 ± 1.5° found by Rauscher et al. (2018) and Majeau et al. (2012), respectively. Thus, we are confident that we have accurately implemented 2D mapping.

3D Mapping

Atmospheric opacity varies with wavelength, so spectroscopic eclipse observations probe multiple depths of the planet, in principle allowing for three-dimensional atmospheric retrieval. However, the relationship between wavelength and pressure is very complex. Each wavelength probes a range or ranges of pressures, and those pressures change with location on the planet (Dobbs-Dixon & Cowan 2017). Here, we combine our 2D maps vertically within an atmosphere, run radiative transfer, integrate over the planet, and compare to all spectroscopic light curves simultaneously with MCMC to retrieve the 3D thermal structure of the planet and parameter uncertainties. We assume that the planet's atmosphere is static, so that observation time only affects viewing geometry. The complexity of the radiative transfer calculation and the resulting computation cost forces us to discard the arbitrary resolution of the 2D maps in favor of a gridded planet. The grid size is up to the user (see Section 3.3.2), but smaller grid cells quickly increase mapping runtime.

The 3D planet model is the relationship between 2D temperature maps and pressures. ThERESA offers several such models, sketched in code after this list:

1. "isobaric" - The simplest model, where each 2D map is placed at a single pressure across the entire planet. This model has one free parameter, a pressure (in log space), for each 2D map.

2. "sinusoidal" - A model that allows the pressure of each 2D map to vary as a sinusoid with respect to latitude and longitude. This model has four free parameters for each 2D map: a base pressure, a latitudinal pressure variation amplitude, a longitudinal pressure variation amplitude, and a longitudinal shift, since hot Jupiters often exhibit hotspot offsets. Functionally, the model is

log p(θ, φ) = a_1 + a_2 cos(θ) + a_3 cos(φ − a_4),

where a_i are free parameters of the model. For simplicity, this model does not allow for latitudinal asymmetry, although if the 2D temperature maps are asymmetric in that manner (detectable in inclined orbits), we might expect a similar asymmetry in these pressure maps.

3. "quadratic" - A second-degree polynomial model, including cross terms, for a total of six free parameters. The functional form is

log p(θ, φ) = a_1 + a_2 θ + a_3 φ + a_4 θφ + a_5 θ² + a_6 φ².

Unlike the isobaric and sinusoidal models, the quadratic model is not continuous opposite the substellar point, where longitude rolls over from 180° to −180°. This is not an issue for eclipse observations, like those presented in this work, but may be problematic if all phases of the planet are visible in an observation. A similar problem occurs at the poles, although these regions contribute very little to the observed flux.

4. "flexible" - A maximally-flexible model that allows each 2D map to be placed at an arbitrary pressure for each grid cell. The number of free parameters is the number of visible grid cells multiplied by the number of 2D maps.

These models represent a range of complexities, and the appropriate function can be chosen using the BIC (Equation 4), assuming that comparisons are made between fits to the same data.
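A sketch of the sinusoidal and quadratic placement functions as written above, mapping latitude and longitude (in radians) to the log-pressure at which a 2D map sits. The function names and the example values are illustrative.

import numpy as np

def sinusoidal_logp(lat, lon, a1, a2, a3, a4):
    # log p = a1 + a2*cos(lat) + a3*cos(lon - a4); symmetric about the equator
    return a1 + a2 * np.cos(lat) + a3 * np.cos(lon - a4)

def quadratic_logp(lat, lon, a1, a2, a3, a4, a5, a6):
    # Second-degree polynomial with a cross term; discontinuous where
    # longitude rolls over at +/-180 degrees
    return a1 + a2*lat + a3*lon + a4*lat*lon + a5*lat**2 + a6*lon**2

# One map placed around 0.1 bar (log10 p = -1) with mild spatial variation
lat, lon = np.deg2rad(30.0), np.deg2rad(45.0)
print(sinusoidal_logp(lat, lon, -1.0, 0.2, 0.3, np.deg2rad(15.0)))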
Once the temperature maps are placed vertically, ThERESA interpolates each grid cell's 1D thermal profile in logarithmic pressure space. The interpolation can be linear, a quadratic spline, or a cubic spline. For pressures above and below the placed temperature maps, temperatures can be extrapolated, set to isothermal with temperatures equal to the closest temperature map, or parameterized, as T_top and T_bot. When parameterized, the top and/or bottom of the atmosphere are given single temperatures across the entire planet and the pressures between are interpolated according to the chosen method. If any negative temperatures are found in the atmosphere grid cells that are visible during the observation, we discard the fit to avoid these non-physical models.

In principle, atmospheric constituents could also be fitted parameters. If the data are of sufficient quality, molecular abundance and temperature variations across the planet would create observable emission variability depending on the opacity of the atoms and molecules. However, without real 3D-capable data to test against, we assume the planet's chemistry is in thermochemical equilibrium, with solar atomic abundances. ThERESA offers two ways to calculate thermochemical equilibrium: rate (Cubillos et al. 2019) and GGchem (Woitke et al. 2018). With rate, ThERESA calculates abundances on-the-fly as needed. When using GGchem, the user must supply a pre-computed grid of abundances over an appropriate range of temperatures and at the same pressures as the atmosphere used in the 3D fit, which ThERESA interpolates as necessary. ThERESA was designed with flexibility in mind, so other abundance prescriptions can be inserted.

Next, ThERESA runs a radiative transfer calculation on each grid cell to compute planetary emission. We use TauREx 3 (Al-Refaie et al. 2019) to run these forward models. Like the atmospheric abundances, the forward model can be replaced with any similar function. We only run radiative transfer (and thermochemical equilibrium) on visible cells to reduce computation time. Spectra are computed at a higher resolution, then integrated over the observation filters ("tophat" filters for spectral bins). For computational feasibility, we use the ExoTransmit (Kempton et al. 2017) molecular opacities (Freedman et al. 2008, 2014; Lupu et al. 2014), which have a resolution of 10³, but we note that using high-resolution line lists such as HITRAN/HITEMP (Rothman et al. 2010; Gordon et al. 2017) or ExoMol (Tennyson et al. 2020) would be preferred for accuracy. In this work we use wide spectral bands, which minimizes the effect of using low-resolution opacities.

Finally, we integrate over the planet at the observation geometry of each time in the light curves. The visibility V of each cell, computed prior to modeling to save computation time, is the integral of the area and incident angles combined with the blocking effect of the star, given by

V(θ, φ, t) = [∫_cell cos²θ′ cos(φ′ − φ_t) dθ′ dφ′] × Θ(d − R_s),    (7)

where φ_t is the sub-observer longitude at time t, Θ is a step function equal to 1 when the center of the cell is unocculted (d ≥ R_s) and 0 otherwise, and d is the projected distance between the center of the visible portion of a grid cell and the center of the star, defined as

d = √((x_p + x_c − x_s)² + (y_p + y_c − y_s)²),

where x_c and y_c are the projected offsets of the grid cell from the planet center, x_p is the x position of the planet, x_s is the x position of the star, y_p is the y position of the planet, and y_s is the y position of the star, calculated by starry for each observation time. These positions include the effect of inclination. We multiply the planetary emission grid by V and sum at each observation time to calculate the planetary light curve. Like the 2D fit, we repeat this process in MCMC to fully explore parameter space. Due to model complexity, traditional least-squares fitting does not converge to a good fit, so we rely on MCMC for both model optimization and parameter space exploration.
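A sketch of the visibility weighting and planet integration described above, treating each cell center as fully visible or fully occulted. Angles are in radians, and the geometry variables are placeholders for the starry-computed positions.

import numpy as np

def visibility(lat, lon, sublon, d, Rs):
    # Foreshortened projected area ~ cos(lat)^2 * cos(lon - sublon), zeroed
    # for cells facing away or with centers behind the stellar disk.
    mu = np.cos(lat) * np.cos(lon - sublon)
    area = np.cos(lat) * np.clip(mu, 0.0, None)
    return np.where(d >= Rs, area, 0.0)

def planet_flux(emission, vis):
    # emission and vis are (n_cells,) arrays for one filter at one time
    return np.sum(emission * vis)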
Due to model complexity, traditional least-squares fitting does not converge to a good fit, so we rely on MCMC for both model optimization and parameter space exploration. Enforcing Contribution Function Consistency Given absolute freedom to explore the parameter space, the 3D model will often find regions of parameter space which, while resulting in a "good" fit to the spectroscopic light curves, are physically implausible upon further inspection. For instance, the model may bury some of the 2D maps deep in the atmosphere where they have no effect on the planetary emission, or maps may be hidden vertically between other maps, where the combination of linear interpolation and discrete pressure layers causes them to not affect the thermal structure. To avoid scenarios like these, ThERESA has an option where maps are required to remain close (in pressure) to their corresponding contribution functions. Contribution functions show how much each layer of the atmosphere is contributing to the flux emitted at the top of the atmosphere as a function of wavelength (e.g. Knutson et al. 2009). When integrated over a filter's transmission curve, they show which layers contribute to the emission in that filter. That is, the contribution function for a given filter shows which pressures of the atmosphere are probed by an observation with that filter. Therefore, a 2D thermal map should be placed at a pressure near the peak contribution, or at least near the integrated average pressure, weighted by the contribution function for the corresponding filter. To enforce this condition, we apply a penalty on the 3D model's χ 2 . For each visible grid cell on the planet, we calculate the contribution function for each filter. We then treat this contribution function, in log pressure space, as a probability density function and compute the 68.3% credible region. Then, we take half the width of this region to be an approximate 1σ width of the contribution function, and the midpoint of this region to be the approximate "correct" location for the map. From this width and location, we compute a χ 2 for each contribution function and for each grid cell, and add that to the light-curve χ 2 . We caution that this effectively adds a number of data points equal to the number of visible grid cells, which means BIC comparisons between models that use contribution function fitting and have different grid sizes are invalid. Light-curve Generation In lieu of real observations, and as a ground-truth test case, we generated synthetic spectroscopic eclipse light curves of WASP-76b, an ultra-hot Jupiter with a strong eclipse signal. For the thermal structure, we adopt the results of a double-grey general circulation model (GCM, Rauscher & Menou 2012; Roman & Rauscher 2017) from Beltz et al. (2021), with no magnetic field. This thermal structure is shown in Figure 4. The GCM output has 48 latitudes, 96 longitudes, and 65 pressure layers ranging from 100 -0.00001 bar in log space. The double-grey absorption coefficients used in the GCM were chosen to roughly reproduce the average temperature profile for this planet using observational constraints from Fu et al. (2021), which includes a thermal inversion, likely due to TiO in the atmosphere. However, since it is double-gray, the GCM is only using the absorption coefficients to recreate the radiative state of the atmosphere, and is generally agnostic to the atmospheric composition. Since WASP-76b is too hot for rate, we use GGchem to compute molecular abundances. 
We then use TauREx to calculate grid emission, including opacity from H2O, CO, CO2, CH4, HCN, NH3, C2H2, and C2H4, and integrate the planet emission following the visibility function described in Equation 7. The radiative transfer includes no opacity from clouds, because ultra-hot Jupiters are not expected to form clouds, especially on the dayside (Helling et al. 2021; Roman et al. 2021). This is the same process used in the light-curve modeling, except the GCM has a higher grid resolution and the temperatures are set by a circulation model instead of the vertical placement of thermal maps. Our forward model (GCM and radiative transfer) and retrieval do not explicitly include TiO opacity, so our results are still self-consistent with the ground truth. For the light-curve simulation, we assume the system parameters listed in Table 2. We choose a wavelength range of 1.0 to 2.5 µm, roughly equivalent to the first order of the JWST NIRISS single-object slitless spectroscopy observing mode, a recommended instrument and mode for transiting exoplanet observations. Using the JWST Exposure Time Calculator (https://jwst.etc.stsci.edu), we determine the optimal observing strategy is 5 groups per integration, for a total exposure time of 13.30 s. We assume an observation from 0.4 to 0.6 orbital phase with this exposure time, for a total of 2352 exposures over 8.69 hours. We calculate light-curve uncertainties as photon noise for a single eclipse, assuming a planetary equilibrium temperature of 2190 K. We divide the spectrum into 5 spectral bins of equal size in wavelength space and calculate the photon noise of each bin, assuming Planck functions for planetary and stellar emission. Star-normalized uncertainties range from 26 ppm at 1.14 µm to 55 ppm at 2.35 µm. These are optimistic, uncorrelated uncertainties, but until JWST is operational it is difficult to predict its behavior. 2D Maps We fit 2D maps to the synthetic light curves using the methods described in Section 2. Through a BIC comparison between all (l_max, N) pairs for each wavelength, we determine the best l_max and N for each light curve (see Table 3). As expected, goodness-of-fit improves as N increases, and reaches a limit where increasing l_max no longer improves the fit in a substantial way, once the spherical harmonics capture the observable temperature structure complexity. It is important to fit each light curve with its own l_max and N; in our case, using a single combination of l_max and N results in worse fits (larger BICs) for all five light curves, with a total BIC ≈ 15 higher than the total BIC achieved with individual combinations of l_max and N. The four eigenmaps from the fit to the 2.05 µm data, scaled by their best-fitting c_i parameters, as well as the scaled uniform map component, are shown in Figure 5. The first eigencurve fits the large-scale substellar-terminator contrast, which is extremely significant for ultra-hot Jupiters and contributes most of the light-curve flux variation, evident in the significant magnitude of the scaled eigenmap. The second eigencurve adjusts for the eastward shift of the planet's hotspot. The third and fourth curves make further corrections to the temperature variation between the substellar point and the dayside terminators.
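The per-bin photon noise can be reproduced, up to an overall calibration constant, from the Planck functions alone. A sketch follows; `norm` bundles the collecting area, throughput, stellar solid angle, and total exposure time, an assumption of this example rather than a description of how the Exposure Time Calculator works internally:

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Planck function B_lambda(T) in W m^-3 sr^-1, wl in meters."""
    return 2 * h * c**2 / wl**5 / np.expm1(h * c / (wl * kB * T))

def photon_noise(wl_lo, wl_hi, t_star, norm):
    """Star-normalized photon noise for one spectral bin."""
    wl = np.linspace(wl_lo, wl_hi, 200)
    # Photon flux ~ energy flux / photon energy, integrated over the bin.
    n_star = norm * np.trapz(planck(wl, t_star) * wl / (h * c), wl)
    return 1.0 / np.sqrt(n_star)  # Poisson noise, normalized to the star
```

With a fixed `norm`, this reproduces the qualitative trend in the text: redder bins collect fewer stellar photons from a hot star, so their star-normalized uncertainties are larger.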
Eigenmap information content decreases with distance from the substellar point, reaching nearly zero in non-visible portions of the planet, where any variation is only due to the continuity and integration normalization of the original spherical harmonics. Figure 6 shows the light curves at each wavelength, the best-fitting models, and the residuals. Figure 7 shows the best-fitting thermal maps compared with filter-integrated maps of the GCM. 3D Map As described above, we fit a 3D model to the spectroscopic light curves by placing 2D maps vertically in the atmosphere, interpolating a 3D temperature grid, computing thermochemical equilibrium of the 3D grid, calculating radiative transfer for each planetary grid cell, and integrating over the planet for the system geometry of each exposure in the observation. Since WASP-76b is an ultra-hot Jupiter (zero-albedo, instantaneous-redistribution equilibrium temperature of 2190 K), we use GGchem to calculate thermochemical equilibrium abundances. As with the light-curve generation, the radiative transfer includes H2O, CO, CO2, CH4, HCN, NH3, C2H2, and C2H4 opacities, all from ExoTransmit. The atmosphere has 100 discrete layers, evenly spaced in logarithmic pressure space, from 10^−6 to 10^2 bar. Spectral Binning With spectroscopic observations, the spectral binning is up to the observer. This choice is far from simple. Smaller bins lead to a better-sampled spectrum, and each wavelength bin likely probes a smaller range of pressures, potentially allowing for better characterization of the atmosphere, as the atmosphere is less likely to be well fit with all maps placed at similar pressures. However, small wavelength bins lead to more uncertain light curves and, thus, more uncertain 2D maps to use in the 3D retrieval. Since we do not incorporate the uncertainty of the 2D maps in our 3D retrieval (aside from adjusting their positions vertically), we take the cautious approach of minimizing this uncertainty by using only five evenly-spaced spectral bins from 1.0 to 2.5 µm. Planet Grid Size Our formulation of the visibility function (Equation 7) counts planet grid cells as either fully visible or obscured depending on the position of the center of the grid cell relative to the center of the projected stellar disk. This approximation approaches a smooth eclipse for infinitely small grid cells, but 3D model calculation time scales approximately proportionally to the number of grid cells, quickly becoming infeasible. Therefore, we must choose a planetary grid resolution that adequately captures the eclipse shape within observational uncertainties and keeps model runtime manageable. Figure 8 compares uniformly-bright WASP-76b eclipse ingress models for a range of grid resolutions with the analytic ingress model. At a resolution of 15° × 15°, the deviations from the analytic model are within the 1σ region for the filter which yields the highest signal-to-noise ratio, so we adopt this grid resolution. Temperature-Pressure Interpolation By default, ThERESA uses linear interpolation, in log-pressure space, to fill in the temperature profiles between the 2D maps. Since the atmosphere is discretized into pressure layers, if more than two thermal maps are placed between two adjacent pressure layers for a given grid cell, the central maps of this grouping have no effect on that cell's 3D thermal structure.
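The default linear interpolation scheme is concise to implement. Here is a minimal sketch for a single grid cell (not ThERESA's exact routine), including the optional parameterized boundary temperatures:

```python
import numpy as np

def build_profile(logp_layers, map_logp, map_temp, t_top=None, t_bot=None):
    """Interpolate one grid cell's vertical temperature profile from the
    2D maps placed at log-pressures map_logp with temperatures map_temp.

    t_top/t_bot are the optional parameterized boundary temperatures; when
    None, the profile is isothermal beyond the outermost maps (one of the
    extrapolation options described above).
    """
    order = np.argsort(map_logp)
    xp, fp = map_logp[order], map_temp[order]
    if t_top is not None:  # prepend a top-of-atmosphere anchor point
        xp = np.concatenate(([logp_layers.min()], xp))
        fp = np.concatenate(([t_top], fp))
    if t_bot is not None:  # append a bottom-of-atmosphere anchor point
        xp = np.concatenate((xp, [logp_layers.max()]))
        fp = np.concatenate((fp, [t_bot]))
    # np.interp is linear in log-pressure and clamps to the end values,
    # which reproduces the isothermal extrapolation option.
    return np.interp(logp_layers, xp, fp)
```

The map-hiding pathology the text describes is visible here: np.interp only consults the two anchor points bracketing each layer, so any map that never brackets a layer leaves the returned profile unchanged.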
This problem is significantly exacerbated when using the isobaric pressure mapping method, where entire 2D maps, rather than just a single cell of a map, can be hidden. We experimented with quadratic and cubic interpolation to mitigate this problem, as those methods make use of more than just the two thermal maps nearest to the interpolation point. However, these interpolation methods cause significant unrealistic variation in the vertical temperature structure, often creating negative temperatures and artificially limiting parameter space (as these negative-temperature models are rejected). Therefore, we elect to use linear interpolation and rely on the contribution function constraint (see Section 2.2.1) to guide the 2D thermal maps to consistent pressure levels. While it is still possible to hide maps, for a good fit this will only happen when maps have similar temperatures. Pressure Mapping Function We tested each of the pressure mapping functions described in Section 2.2 (isobaric, sinusoidal, quadratic, and flexible) to determine which is most appropriate for our synthetic WASP-76b observation. For all four cases, we use an additional parameter to set the planet's internal temperature (at 100 bar) but leave the upper atmosphere isothermal at pressures lower than the highest 2D map in each grid cell. The isobaric model, which contains only six free parameters (a pressure level for each 2D temperature map and an internal temperature), fits the data quite well. (Table 3 note: the optimal fit for each light curve is shown in black; fits with ∆BIC ≤ 2, a model preference of ≈3:1 or better, in blue; fits with 2 < ∆BIC ≤ 5, a preference of ≈3:1 to 10:1, in orange; and all worse fits in red. In some cases, e.g., l_max = 2, N = 2 and l_max = 3, N = 2 for 1.44 µm, the BICs appear identical when printed with these digits, but the true best case is in black.) We achieve a reduced χ² of 1.067 and a BIC of 13750.55, including the penalty for contribution function fitting. The fit places the 2D maps for the four longest wavelengths, which all have similar temperatures, at ≈0.05 bar and puts the 1.14 µm map slightly deeper, at ≈0.08 bar, creating the temperature inversion seen in the GCM (see Figure 4). Figure 9 shows the 3D temperature structure, with each profile weighted by the contribution functions of each filter in each grid cell. The sinusoidal pressure mapping function will always result in a fit at least as good as the isobaric model, as there is a set of sinusoidal model parameters that replicates the isobaric model, but the additional complexity may not be justified by the quality of our data. Here, however, we find a significant improvement. First, we use the sinusoidal function but fix the phase of the longitudinal sinusoid (a_4 in Equation 5) equal to the hotspot longitude, effectively assuming the contribution function variation follows temperature variation. With this parameterization, we find a reduced χ² of 1.013 and a BIC of 13142.84. If we also allow the a_4 terms to vary, the sinusoidal model is no longer forced to tie the hottest part of each 2D map to the highest or lowest pressure of that map's vertical position. (Figure 7 caption, right panels: 2D thermal maps derived from fits to each light curve, plotted at the resolution of the GCM although they can be computed at an arbitrary resolution; blank spaces have undefined temperatures, as the 2D fit produces negative flux in those regions, which is permitted because those grid cells are never visible during the observation and we formally have no information about them.) This added flexibility improves the fit to a reduced χ² of 0.989 and a BIC of 12879.60. Looking at Figure 9, we see that the hottest portion of the 1.75 µm 2D map is shifted to higher pressures and the 1.14 µm map is pushed deeper, smoothing out the temperature inversion. The quadratic model is the next step up in complexity. With this model, we achieve a reduced χ² of 1.037 and a BIC of 13580.20. This is a worse fit than the sinusoidal models, and a much higher BIC, suggesting that the sinusoidal model better captures the 3D placement of the 2D maps using fewer free parameters. (Figure 8 caption: uniformly-bright eclipse ingress models, computed with Equation 7 for a range of grid sizes, compared to the true ingress shape computed analytically with starry; the bottom panel shows the difference between the true ingress and the gridded planets, with dashed lines indicating the normalized 1σ region for the highest signal-to-noise filter in our synthetic observation of WASP-76b. The light curve of a sufficiently high-resolution grid should fall within these boundaries.) We also tested the "flexible" 3D model. With 12 latitudes and 24 longitudes, and a visible range of φ ∈ (−126°, 126°), there are 216 visible and partially-visible grid cells. With 5 wavelength bins and an internal temperature parameter, there are 1081 model parameters in total (including the internal temperature). In a best-case scenario, we would achieve a reduced χ² of 1, implying a BIC ≥ 21882.26, far greater than the BICs we achieve with simpler, less flexible models. Without a drastic reduction in observational uncertainties, the data are unable to support such a model. In fact, such a reduction in light-curve uncertainties would require a finer planetary grid (see Section 3.3.2), in turn requiring additional parameters, further increasing the BIC and decreasing preference for this model. As with choosing N in the 2D mapping, we use the BIC to determine the optimal 3D model. For these data, we achieve the lowest BIC using the free-phase sinusoidal model. Credible Region Errors To assess the completeness of the MCMC parameter-space exploration, we compute the Steps Per Effectively Independent Sample (SPEIS), Effective Sample Size (ESS), and the absolute error on our 68.3% (1σ) credible region, σ_C, of our posterior distribution following Harrington et al. (2021). We compute SPEIS using the initial positive sequence estimator (Geyer 1992), then divide the total number of iterations by the SPEIS to calculate ESS. Then, σ_C = √[C(1 − C)/ESS], where C = 0.683 is a given credible region. We calculate an ESS for each parameter of each chain and sum over all chains to get a total ESS. (Figure 9 caption: left to right, the input temperature grid from the GCM, the isobaric retrieval, the fixed-phase sinusoidal retrieval, the free-phase sinusoidal retrieval, and the quadratic retrieval. The colored lines indicate temperature profiles along the equator, with the GCM downsampled to the spatial resolution of the fitted temperature grid; all others are plotted in gray. For the fitted grid, the plotted color opacity of each profile is weighted by the maximum contribution function (over all filters) for each grid cell and pressure layer, normalized to the largest contribution; non-visible grid cells have a contribution of 0. Dots indicate the placement of the 2D temperature maps for each visible grid cell, and their plotting opacity is the ratio of the total contribution from that grid cell and filter to the maximum total contribution.)
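The BIC bookkeeping behind all of these comparisons is easy to reproduce. For a Gaussian likelihood, BIC = χ² + k ln(n) up to a model-independent constant, and a difference ∆BIC corresponds to an approximate preference ratio of e^(∆BIC/2). The helper below reproduces the ≈3:1 preference quoted for ∆BIC = 2 and the ≈19:1 preference quoted later for ∆BIC = 5.92:

```python
import numpy as np

def bic(chisq, nfree, ndata):
    """Bayesian Information Criterion for a Gaussian likelihood,
    up to a model-independent constant."""
    return chisq + nfree * np.log(ndata)

def preference(bic_a, bic_b):
    """Approximate odds ratio favoring the lower-BIC model."""
    return np.exp(abs(bic_a - bic_b) / 2.0)

# preference(b, b + 2.0)  -> ~2.7, i.e., roughly 3:1
# preference(b, b + 5.92) -> ~19,  i.e., roughly 19:1
```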
orbits with non-zero (but less than 1 − R_p/R_s) impact parameters. Using the subsample of posterior maps, we can also calculate uncertainties on the temperature maps, at arbitrary resolution, by evaluating the standard deviation of the temperature in each grid cell. Figure 11 shows the full temperature uncertainty maps and equatorial profiles. (Figure 11 caption: uncertainties in the 2D temperature map fits, calculated as the standard deviation of the posterior distribution of maps. Plots are at the spatial resolution of the GCM, although they can be computed at any resolution, and axes are limited to locations visible during the observation. Left: difference between the GCM, in black, and fitted equatorial temperatures, with shaded regions denoting the 1σ, 2σ, and 3σ boundaries. Right: temperature uncertainty maps, calculated as the standard deviation at each location.) Uncertainties are low, at ≈10 K, for grid cells that are visible throughout the observation. Grid cells at higher longitudes are less well constrained because (1) they are only visible or partially visible for a portion of the observation, and (2) they contribute less to the planet-integrated flux due to the visibility function (Equation 7). Likewise, high latitudes are less well constrained. While the thermal maps agree with the GCM at many locations on the planet, there are a few places where the discrepancy is significant relative to the uncertainties. In several bandpasses we overestimate the temperature of the hotspot, four of the maps overestimate the temperature of the western terminator, and all five maps underestimate temperatures at the extreme ends of the visible longitudes. These discrepancies can all be explained by the simplicity of the 2D model, which only contains two to four statistically-justified eigencurves. The model cannot capture the fine details of the GCM thermal structure, and the uncertainties are likewise constrained. Indeed, if we increase the number of eigencurves, we both improve the match with the GCM and increase the temperature uncertainties to encompass the difference between the fit and truth (see Figure 12). However, this also significantly increases the uncertainty on the location of the planet's hotspot, and the additional parameters are not statistically justified, evident in the minor changes to the best fit as more parameters are introduced. For example, adding only one more eigencurve to the 1.14 µm fit (N = 3) increases the BIC by 5.92 (Table 3), a preference for the N = 2 model of ≈19:1. This figure also shows the fraction of the observation where each equatorial longitude is visible, which highlights the correlation between the observability of a location on the planet and the uncertainty on the retrieved temperature of that location. When interpreting analyses of real data, we must be aware of these limitations. (Figure 12 caption: a comparison of the equatorial temperature uncertainties on 1.14 µm maps retrieved with different numbers of eigencurves; the blue region denotes 1, 2, and 3σ, and only visible longitudes are shown. Top: the equatorial temperatures of the GCM and the best fit. Middle: the difference between the GCM and fitted equatorial temperatures; the blue data point shows the best-fit hotspot longitude and its 3σ uncertainty (y location arbitrary). Bottom: the fraction of the observation during which a given longitude, along the equator, is visible to any degree, including the effects of both planet rotation and the occultation by the star; the plateau centered on 0° shows the longitudes which are visible for the whole observation, except during eclipse, and the smaller plateaus near −90° and 90° are the longitudes visible for the entirety of pre- or post-eclipse but not vice versa, due to planet rotation during eclipse.) 3D Retrieval At first glance, the best-fitting 3D models all appear to be physically unrealistic, with internal temperatures at ≈500 K, much lower than the GCM internal temperature of ≈3000 K. However, if we examine the contribution functions of the best-fitting model (shown in line opacity in Figure 9), we see that our spectral bins are primarily sensitive to pressures from 0.001 to 1 bar, and the majority of the emitted flux comes from the hottest grid cells at pressures between 0.01 and 0.1 bar, far from the interior of the planet at 100 bar. Therefore, the internal temperature parameter only affects the emitted flux by controlling the temperature gradient near the deepest thermal map, and the absolute value of the parameter is unimportant. In all our fits, this extends the thermal inversion to the deepest visible pressure layers, consistent with the thermal inversion in the GCM, which continues down to ≈0.5 bar at the substellar point. The temperature structure as a whole is physically implausible at higher pressures, but the portions sensed by the observation are reasonable and similar to the GCM. We note that our MCMC analyses started from a more plausible internal temperature of 3000 K, using a uniform prior over the range [0, 4000] K, but without wavelengths which probe the deep atmosphere, those fits were quickly ruled out in favor of the fits presented here. To further understand the effects of the T_bot parameter, we examined the thermal profiles in the MCMC posterior as a function of T_bot. First, we looked for potential correlations between T_bot and the location of the thermal inversion, measured by evaluating the 3D thermal profiles at high vertical (pressure) resolution and finding the pressure level where the temperature gradient becomes negative. This relationship, for the substellar point, is shown in Figure 13, which reveals a minor positive correlation, although the variation in the inversion pressure is small. Thus, the T_bot parameter is not affecting the location of the thermal inversion. There are many similarities between the isobaric fit and the two sinusoidal fits. In all cases, the 1.14 µm temperature map is placed at higher pressures than the others to create the thermal inversion near 0.04 bar, and the planet has a low internal temperature to continue the temperature inversion. The upper atmosphere is roughly the same, especially in the hottest grid cells. However, while the isobaric model is forced to place all maps near their peak contributions at the planet's hotspot, the additional flexibility of the sinusoidal model allows the temperature maps to better match their contribution functions (and the emitted spectra) across the entire planet. This is evident in the significantly improved χ² and BIC. We take the free-phase sinusoidal model to be the optimal fit. Figure 14 shows the optimal 3D model fit to the light curves, and Table 4 lists the model parameters with their 1σ credible regions, SPEIS, ESS, and σ_C, from a 28-day run with ≈810,000 iterations over 7 Markov chains. We discard the first 80,000 iterations of each chain, as the χ² was still improving significantly, so these iterations are not representative of the true parameter space.
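The inversion-locating diagnostic used for Figure 13 (resampling each profile at high vertical resolution and finding where the temperature gradient changes sign) can be sketched as follows. This is an illustration, not ThERESA's exact routine:

```python
import numpy as np

def inversion_pressure(logp, temp, nfine=1000):
    """Locate the thermal inversion in one profile: the log-pressure where
    dT/dlog(p) changes sign, evaluated on a finely resampled grid.

    logp must be sorted from low to high pressure (top to bottom of the
    atmosphere). Returns np.nan if the profile is monotonic (no inversion).
    """
    fine = np.linspace(logp.min(), logp.max(), nfine)
    tfine = np.interp(fine, logp, temp)
    grad = np.gradient(tfine, fine)
    sign_change = np.where(np.diff(np.sign(grad)) != 0)[0]
    return fine[sign_change[0]] if sign_change.size else np.nan
```

Applying this to every posterior profile at the substellar point, against the corresponding T_bot samples, gives the correlation test described above.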
These parameters give information about the pressures probed by each filter, as follows: a_1 is the average logarithmic pressure probed by the corresponding temperature map; a_2 is the change in logarithmic pressure probed from the equator to the poles; and a_3 is a similar change from the longitude a_4 to a_4 + 180°. However, one must keep in mind that the observation is not sensitive to all longitudes, so the a_3 parameter should only be considered representative of the longitudinal change in probed pressure within the range of visible longitudes (−126° to 126° for this observation). Since the noise in the observation is purely uncorrelated, we can easily see where the model fails to match the observation. There is a minor negative slope in the residuals at 1.14 µm, indicating extra flux east of the substellar point and a flux deficiency west of the substellar point at those wavelengths. This agrees with the more eastern hotspots found in our 2D maps. Our best fit also produces slightly less flux in the 2.05 µm band than the GCM. All these differences are well within the uncertainties, however, as indicated by the low χ². Increasing N in the 2D fit could allow for fine adjustment of the 3D thermal structure and improve the 3D fit in these areas, but, as discussed above, additional free parameters are not justified by the data uncertainties. Fits using the simpler pressure-mapping functions have slightly stronger variations in the residuals, but very similar shapes (e.g., a stronger slope at 1.14 µm). This figure also shows a comparison between the 3D model contribution functions and the vertical placement of the 2D thermal maps, to examine the effectiveness of our contribution function consistency requirement (Section 2.2.1). The range of vertical placements of the 2D thermal maps (gray boxes) aligns well with the peaks of the contribution functions, demonstrating that our χ² penalty is successfully guiding the fit to a consistent result. Figure 15 shows a comparison of the equatorial contribution functions with the 2D map placements overlaid, which confirms that the 2D maps' vertical positions match the contribution functions, especially for the hottest grid cells. Uncertainties on the placement of the 2D maps are straightforward to evaluate using the marginalized MCMC posterior distributions. For instance, for the isobaric model, the 2D maps are placed, in order of increasing wavelength, at −1.115 (+0.006/−0.014), −1.326 (+0.004/−0.017), −1.325 (+0.011/−0.001), −1.353 (+0.010/−0.023), and −1.326 (+0.005/−0.014) log(bar), with an internal temperature of 630 (+64/−38) K. The uncertainties on these model parameters, and those of the other 3D model functions, can be lower than the model's vertical resolution, since the discretized layers of the model have temperatures interpolated from the vertical placement of the 2D temperature maps. The 2D maps can be placed anywhere within the pressure range of the model, and are not restricted to the precise layering of the model. However, uncertainties on the 3D temperature model as a whole are more nuanced. A distribution of 3D models generated from the MCMC parameter posterior distribution will lead one to believe the atmosphere is very well constrained. However, much like the uncertainties on the 2D models are restricted by the flexibility of the eigenmaps, uncertainties on the best-fitting 3D models are restricted by the models' functional forms. Additionally, the uncertainties on the 2D maps are not propagated to the 3D fits.
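Asymmetric quoted values like the map placements above follow directly from percentiles of the marginalized posterior samples; a small illustrative helper:

```python
import numpy as np

def credible_region(samples, c=0.683):
    """Median and asymmetric 1-sigma-equivalent bounds of a marginalized
    posterior, quoted as value (+up/-down) as in the text."""
    lo, med, hi = np.percentile(samples, [50 * (1 - c), 50, 50 * (1 + c)])
    return med, hi - med, med - lo

# e.g., for one map-placement parameter's flattened MCMC chain:
# med, up, down = credible_region(chain[:, 0])
# print(f"{med:.3f} (+{up:.3f}/-{down:.3f})")
```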
The upper atmosphere will appear constrained to absolute certainty because the model sets these temperatures to be isothermal with the lowest-pressure 2D map in each grid cell. Therefore, temperatures at these low pressures do not vary in the MCMC unless the 2D maps are swapping positions, and even then the upper atmosphere of each grid cell can only have a number of unique temperatures equal to the number of 2D maps present. For pressures between 2D map placements, the temperature profile interpolation and vertical shifting of the 2D maps lead to a range of temperatures in the posterior distribution of models, but variation is still significantly limited by the temperatures present in the 2D maps. At higher pressures, temperatures are strongly tied to the deepest 2D map, but some variation is allowed by the internal temperature parameter. (Figure 15 caption: the normalized equatorial contribution functions for each spectroscopic bin; the vertical, equatorial placements of the 2D thermal maps from the optimal fit of the free-phase sinusoidal function (Table 4) are overplotted in red.) Uncertainties on the 3D model are much better understood by studying the contribution functions, as uncertainties should be considered unconstrained in portions of the atmosphere with zero contribution to the planet's flux. Thermal Inversion The presence of strong optical absorbers, such as TiO, is often invoked to explain thermal inversions, as this molecule can significantly heat a planet's upper atmosphere (Hubeny et al. 2003; Fortney et al. 2006, 2008). Evidence of TiO has been found in observations of multiple hot Jupiters (e.g., Kirk et al. 2021; Changeat & Edwards 2021), including WASP-76b (Fu et al. 2021). Neither our radiative transfer forward model of the ground truth nor the retrieved 3D model includes TiO, and yet we retrieve the input thermal inversion. As described in Section 3, the temperature inversion in the GCM is produced by the ratio between the infrared and optical absorption coefficients, which are broadband and do not include molecular spectral absorption (such as by TiO). The retrieval framework does not require TiO, but for a different reason: the temperature profiles are entirely parameterized and so do not require a physical heating mechanism to produce an inversion. This is a strength of the model, as real exoplanet atmospheres are likely in complex thermal states which can, in principle, be recreated by our model. While we achieve an excellent fit to the eclipse spectra without TiO, it is possible that the inclusion of this molecule would affect the emission spectra enough to shift the pressures probed by our observations, changing the optimal locations of the 2D temperature maps. However, TiO opacity peaks in the optical and falls off rapidly into the near-infrared wavelengths (e.g., Gharib-Nezhad et al. 2021). Thus, the impact on our models should be small. CONCLUSIONS In preparation for the 3D exoplanet mapping capabilities of JWST, we have presented the ThERESA code, a fast, public, open-source package for 3D exoplanet atmosphere retrieval. The code builds upon the maximally-informative 2D mapping techniques of Rauscher et al. (2018) with the addition of 3D planet models, composition calculations, radiative transfer computation, and planet integration to model observed eclipse spectra. Thus, we combine a 2D mapping scheme with 1D radiative transfer into a 3D model with a manageable number of parameters, enabling fast fit convergence.
For example, the isobaric model, running on 11 processors and < 5 gigabytes of memory, with 12 latitudes, 24 longitudes, 100 pressure layers, and eight molecules, converges in < 3 days of runtime. More complex models can take a few weeks, depending on the desired error level in the parameter credible regions. ThERESA improves upon the 2D mapping methods of Rauscher et al. (2018) by (1) using TSVD PCA to ensure that eigencurves have the expected zero flux during eclipse, reducing the need for a stellar correction term, and (2) restricting parameter space to avoid non-physical negative fluxes at visible locations on the planet, forcing positive temperatures and enabling radiative transfer calculations. Through a reanalysis of Spitzer HD 189733 b observations, we demonstrated the accuracy of ThERESA's implementation of 2D eigencurve mapping. Our measurement of the eastward shift of the planet's hotspot agrees extremely well with previous studies (Majeau et al. 2012; Rauscher et al. 2018). Our 3D planet models consist of functions which attach 2D maps to pressures that can vary by position on the planet, using functions which range in complexity from a single pressure per map to a maximally-flexible model with a parameter for the vertical position of every 2D map in every grid cell. (Table 4 notes: a_2 is the sinusoidal amplitude of variation in pressures probed by the map with latitude, a_3 is the sinusoidal amplitude of variation in pressures probed with longitude, and a_4 is the phase shift of the longitudinal sinusoid; all quantities are in log(pressure), in bars (see Equation 5). b: Steps Per Effectively Independent Sample. c: Effective Sample Size.) We also require that 2D maps be placed near the peaks of their corresponding contribution functions, ensuring consistency between the 3D model and the radiative transfer calculations. To test the accuracy of our retrieval method, and to explore the capabilities of eclipse mapping with JWST, we generated a synthetic eclipse observation from WASP-76b GCM results. Our 2D maps are able to retrieve the large-scale thermal structure of the GCM with l_max ≤ 5 and N ≤ 4, the highest-complexity fits justified by a BIC comparison. We caution that the limited complexity of the eigencurves and eigenmaps limits the structures possible in the best-fitting maps and their uncertainties. Future mapping analyses must be cognizant of these limitations when presenting results. Our 3D retrievals, regardless of the temperature-to-pressure model used, were able to accurately determine the temperatures of the planet's atmosphere that we are sensitive to, from ∼0.001 to 0.1 bar, including the presence of a thermal inversion at the planet's hotspot near 0.04 bar, while maintaining a radiatively consistent atmosphere by ensuring that 3D model contribution functions match the vertical placements of the 2D temperature maps. Through a BIC comparison, a sinusoidal model function that includes combined latitudinal, longitudinal, and vertical information was preferred over a purely isobaric model, demonstrating that 3D models are necessary to interpret JWST-like observations, and that ThERESA can perform the analyses on such data.
2021-10-18T01:16:02.481Z
2021-10-15T00:00:00.000
{ "year": 2021, "sha1": "c287f79edf6e46c5b671ae941efb09ba53ca8ac5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c287f79edf6e46c5b671ae941efb09ba53ca8ac5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
2519979
pes2o/s2orc
v3-fos-license
Cryptosporidium hominis gene catalog: a resource for the selection of novel Cryptosporidium vaccine candidates Human cryptosporidiosis, caused primarily by Cryptosporidium hominis and a subset of Cryptosporidium parvum, is a major cause of moderate-to-severe diarrhea in children under 5 years of age in developing countries and can lead to nutritional stunting and death. Cryptosporidiosis is particularly severe and potentially lethal in immunocompromised hosts. Biological and technical challenges have impeded traditional vaccinology approaches to identify novel targets for the development of vaccines against C. hominis, the predominant species associated with human disease. We deemed that the existence of genomic resources for multiple species in the genus, including a much-improved genome assembly and annotation for C. hominis, makes a reverse vaccinology approach feasible. To this end, we sought to generate a searchable online resource, termed C. hominis gene catalog, which registers all C. hominis genes and their properties relevant for the identification and prioritization of candidate vaccine antigens, including physical attributes, properties related to antigenic potential and expression data. Using bioinformatic approaches, we identified ∼400 C. hominis genes containing properties typical of surface-exposed antigens, such as predicted glycosylphosphatidylinositol (GPI)-anchor motifs, multiple transmembrane motifs and/or signal peptides targeting the encoded protein to the secretory pathway. This set can be narrowed further, e.g. by focusing on potential GPI-anchored proteins lacking homologs in the human genome, but with homologs in the other Cryptosporidium species for which genomic data are available, and with low amino acid polymorphism. Additional selection criteria related to recombinant expression and purification include minimizing predicted post-translation modifications and potential disulfide bonds. Forty proteins satisfying these criteria were selected from 3745 proteins in the updated C. hominis annotation. The immunogenic potential of a few of these is currently being tested. Database URL: http://cryptogc.igs.umaryland.edu Introduction Although young child mortality has dropped impressively since the millennium, almost six million deaths still occur annually in developing countries, with diarrheal diseases remaining the second most common cause of death after pneumonia (1). The Global Enteric Multicenter Study (GEMS), an enormous case-control study that investigated the burden, etiology and consequences of moderate-to-severe diarrhea (MSD) in children < 5 years of age in four sites in sub-Saharan Africa and three in South Asia (global regions where collectively 80% of young child diarrhea deaths occur), incriminated Cryptosporidium as one of the four predominant pathogens overall associated with MSD and as the second most common pathogen during the first 2 years of life, after rotavirus (2). GEMS also found that Cryptosporidium MSD was associated with linear growth stunting in the ∼60 days following the acute MSD episode and increased by 8.5-fold the risk of death over the ∼60-day follow-up compared with matched control children. Although Cryptosporidium, a chlorine-resistant pathogen, also occurs in association with sporadic and outbreak water-related transmission in industrialized countries, it is to address the burden of disease in developing countries that there have been calls to undertake vaccine development efforts.
Two main species of the apicomplexan genus Cryptosporidium are associated with human disease. GEMS revealed that 80% of Cryptosporidium associated with cases were human-restricted Cryptosporidium hominis, while the Cryptosporidium parvum strains were also mainly anthroponotic genotypes. The majority of human infections in non-GEMS developing countries is attributed to C. hominis and, to a lesser degree, C. parvum (3)(4)(5)(6). Other Cryptosporidium species are found in all vertebrate groups, with a few occasionally isolated from humans with diarrhea (3). Vaccination remains one of the most successful and cost-effective methods of preventing the occurrence and spread of serious infectious diseases. The fact that only one parasitic vaccine has been licensed for human use (Mosquirix against Plasmodium falciparum malaria, approved only in 2015, for use in targeted groups) reflects the challenges associated with the design and development of effective anti-protozoal vaccines. Among the factors limiting the understanding of C. hominis biology and the development of anti-cryptosporidial vaccines has been the lack of a robust axenic in vitro culture system (7), although successful in vitro cultivation of C. parvum has recently been demonstrated (8). Reverse vaccinology takes advantage of annotated pathogen genomes to identify genes encoding proteins with properties predicted to induce a host immune response against the pathogen. This approach permits the rational selection of vaccine components which can be subsequently validated experimentally to determine if they elicit immune responses and confer protection (9)(10)(11). The reverse vaccinology approach was first used to successfully identify the four components of the Neisseria meningitidis B vaccine (Bexsero) (12)(13)(14), wherein the genome sequence of a virulent isolate (MC58) was used to predict candidate surface-exposed or exported proteins. Following a similar approach, Maione et al. (15) identified four potential vaccine antigens against Group B streptococcus and demonstrated that a multivalent vaccine formulation using these antigens can confer broad serotype-independent protection. Reverse vaccinology is also being applied to other pathogens for which no licensed vaccines or other mature candidates exist, including Porphyromonas gingivalis and Chlamydia pneumoniae (16). The reverse vaccinology approach is particularly promising for organisms that, like Cryptosporidium, are difficult to maintain under routine laboratory conditions (13,15,17,18). Advances in sequencing technologies and genome assembly and annotation methodologies have facilitated the generation of genomics resources for multiple species of Cryptosporidium (19). Cryptosporidium parvum (isolate IOWA II) was the first species with a published genome (20). The genome was found to be 9.1 Mbp in length, and its eight chromosomes assembled into 13 supercontigs, containing 3807 predicted protein-coding genes with an average length of 1795 base pairs (bp). At about the same time, the genome of C. hominis (isolate TU502) was published (21). It was sequenced to a much lower depth of coverage because of limitations of biological material and technology available at the time. For example, the lack of conventional animal models to propagate this species limited the amount of DNA that could be generated for sequencing. Consequently, this assembly is comparatively more fragmented, with the likely eight chromosomes split among 1413 contigs, grouped into ∼240 scaffolds.
Recently, we generated a much-improved annotated genome assembly for C. hominis, isolate TU502_2012 (22). Herein, we report a comprehensive functional annotation, and targeted manual structural validation, of this new C. hominis TU502_2012 gene set, with a view to generating a complete list of genes whose products are predicted to be expressed on the sporozoite, and most likely merozoite, surface. In addition, we developed a searchable online catalog of all C. hominis genes and their characteristics of interest in the context of vaccine development, including physical attributes, properties related to antigenic potential and expression data (Figure 1). As an example of this approach, we identified a multitude of proteins that could be evaluated as protective immunogens. The first version of the annotation of the genomes of C. hominis TU502_2012, C. hominis UKH1, C. baileyi TAMU-09Q1 and C. meleagridis UKMEL1 will be released soon (22). Functional annotation The structural and functional attributes of the 3745 protein-coding genes in the updated C. hominis assembly were identified using a variety of approaches. These include BlastP (25) searches against the proteome of other Apicomplexa, using the weight matrix BLOSUM62 and an E-value cutoff of 1e−5, HMMer version 3.0 (26) searches against the PFAM and TIGRfam databases of functional protein domains (27) and searches against the InterPro (28) and CDD (29) databases. Results from these analyses were then parsed using a custom script to assign product names, gene symbols, enzyme commission numbers and Gene Ontology terms, where available. Characterization of surface-expressed or secreted proteins and epitope identification The targets of protective antibodies on microbial pathogens are typically associated with the surface of the pathogen or the infected host cell. Accordingly, TargetP (30, 31) was used to identify proteins predicted to be targeted to the secretory pathway with high reliability (reliability Classes 1 or 2). Proteins were predicted to be glycosylphosphatidylinositol (GPI)-anchored using GPI-SOM (32), PredGPI (33) and FragAnchor (34). The presence of five or more transmembrane helices is a strong indicator of a transmembrane protein; the presence of these transmembrane motifs was determined with TMHMM (35, 36). Prediction of antigens that may constitute robust immunogens was done by analysis of potential Major Histocompatibility Complex (MHC) Class I and MHC Class II epitopes with NetMHCpan and NetMHCIIpan, respectively (37-39). Manual curation of gene structure Gene structure was manually validated for all genes predicted to be secreted or membrane-associated (determined by the presence of predicted GPI anchors or of at least five transmembrane motifs). The manually curated gene structural components included the location of the methionine start codon and the location of all intron-exon boundaries. The following data were used as evidence: C. hominis strand-specific RNAseq data generated from the oocyst stage (GenBank: SRX481527), 'TopHat junctions' [the set of reads predicted by TopHat (40) to span introns], homologous proteins from other Cryptosporidium species aligned against the C. hominis assembly using GMAP (41) and CEGMA proteins, a set of highly conserved eukaryotic genes (42). Manual validation consisted of visual inspection of each gene model, comparison against all available evidence and editing when necessary to conform to that evidence.
Web Apollo (43) was used to visualize all evidence tracks and to modify gene models as necessary. Protein physical attributes The proteins were characterized according to several physical properties, including predicted isoelectric point (44), molecular weight (44), and numbers of cysteine residues (assumed to reflect potential disulfide bonds) or of potential glycosylation sites. We predicted two types of glycosylation sites, O-glycosylation and N-glycosylation sites, by use of the software NetNGlyc, NetOGlyc and GlycoEP (45)(46)(47). Homology searches C. parvum and human homologs were identified by running a BlastP search of C. hominis TU502_2012 proteins against the proteomes of C. parvum Iowa II (20) and human (48), respectively, with parameter values as described earlier. The presence of homologs of genes of interest was also determined in four other Cryptosporidium genomes, namely, C. parvum Iowa II, C. baileyi TAMU-09Q1, C. meleagridis UKMEL1 and C. muris RN66. We computed homology clusters of Cryptosporidium proteins using the pipeline described by Crabtree and collaborators (49), and used the Sybil comparative platform (49) to visualize and analyse the results. Identification of SNPs and small insertions/deletions (indels) Sequence variants, in particular single nucleotide polymorphisms (SNPs) and small indels, in C. hominis were identified based on the comparison of two strains: C. hominis TU502_2012 and C. hominis UKH1. In this case, the sequence reads of C. hominis UKH1 (SUB482088) were aligned to the new assembly of C. hominis, ChTU502_2012, using BWA (50). Sequence data were formatted using SAMtools (51) and Picard tools v.1.79 (http://broadinstitute.github.io/picard), and SNP variant calling and filtering were performed using the Genome Analysis Toolkit GATK v2.2.5 (52). Identified variants were filtered according to a set of quality parameter thresholds. SNPs that passed the filter were attributed to non-coding or coding regions using VCFannotator (http://sourceforge.net/projects/vcfannotator), using as reference the annotation of ChTU502_2012. Expression dataset Given the lack of C. hominis sporozoite RNAseq data, we used transcriptomic data from C. parvum. From CryptoDB (19), we extracted expression data representing transcriptomes of freshly excysted C. parvum sporozoites, as well as data for parasites collected 48 and 96 h post-infection in HCT-8 cells. These data were generated using SOLiD, paired-end, strand-specific RNA sequencing (Hehl AB et al., unpublished data). In addition, we utilized amino acid data representing excysted sporozoite proteomes. These data originated from solubilized protein preparations analysed by 2D electrophoresis LC-MS/MS (53). Generation of a comprehensive set of putative antigens We recently completed the sequencing, assembly and annotation of the genome of C. hominis isolate TU502 from a DNA sample generated in 2012 at Tufts University, named C. hominis TU502_2012. The isolate is believed to be the same one that was sequenced in 2004 (21), except for the fact that it has been maintained by serial propagation in pigs for an additional 8 years. This effort resulted in a much-improved draft genome assembly for C. hominis. The C. hominis TU502_2012 genome assembly, with 119 contigs, is much less fragmented than the 1413-contig 2004 assembly (21), with the largest contig now the length of a chromosome. In this more comprehensive genome assembly, the average length of protein-coding genes is 500 bp longer than in the original annotation (22).
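The physical attributes listed above (isoelectric point, molecular weight, cysteine counts) can be computed with standard tools. The sketch below uses Biopython's ProteinAnalysis rather than the authors' own scripts, and "proteins.fasta" is a placeholder filename:

```python
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Tabulate per-protein physical attributes from a proteome FASTA.
# Assumes standard amino acid letters; ambiguous residues (e.g., X)
# should be filtered out before computing molecular weight.
for rec in SeqIO.parse("proteins.fasta", "fasta"):
    seq = str(rec.seq).rstrip("*")  # drop a trailing stop symbol, if any
    pa = ProteinAnalysis(seq)
    print(rec.id,
          round(pa.molecular_weight() / 1000, 2),  # kDa
          round(pa.isoelectric_point(), 2),
          seq.count("C"))  # cysteines ~ potential disulfide bonds
```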
The additional gene length resulted in a 25% increase in the fraction of the genome that encodes proteins (Table 1). Based on this new gene set, we identified potential vaccine proteins using two bioinformatic approaches (Figure 2). In one approach, candidate antigens in C. hominis or C. parvum were identified from the literature (54)(55)(56)(57)(58)(59)(60)(61)(62)(63)(64)(65)(66), and their homologs were identified in the new C. hominis annotation. In a complementary approach, we used the complete C. hominis gene set to identify novel candidate antigens. The structure of all genes identified through either approach was manually validated (see Materials and Methods). Identification of putative antigens by homology to 'known' antigens The first approach we took was to manually curate the gene structure of all C. hominis TU502_2012 genes with homology to known or proposed surface antigens (Figure 2). Potential antigens were identified from the literature. Using reverse vaccinology strategies to analyse the C. hominis TU502 (2004) genome (21), Manque et al. (66) identified potential antigens by focusing on proteins associated with the parasite surface, including those possessing multiple transmembrane motifs, signal peptides, GPI signal anchors and similarities with known pathogenic factors. Other studies have identified Cryptosporidium virulence factors using immunological and molecular methods. These virulence factors are predicted to be involved in processes such as adhesion, excystation, locomotion, invasion, membrane integrity, fatty acid metabolism and stress protection (54). Finally, some Cryptosporidium antigens were identified through a text search for 'antigen' in the CryptoDB database (www.cryptodb.org) (19). A total of 302 potential antigens were identified from these references. Of these, 132 proteins (44%) were reported as secreted, 185 (61%) as containing five or more transmembrane domains and 74 (24%) as containing GPI-anchor motifs, with a few proteins possessing more than one of these attributes. We re-evaluated these assignments with new or improved methods and found that only 52 of the 74 genes are now predicted to have GPI-anchored domains. We manually curated the structure of all 302 genes in the new C. hominis genome assembly (Materials and Methods). In total, 94 of these genes needed to be corrected, resulting in more accurate gene structures than those published in 2004. Identification of novel vaccine candidates Vaccines that elicit antibody-mediated immunity are based on secreted proteins, including toxins, and/or on highly expressed, surface-exposed or membrane-associated proteins (13,15,67). We sought to complement the gene set above by utilizing a variety of bioinformatics tools to identify additional genes encoding proteins with these properties, which might have been missed in previous studies due to incorrect or missing gene models in the 2004 annotation (Figure 2B). Among the complete set of 3745 protein-coding genes from the improved semi-automated annotation of C. hominis (Table 1), we identified 105 new antigen candidates, 41 of which have five or more transmembrane domains, 37 with GPI-anchor motifs and 29 that are targeted to the secretory pathway. We confirmed that, relative to the original assembly, these 105 genes are either newly identified, genes with a considerably altered structure or genes newly predicted using new software. The structure of these 105 new candidates was manually curated as described earlier.
(Figure 2 caption: (A) Candidate antigens from the literature were identified among the gene set from the new C. hominis TU502_2012 genome assembly; the structural annotation of these genes was then manually curated, and targeted analyses were conducted to identify genes encoding proteins with the desired properties. (B) The structural annotation of C. hominis TU502_2012 was improved using information from related species and several gene finders; the resulting gene set was assigned functional annotation and screened for the desired properties, and the gene structure of antigen candidates was manually curated.) A total of 407 potential antigens were identified using at least one approach: 209 of the 302 previously identified putative antigens were also detected using our bioinformatic screen (Figure 3A); of the remaining 93 genes, approximately one-half have altered gene structures that may change the region containing signal peptides, which likely explains why they are no longer selected according to the criteria used in our screen. Rational selection of candidate vaccine proteins The two combined approaches resulted in a set of 407 manually curated potential antigens. To prioritize these genes, we characterized them according to relevant polymorphic and physicochemical properties. These properties include the possibility that the encoded protein will undergo post-translational modifications, suggestive of an intricate process of protein folding. In addition, we considered homology information, both across the Cryptosporidium genus and relative to the human proteome, as cross-reactive antigens may produce undesired adverse effects upon vaccination. Antigens often evolve rapidly, as a result of the selective pressure imposed by the host's immune system (68,69). Therefore, a relatively high rate of non-synonymous polymorphism and evidence of balancing selection have been used as criteria to identify new vaccine antigens (70,71). However, evidence is now mounting that a high rate of polymorphism in vaccine antigens contributes to vaccine evasion (72)(73)(74). To identify, and possibly eliminate, polymorphic loci from the pool of potential vaccine candidates, we estimated the number of SNPs between the publicly available C. hominis isolates TU502_2012 and UKH1. A total of 230 protein-encoding genes have amino acid polymorphisms between these two isolates. In addition, we made use of publicly available gene expression data for C. parvum to determine which genes are expressed during the sporozoite stage, since neutralizing antibodies are likely to target proteins expressed during this stage of development. Of the 3745 predicted protein-coding genes, 3597 are predicted to be expressed in the sporozoite stage, even though transcript abundance varies widely among genes. Several additional selection filters were created based on homology information. All proteins with detectable homology to the human proteome were identified. In addition, we determined the taxonomic distribution of each C. hominis gene across the genus. These filters allow the elimination of potential antigens that may induce cross-reactions with human genes, and the rapid assessment of the potential taxonomic breadth of specific antigens. Since proteins are often expressed in bacterial systems, the number and type of post-translational modifications are important considerations when choosing adequate vaccine candidates. Glycosylation is a type of post-translational modification resulting from the addition of N- and O-linked oligosaccharides to proteins.
It assists in protein structural folding, transport and other functions (75,76). Studies indicate that N-glycosylation of proteins is a rare event in apicomplexan parasites, even though it is an important post-translational modification in other eukaryotic phyla (77)(78)(79)(80)(81). For the full set of proteins, the median number of predicted N- and O-glycosylation sites per protein was 5 and 8, respectively, but both distributions were highly skewed, with maximum values ≥ 100. (Figure 3 caption: selection of potential Cryptosporidium vaccine candidates. (A) Overlap between the two sets of potential antigens, one collected from the literature (purple) and the other generated using a bioinformatic screen for genes with predicted GPI-anchor motifs, secretion signals or at least five transmembrane motifs (orange); of the total 407 potential antigens, roughly one-half were identified with both approaches. (B) Down-selection of genes to be used in immunogenicity experiments; the complete gene complement was first reduced by 90% to the 407 candidates from (A), and a further 90% reduction resulted from the use of stricter criteria.) For the subset of 407 potential antigens, the median number of predicted N- and O-glycosylation sites per protein was 5 and 3, respectively. The median number of cysteine residues per protein, which can also be modified post-translation, was 7, with a maximum number of 227. For the subset of 407 selected genes, the median number of cysteine residues was nine per protein, with a maximum number of 151. In most cases, the properties significant for the selection of candidate antigens have a higher rate of occurrence in the subset of 407 genes predicted to encode potential antigens compared with the full dataset (Table 2). Of these 407 genes, 33 were found to have amino acid polymorphism between the two C. hominis genomes and 216 had human homologs. Eliminating these, and further selecting genes with at most two predicted transmembrane motifs and genes predicted to be GPI-anchored, resulted in a list of 40 potential antigens, 39 of which have C. parvum homologs, that can be considered for further investigation as vaccine candidates (Figure 3). These can be further down-selected based on properties relevant for protein expression and with consideration of the chosen expression system, such as optimal isoelectric point for biochemical purification or optimal molecular weight for expression. Cryptosporidium gene catalog We created a C. hominis gene catalog based on all the properties described earlier. The catalog is freely available online (http://cryptogc.igs.umaryland.edu). It contains all C. hominis genes and their characteristics, including physical attributes, properties related to antigenic potential and expression data (Figure 4). Users can sort or filter the genes based on each characteristic. For example, a query for proteins targeted to the secretory pathway, with no human homologs and at most 10 cysteine residues, results in 14 hits (Figure 5). A quick query also shows that the estimated molecular weight of C. hominis proteins varies between 6.12 and 991.2 kDa, equivalent to 55-8756 amino acid residues. Three sets of genes are readily available for download, both in nucleotide and amino acid sequence fasta format: all genes, genes that encode predicted GPI-anchored proteins, and those whose products are predicted to be secreted. In addition, users can download the nucleotide and amino acid sequences of genes that meet specific user-defined criteria (Figure 5).
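The down-selection to 40 candidates is, in effect, a table filter over the catalog. A sketch using pandas follows; the file name and column names (has_gpi, n_tm, human_homolog, n_aa_snps) are hypothetical stand-ins for the catalog's actual fields:

```python
import pandas as pd

genes = pd.read_csv("chgc_catalog.csv")  # hypothetical export of the catalog

candidates = genes[
    genes["has_gpi"]                 # predicted GPI-anchored
    & (genes["n_tm"] <= 2)           # at most two transmembrane motifs
    & ~genes["human_homolog"]        # no detectable human homolog
    & (genes["n_aa_snps"] == 0)      # no amino acid polymorphism between isolates
]
print(len(candidates))  # the analogous filter in the paper yields 40
```

The same pattern reproduces the 14-hit example query above by filtering on secretory-pathway targeting, absence of human homologs, and a cysteine-count ceiling.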
The table of properties for all or a subset of filtered genes can also be downloaded in Excel or comma-separated values (CSV) format. (Figure 5 caption: the ChGC interface. Key elements: (a) 'Help' button; (b) click on a column header to sort by that column; (c) the 'columns' menu, available in the drop-down menu on any column header, is used to add hidden, or remove visible, columns; (d) 'Sort/Filter': multiple columns can be filtered to generate customized datasets of interest; (e) filtered datasets can be downloaded as an Excel or a CSV file, using these buttons.) Discussion The GEMS (2) was designed to measure the burden, identify the major etiologic agents and assess the consequences of moderate-to-severe diarrhea (MSD) in children < 5 years of age in the developing world. One conclusion of the study was the recognition that targeting the top 4-5 ranked diarrheal pathogens with effective interventions could reduce considerably the global morbidity and mortality burden of MSD. Surprising to many was the finding that Cryptosporidium ranked second as the most important attributable pathogen associated with MSD in children below the age of 2 years. Whereas vaccines against the other three major pathogens either exist (rotavirus) or are undergoing clinical evaluation (enterotoxigenic Escherichia coli and shigellosis), efforts to develop a vaccine to protect humans against cryptosporidiosis have made little progress and no candidate has entered clinical trials. The advent of antiretroviral therapy and its widespread use in sub-Saharan Africa has markedly diminished the number of HIV-infected individuals that manifest overt immunodeficiency, and as a result the frequency of cryptosporidiosis has in turn diminished, along with interest and funding to combat this infection. GEMS' revelation of the importance of Cryptosporidium has renewed interest in developing preventive as well as improved therapeutic measures to control cryptosporidiosis in infants and toddlers in developing countries, including advocacy for developing vaccines. Given the practical obstacles associated with laboratory study of this parasite (7), reverse vaccinology is an attractive option to identify and prioritize antigens that may prove useful for the development of a well-tolerated and effective vaccine to prevent cryptosporidiosis. With this in mind, our team has recently re-sequenced the TU502 isolate of C. hominis and assembled and annotated the genome, now designated TU502_2012 (22). The improved gene set, consisting of 3745 protein-coding genes, should provide the opportunity for new in silico analyses to identify potential immunogens. We are making this genomic database publicly available, with a view to stimulating additional investigators with expertise in reverse vaccinology to undertake research to develop Cryptosporidium vaccine candidates. Once C. hominis antigens of interest are identified, various vaccinology approaches can be adapted to assess their immunogenicity. Examples include assessment of the immune responses elicited in animal models or humans following immunization with protozoal antigens expressed in bacterial (82)(83)(84) or viral vectors (85)(86)(87), as virus-like particles (88,89), as nanoparticles (90) or fused to carrier proteins, as has been done with P. falciparum and Leishmania proteins (82)(83)(84)(85)(86)(87)(88)(89)(90). Since Cryptosporidium is an intestinal protozoan, oral as well as parenteral routes of administration of the candidate vaccines should be studied, with and without adjuvants.
Recent progress with a well-tolerated adjuvant for orally administered vaccines increases interest in a mucosal vaccine strategy (91). Recently, genome sequences of additional isolates of C. parvum and C. hominis have become publicly available in CryptoDB (19). As annotation information for these genomes becomes available, a comparative analysis among Cryptosporidium species and isolates may help identify new antigens with diagnostic value, since species identification currently depends entirely on cumbersome molecular genetic tools. The database may also aid the development of improved diagnostics for Cryptosporidium infection, such as immunoassays that can identify the prevalent Cryptosporidium species in populations and geographic areas. Improved assays for species and sub-species differentiation can help elucidate the reservoirs of Cryptosporidium, likely modes of transmission and geographic spread, all of which can help formulate specific control measures.
Human pluripotent stem cell-based organoids and cell platforms for modelling SARS-CoV-2 infection and drug discovery

The coronavirus disease 2019 (COVID-19) global pandemic caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has affected over 200 countries and territories worldwide and resulted in more than 2.5 million deaths. In a pressing search for treatments and vaccines, research models based on human stem cells are emerging as crucial tools to investigate SARS-CoV-2 infection mechanisms and cellular responses across different tissues. Here, we provide an overview of the variety of human pluripotent stem cell-based platforms adopted in SARS-CoV-2 research, comprising monolayer cultures and organoids, which model the multitude of affected tissues in vitro. We highlight the strengths of these platforms, including their application to assess both the susceptible cell types and the pathogenesis of SARS-CoV-2. We describe their use to identify drug candidates for further investigation, in addition to discussing their limitations in fully recapitulating COVID-19 pathophysiology. Overall, stem cell models are facilitating the understanding of SARS-CoV-2 and are proving to be versatile platforms for studying infections.

Introduction

The number of confirmed COVID-19 cases worldwide has surpassed 100 million and is constantly growing. The majority of infected individuals experience only mild to moderate symptoms that do not require hospitalization (Wu and McGoogan, 2020) or are asymptomatic (Oran and Topol, 2020). The risk of developing severe infections increases with age and with the presence of preexisting medical conditions (Zhou et al., 2020a). Frequent symptoms in mild to moderately severe infections include fever, fatigue, and respiratory problems. However, gastric symptoms, such as nausea or diarrhea, and neurological symptoms (Giacomelli et al., 2020; Nalleballe et al., 2020), such as headaches, loss of smell or taste, and confusion, are not infrequent even in mildly symptomatic patients. Individuals that require hospitalization often develop respiratory deterioration resulting in pneumonia or even in acute respiratory distress syndrome, which is the most prevalent cause of death (Ruan et al., 2020). Many reports of severe damage to other organs, such as the cardiovascular system (Zheng et al., 2020), the gastrointestinal tract, the liver, the pancreas, the kidneys (Pei et al., 2020a) and the nervous system, indicate that SARS-CoV-2 infection might cause serious, or even lethal, injuries in organs other than the respiratory system. Indeed, several deaths have been documented due to heart failure, renal failure or multi-organ failure. The experimental tools currently available to investigate SARS-CoV-2 biology and COVID-19 pathophysiology include human biopsy samples, animal models, animal cell lines and different types of human cell and organoid platforms. Human biopsies are a very useful resource to understand the pathology of COVID-19 and to assess the validity and relevance of other model systems. However, biopsies are limited as a broader applied research tool due to the paucity of samples available and the short time they can be maintained ex vivo.
Consequently, a profusion of animal models has been used in COVID-19 studies, ranging from small animals, including transgenic mice expressing human ACE2 and Syrian hamsters, to larger animals, such as ferrets, cats and non-human primates (Chan et al., 2020; Jiang et al., 2020; Kim et al., 2020b; Munster et al., 2020; Rockx et al., 2020; Shi et al., 2020a). Animal-derived cells have also been extensively used to amplify and isolate SARS-CoV-2, investigate infection mechanisms and perform drug screening studies. So far, the majority of studies using non-human cell lines have relied on Vero cells (Harcourt et al., 2020; Matsuyama et al., 2020; Wang et al., 2020b; Zhou et al., 2020c), kidney epithelial cells isolated from an African green monkey. However, due to their evolutionary distance from humans, animal models and animal-derived cells cannot fully recapitulate characteristic features of human physiology and diseases. To address this limitation, several human cell lines have been used to study SARS-CoV-2 biology, including immortalized cell lines, cancer cell lines, and differentiated stem cells. Among immortalized and cancer human cell lines, Calu-3 and A549 (lung adenocarcinoma), Caco-2 (colorectal adenocarcinoma), HFL and MRC-5 (fetal lung fibroblasts), HEK293T (embryonic kidney), Huh7 (hepatocellular carcinoma), HeLa (cervical cancer), U251 (glioblastoma) and RD (rhabdomyosarcoma) have been widely employed, with distinct susceptibilities to SARS-CoV-2 infection and viral replication rates observed in different cell types (Chu et al., 2020; Harcourt et al., 2020; Hoffmann et al., 2020; Kim et al., 2020a; Ou et al., 2020; Riva et al., 2020; Shang et al., 2020; Wang et al., 2020b). Immortalized and cancer human cell lines have been useful to study some aspects of SARS-CoV-2 infection and replication. However, they fail to recapitulate in vitro the diversity of cell types present in human organs. These cell lines also generally carry cancer-associated mutations in genes controlling cell cycle and proliferation (Blanco et al., 2009) and can have mutations in genes regulating the innate immune response (Hare et al., 2016). Therefore, immortalized and cancer human cell lines are limited in their ability to accurately model the cell type-specific susceptibility and response to SARS-CoV-2 infection. Human pluripotent stem cells (hPSCs) have rapidly emerged as an alternative to animal models as well as to immortalized and cancer human cell lines, since they are human cells that have the ability to self-renew indefinitely and differentiate into cells of the three germ layers. They avoid interspecies differences and can be used to obtain abundant samples of a variety of different cell types. Under precise differentiation conditions, hPSCs, including embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs), can generate specific cell types in monolayer cultures. In addition, over the last few years numerous differentiation protocols have been developed to generate three-dimensional (3D) cultures, known as organoids, which more faithfully recapitulate human organs in vitro.
Both hPSC-derived monolayer cultures and organoids have already been used to investigate host-virus interactions in different human cell types and tissues, including modelling respiratory infections, such as influenza; enteric infections, such as those due to norovirus and rotavirus; hepatic infections, such as hepatitis B and hepatitis C; infections of components of the immune system, as in HIV and dengue virus studies; and both prenatal and postnatal brain infections, including those caused by Zika virus and herpes simplex virus 1. In the case of Zika virus, they have also been employed to identify antiviral drug candidates (Xu et al., 2016; Zhou et al., 2017). Several of these hPSC-based platforms have been adapted to study SARS-CoV-2 biology and COVID-19 pathophysiology (Fig. 1 and Table 1). In this review, we describe how they are used to determine SARS-CoV-2 tropism, investigate infection mechanisms and identify potential treatments in different organs, highlighting their strengths compared to other model systems. We also address their limitations in fully recapitulating COVID-19 pathophysiology, while proposing potential improvements and new applications.

hPSC-based platforms to study SARS-CoV-2 infection

COVID-19 studies using hPSC-derived monolayer cultures and organoids have often employed similar approaches and observed common patterns, even in different cell types and tissues (Fig. 2). In terms of approaches, a widely adopted strategy to identify the cell types potentially susceptible to the virus has been to monitor the expression profiles of the SARS-CoV-2 entry receptor angiotensin-converting enzyme 2 (ACE2). Many studies have also examined the expression of the serine protease TMPRSS2, which cleaves the SARS-CoV-2 Spike protein at two sites enabling the fusion of the cellular and viral membranes, and of other putative priming proteases, such as Furin, TMPRSS4 and TMPRSS11E. The expression profiles of these key mediators of SARS-CoV-2 infection identified in vitro have usually been compared to those in primary human tissues, confirming hPSC-derived cells and organoids as reliable models. Indeed, hPSC-derived cells expressing ACE2 and TMPRSS2 or other putative entry receptors and priming proteases become infected with SARS-CoV-2.

Fig. 1. hPSC-based Models of SARS-CoV-2 Infection. Schematic representation of the hPSC-based monolayer cultures and organoids used to date to study SARS-CoV-2 tropism and COVID-19 pathophysiology across different organs.

A common pattern that has been observed across models of different tissues is the increased expression of genes involved in the innate immune response, such as chemokines, interleukins and other cytokines, upon SARS-CoV-2 infection (Fig. 2 and Table 1). Another shared transcriptional signature is the reduced expression of genes related to metabolic activity and cell function, which is frequently accompanied by a time-dependent upregulation of apoptotic genes (Fig. 2 and Table 1). Increased cell death after infection has indeed been confirmed by protein expression and cell counts. However, whether infected or neighboring cells are the most affected by cell death seems to depend on the tissue examined. Changes in cell physiology after infection have also been reported (Fig. 2 and Table 1). Aside from these general approaches and patterns, organ-specific signatures have been described, which we review in the following sections. Overall, additional studies are still required to further assess the clinical relevance of these in vitro findings.
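As a concrete illustration of the susceptibility-profiling approach described above, the following sketch computes, for each annotated cell type, the fraction of cells expressing the entry factors. The single-cell table, cell-type labels, and values are hypothetical placeholders, not data from any of the cited studies.

```python
import pandas as pd

# Hypothetical single-cell expression table: one row per cell, with the
# annotated cell type and normalized counts for the entry factors.
cells = pd.DataFrame({
    "cell_type": ["ciliated", "ciliated", "AT2", "AT2", "basal", "goblet"],
    "ACE2":      [1.2, 0.8, 0.5, 0.0, 0.0, 0.0],
    "TMPRSS2":   [2.1, 1.7, 1.4, 0.9, 1.1, 0.3],
})

# Fraction of cells per annotated type expressing each entry factor,
# a simple proxy for the susceptibility profiling described in the text.
expressing = (
    cells.assign(ACE2_pos=cells["ACE2"] > 0, TMPRSS2_pos=cells["TMPRSS2"] > 0)
         .groupby("cell_type")[["ACE2_pos", "TMPRSS2_pos"]]
         .mean()
)
print(expressing)
```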
Lung

The lungs are the major target of SARS-CoV-2 and infected individuals frequently present with respiratory symptoms. hPSC-derived airway (hAWOs) (Pei et al., 2020b) and alveolar (hALOs) organoids (Dobrindt et al., 2020; Han et al., 2020; Huang et al., 2020b; Pei et al., 2020b; Samuel et al., 2020), as well as monolayer cultures of alveolar epithelial type 2 cells (hAT2) (Huang et al., 2020b), have been used to investigate SARS-CoV-2 tropism and the early phases of infection in the lungs. Independent studies using hPSC-derived lung organoids observed that ACE2 is mainly expressed in ciliated cells and in a subpopulation of hAT2 cells, while TMPRSS2 is expressed in the majority of cells (Dobrindt et al., 2020; Han et al., 2020; Pei et al., 2020b), in agreement with their expression in adult human lungs (Hou et al., 2020). Analysis of ACE2 expression in monolayer cultures of hAT2 produced similar findings (Huang et al., 2020b). Upon viral exposure, ciliated cells, club cells and a subpopulation of hAT2 cells become infected, while alveolar type 1 (AT1) cells, basal cells, goblet cells, proliferating cells and pulmonary neuroendocrine cells show few or no signs of infection in hPSC-derived lung organoids. These findings are consistent with data from lung autopsies of COVID-19 patients and from primary lung airway organoids, air-liquid interface (ALI) cultures and AT2 alveolar organoids derived from lung biopsies (Katsura et al., 2020; Lamers et al., 2020; Purkayastha et al., 2020; Youk et al., 2020). The susceptibility to infection of specific pulmonary cell populations was also confirmed in monolayer cultures (Huang et al., 2020b; Mirabelli et al., 2020; Riva et al., 2020). Transcriptional profiling of infected hPSC-derived lung organoids and hAT2 cultures revealed an increased expression of genes associated with the activation of the immune response, such as cytokines, chemokines, and members of the TNF signaling, IL-17 signaling and NF-kB families (Huang et al., 2020b; Pei et al., 2020b). On the other hand, genes associated with lipid metabolism were downregulated, along with the expression of ACE2 and TMPRSS2. A delayed and moderate activation of genes related to IFN signaling, but not of type I and III IFN genes, was reported in hAT2 cultures four days post infection, along with the progressive downregulation of hAT2-specific genes and upregulation of apoptotic genes (Huang et al., 2020b). A progressive increase in cell death, becoming evident from three days post infection, was further confirmed by immunostaining in lung organoids (Pei et al., 2020b) and hAT2 cultures (Huang et al., 2020b).

Kidney

Given the presence of renal complications in COVID-19 patients (Kunutsor and Laukkanen, 2020) and the detection of SARS-CoV-2 in the urine of infected individuals (Ling et al., 2020), hPSC-derived kidney organoids have also been used as a model of SARS-CoV-2 infection. Consistent with ACE2 expression profiles in human kidney biopsies, ACE2 is expressed in proximal tubular cells and podocytes, but not in mesenchymal, renal endothelial-like or proliferating cells in human kidney organoids. Exposure to SARS-CoV-2 resulted in infection of kidney organoids (Monteil et al., 2020).

Gastrointestinal tract

Colonic (hPSC-COs) and intestinal (PSC-HIOs) organoids derived from hPSCs have been used to investigate the cell type-specific susceptibility to SARS-CoV-2 infection in the gastrointestinal tract (Dobrindt et al., 2020; Han et al., 2020; Krüger et al., 2020).
Transcriptional profiling of hPSC-COs indicates that ACE2 and TMPRSS2 are highly expressed in enterocytes and at lower levels in all the other cell types present in these organoids, including goblet cells, neuroendocrine cells, transit amplifying cells and stem cells. Expression of ACE2 and TMPRSS2 has also been confirmed by immunostaining in the majority of cell types composing PSC-HIOs, including enterocytes, enteroendocrine cells and Paneth cells, with the exception of goblet cells. Exposure to a SARS-CoV-2 pseudovirus tagged with luciferase infected all cell types present in hPSC-COs cultured in vitro and transplanted into mice, with ACE2-positive cells and enterocytes being the most affected cell types. Colonic and intestinal organoids cultured in vitro were also susceptible to infection by SARS-CoV-2 virus (Dobrindt et al., 2020; Han et al., 2020; Krüger et al., 2020), in agreement with data from primary intestinal organoids derived from human biopsies (Zang et al., 2020; Zhou et al., 2020b). Differential gene expression analysis identified an increased expression of chemokines and other cytokine genes, of transcripts related to the production of reactive oxygen species and nitric oxide, and of genes involved in oxidative phosphorylation in infected hPSC-COs. Increased cell death post-infection was observed in both hPSC-COs and PSC-HIOs (Krüger et al., 2020).

Liver

As hepatic dysfunction has also been observed in COVID-19 patients, hPSC-derived liver organoids have been tested as models of SARS-CoV-2 infection. hPSC-derived liver organoids composed mainly of albumin-positive hepatocytes expressed ACE2 in the majority of cells and were permissive to SARS-CoV-2 pseudovirus infection, in line with findings in primary hepatocyte and ductal organoids derived from adult biopsies (Yang et al., 2020).

Pancreas

In hPSC-derived pancreatic endocrine cultures, alpha and beta cells, but not delta cells, stained positive for ACE2 and were permissive to SARS-CoV-2 pseudovirus infection, both when cultured in vitro and when transplanted in mice. These findings are in agreement with the reported expression profiles of key mediators of SARS-CoV-2 infection and the susceptibility to viral entry of adult human pancreatic islets. Transcriptional analysis of SARS-CoV-2-infected hPSC-derived pancreatic endocrine cultures suggested an increased death rate of alpha and beta cells after infection, as chemokine genes and genes involved in the insulin resistance pathway were upregulated in infected samples, while genes associated with metabolic activity, glucagon signaling, calcium signaling, and other pathways related to alpha and beta cells showed reduced expression. The increased death of alpha and beta cells after infection has been confirmed by immunostaining, further supporting that these cells are also targets of SARS-CoV-2 (Yang et al., 2020).

Brain

COVID-19 patients frequently present with neurological symptoms (Helms et al., 2020; Mao et al., 2020). The question whether these neurological manifestations are due to direct SARS-CoV-2 infection or to secondary damage of the nervous system has been extensively investigated since the beginning of the pandemic. To date, SARS-CoV-2 RNA transcripts have been detected in human brain autopsies (Paniz-Mondolfi et al., 2020; Puelles et al., 2020) and in the cerebrospinal fluid (Moriguchi et al., 2020; Virhammar et al., 2020) of a few COVID-19 patients.
However, several clinical studies reported mixed findings or did not detect SARS-CoV-2 in brain tissues (Schaller et al., 2020; Solomon et al., 2020). To experimentally investigate whether SARS-CoV-2 can directly infect cells of the human brain and what the effects of infection are, monolayer cultures of hPSC-derived neural progenitor cells (hNPCs), neurons, astrocytes and microglia, as well as three-dimensional neurospheres and brain organoids, have been used (Bullen et al., 2020; Dobrindt et al., 2020; Jacob et al., 2020; Mesci et al., 2020; Pellegrini et al., 2020; Ramani et al., 2020; Song et al., 2020; Yang et al., 2020; Zhang et al., 2020a). Neurospheres are 3D organoids composed of hNPCs that mimic in vitro the early stages of neurogenesis, while brain organoids model different brain regions at later stages of human neurodevelopment and are composed of hNPCs and more differentiated cells of the neuronal lineage, such as post-mitotic neurons and astrocytes. Independent studies using both hPSC-derived monolayer cultures and organoids have confirmed ACE2 expression at low to moderate levels in hPSC-derived cortical neurons, at low to moderate levels in astrocytes and hNPCs grown both in monolayer cultures and in organoids, and at higher levels in dopaminergic neurons and choroid plexus (ChP) organoids (Bullen et al., 2020; Dobrindt et al., 2020; Jacob et al., 2020; Mesci et al., 2020; Pellegrini et al., 2020; Ramani et al., 2020; Song et al., 2020; Yang et al., 2020). SARS-CoV-2 pseudovirus robustly infected monolayer cultures of dopaminergic neurons and ChP organoids, while few microglia and cortical neurons became infected (Pellegrini et al., 2020; Yang et al., 2020). Upon SARS-CoV-2 exposure, viral entry was observed in hNPCs and cortical neurons grown both in monolayer cultures and in organoids, as indicated by immunostaining. However, the reported percentages of infected cortical neuronal cells vary across studies (Table 1). The efficiency of viral replication in hPSC-based cortical models also appears controversial, with some studies reporting increased levels of viral RNA in infected cerebral organoids and their supernatants (Bullen et al., 2020; Zhang et al., 2020a), while others did not observe changes in the number of infected cells or in the levels of viral RNA detected in the supernatant at two and four days post infection (Ramani et al., 2020). These conflicting findings are likely due to differences in experimental conditions across studies, such as the MOI used, the timepoints examined and the adoption of hPSC-based models at different stages of differentiation (Table 1). There is consensus that a subpopulation of ChP epithelial cells expressing ACE2 is particularly susceptible to SARS-CoV-2 infection and permissive to viral replication (Jacob et al., 2020; Pellegrini et al., 2020). After SARS-CoV-2 exposure, infected cells tended to form syncytia, and the tight junctions between ChP epithelial cells became progressively disrupted, resulting in a loss of integrity of the blood-cerebrospinal fluid barrier (Jacob et al., 2020; Pellegrini et al., 2020) that might enable the entry of SARS-CoV-2 as well as immune cells and proinflammatory cytokines into the brain. Astrocytes and neurons in hippocampal, hypothalamic, and midbrain organoids were also susceptible to SARS-CoV-2 infection (Jacob et al., 2020).
Increased cell death was observed in both infected and non-infected hNPCs, astrocytes and cortical neurons (Mesci et al., 2020; Ramani et al., 2020; Song et al., 2020; Zhang et al., 2020a), as well as in ChP epithelial cells (Jacob et al., 2020; Pellegrini et al., 2020), starting from three days post infection. Other reported consequences of infection were a reduction in the number of excitatory synapses in cortical neurons (Mesci et al., 2020), dysregulated localization and increased phosphorylation of Tau protein (Ramani et al., 2020), and transcriptional alterations indicating activation of proinflammatory cellular responses and metabolic processes (Jacob et al., 2020; Song et al., 2020).

Eye

SARS-CoV-2 tropism and mechanisms of infection have also been investigated using hPSC-derived whole-eye SEAM organoids, comprising cells of the cornea, iris, ciliary margin, lens, retina and retinal pigment epithelium (Makovoz et al., 2020). ACE2 was expressed in a large number of corneal cells, with a subset also co-expressing TMPRSS2 (Makovoz et al., 2020). Corneal cells with elevated expression of ACE2 also expressed high levels of other putative SARS-CoV-2 entry genes, such as TMPRSS11E, BSG (Basigin) and FURIN (Makovoz et al., 2020). Consistent with the expression of ACE2 and TMPRSS2 in subsets of corneal cells, especially in limbal cells, hPSC-derived eye organoids were susceptible to SARS-CoV-2 infection. Expression of genes related to the cell cycle and to the proinflammatory cytokine response, especially mediated by NF-κB, was increased in infected samples. Similar findings were obtained from SARS-CoV-2 infection of human ocular biopsies, supporting the utility of hPSC-derived whole-eye organoids to study SARS-CoV-2 infection and test candidate drugs (Makovoz et al., 2020).

hPSC-based platforms as tools to identify COVID-19 treatments

The identification of drugs to treat COVID-19 and prevent SARS-CoV-2 infection is paramount in the response to the pandemic. Since the development of novel drugs can take 10 or more years (Van Norman, 2016), the repurposing of existing drugs with known pharmacokinetic and safety profiles is a rapid, attractive alternative. Several COVID-19 clinical trials, along with in silico and in vitro screenings of drugs that are commercially available, currently in clinical trials for other pathologies, or already characterized in preclinical studies, have already begun (Riva et al., 2020; Zhou et al., 2020d). These drugs either target SARS-CoV-2 directly or act on human cells and the immune system. In this context, hPSC-based platforms are used to validate the efficacy of selected candidate drugs, or to identify compounds to repurpose through high-throughput screening of chemical libraries (Fig. 2). In addition to the potential clinical applications, analysis of the pathways modulated by the identified hit drugs can also improve the knowledge of COVID-19 pathophysiology.

Lung

Reflecting the prevalence of respiratory symptoms, several studies have used hPSC-derived lung monolayer cultures and organoids to identify drugs for the treatment of COVID-19 (Huang et al., 2020b; Mirabelli et al., 2020; Pei et al., 2020b; Riva et al., 2020; Samuel et al., 2020). The majority of studies use hPSC-derived lung cells and organoids to confirm the antiviral activity of candidate drugs identified based on literature review or through large-scale screenings using animal-derived cells or immortalized and cancer human cell lines.
For instance, hiPSC-derived alveolar epithelial type 2 cells (hiAT2) have been used to confirm the antiviral activity of three promising drugs: camostat, remdesivir, and E-64d (Huang et al., 2020b). Both camostat and remdesivir treatment successfully reduced the presence of viral transcripts in hiAT2s, further confirming the efficacy of these compounds in vitro. On the other hand, administration of the cathepsin B and L inhibitor E-64d was ineffective in hiAT2s. Adopting the same literature-based strategy for selecting candidate drugs, Pei et al. tested the antiviral activity of camostat, remdesivir, bestatin and the neutralizing antibody CB6 in hESC-derived airway (hAWOs) and alveolar (hALOs) organoids, observing similar results (Pei et al., 2020b). Remdesivir was the most effective drug, as it significantly reduced the production of infectious virus and the viral load in both hAWOs and hALOs, while camostat partially reduced the production of infectious virus only in hAWOs, and bestatin was ineffective in both types of lung organoids. The neutralizing antibody CB6 also significantly reduced the production of infectious viral particles in lung organoids (Pei et al., 2020b). Human iPSC-derived pneumocytes have also been used to confirm the antiviral activity of three drugs identified in a large-scale screening study using infected Vero E6 cells (Riva et al., 2020): the cathepsin K inhibitor ONO-5334, the calpain and cathepsin B inhibitor MDL28170, and the PIKfyve kinase inhibitor apilimod, which is used to treat autoimmune diseases and also has anticancer and antiviral properties. In another drug screening study that tested the antiviral activity of 1441 compounds in Vero E6 cells, administration of lactoferrin, alone or in combination with other drugs such as remdesivir, emerged as the most promising candidate treatment for further investigation (Mirabelli et al., 2020). These drugs inhibited SARS-CoV-2 infection in a dose-dependent manner in iPSC-derived alveolar epithelial type 2 cells (iAEC2s), confirming their suitability for further clinical studies. Human lung organoids (HLOs) have also been used to confirm the efficacy in reducing SARS-CoV-2 infection of antiandrogenic compounds, such as dutasteride, ketoconazole and finasteride, which had been identified as hit drugs from a combination of in vitro and in silico screenings (Samuel et al., 2020). Additionally, in a study from our group, hAWOs were adapted to a high-throughput screening platform. In this study, hAWOs infected with a SARS-CoV-2 pseudovirus were treated with the Prestwick chemical library, containing 1,280 approved drugs selected for their high chemical and pharmacological diversity. The screening identified three FDA-approved lead drugs, imatinib, mycophenolic acid (MPA), and quinacrine dihydrochloride (QNHC), that were further investigated. Imatinib is an inhibitor of several tyrosine kinases used as an anticancer medication, and it is also able to inhibit in vitro the replication of SARS-CoV and MERS-CoV (Coleman et al., 2016). MPA is an immunosuppressant drug used for autoimmune diseases and to avoid organ rejection, and it is also able to inhibit the replication of several viruses (Chapuis et al., 2000; Cheng et al., 2015; Diamond et al., 2002). QNHC is an anti-malarial drug that has been used to treat intestinal infections and autoimmune diseases (Toubi et al., 2006).
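For illustration, here is a minimal sketch of the hit-calling step common to screens like these: well signals are normalized to plate controls, and compounds passing an inhibition threshold are flagged. All values, names, and the 50% threshold are illustrative assumptions, not parameters from the cited screens.

```python
import numpy as np

# Toy plate data: per-well infection readout (e.g., a pseudovirus
# luciferase signal). Values and thresholds are illustrative.
neg_ctrl = np.array([980., 1020., 1005.])   # infected, vehicle only
pos_ctrl = np.array([45., 60., 50.])        # infected, reference inhibitor
compounds = {"drug_A": 210., "drug_B": 940., "drug_C": 130.}

mu_neg, mu_pos = neg_ctrl.mean(), pos_ctrl.mean()

def percent_inhibition(signal):
    """Normalize a well signal to 0% (vehicle) - 100% (full inhibition)."""
    return 100.0 * (mu_neg - signal) / (mu_neg - mu_pos)

# Flag hits above an arbitrary 50% inhibition threshold.
hits = {name: percent_inhibition(s) for name, s in compounds.items()
        if percent_inhibition(s) >= 50.0}
print(hits)
```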
Treatment of hAWOs infected with a SARS-CoV-2 pseudovirus or with SARS-CoV-2 virus with imatinib, MPA or QNHC significantly reduced viral replication and the number of infected cells in a dose-dependent manner. Additionally, these drugs were able to inhibit SARS-CoV-2 pseudovirus infection in hAWOs transplanted in mice, suggesting their efficacy also in an in vivo model. Overall, independent studies using hPSC-derived lung monolayer cultures and organoids confirmed the in vitro antiviral activity against SARS-CoV-2 of remdesivir, and to some extent camostat, and expanded the pool of candidate drugs to pursue further in clinical studies.

Cardiovascular system

Drug screening studies to identify treatments for COVID-19 have also adopted hPSC-based models of the cardiovascular system, including monolayer cultures of cardiomyocytes (hPSC-CM) (Bojkova et al., 2020; Garcia et al., 2020; Mills et al., 2020; Pérez-Bermejo et al., 2020; Samuel et al., 2020) and vascular organoids (Monteil et al., 2020). Cultures of hPSC-CM have been used to validate the antiviral activity of a selected protein kinase inhibitor identified in a drug screening assay based on Vero E6 cells. In this study, Garcia et al. evaluated the efficacy of a chemical library comprising 430 kinase antagonists undergoing clinical testing and identified 34 hit drugs, all acting on the DNA-damage response, ABL-BCR/MAPK, or mTOR-PI3K-AKT pathways. Among these candidates, berzosertib was selected for further investigation in hiPSC-CM. In agreement with the reduction of infected cells observed in Vero E6 cells, berzosertib treatment decreased the levels of infectious virus present in the supernatant of infected hiPSC-CM cultures. Berzosertib treatment also reduced the number of apoptotic cells and restored hiPSC-CM contractility, increasing the number of beats per minute to levels comparable to controls. Cultures of hPSC-CMs have also been adopted to test the efficacy of candidate drugs identified by literature search, such as E64d, Z-Phe-Tyr(tBu)-diazomethylketone (Z-FY-DK), CA-074, apilimod, bafilomycin, aprotinin and camostat (Pérez-Bermejo et al., 2020), or N-acetyl-L-leucyl-L-leucyl-L-methionine (ALLM) and remdesivir (Bojkova et al., 2020). Treatment with the PIKfyve kinase inhibitor apilimod, the autophagy inhibitor bafilomycin and the viral RNA polymerase inhibitor remdesivir significantly reduced the number of infected cells. Treatment with the cathepsin-B and -L inhibitors E64d and ALLM and the cathepsin-L inhibitor Z-FY-DK also significantly decreased viral detection in infected cells, while administration of the cathepsin-B inhibitor CA-074 or of TMPRSS2 inhibitors, such as aprotinin and camostat, was ineffective, suggesting that in human cardiomyocytes SARS-CoV-2 relies on cathepsin-L, but not cathepsin-B or TMPRSS2, protease-mediated activation (Bojkova et al., 2020; Pérez-Bermejo et al., 2020). In another study, hPSC-CMs were used to investigate whether drugs targeting proteins that mediate the diastolic dysfunction induced by the "cytokine storm" also affect SARS-CoV-2 viral replication (Mills et al., 2020). To identify candidate targets for pharmacological modulation, Mills et al. mimicked in vitro the cytokine storm induced by SARS-CoV-2 infection by treating human cardiac organoids with combinations of inflammatory molecules. Analysis of inflamed cardiac organoids using phosphoproteomics in combination with single-nuclei RNA-seq identified bromodomain protein 4 (BRD4) as a promising target.
Treatment of hPSC-CMs with the BRD4 inhibitor INCB054329 at the same time as exposure to SARS-CoV-2 did not reduce viral replication or viral load, while pre-treatment with INCB054329 before infection significantly reduced the viral load, decreased the number of infected cells and prevented sarcomere disorganisation, suggesting that BRD4 inhibitors are promising candidates for further investigation (Mills et al., 2020). Other studies have adopted the approach of reducing viral entry by either decreasing the expression of ACE2 and S (Spike) priming proteases on host cells (Samuel et al., 2020) or targeting the virus directly using clinical-grade human recombinant soluble ACE2 (hrsACE2) (Bojkova et al., 2020; Monteil et al., 2020). To identify drugs able to decrease ACE2 expression, hESC-CMs were treated with FDA-approved drugs of the Selleckchem library and their ACE2 expression levels were monitored using high-throughput imaging (Samuel et al., 2020). To find additional compounds that could reduce ACE2 expression, the data obtained from this in vitro screening were used to train a deep learning model that was later applied for an in silico screening. The in silico screening identified several drugs able to reduce ACE2 expression that targeted proteins involved in androgen signaling. Lead drugs that block androgen signaling, such as dutasteride, spironolactone, camostat, ketoconazole and finasteride, significantly reduced the expression of ACE2 and TMPRSS2 in hESC-CMs. Additionally, preincubation with dutasteride significantly decreased the entry of recombinant spike-RBD protein into hESC-CMs, while pretreatment with the androgen receptor agonist 5a-dihydrotestosterone significantly increased the internalization of spike-RBD protein. These findings suggest further exploration of drugs targeting androgen signaling as potential treatments for COVID-19 (Samuel et al., 2020). Using a different strategy, hPSC-CMs and capillary organoids have been used to test the potential of clinical-grade human recombinant soluble ACE2 (hrsACE2) to inhibit SARS-CoV-2 entry (Bojkova et al., 2020; Monteil et al., 2020). hrsACE2 had already been tested as a treatment for SARS-CoV-1 in clinical trials up to phase 2 (Khan et al., 2017). Treatment of hPSC-CMs with hrsACE2 significantly decreased spike protein expression in infected hPSC-CMs (Bojkova et al., 2020). In agreement with this finding, vascular organoids infected with mixtures of SARS-CoV-2 and variable concentrations of hrsACE2 had significantly reduced levels of intracellular viral RNA (Monteil et al., 2020). The observed decrease in the amount of viral RNA was dose-dependent but incomplete even at the highest doses, suggesting that viral entry could be mediated by additional proteins or other mechanisms (Monteil et al., 2020). Nevertheless, the fact that hrsACE2 is able to significantly reduce SARS-CoV-2 cell entry during the early phases of the infection makes it a promising candidate treatment for COVID-19.
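The dose-dependent but incomplete inhibition reported for hrsACE2 is the kind of behavior usually summarized by fitting a four-parameter logistic (Hill) curve. Below is a hedged sketch with made-up data; the residual plateau above zero mimics the incomplete block at high doses, and none of the numbers come from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative dose-response data: hrsACE2-like concentrations (arbitrary
# units) versus fraction of residual intracellular viral RNA.
dose = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
residual = np.array([0.98, 0.90, 0.62, 0.35, 0.22])  # incomplete block at high dose

def hill(c, bottom, top, ic50, n):
    """Four-parameter logistic (Hill) curve: top at low dose, bottom at high dose."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** n)

params, _ = curve_fit(hill, dose, residual, p0=[0.2, 1.0, 1.0, 1.0])
bottom, top, ic50, n = params
print(f"IC50 ~ {ic50:.2f}, residual plateau ~ {bottom:.2f}")
```

A residual plateau clearly above zero, as in this toy fit, is one way such data can point to entry routes not blocked by the tested agent.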
Imatinib, MPA and QNHC were able to significantly reduced viral replication and the number of infected cells in a dose-dependent manner also in hPSC-COs infected with SARS-CoV-2 , further suggesting their potential for future clinical trials. With a similar aim, Kruger et al. have used hPSC-derived intestinal organoids (PSC-HIOs) to evaluate the antiviral activity of three candidate drugs selected based on literature review , including remdesivir, famotidine (Freedberg et al., 2020) and EK1 (Xia et al., 2020;Xia et al., 2019). Their results indicate that remdesivir and EK1 can significantly decrease the number of SARS-CoV-2 infected cells, while the histamine-2 antagonist famotidine was ineffective . Overall, these studies seem to confirm the efficacy of remdesivir, imatinib, MPA and QNHC in inhibiting SARS-CoV-2 infection in vitro also in hPSC-based models of gastrointestinal tract and identify EK1 as a new promising drug for further evaluation. Brain Human cortical organoids have also been used in a drug repurposing study aimed to test the efficacy of sofosbuvir (Mesci et al., 2020). Sofosbuvir is an FDA-approved anti-hepatitis C drug that inhibits viral replication by binding to the RNA-dependent RNA polymerase active site (Elfiky, 2020a). Sofosbuvir is also effective against infections caused by other enveloped single-stranded, positive-sense RNA viruses, such as Zika virus, and it has been identified as a potential COVID-19 treatment by in silico studies (Elfiky, 2020b). Eight-week-old brain organoids treated with sofosbuvir after SARS-CoV-2 infection showed decreased viral accumulation and reduced cell death, as well as restored expression of the presynaptic protein vGLUT1, suggesting that sofosbuvir is a promising candidate to treat the neurological manifestations and damages caused by SARS-CoV-2 infection (Mesci et al., 2020). Current limitations While the hPSC-based platforms described here are useful human in vitro systems to study SARS-CoV-2 infection and identify candidate drugs, they are reductionist models that do not fully recapitulate every aspect of COVID-19 pathophysiology and should be interpreted with caution. Many hPSC-derived cell and organoid platforms are not able to generate all the cell types present in adult human organs. This is either because monolayer cultures and organoids are still immature compared to primary adult cells or because certain cell types elude the differentiation strategy. Ongoing efforts aimed at improving culture conditions are expanding the range of cell types that can be derived in vitro and the maturation stages that can be reached. The comparisons performed so far to benchmark hPSC-based platforms against primary human tissues support the notion that hPSC-derived cells and organoids are sufficiently mature to recapitulate several aspects of SARS-CoV-2 infection in different organs. However, primary human tissues and cellular models based on adult stem cells more accurately model certain features of COVID-19 pathophysiology, such as age-related responses to infection. Another limitation of the majority of hPSC-based platforms currently used in COVID-19 research is the lack of immune system components. Immune cells are emerging as crucial to many aspects of COVID-19 pathophysiology and disease outcome. The use of co-cultures with immune cells would enable one to study in vitro COVID-19 pathophysiology beyond the consequences of direct infection. 
However, even after the addition of immune cells, further work is needed to recapitulate the microenvironment and inter-organ communication present in vivo.

Concluding remarks

Notwithstanding these technical limitations, hPSC-based platforms have emerged as valuable tools to investigate several aspects of COVID-19 pathophysiology. As more studies are performed, they will likely keep expanding the spectrum of tissues and cell types investigated and adopt more complex hPSC-based platforms. Co-cultures of tissue-specific hPSC-derived cells and organoids with hPSC-differentiated immune cells appear promising. These co-cultures would allow us to investigate the interactions between infected and immune cells, uncovering how immunomodulatory molecules released from infected cells affect immune components and how the immune response in turn impacts the infected tissue. They could also enable us to better evaluate how drugs are metabolized, increasing the faithfulness of in vitro drug screenings. To study patient-specific responses to inflammation, an alternative to co-cultures with immune cells could be to treat tissue-specific cells and organoids with pro-inflammatory cytokines or anti-inflammatory treatments before or after infection. Additionally, since hPSC-derived cells and organoids are able to mimic different stages of fetal development, they can be useful to understand how maternal inflammatory responses might affect the growth of the fetus and the effects of prenatal exposure to SARS-CoV-2. As additional receptors and cofactors that mediate SARS-CoV-2 entry have recently been discovered, their expression will probably be investigated in hPSC-based models. Future investigations will also likely test several MOIs in the same study and monitor the long-term effects of SARS-CoV-2 infection and drug responses, expanding our understanding of infection progression and disease outcomes. We expect additional drug screening studies to compare the performance of the most promising drugs identified in different studies. Future drug screening studies could also more systematically identify molecules that worsen SARS-CoV-2 infection. Finally, hPSC-based platforms can be used to understand why the response to SARS-CoV-2 infection varies widely between individuals (Dobrindt et al., 2020) (Fig. 2). As hiPSCs can be generated from individuals with different genetic backgrounds and underlying medical conditions, collections of hiPSC lines may be used to complement ongoing efforts investigating the genetic variants associated with susceptibility to infection and severity of COVID-19. hiPSC-derived cells and organoids may further be employed in drug screening studies to evaluate patient-specific responses to each drug, potentially helping to identify personalized therapies for COVID-19. Patient-specific iPSCs may also be helpful to assess drug safety in individuals with preexisting medical conditions, avoiding additional damage to compromised organs. As more studies adopt and advance hPSC-based platforms to investigate COVID-19 pathophysiology, they will facilitate a better understanding of infection mechanisms and expedite the identification of candidate treatments, complementing the findings obtained from primary human tissues and animal models.
Declaration of interests

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Shuibing Chen reports financial support was provided by the National Institute of Diabetes and Digestive and Kidney Diseases. Shuibing Chen reports financial support was provided by the Bill and Melinda Gates Foundation.
18-year long monitoring of the evolution of H2O vapor in the stratosphere of Jupiter with the Odin space telescope

Comet Shoemaker-Levy 9 impacted Jupiter in July 1994, leaving its stratosphere with several new species, among them water vapor (H2O). With the aid of a photochemical model, H2O can be used as a dynamical tracer in the jovian stratosphere. In this paper, we aim at constraining vertical eddy diffusion (Kzz) at the levels where H2O resides. We monitored the H2O disk-averaged emission at 556.936 GHz with the Odin space telescope between 2002 and 2019, covering nearly two decades. We analyzed the data with a combination of 1D photochemical and radiative transfer models to constrain vertical eddy diffusion in the stratosphere of Jupiter. The Odin observations show us that the emission of H2O decreased almost linearly, by about 40%, between 2002 and 2019. We can only reproduce our time series if we increase the magnitude of Kzz in the pressure range where H2O diffuses downward from 2002 to 2019, i.e. from ~0.2 mbar to ~5 mbar. However, this modified Kzz is incompatible with hydrocarbon observations. We find that, even if allowance is made for the initially large abundances of H2O and CO at the impact latitudes, the photochemical conversion of H2O to CO2 is not sufficient to explain the progressive decline of the H2O line emission, suggestive of additional loss mechanisms. The Kzz we derived from the Odin observations of H2O can only be viewed as an upper limit in the ~0.2 mbar to ~5 mbar pressure range. The incompatibility between the interpretations made from H2O and hydrocarbon observations probably results from 1D modeling limitations. Meridional variability of H2O, most probably at auroral latitudes, would need to be assessed and compared with that of hydrocarbons to quantify the role of auroral chemistry in the temporal evolution of the H2O abundance since the SL9 impacts. Modeling the temporal evolution of SL9 species with a 2D model would be the next natural step.

Introduction

From the first observations of water (H2O) in the stratospheres of the giant planets (Feuchtgruber et al. 1997), the existence of external sources of material to these planets, such as rings, icy satellites, interplanetary dust particles (IDP), and cometary impacts, was demonstrated. Indeed, H2O cannot be transported from the tropospheres to the stratospheres due to a cold trap at the tropopause of all these planets. Regarding the nature of the external sources, it is now demonstrated that Enceladus plays a major role in delivering H2O to Saturn's stratosphere (Waite et al. 2006; Hansen et al. 2006; Porco et al. 2006; Hartogh et al. 2011; Cavalié et al. 2019), while an ancient comet impact is the favored hypothesis in the case of Neptune for carbon monoxide (CO), hydrogen cyanide (HCN) and carbon monosulfide (CS) (Lellouch et al. 2005, 2010; Hesman et al. 2007; Luszcz-Cook & de Pater 2013; Moreno et al. 2017). At Uranus, the situation remains unclear (Cavalié et al. 2014; Moses & Poppe 2017). In July 1994, astronomers witnessed the first extraterrestrial comet impact when the Shoemaker-Levy 9 comet hit Jupiter. Several fragment impacts were observed around −44° latitude (Schulz et al. 1995; Sault et al. 1997; Griffith et al. 2004), which delivered several new species, including H2O (Lellouch et al. 1995; Bjoraker et al. 1996).
Piecing together several observations of H2O vapor in the infrared and submillimeter with the Infrared Space Observatory (ISO), the Submillimeter Wave Astronomy Satellite (SWAS), Odin and Herschel, it was established that Jupiter's stratospheric H2O comes from the SL9 comet impacts (Bergin et al. 2000; Lellouch et al. 2002; Cavalié et al. 2008a, 2012, 2013). Cavalié et al. (2012) used the monitoring of the H2O emission to try and constrain the vertical eddy mixing in Jupiter's stratosphere. Their sample of Odin observations only covered 2002 to 2009 and did not allow them to unambiguously demonstrate that the line emission was decreasing with time, as was expected from the comet impact scenario. Fortunately, the Odin space telescope is still in operation and has continued ever since to regularly monitor the H2O emission from the stratosphere of Jupiter. In this paper, we extend the monitoring presented in Cavalié et al. (2012) by adding new observations from 2010 to 2019, hence doubling the time baseline. While H2O is not as chemically stable as, e.g., HCN (Moreno & Marten 2006; Cavalié et al. 2013) and can, in principle, not be used to constrain horizontal diffusion without a robust chemistry and diffusion model, we assume that oxygen chemistry is now sufficiently well-known after recent progress (Dobrijevic et al. 2014, 2016, 2020; Loison et al. 2017) and use H2O nonetheless as a tracer to constrain vertical diffusion in Jupiter's stratosphere, similarly to HCN, CO and carbon dioxide (CO2) in Moreno et al. (2003), Griffith et al. (2004), and Lellouch et al. (2002, 2006). Our work therefore assumes that H2O had small meridional variability by the time of our first observation in 2002, i.e. of the order of that measured by Moreno et al. (2007) in HCN and Cavalié et al. (2013) in H2O (a factor of 2-3). With nearly two decades of data, we can probe the layers from the level where H2O was originally deposited by the comet to its current location by following its downward diffusion with our spectroscopic observations. We present the Odin observations made between 2002 and 2019 in Section 2. We introduce the photochemical and radiative transfer models used in this study in Section 3. Results of both the photochemical model and the analysis of the Odin observations are given in Section 4, followed by a discussion of the eddy diffusion profile in Section 5. We give our conclusions in Section 6.

Observations

Odin (Nordh et al. 2003) is a Swedish-led space telescope with a 1.1 m diameter. It was launched into polar orbit in 2001, at an altitude of 600 km. It observes in the submillimeter domain in the frequency bands of 486-504 GHz and 541-581 GHz. The observations of the H2O (1_10-1_01) line at 556.936 GHz in Jupiter's stratosphere used in this paper were made with the Submillimeter and Millimeter Radiometer and the Acousto-Optical Spectrometer (Lecacheux et al. 1998), using the Dicke switching observation mode. This mode is the standard Odin observation mode (Hjalmarson et al. 2003). It integrates on a target and on a reference position on the sky by using a Dicke mirror, which compensates for short-term gain fluctuations.
In addition, a few orbits are integrated on the sky 15' away from the source to remove other effects not corrected by the Dicke switching technique, like ripple continuum and continuum spillover from the main beam. A first monitoring of Jupiter's stratospheric H2O emission at 557 GHz was already carried out by Odin over the 2002-2009 period (Cavalié et al. 2012). We have obtained additional data between 2010 and 2019, on the following dates: 2010/11/20, 2012/02/17, 2012/02/24, 2012/10/05, 2013/03/01, 2013/10/04, 2013/10/27, 2014/04/04, 2014/10/17, 2015/04/19, 2016/12/16, 2018/02/02, 2019/02/22, and 2019/10/09. We thus double the time coverage of the Odin monitoring. For each observation date, we accumulated on average 9 orbits of integration time: 6 orbits (of ~1 h integration each) ON Jupiter and 3 orbits OFF, 15' away, to remove the residual background inherent to the Dicke switching scheme. Each observation was reduced with the same method as in Biver et al. (2005) and Cavalié et al. (2012). Residual continuum baselines were removed using a normalized Lomb periodogram (see Fig. 1, top) to produce the baseline-subtracted spectra analyzed in what follows (see Fig. 1, bottom). Odin's primary beam is about 126″ at 557 GHz, whereas the apparent size of Jupiter is about 35″ as Odin observes when Jupiter is in quadrature. We have thus obtained disk-averaged spectra. Even though the temporal evolution of the disk-averaged H2O vertical distribution following the SL9 impacts implies that two different dates should correspond to two different vertical profiles, we chose, when possible, to average the observations in groups of two or three not too far apart in time to increase the signal-to-noise ratio (S/N). All 2012 observations were averaged into a single observation, which we link to an equivalent date of 2012/05/21 for our modeling. This has a very limited impact on the line shape, given that the line is already substantially smeared by the rapid rotation of the planet (12.5 km/s at the equator). The 10 spectra that span the 2002-2019 time period and that we used in our analysis are shown in Fig. 2. Given the limited sensitivity per spectral channel of our observations, very limited vertical information can be directly retrieved from the line profile. The main information then resides in the line area. In addition, the line width is mainly controlled by the rapid rotation of the planet, so that the line amplitude (l) remains the only diagnostic for temporal variability. Because observations were carried out at different Odin-Jupiter distances, there is non-negligible variability in the beam filling factor. To remove this variability and keep only the variability caused by the evolution of the water abundance, we divided the spectra by their observed antenna temperature continuum (c) to produce, and subsequently analyze, line-to-continuum ratio (l/c) spectra. We computed the l/c by averaging the peak of the line over a range of ±5 km/s and the continuum excluding the central ±50 km/s. This has the benefit of cancelling out the variable beam dilution effect that results from the variable Jupiter-Odin distance from one date to another and that impacts the observed line amplitude and continuum in the same way. The evolution of the l/c of the Odin observations between 2002 and 2019 is presented in Fig. 3.
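A minimal sketch of the l/c computation as described in the text (line peak averaged over ±5 km/s, continuum taken outside ±50 km/s); the synthetic spectrum below is illustrative, not an actual Odin product.

```python
import numpy as np

def line_to_continuum(velocity, spectrum):
    """Line-to-continuum ratio: line amplitude averaged over +/-5 km/s
    around line center, continuum averaged outside +/-50 km/s.
    `velocity` is in km/s relative to the line center."""
    line = spectrum[np.abs(velocity) <= 5.0].mean()
    continuum = spectrum[np.abs(velocity) > 50.0].mean()
    return line / continuum

# Synthetic example: a broadened line sitting on a flat continuum.
v = np.linspace(-200.0, 200.0, 801)
spec = 1.0 + 0.08 * np.exp(-(v / 20.0) ** 2)   # toy line profile
print(line_to_continuum(v, spec))
```

Because the same ratio is taken on every date, any multiplicative beam-dilution factor common to line and continuum cancels out, which is the point of working in l/c rather than in absolute antenna temperature.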
We note that the long-term stability of Odin's hot calibrator is better than 2% and is accounted for in the total power calibration scheme. It has no effect on the temporal evolution of the l/c. In addition, any detector sensitivity changes over the course of this monitoring would have similar effects on both the continuum and the line amplitude. The temporal evolution seen in the l/c in Fig. 3 is therefore only caused by changes in the H2O abundance.

Models

In this section, we present the models used to reproduce the decrease in the H2O l/c at 557 GHz observed by the Odin space telescope between 2002 and 2019. These calculations were carried out with a 1D time-dependent photochemical model, to simulate the H2O disk-averaged mole fraction vertical profile in the atmosphere of Jupiter after the SL9 impact at each observation date, and a radiative transfer code, to simulate the Odin spectra. We first present the photochemical model, then the radiative transfer model, and finally our modeling strategy.

Photochemical model

The 1D time-dependent photochemical model used in the present study is adapted from the recent model developed for Neptune by Dobrijevic et al. (2020), which couples ion and neutral hydrocarbon and oxygen species. The ion-neutral chemical scheme remains unchanged (see Dobrijevic et al. 2020 for details). In the following sections, we only outline the parameters specific to Jupiter used in this model.

Boundary conditions

In the first step of the 1D photochemical modeling, we assumed a background flux of H2O, CO and CO2 supplied by a constant flux of IDP, with influx rates Φ_i at the top boundary taken from Moses & Poppe (2017). We also account for the internal source of CO with a tropospheric mole fraction of 1 ppm. Unlike previous photochemical models, we did not include a downward flux of atomic hydrogen at the upper boundary to account for additional photochemical production of H in the higher atmosphere. We assumed that photo-ionization and subsequent ionic chemistry were responsible for this source previously added to the models. All other species were assumed to have zero-flux boundary conditions at the top of the model atmosphere (corresponding to a pressure of about 10^-6 mbar). At the lower boundary (1 bar), we set the mole fractions of He, CH4 and H2 respectively to y_He = 0.136, y_CH4 = 1.81 × 10^-3 and y_H2 = 1.0 − y_He − y_CH4 (see Hue et al. 2018 for details). All other compounds have a downward flux given by the maximum diffusion velocity v = Kzz/H, where Kzz is the eddy diffusion coefficient and H the atmospheric scale height at the lower boundary.
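For illustration, the lower-boundary condition v = Kzz/H can be evaluated with order-of-magnitude numbers; the temperature, mean molecular mass, gravity, and Kzz value below are placeholders, not the model's actual inputs.

```python
import numpy as np

# Maximum downward diffusion velocity v = Kzz / H at the 1-bar lower
# boundary, using an ideal-gas scale height H = k_B * T / (mu * g).
k_B = 1.380649e-23          # J/K
T = 165.0                   # K, representative 1-bar temperature (placeholder)
mu = 2.3e-3 / 6.022e23      # kg, mean molecular mass for an H2/He mix (placeholder)
g = 24.8                    # m/s^2, jovian gravity (placeholder)
Kzz = 1.0e4 * 1e-4          # m^2/s, i.e. 1e4 cm^2/s (placeholder value)

H = k_B * T / (mu * g)      # atmospheric scale height, in meters
v = Kzz / H                 # maximum diffusion velocity, in m/s
print(f"H ~ {H / 1e3:.0f} km, v ~ {v:.2e} m/s")
```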
Temperature and vertical transport

The pressure-temperature profile used in the present study for all observation dates is shown in Fig. 4. Details on this profile can be found in Hue et al. (2018). We chose to use this disk-averaged temperature profile throughout the 18-year observation period of Odin, because Jupiter barely shows disk-averaged seasonal variability (Hue et al. 2018). In addition, Cavalié et al. (2012) already showed that reasonable disk-averaged stratospheric temperature variations could not explain the H2O l/c evolution in the 2002-2009 period. Since the l/c decrease has continued ever since, the disk-averaged stratospheric temperature would have had to drop continuously by ≥10 K over the 2002-2019 period. Even though such variability can be seen locally, such disk-averaged variability is contradicted by observations (Greathouse et al. 2016). Our baseline Kzz eddy diffusion profile (Model A in what follows) is Model C of Moses et al. (2005). This profile yields a CH4 mole fraction profile in agreement with the observations of Greathouse et al. (2010) around the homopause. To fit the temporal evolution of the H2O emission seen by Odin, we had to adjust this profile in the pressure range probed by the H2O line. More details are given in Section 3.3. The resulting eddy profile (Model B hereafter), shown in Fig. 4, follows a simple analytical expression.

Radiative transfer model

We applied the radiative transfer model described in Cavalié et al. (2008b, 2019) and used the temperature profile as well as the output mole fraction profiles of the photochemical model. Both are therefore applied uniformly in latitude and longitude over the jovian disk. Details regarding jovian continuum opacity, spectroscopic data and the effect of the rapid jovian rotation can be found in Cavalié et al. (2008a). We adopt the pressure-broadening coefficients γ and their temperature dependences n for H2O, NH3 and PH3 from Dick et al. (2009), Fletcher et al. (2007), and Levy et al. (1993, 1994). The final spectra are smoothed to a resolution of 10 MHz. Odin's pointing has been checked twice a year and has remained stable to within a few arcsec since launch. However, larger pointing errors can occur when Odin points to Jupiter close to occultation by the Earth (i.e. at the beginning and at the end of observations during an orbit). Only one star-tracker can then be used for the platform pointing stability, the other one pointing at the Earth. This results in a significant decrease of the pointing performance. For each Odin observation, we therefore used radiative transfer simulations to fit any east-west pointing error, despite the large Odin beam size with respect to the size of Jupiter. The 2017.56 spectrum, which has the largest pointing error, is also unsurprisingly the one with the lowest-quality fit. The pointing offset also marginally affects the l/c, and this can best be seen in the results obtained using the Moses et al. (2005) eddy mixing profile (red triangles in Fig. 3), where some small jumps in the l/c are present (e.g. compare the 2017 point to the surrounding 2014 and 2019 ones). The l/c is in principle also affected by north-south pointing errors, especially if the thermal field is not meridionally uniform. However, we have no means to constrain such an error.

[Fig. 3, caption fragment: The orange dots depict the results obtained with the vertical profiles of Model A after rescaling their respective column densities to the temporal evolution modeled by Lellouch et al. (2002) with their chemistry-2D transport model.]

[Fig. 4, caption fragment: The CH4 homopause occurs where the Kzz profile crosses the CH4 molecular diffusion coefficient profile (blue dashed lines). The Kzz value derived by Greathouse et al. (2010) at this level is shown for comparison. Our nominal Kzz is unconstrained by our H2O observations for pressures higher than ∼5 mbar.]

Modeling procedure

The photochemical model was used in two successive steps for a given Kzz profile. First, we ran our model with the background oxygen flux until the steady state was reached for all the species. The results of this steady state¹ then served as a baseline for the second step of the modeling. In this second step, we treated the cometary impact in a classical way (Moreno et al. 2003; Cavalié et al. 2008a).
Modeling procedure

The photochemical model was used in two subsequent steps, for a given Kzz profile. First, we ran our model with the background oxygen flux until the steady state was reached for all the species. The results of this steady state(1) then served as a baseline for the second step of the modeling. In this second step, we treated the cometary impact in a classical way (Moreno et al. 2003; Cavalié et al. 2008a). We considered a sporadic cometary supply of H2O in July 1994 with two parameters: the initial mole fraction of H2O, y0, deposited above a pressure level p0. This level was measured by e.g. Moreno et al. (2003) and found to be 0.2±0.1 mbar. We thus fixed p0 to 0.2 mbar in our study. The value of y0 was then obtained by chi-square minimization and was usually found close to the values reported by Cavalié et al. (2008a, 2012). We also added a CO component with a constant mole fraction of 2.5×10^-6 for p < p0 at the start of our simulations, in agreement with Bézard et al. (2002) and Lellouch et al. (2002), to account for H2O-CO chemistry. The model was then run for integration times corresponding to the time intervals between the comet impacts and the Odin observation dates. The abundance profiles were extracted for each Odin observation date. We then simulated the H2O line at 556.936 GHz for each date and compared the resulting spectra with the observations using the χ² method. We started with Kzz Model A, and adjusted it subsequently to obtain Model B by cycling the whole procedure until a good fit of all the H2O lines was obtained. Fig. 3 shows that the decrease of the l/c, only hinted at in the first half of the monitoring (Cavalié et al. 2012), is now demonstrated, with a decrease of ∼40% between 2002 and 2019. This is evidence that the vertical profile of H2O has evolved within this time range, and we thus used it to constrain vertical transport with our modeling.

(1) A model with only neutral chemistry was also run to study the effect of the ionic chemistry on the photochemistry of Jupiter and to confirm what Dobrijevic et al. (2020) found for Neptune. Indeed, the ion-neutral coupling affects the production of many species in Neptune's atmosphere. In particular, it increases the production of aromatics and strongly affects the chemistry of oxygen species. We find similar effects in Jupiter.

Results

We first estimated the level of residence of H2O as a function of time with forward radiative transfer simulations, using parametrized vertical profiles in which the H2O mole fraction is set constant above a cut-off pressure level. Despite the limited S/N of our observations, we were able to estimate these levels as a function of time. The most noticeable result is that we see the downward diffusion of H2O as the cut-off level evolves from ∼0.2 mbar to ∼5 mbar over the 2002-2019 monitoring period. This is the pressure range in which we could constrain Kzz. For each Kzz profile we tested, we explored a range of y0 values (with p0 always fixed to 0.2 mbar) and generated the H2O vertical profile for each Odin observation between 2002 and 2019. We then compared the lines resulting from these profiles with the observations in terms of l/c, and searched for acceptable fits (using a reduced χ² test) of the temporal evolution of the H2O l/c at 557 GHz. The best-fit values of y0 were usually found close to the values of Cavalié et al. (2008a, 2012), which were in agreement with previous ISO observations. We first noted that there is no (y0, p0) combination that enables fitting the Odin l/c for all dates when using the Kzz profile derived by Moses et al. (2005) (our Model A), even though the main hydrocarbon observations(2) are reproduced (see Fig. 5). The red points in Fig. 3 show, for instance, the results obtained using our nominal parameters for y0 and p0 (see below).

(2) Results for the other species are not depicted in the paper, but can be obtained upon request.
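To make the y0 selection concrete, the snippet below performs a reduced χ² scan over a grid of y0 values. Only p0 = 0.2 mbar and the approximate y0 range come from the text; the "observed" l/c values, their uncertainties and the exponential toy forward model are invented stand-ins for the full photochemical and radiative transfer chain.

```python
import numpy as np

# Illustrative reduced chi-square scan over y0 (p0 is fixed to 0.2 mbar).
# All numbers below are made up for demonstration purposes.
dates  = np.array([2002.9, 2009.3, 2014.2, 2019.2])   # hypothetical epochs
lc_obs = np.array([0.072, 0.060, 0.051, 0.043])       # hypothetical l/c values
sigma  = np.array([0.004, 0.004, 0.005, 0.005])       # hypothetical 1-sigma errors

def simulate_lc(t, y0):
    # Toy forward model: l/c scales with the deposited H2O and decays
    # as the water diffuses down toward its condensation level.
    return 8.5e5 * y0 * np.exp(-(t - 1994.5) / 32.0)

def reduced_chi2(y0, n_free=1):
    r = (lc_obs - simulate_lc(dates, y0)) / sigma
    return np.sum(r ** 2) / (len(dates) - n_free)

y0_grid = np.linspace(0.8e-7, 1.4e-7, 61)
best = y0_grid[np.argmin([reduced_chi2(y) for y in y0_grid])]
print(f"best-fit y0 ~ {best:.2e}")
```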
In the first years after the impacts, we find that a small fraction of H2O (and CO) is converted into CO2, as shown by Lellouch et al. (2002) and also previously found by Moses (1996) for impact sites. The main difference between the two studies is that our model is a 1D, globally-averaged model with complete and up-to-date chemistry, while in their study of the evolution of H2O and CO2, Lellouch et al. (2002) used a simplified chemistry model but a latitude-dependent model describing the spatial evolution of the SL9-generated compounds due to meridional eddy mixing. The initial disk-averaged H2O and CO abundances are thus lower in our Model A than those in the narrow latitudinal band in Lellouch et al. (2002). This is also true for our Model B (see hereafter). Given our assumed initial CO and H2O values, we find a loss of 5% of the H2O column between the impacts and 2019, and of only 1% between 2002 and 2019 (see Fig. 6), which does not translate into a proportional decrease of the spectral line l/c. We note, however, that the loss in the first years following the impacts is likely to be underestimated in this model (and so would be the production of CO2), because the H2O and CO abundances were ∼10 times higher in the latitudes around the impacts. A simple 1D simulation with such abundances (Model A', initial CO mole fraction of 2.5×10^-5 above 0.2 mbar, i.e. 10 times more than in Model A) until 1997 (i.e., the date of the ISO observations of Lellouch et al. 2002) leads to a loss of 31% of the H2O column (vs. only 3% in Model A). However, even if the initial loss is indeed underestimated in our Model A, the slope of the l/c between 2002 and 2019 would not be significantly altered between Model A and a model that would start with the conditions of Model A' and continue with disk-averaged abundances at the start of our Odin campaign (still assuming that the factor of 2-3 horizontal variability seen in H2O in 2009 by Cavalié et al. 2013 is small enough that it can be neglected). The slope might even be flatter, given that more H2O would have been consumed in the first place. After 2002, the loss of H2O would then be even slower. Actually, if we take the vertical profiles of Model A and scale their respective column densities to match the temporal evolution of the chemistry-2D transport model of Lellouch et al. (2002) (from their Fig. 12), we find an intermediate case, shown in Fig. 3. However, this model falls short by a factor of 4 in reproducing the temporal decrease of the l/c observed between 2002 and 2019, as it only produces a decrease of the l/c of ∼10%.

Fig. 6 (caption): Column densities as a function of time after the SL9 impacts (1994) for CO, H2O, and CO2. Model A results are the light green, cyan and orange curves, while Model B results are the dark green, dark blue and red ones. The offset between the two models results from the different background columns resulting from the IDP source (see Section 3.1.1). The period covering the Odin monitoring (2002-2019) is highlighted in grey.

By increasing the magnitude of Kzz in the millibar and submillibar pressure ranges, for instance from 1.4×10^4 cm^2 s^-1 to 5.2×10^4 cm^2 s^-1 at 1 mbar and from 7.8×10^4 cm^2 s^-1 to 1.5×10^5 cm^2 s^-1 at 0.1 mbar, we could accelerate the decrease of the l/c to before the start of the Odin monitoring.
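As an illustration of the magnitude of this change, the sketch below builds a Model B-like profile by boosting a baseline Kzz between 0.1 and 1 mbar. The power-law stand-in for Model A happens to reproduce the two baseline values quoted above; the log-pressure interpolation of the boost factors (clamped outside 0.1-1 mbar) is our own choice, not the authors' published profile.

```python
import numpy as np

# Baseline stand-in (Model A): a power law pinned to the quoted values,
# 1.4e4 cm^2/s at 1 mbar and ~7.8e4 cm^2/s at 0.1 mbar.
def kzz_model_a(p_mbar):
    return 1.4e4 * (1.0 / p_mbar) ** 0.75          # cm^2 s^-1

# Boost factors quoted in the text: ~3.7x at 1 mbar, ~1.9x at 0.1 mbar,
# interpolated linearly in log10(p) and clamped at the endpoints.
def kzz_model_b(p_mbar):
    boost = np.interp(np.log10(p_mbar), [-1.0, 0.0],
                      [1.5e5 / 7.8e4, 5.2e4 / 1.4e4])
    return kzz_model_a(p_mbar) * boost

for p in (1.0, 0.1):
    print(f"{p} mbar: A = {kzz_model_a(p):.2e}, B = {kzz_model_b(p):.2e} cm^2/s")
```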
With this Kzz Model B, shown in Fig. 4, we were able to fit the pattern of the l/c temporal evolution within the error bars. Fig. 3 shows our best results (green crosses, blue squares, and pink stars). These results show that the initial disk-averaged H2O mole fraction deposited by SL9 above the 0.2 mbar pressure level was likely in the range y0 = 1.0×10^-7 to 1.2×10^-7. In Model B, we find that the column of CO2 tops out at 0.1% of the total O column (which again must be underestimated) and starts decreasing after 2×10^8 s (∼6 years after the impacts), when the production of CO2 by the CO+OH reaction becomes less efficient than CO2 photolysis. The production and loss mechanisms for oxygen species as of 2019 are summarized in Fig. 7. It essentially shows that H2O is efficiently recycled and is only lost through condensation, while CO2 is lost to CO and H2O via photolysis. The evolution of the H2O abundance profile according to our Model B (with y0 = 1.1×10^-7 above a pressure level p0 = 0.2 mbar) is shown in Fig. 8 at the time of the SL9 impacts and of each Odin observation. We also show a prediction for 2030, when JUICE (Jupiter Icy Moons Explorer) will start observing Jupiter's atmosphere. We note that the simulation gives a decrease of the H2O abundance as a function of time for pressures lower than ∼5 mbar between 2002 and 2019, while the abundance tends to increase at higher pressures because of vertical mixing. We finally verified the agreement of Model B at steady state for the main hydrocarbons. Fig. 5 (top) shows that our CH4 profile remains in good agreement with the Greathouse et al. (2010) observations, which is not surprising since Models A and B share a common homopause. However, the C2H6 profile (Fig. 5, bottom) is incompatible with the observations, questioning the validity of Kzz Model B. This profile is not a unique solution, and properly deriving the error bars on its vertical profile would require a full retrieval, which is beyond the scope of this paper. However, we performed several tests to look for other (Kzz, y0, p0) combinations to explain the observed l/c evolution and found our Kzz Model B to be quite robust, as there is no solution that enables fitting both the H2O temporal evolution and the hydrocarbon vertical profiles simultaneously. There is thus a contradiction between the Odin H2O monitoring observations and the hydrocarbon observations in terms of vertical mixing, when interpreted with a 1D time-dependent photochemical model.

Discussion

In a 1D photochemical model, vertical transport is dominated by molecular diffusion at altitudes above the homopause and by eddy mixing below this limit. The Kzz profile is a free parameter of 1D photochemical models. The best way to constrain this parameter is to compare the model results with observational data for some particular species. In the case of Titan, an inert species like argon (Ar) is very useful for this purpose. For the giant planets, the situation is more difficult. The homopause can be constrained using CH4 observations in the upper atmosphere, since its profile is driven by molecular diffusion. Below the homopause, the Kzz profile is usually constrained from a comparison between observations and model results for the main hydrocarbons (C2H2 and C2H6). Unfortunately, this is an imprecise methodology, since model results have strong uncertainties (see for instance Dobrijevic et al. 2010 for Neptune and Dobrijevic et al. 2011 for Saturn), which can be much larger than the uncertainties on the observational data.
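The homopause criterion mentioned above can be illustrated numerically: it is the level where the eddy coefficient equals the molecular diffusion coefficient. Both profiles below are illustrative toys of roughly Jupiter-like magnitude, not the profiles used in the paper.

```python
import numpy as np

# Toy homopause finder: the CH4 homopause lies where eddy mixing (Kzz)
# crosses the CH4 molecular diffusion coefficient D, which scales roughly
# as 1/n (i.e. ~1/p at constant temperature). Illustrative numbers only.
p = np.logspace(0, -8, 800)               # pressure grid (bar)
kzz = 1.4e4 * (1e-3 / p) ** 0.5           # cm^2/s, assumed slow rise with altitude
D = 0.4 / p                               # cm^2/s, ~0.4 cm^2/s at 1 bar

i = np.argmin(np.abs(np.log10(kzz / D)))
print(f"toy CH4 homopause near {p[i] * 1e3:.1e} mbar, Kzz ~ {kzz[i]:.1e} cm^2/s")
```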
In a recent photochemical model of Titan, Loison et al. (2019) used H2O and HCN to constrain Kzz in the lower stratosphere. One of the reasons is that the chemical processes that drive their abundances are expected to be simpler than for hydrocarbons, and the model uncertainties caused by chemical rates are therefore more limited. In the case of Neptune, Dobrijevic et al. (2020) showed that the uncertainties on model results for H2O are very low compared to other oxygen species and hydrocarbons; this species is therefore currently the best tracer of vertical diffusion in the stratosphere of Neptune, assuming its chemistry is well known. We considered in the present paper that this is also the case for Jupiter (and the other giant planets). The delivery of H2O, among other species like HCN, CO and carbon sulfide (CS), to Jupiter's stratosphere by comet SL9 in 1994 (Lellouch et al. 1995) further enhances the interest of using these species as tracers of horizontal and vertical dynamics in this atmosphere, provided that either they are chemically stable over the time considered or their chemistry is properly modeled. For instance, Moreno et al. (2003), Griffith et al. (2004) and Lellouch et al. (2006) used HCN, CO and CO2 to constrain longitudinal and mostly meridional diffusion in the years following the impacts, even though CO2 is not chemically stable.

In the present study, we used nearly two decades of H2O disk-averaged emission monitoring with Odin and a 1D photochemical model to constrain vertical diffusion in Jupiter's stratosphere. Not only does the modeling of the H2O vertical profile suffer less from chemical rate uncertainties than that of hydrocarbons (Dobrijevic et al. 2020), but the progressive downward diffusion of H2O from its initial deposition level (p0 in our model) enabled us to probe Kzz at various altitudes as a function of time. This will remain true for the years to come, until the bulk of H2O eventually reaches its condensation level at ∼30 mbar. When assuming Model A for Kzz, we cannot fit the ∼40% decrease observed in the l/c, even if we find a global decrease of the H2O column of 5% between the impacts and 2019. H2O is too efficiently recycled for its profiles to reproduce the time series of Odin observations. We can only fit this time series with the 1D model if we alter Kzz to Model B. While the initial loss of H2O is caused by the build-up of the CO2 column, condensation becomes the main loss factor after ∼2×10^8 s (about 6 years after the impacts) and enables fitting the H2O observations. CO2 also starts being lost to H2O, followed by subsequent condensation of the H2O. While the Odin time series of H2O observations can be fitted with our 1D model and Kzz Model B, the resulting C2Hx profiles are inconsistent with numerous observations, even when accounting for the joint error bar of the observations and the photochemical model itself. This tends to demonstrate that our 1D model cannot jointly fit the C2Hx profiles and the H2O observation time series. Kzz Model A probably remains the best disk-averaged estimate of Kzz in Jupiter's stratosphere. In this context, Model B can only be seen as an upper limit. H2O must then have an additional loss process. We propose as a promising candidate that the regions of enhanced loss of H2O are the auroral regions of Jupiter, by means of ion-neutral chemistry.
Dobrijevic et al. (2020) showed that ion-neutral chemistry affects the abundances of oxygen species in Neptune's atmosphere, even without including magnetospheric ions and electrons. With energetic electrons precipitating down to the submillibar level under Jupiter's aurorae (Gérard et al. 2014), ion-neutral chemistry could be the cause of an enhanced loss of H2O, possibly producing excess CO2. This could explain the peak in the CO2 meridional distribution seen 6 years after the SL9 impacts only at the south pole by Lellouch et al. (2006), as SL9-originating CO and H2O had not yet reached the northern polar region (Moreno et al. 2003; Cavalié et al. 2013). It must be noted that unexpected distributions of hydrocarbons have already been found in Jupiter's auroral regions. Kunde et al. (2004) and Nixon et al. (2007) found that, contrary to C2H2, the zonal mean of C2H6 did not follow the mean insolation and peaked at polar latitudes. Hue et al. (2018) demonstrated that this discrepancy between two species that share a similar neutral chemistry cannot be explained either by neutral chemistry or by a combination of advective and diffusive transport. In turn, they proposed ion-neutral chemistry in the auroral region as a mechanism bringing the zonal mean of C2H6 out of equilibrium with the solar insolation. More recently, Sinclair et al. (2018) measured the longitudinal variability of the main C2Hx species at northern and southern auroral latitudes. They found that C2H2 and C2H4 were significantly enhanced at millibar and submillibar pressures under the aurora, while C2H6 remained fairly constant. In addition, Sinclair et al. (2019) found that heavier hydrocarbons like C6H6 were also enhanced under the aurorae. This points to a richer chemistry than that seen at lower latitudes, increasing the production of several hydrocarbons and ultimately producing Jupiter's aerosols (Zhang et al. 2013, 2016; Giles et al. 2019). Such a richer chemistry could also apply to H2O and other oxygen species. At this point, however, this remains speculative and requires modeling the auroral chemistry under Jovian conditions with and without SL9 material.

Fig. 7 (caption): Simple schematic of the chemical network of oxygen species based on the chemical loss terms integrated over altitude. This illustrates the fate of oxygen species from the external input (IDPs and/or comet) of H2O, CO and CO2 in the atmosphere of Jupiter. For each species, the main loss processes are given. Photolysis is represented by hν and, for reactions, the other reactant is given as a label. The percentage of the total integrated loss term over altitude is given, with the altitude at which this process is maximum. Blue: sub-chemical scheme of H2O-related species. Black: sub-chemical scheme of CO-related species. Green: sub-chemical scheme of CO2-related species. Percentages change slightly depending on the amount of water in the atmosphere (i.e. before and after the comet impact), but the whole scheme stays the same. Values given here correspond to the state of the chemistry just after the influx of H2O due to the impact.

Conclusion

In this paper, we presented disk-averaged observations of H2O vapor in the stratosphere of Jupiter carried out with the Odin space telescope. This temporal monitoring of the H2O line at 557 GHz spans nearly two decades, starting in 2002, i.e. 8 years after the delivery of the water by comet SL9.
We demonstrated that the line-to-continuum ratio has been decreasing as a function of time, by ∼40% over this period. Such a trend results from the evolution of the H2O disk-averaged vertical profile, and we used it to study the chemistry and dynamics of the Jovian atmosphere. We thus used our observations to constrain Kzz at the levels where H2O resided at the time of our observations, i.e. between ∼0.2 and ∼5 mbar. Using a combination of photochemical and radiative transfer modeling, we showed that the Kzz profile of Moses et al. (2005) could not reproduce the observations. We had to increase the magnitude of Kzz by a factor of ∼2 at 0.1 mbar and ∼4 at 1 mbar to fit the full set of Odin observations. However, this Kzz profile makes the C2H6 profile fall outside the observational and photochemical model error bars and is thus not acceptable. As a result, 1D time-dependent photochemical models cannot reproduce both the main hydrocarbon profiles and the temporal evolution of the disk-averaged H2O vertical profile. A possible explanation is that these species still vary locally more sharply as a function of latitude than the factor of 2-3 indicated by the low-spatial-resolution observations of Cavalié et al. (2013) (such variations cannot be studied with 1D models, by definition). Sinclair et al. (2018, 2019) have already demonstrated that the auroral regions of Jupiter harbor chemistry influencing the hydrocarbons that is not seen elsewhere on the planet. The same may also be true for H2O, but disk-resolved observations with higher resolution than Herschel achieved in 2009-2010 would be required to test this hypothesis, possibly with the James Webb Space Telescope (Norwood et al. 2016). In the meantime, the continuation of the monitoring of the Jovian stratospheric H2O emission with Odin will help prepare future observations to be carried out by the Jupiter Icy Moons Explorer (JUICE). The study that we have presented in this paper will help prepare the JUICE mission, which will study Jupiter and its moons in the 2030s. One instrument of its payload, the Submillimetre Wave Instrument (SWI; Hartogh et al. 2013), will observe the same H2O line as the one observed by Odin to map the zonal winds in the stratosphere of Jupiter from high-resolution spectroscopy at high spatial resolution. The continuation of the Odin monitoring is thus crucial to refine our estimates of the H2O abundance and vertical profile for the 2030s and thus optimize the SWI observation program.

Fig. 8 (caption): Evolution of the H2O abundance profile in the stratosphere of Jupiter for y0 = 1.1×10^-7 above a pressure level p0 = 0.2 mbar (SL9 parameters) and Kzz Model B. These profiles are obtained from the photochemical model and a comparison with the observations. The red dotted abundance profile represents the initial profile of H2O at the time of the SL9 comet impacts in 1994. Each solid curve represents the abundance profile of H2O at dates corresponding to Odin observations. The black dashed profile represents the abundance of H2O that we predict for 2030.
Welcoming outsiders: The nascent Jesus community as a locus of hospitality and equality (Mk 9:33–42; 10:2–16)

The recent global economic crisis left millions of people destitute, without formal work, and further alienated the poor from the rich. As a remedy, modern Neoliberalism proposes that the poor must hope and steadily work their way up the economic ladder. What is the solution to such an unbridgeable social and economic chasm? This article used the contemporary situation of economic inequality to imagine events during the first century, during Jesus' time, whereby the rich increasingly amassed wealth to the disadvantage of the poor majority. In this article, Mark 9:33–42 and 10:10–16 were used to explore how Jesus developed an alternative economic system, one that contrasted in every respect with that of the hierarchical and patriarchal Roman Empire. This article argued that Jesus formed communities that directly responded to the economic challenges faced by the landless and the homeless majority by creating an alternative economy based on love and hospitality. This was done by proposing that Mark 9:33–42 and 10:2–16 are amongst the passages where the two rival economies were contrasted by way of two different household economies: firstly, the economic system outside the house, which typified the hierarchical Roman economy, and secondly, the economic system inside the house, which referred to Jesus' alternative system whereby he taught his disciples to welcome the homeless, the landless and the poor. Before developing this further, the plausible social context of the stories was attended to.

Introduction: Conflicting social ideologies

The two stories in Mark 9:33–42 and Mark 10:2–16 indicate that, amongst other traditions about Jesus, the Markan community shared and remembered stories where Jesus placed a child amongst his disciples. The context in which such stories were told is difficult to establish. However, in agreement with Joanna Dewey (2004), because of the reference to households and the strong criticism of the political status quo, it is possible to suggest that the stories may have been retold at homes as a censure or disapproval of the prevailing social system.
The disciples were perplexed upon seeing a child in their midst and possibly interpreted the action as a violation of social hierarchies, power and belonging. This passage has never been immune to different interpretations, and one dominant perspective suggests that Jesus used the story as a metaphor to teach humility and hospitality (Orton 2003; Patte 1983; Gundry-Volf 2000). This interpretation is mostly emphasised in churches, ascribing identity to Jesus' followers as pious and humble. Arguably, this interpretation focuses more on the comparative meaning derived from the metaphor of the child. The validity of this perspective makes sense if children were indeed regarded as pious, humble and obedient. Reider Aasgaard (2007), who has done extensive research on children in antiquity, noted that children occupied the lowest social strata, were marginalised and represented the opposite construction of power and prowess in this milieu (Aasgaard ibid). This article will not pursue Aasgaard's perspective, but is interested in how the 'child' metaphor functioned as a means to destabilise discursive constructions of power, space and hierarchy in this ancient context. Using social scientific perspectives and ideological criticism, this article argues that the story is concerned with space, place and boundaries (Elliott 2003). This will be done by establishing the social settings and power dynamics that are deconstructed by these narratives.

Using this perspective on this mini-drama, the story seemingly reveals delicate issues regarding who belonged inside and who remained outside. If a child who, according to the hegemonic ancient constructions and representations of sex and gender, was regarded as a social misfit was placed in the midst of male disciples, this article suggests that the story goes beyond the metaphor of humility. Arguably, it deconstructs notions of belonging, place and power, making it plausible to read the story as an ideological critique aimed at deconstructing established social hierarchies which sidelined the poor and the weak (Spitaler 2011; Crossan 1991). In the story, the disciples were not economically rich, but being free men made them occupy a better social class than children, women and slaves. Narrated and acted out within a strongly patriarchal community, conscious of hierarchy, Jesus' gesture may be understood as deconstructing the socially constructed meanings of power, place and social identity, which agrees with Dominic Crossan's (ibid) assertion that, by accepting a child who embodied symbols of social and economic vulnerability, Jesus taught that the kingdom of God belonged to the poor. This perspective is difficult to sustain before attending to the following questions pertaining to the social context of the story:

1. What is the context of the stories?
2. What caused social fractures?
3. Through this action, did Jesus envision an alternative kingdom with different moral and ethical values than the prevailing kingdom?
This article will contribute by exploring how the story was remembered as a narrative that deconstructed hierarchy and inequality, focusing on how the story created an alternative social space in which the poor were welcomed and nurtured. Arguably, these stories seemingly grapple with complex questions regarding the exclusion of outsiders. Instead of abandoning the poor, can society radically reorganise itself to accommodate the poor? Using the child as metaphor, Jesus challenged the hegemonic social boundaries and established a new system based on love, hospitality and care for the marginalised. The question of the plausible social context of the stories will now be attended to.

The context: Disrupted livelihoods

The meaning of the stories is inconclusive without the broader economic and social background that forms the backdrop to the narratives.

Location

As simple as it sounds, the debate around the location and the identity of Mark's community is inconclusive. Markan scholars are in a tussle over whether Rome, the traditionally preferred location, or Galilee was the location of the community. Four reasons are presented in support of Rome as the location: 1. The tradition started with Papias, Bishop of Hierapolis during the 2nd century, who asserted that Mark, Peter's disciple, composed the gospel whilst in Rome. Guthrie (1990) and Collins (2007) further argue that it is inconceivable that the author composed the book whilst in Palestine, given the numerous internal literary inconsistencies present within the book. For example, the reference to Dalmanutha (Mk 8:10), which is an unknown location, the reference to Gerasenes as extending to the sea of Galilee (Mk 5:1), the description of Bethsaida as a village (Mk 8:26), as well as the incorrect description of Herod's family (Mk 6:17), cause readers to doubt the writer's proximity to and knowledge of Palestine. 4. Furthermore, the existence of Latin words, for example legion (Mk 5:9) and praetorian (Mk 15:16), supports claims that the audience understood Latin, a language spoken in Rome, thus placing the community's location in Rome. Equally, throughout the gospel the author took time to explain to his audience some Aramaic words, the language spoken in Palestine, such as Boanerges (Mk 3:17) and talitha cum (Mk 5:41), which may indicate that the audience had little knowledge of Aramaic and that they resided outside Palestine (Guthrie ibid).

On the other hand, strong arguments are presented in support of Galilee as the possible location. This article respects the views that support Rome as the location, but deems those in support of Galilee more plausible: 1. Mark is the only gospel that took considerable time and space to detail the destruction of the Jerusalem temple and its impact on both Jews and Christians, an explanation that makes sense if the community was close to Palestine (Mk 13:1ff.). 2. In addition, the narrative structure of Mark's gospel situates most of Jesus' activities in Nazareth, Capernaum and Gadara, which are places located in the northern part of Palestine. Mark clearly showed Galilee as the location where Jesus and his disciples gathered after his resurrection, in contrast to Jerusalem, which was portrayed as the hostile venue where Jesus was killed. Throughout the gospel, Galilee was represented as a new emerging community after Jesus' death and resurrection (Myers 1991; Horsley 2005; Roskam 2004; Marxsen 1969; Kelber 1979).
Timing

Whilst the anti-imperial sentiments within the gospel equally support Rome, the context is more plausible if located in Galilee after 70 CE. During the second half of the 1st century, the region was engulfed in political turmoil, partly caused by Nero's incompetence, giving opportunity for sporadic revolt throughout the empire (Tacitus 1964). The army generals revolted against the Empire, which threw the region and the entire Empire into general political decay. For a while, the Empire spent considerable effort and resources fighting sporadic revolts, for example the revolt in Gaul under Vindex in 68 CE and the revolt by the Celts, led by Civilis, in 69 CE (Josephus 1968). Throughout the Empire, political instability inspired other ethnic groups to seek political freedom from the Romans.

In Palestine, especially in Galilee, the Jews were amongst the various ethnic groups that revolted against Roman imperial rule in demand of political autonomy. The Jewish historian Flavius Josephus (1968), though writing from the perspective of the Empire, documented that religious grievances were the main causes underpinning the Jewish revolt. In response, the Romans destroyed the temple in 70 CE, but the people's resolve did not waver. Instead, it exacerbated their hatred of the Empire. Amongst other reasons, the Jews were enraged by the temple tax that was redirected towards the rebuilding of the temple of the Roman god, Jupiter. After the temple destruction, the Jews mostly feared that the Romans would convert the Jerusalem temple into a pagan sanctuary. In response, some prophetic movements amongst the Jews, for example in Cyrenaica, rallied people in the wilderness, promising them deliverance and miracles like those of Moses (Theissen 1991).

Challenges on their livelihood

Besides political instability, the Galileans also faced mammoth economic challenges. Generally, the Galileans were rural peasants, and land was central to their subsistence livelihood. They lived as country people in various villages scattered throughout the province (Freyne 2000b), maintaining their roots in the land and surviving through subsistence farming. The influence of Roman Imperialism coincided with the development of urbanisation and the monetisation of the economy, which severely affected their livelihoods.

Commercialisation of land and tenancy fragmented kinship bonds and family values. A majority of city dwellers, for example in Tiberias, were retired Roman veterans and officials who served the interest of the Empire. Largely, this explains the peasants' resentment towards the elite, whom they accused of buying land and forcing the peasants into feudal tenancy.

Amongst other issues, Rome demanded huge amounts of tribute from peasants, which negatively affected their livelihoods. Each province within the Empire was required to pay a stipulated amount of tribute and tax. Galilee was required to pay approximately 200 talents from all the territories of Herod Antipas (Freyne 2000b). The stipulated revenues were paid every two years, and a quarter of the harvest was handed over to Rome. Due to the rapid expansion of urban populations, especially in Rome, frequent food shortages and high tributes were inevitable. Tributes were paid to Rome in the form of grain such as wheat and other agricultural produce (Freyne ibid). Corn was demanded directly from the peasants or could be harvested from imperial estates that were established in the provinces at the expense of the peasants.(2)
The decree to pay tributes by means of wheat to Caesar remained in effect for almost the rest of the 1st century. The growing demand for food, due to urbanisation, led to the creation of large estates owned by feudal lords (Kloppenborg 2006; Oakman 2008). Agricultural specialisation shifted production from small household consumption to estate farming, resulting in peasants being driven from their land to marginal lands or being assimilated as tenants on farms. Sean Freyne (2000a) commented:

In an agrarian economy, specialisation would mean a shift in land owning patterns, from small families running farms to large estates in which the tenants cultivate the estate. They often work for an absentee landowner under a manager, receiving a subsistent living in return for their labour. (p. 34)

To achieve this, the elite amassed fertile lands through forceful expropriation or default in payment of taxes by smallholders. In some instances, the Empire grabbed land from the peasants to create estate farming for large-scale production. The peasants were displaced from their land in Gaba, Trachonitis and from fertile lands in the great plain, as well as in lower and upper Galilee, to resettle veterans (Freyne 2000a). As the need for more land for commercial cultivation grew, land displacement became widespread (Horsley 2001). From the time of Herod the Great and Herod Antipas, land ownership was increasingly concentrated in the hands of royal estates. The rich controlled most of the land and owned a large number of slaves. The majority of the peasants were displaced from their ancestral land, resulting in land alienation and tenancy becoming a common feature, especially amongst peasants.

(2) Freyne says that imperial corn confiscated from the peasants was stored in upper Galilee. Josephus reports that the peasants wanted to break in and steal the imperial grain. Josephus also reports that corn was stored in lower Galilee at Gaba. The corn was collected from the peasants and belonged to Queen Bernice, wife of King Agrippa II.

The impact of land displacement on household and kinship

The repercussions were evident. Intensive production made the traditional family system, based on kinship and reciprocity, collapse (Guijarro 1997). When the peasants lost their ancestral land, they also lost their livelihood and their right to subsist.

Due to land displacement, a large percentage of peasants lived as labourers or tenants on estates owned by absentee landowners. The majority of tenants were peasants who had lost their land (Kloppenborg 2006). On such farms, the absentee farmer rented his farm to tenants who would cultivate it on his behalf. The landlord always benefited from such an arrangement, since he would get half or two-thirds of the harvested crops. In the event of a poor harvest, the tenant suffered arrears from unpaid rent. One poor harvest would put the tenant into arrears which would take several years to repay (Kloppenborg ibid). In many instances, there were conflicts and forced evictions when the tenant failed to pay back the debt. This sometimes resulted in the tenant losing all he had, including the right to subsist. The parable of the vineyard (Mk 12:1-9) illustrated the use of violence by the landowner when evicting the tenants from his farm (Kloppenborg ibid; Horsley 1997).
Debt was a commonly used mechanism to dispossess the peasants of their land (Oakman 2008:74). Such measures shattered the peasants' livelihood and their ability to continue with their subsistent lives. Many peasants became landless and homeless due to debt (Rajak 1983). In some instances, there was a widespread rise of banditry in the countryside, especially during the time of Governor Felix (52-60 CE; Rhoads 1976).

Commentaries on the gospel of Mark concur that the stories reveal a socially and politically divided society. In his detailed commentary on Mark's gospel, Ched Myers (1991) located the stories during a pivotal period in Jesus' ministry. Myers read the first chapters of Mark's gospel (ch. 1-7) as Jesus' campaigns in Galilee against 'evil' through exorcisms and healing. These campaigns were an indirect repudiation of the effects of the Roman presence in the region. Myers suggests that, after this heavy rebuttal of Roman imperialism, Jesus set out to reorder social relations.

A closer scrutiny shows that the two passages are related to each other, which made some scholars suggest that the stories might have been duplicated (Evans 2001). It can be argued that, since they were told in two different geographic locations, these stories were separate. The first story happened in a Capernaum village (Καὶ ἦλθον εἰς Καφαρναούμ; Mk 9:33), whilst the second story happened across the river Jordan at an undisclosed place (Mk 10:10), thus giving a progression of Jesus' counter-household ideology from Capernaum to places across the Jordan. The first story happened in the village of Capernaum, which was well-known for its booming fishing industry, but also for oppression and poverty amongst the poor (Reed 1999). Since the story was told from the perspective of the poor, we can imagine that the prevalence of poverty made the story resonate with many people in the village.

The second story happened across the river Jordan, in regions known for their fertile soils and flat lands where, according to Sean Freyne (2000b), the peasants were forcefully removed from their lands to resettle Roman veterans. It is likely that the people from these regions might have suffered land dispossession and tenancy. Instead of reading the stories as narratives that teach humility, this article suggests that the stories deconstruct established notions of power, hierarchy and space. These two separate incidents testify that the Jesus movement sought to transform social hierarchies around Nazareth and across the river Jordan.
Interestingly, a common feature can be noted in the way the stories were told; for example, both stories happened within the house (Mk 9:33; Καὶ ἐν τῇ οἰκία γενόμενος). This is significant because, in antiquity, a household was the primary economic institution, connected to the larger society through kinship ties (Moxnes 2003). Halvor Moxnes (ibid) elaborated that, in antiquity, the term household included the father, mother, children, relatives and slaves, a different conception from our modern understanding of a home as a private environment with a husband, a mother and their children, or a nuclear family. Elisabeth Malbon (1986) made an interesting study regarding Mark's portrayal of a house, saying that in Mark's gospel a house is significant because 'the actions enclosed by a house parallel those enclosed by a synagogue, which are, healing, teaching, preaching, and controversy'. In Mark, a house normally included the crowd that followed Jesus and the homeless people (Mk 3:20). Malbon (ibid) further argues that Jesus performed activities that were normally done in the synagogue or temple, but now they were being carried out in the house, thereby making the house a new social space for a new emerging community. In Mark's gospel, we hear that Jesus entered the house to teach and to heal (Mk 1:33; 2:1, 2). There, the synagogue is being replaced by the house, where a new community has a new gathering place (Malbon ibid:282).

Importantly, the stories happened inside the house, juxtaposing the teaching inside the house with the system outside the house. Arguably, by placing the child amongst the disciples, Jesus redefined belonging, space and power by revealing the story's progression from outside the house to inside the house. What happened prior to going into the house (Mk 9:33) is significant. Outside the house, the disciples were discussing who was greater amongst themselves ('γὰρ διελέχθησαν ἐν τῇ ὁδῷ τίς μείζων'). Why was this discussion a problem, and how did it militate against Jesus' understanding of belonging? Outside the house was the system of μείζων (Mk 9:34): a hierarchical, feudal arrangement whereby the rulers oppressed those under them. It is a hierarchical social structure similar to the Roman households headed by the paterfamilias. In reality, the discussion of μείζων created an unwelcome hierarchical, patriarchal society whereby the elite and the privileged continued to enjoy wealth whilst the economically poor were marginalised. Explicitly, the rulers possessed land, slaves and wealth, whilst the poor had only their labour to sell. Besides being a political arrangement, this was a normative pattern in households, whereby the paterfamilias, the father figure, was the domineering figure of the institution.
Using a house as a demarcating social structure, the story contrasts this system with Jesus' vision of space and social relations, established on the principle of equality (οἱ δύο εἰς σάρκα μίαν; Mk 10:8). Jesus' vision of an alternative household was demonstrated when he brought a child into their midst, which deconstructed social relations based on age, gender and place. In terms of age, a child was not supposed to sit together with male elders, because children were perceived as immature (Aasgaard 2007). Similarly, in terms of gender and place, a child in the midst of men would be regarded as a misfit (Aasgaard ibid). Far from being a motivation for humility, this story was politically loaded and contrasted the social system outside the house, which in essence represented the Roman Empire, with the emerging system inside the house.

The story shifted inside the house, being introduced by John's concern: Ἔφη αὐτῷ ὁ Ἰωάννης• διδάσκαλε, εἴδομεν τινα ἐν τῷ ὀνόματι σου ἐκβάλλοντα δαιμόνια καὶ ἐκωλύομεν αὐτὸν, ὅτι οὐκ ἠκολούθει ἡμῖν (Mk 9:38). John's question implicitly advocated for closed social boundaries, which would continue to privilege a small group of people, in this case the disciples. Vernon Robbins (1983) agrees that the disciples wanted to remain in privileged positions. Does this tell us more about Mark's own group? In essence, in this story, Mark retold a story about Jesus that had happened approximately 40 years earlier. Plausibly, one can assume that Mark was not simply reminding his community about Jesus' stories. Instead, as a theologian, he was reinterpreting and reconstituting Jesus to answer pertinent questions about his community. This study concurs with Koester (1978) that a possibility of internal social disunity existed, as some members seemingly entertained the system of social hierarchy within the community. Revising, and possibly reinterpreting, history, Mark evokes stories about Jesus to reorganise his community, expanding its social boundaries and, in the process, reprimanding hierarchical and exclusive tendencies. The dire warning against making one of the little ones stumble would make sense if understood in the context of warning the community against disintegration (Myers 1991).

Expanding the territory

Similar to the first story, when Jesus crossed over to the other side of the river Jordan, he went into the house (Mk 10:10; καὶ εἰς τὴν οἰκίαν). However, unlike the first story, the specific location of this story is less known. The possibility is that, from the village of Capernaum, Jesus crossed the river Jordan to Bethabara. In this region, the peasants had experienced land dispossession caused by the resettlement of Roman veterans (Freyne 2000b). This political situation formed the system outside the house. Similar to the first story, the system outside the house was contrasted with the system inside the house. The central character in both stories was a child, who can arguably be understood as representing the homeless and the landless.
Whilst in the first story the system of μείζων causes social and economic stratification, the focus in the second story is on the household, evidenced by the strong admonition against divorce (Mk 10:2-16). It is plausible to argue that the land displacement that took place outside the house had a direct impact on family bonds and, in particular, marriage. Landlessness and tenancy broke family ties and put pressure on marriages. In Mark 10:2ff. Jesus responded to the problem of divorce, which painted a bigger social problem behind the story. If a household was a microcosm of society, then divorce prefigured the challenges faced by the larger society. Amongst the peasants, marriage had social and economic benefits: it strengthened kinship ties and secured resources within subsistent households. Consequently, divorce shattered kinship ties and made people, especially children, vulnerable (Moxnes 2003).

Divorce and family fissures came in different forms. In addition to land displacement, Jesus' followers faced the problem of being rejected by their relatives. After studying the macarisms in the gospel of Matthew, Jerome H. Neyrey (1995) explained that some macarisms talk about the expulsion of persons who were disowned by their families (Mt 5:11, Lk 6:22). Neyrey (ibid:129) suggests that tensions within families erupted when some family members chose to identify with Jesus, which resulted in them being expelled from the household and kinship group. Those who were expelled were labelled as rebellious and deviant, a reference applicable to most of Jesus' followers who had been expelled and labelled as deviant (Talbott 2008). Dire consequences awaited expelled family members. They would suffer economic hardship and potential destitution, since they were cut off from the household, which was the source of survival, unity and identity (Talbott 2008), thus finding it hard to survive (Neyrey ibid). Expelled members were accused of bringing shame and disrepute to their own families. The deviants were accused of crossing social boundaries, from showing allegiance to their own families to supporting Jesus' movement. Such deviants were seen as outsiders by their own families, hence they were denied access to the household's resources for survival (Guijarro 2004). In reality, this was perilous, since identity and honour were derived from being a member of a household or a clan. Family and wealth, especially land, were an expression of honour (Neyrey ibid). In addition to losing one's immediate family, disowned members lost social standing, which is one's place in the community (Neyrey ibid). Banned families were seen as shamed, and their reputation was destroyed. They were completely shamed in the eyes of their neighbours. They would receive the same treatment from the rest of society, which refused to engage in social and economic interaction with them (Neyrey ibid). The rest of the community would revile and despise them. There would be no business deals or marriage arrangements with such outcasts. They would not be able to maintain their social standing, obligations and status (Neyrey ibid).
The Markan story might refer to families that had suffered land dispossession in addition to being rejected by their own families for following Jesus. The motif of unity and oneness within the household became the subject when Jesus and his disciples entered the house (oikos). Upon entering the house, the disciples' refusal to allow the children to come to Jesus triggered the discussion. Implicitly, refusing the weak fellowship implied refusing them equality and dignity. Like the previous story, issues of power and hierarchy had a bearing on survival and belonging. Citing the creation narrative, Jesus responded that God created a man and a woman as the basis of a household and that the two are one flesh (καὶ ἔσονται οἱ δύο εἰς σάρκα μίαν• ὥστε οὐκέτι εἰσὶν δύο ἀλλὰ μἰα σάρξ). The emphasis here is on oneness and unity within the household as a social and economic institution.

Creating an alternative new social order

From the two related stories, a house in Mark's gospel is a symbolic space for an alternative social order (Malbon 1986). Elliott (2003) poignantly stated that Jesus assembled people in homes, from which he established an egalitarian movement. The common theme between the two stories is that a house offered a canopy of collective self-identification, a place of acceptance and belonging, and in the process posed a critique of the Roman patriarchal household. This is illustrated by Jesus, who said: 'Do not forbid him; for no one who does a mighty work in my name will be able soon after to speak evil of me' (Mk 9:39). Arguably, Jesus' response blurred the demarcation between exclusivity and inclusivity by broadening the social boundary to include outsiders.

In both stories, this article suggests that the metaphor of a child emphasised the vulnerability of the landless and homeless. As a radical reordering of space, power and belonging within the house, Jesus' followers represented a fictive kinship group that responded to the needs of fellow members by giving mutual support (Talbott 2008). In terms of ethos, the stories reprimanded tendencies of exclusivity and distanced the community from following the hierarchical structures of Roman households and, as Ched Myers (1991) argues, the stories were retold to entrench the radical status reversal of the kingdom.

Conclusion

From the discussion, we can glean that Jesus' ministry did not operate in a vacuum, but interacted with the prevailing social issues. The article revealed that Jesus formed communities that responded to the economic challenges faced by the homeless and landless peasants. Jesus' gesture of welcoming a child inside the house captures the moral ethos of the nascent Jesus movement, that of hospitality and close fictive kinship. Such moral virtues can be understood as a direct response to the external social and political pressures confronting the community.
Application of Social Robots for Symptom Control in Institutionalized Elderly Patients with Dementia

Introduction

Robotics has developed into an educational and technological field. Efforts have been made to create a robot that is able to actively interact with the human being on a social level. Social robots open up new horizons and, consequently, find new fields where they may be applied. This includes the possibility of doing research on human-robot interactions, such as studying robots as facilitators for social interaction, as well as learning about their direct impact on the cognitive function and the behavior of human beings [1-4].

The current demographics show that the world's population is rapidly growing older, mainly because of the global drop in birth rates. Compared with data from fifty years ago, populations live on average twenty years longer, which increases the prevalence of noncommunicable diseases. As such, the need for long-term care rises exponentially, as many elderly lose the ability to live independently and look after themselves [5].

Dementia is known to be a disease where mainly the cognitive function is affected. Therefore, it has a direct impact on the patient's communication skills, which makes them more prone to a depressive mood. Consequently, patients tend to isolate themselves, developing feelings of loneliness which will further aggravate their mental well-being. Considering that it is a progressive and life-limiting disorder, there has been an increased use of complementary therapies in order to improve symptom management (such as agitation and depression) and to attempt to slow down the progression of the disease by stimulating the cognitive function and motor activities. Robotics has found here an opportunity for its implementation as a means to alleviate patients' symptoms and boost cognitive and motor activities [6].

Complementary therapies aim to stimulate patients, promote social interaction and change routines, some of the most frequently used forms being animal-assisted therapy, music therapy and art therapy. Animal-assisted therapy has been shown to have some of the most remarkable effects [7]. This can be understood thanks to the millennia-old relationship between human beings and other animals: the first fossil evidence showing an association between Homo erectus and a canine-like species dates from half a million years ago [8]. There are two theories to justify this relationship: the biophilia hypothesis and the social support hypothesis. The first holds that the human being has an innate tendency to care for and feel attracted to other animals or living beings.
The social support hypothesis claims that pets are per se already a form of social support [8]. However, it is difficult to study therapeutic interventions with a pet for hygienic, allergenic or institutional reasons. Progress has been made in studying the effects of complementary therapy with zoomorphic robots, which have been shown to have effects similar to animal-assisted therapy, such as an increased ability for social interaction and decreased levels of agitation, aggressiveness and depressive symptoms [9-13]. These results have been measured by analytical tests, imaging and electrophysiological monitoring methods [14-16]. Results have also been observed in terms of increased nutritional intake and less need for medication and medical visits. In terms of effects on the cognitive function, the evidence is still not clear.

In this context, there is the possibility to do research with social robots that take on the form of animals. There are many different models available, including the mass-commercialized robot known as Furby©. This robot was developed for children and was created to interact with them by touch, voice and movement [17]. So, it is not farfetched to imagine its use in elderly people who may be weakened by the effects of ageing or a disease. Furby© has some emotional features that are mostly expressed by sound and movement (e.g. its voice gets louder when it is happier, and there is an increase of ear and torso movements). The robot has few facial expressions, and there is an attempt to express them through the LCD eyes.

With the support of the Innovation and Research Area of the Department of Planning and Organizational Development of the Shared Services of the Ministry of Health, a project with Furby© was developed. We designed an observation in order to understand how a widely commercialized pet robot, developed for children, could stimulate, as a context modifier, social interaction and improve the quality of life of institutionalized patients with dementia. The observation ran at a senior living community in Sesimbra, Portugal, and the participants were selected randomly (all participants had some level of cognitive impairment). In order to assess the degree of cognitive impairment, the selected participants' cognitive function was assessed prior to the intervention with the robot. For this purpose, the Short Portable Mental Status Questionnaire was used. A grounded theory approach was used to analyze the collected data. It was of the authors' concern that a set of human and physical conditions were met in order to preserve the patients' dignity. A set of three questions was systematically used to assess patients' memory and behavior: "Do you remember him?", "What is his name?", "Would you like to pet him?" All moments were video-recorded. When viewed, interviews were transcribed and the behaviors codified according to the most common patterns ("smiling", "motor activity" and "spontaneous speech").

The total number of participants was six elderly patients who lived at the same institution, either on a daily basis (4 participants) or just during the week (2 participants). For one week, each participant interacted individually with Furby© on a daily basis. After this week, there was a time gap of four days during which participants did not interact with Furby© and heard no mention of it. On the fifth day, participants were exposed to Furby© again, on an individual level.
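As a minimal sketch of how the video-coded behaviors from such a protocol might be tallied per participant (this is not the authors' actual analysis code, and the event data below are hypothetical):

```python
from collections import Counter

# Tally video-coded behaviors per participant across sessions.
# The behavior codes follow the patterns named above; the events
# themselves are hypothetical examples.
CODES = ("smiling", "motor activity", "spontaneous speech")

# (participant_id, session_day, behavior)
events = [
    (1, 1, "smiling"), (1, 1, "spontaneous speech"), (1, 5, "smiling"),
    (2, 1, "motor activity"), (2, 5, "smiling"),
]

tally = Counter((pid, code) for pid, _, code in events)
for pid in sorted({e[0] for e in events}):
    counts = {code: tally[(pid, code)] for code in CODES}
    print(f"participant {pid}: {counts}")
```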
The duration of each interaction with Furby© was not fixed, so as to best adapt to the patients' attention span. Written consent was requested from the institution where participants were living, as well as from the patients and/or their legal representatives. The observation was approved by the investigator's institutional review board. Analyzing the findings, it was possible to see that Furby© altered patients' behaviors (in patients with both moderate and severe dementia). Given that almost all participants smiled while interacting with the robot, this may be interpreted as a sign of comfort and well-being. Interacting with Furby© also increased motor activity, as elderly patients would move their hands and arms to perform simple tasks such as stroking or holding it. Furthermore, it encouraged patients to speak spontaneously (telling it stories and asking it questions) and promoted interpretative thinking, as some participants tried to understand and translate into words some of the sounds produced by Furby©. The findings also suggest that Furby© had an effect on the institution as a whole: from the third day on, other residents would stand up from their chairs and come closer to the participant whenever an interaction with Furby© was taking place. We also recorded moments in which elderly people would share emotions, comment and laugh whenever the robot was placed between them. Furthermore, some residents would even wave at Furby©, asking for it to come closer whenever they spotted it in the room. We also observed that Furby© sparked the curiosity of the healthcare professionals working at the institution. They showed an intention to interact with it, although most of them ended up describing it as "annoying". Among the limitations of this observation, we cannot overlook the Hawthorne effect, a term used to describe the change in behavior due to the attention a participant is receiving [18]. Patients interacted with both the robot and the therapist for the first time, but thanks to the recordings it was possible to focus our observation on the participants' interactions with the robot. By including a four-day break, the researchers also sought to diminish the Hawthorne effect. Another limitation concerned the time spent with the robot: in order not to interfere with the patients' well-being, the authors had to adapt the duration of interaction with Furby© to the patients' momentary state of mind. As such, it was not possible to establish an advisable exposure time. Furthermore, Furby©'s design also posed a limitation. It was sometimes difficult for patients to grasp and hold, as it is a relatively small robot (this was clearest in bedridden patients due to muscular atrophy). It was also difficult to control Furby©'s sounds and personality. Furby© has a rapidly switching personality, so the therapist always needed to pre-set it to its "happy" personality before handing it to a participant. Even so, this personality startled some of the participants, as it included loud sounds and sudden movements of the ears and body. This study set out to examine how a widely commercialized zoomorphic robot, developed for children, may also be used as a therapeutic robot for elderly patients with dementia. The findings suggest that Furby© produces some positive results in institutionalized elderly patients with dementia.
It showed results both in patients living as permanent residents and in those attending a day-center setting, serving as a means to facilitate the expression of positive emotions and to stimulate motor and cognitive activity. However, Furby© showed some limitations in terms of its design, namely for bedridden patients, and in terms of controlling its personality and sounds. Demographics show that the world's population is rapidly growing older, which translates into a higher prevalence of non-communicable diseases. Dementia is one of the diseases closely linked with ageing, and one where complementary therapies have been shown to play an important role in improving patients' well-being. As the application of zoomorphic robots is inspired by animal-assisted therapy, there is a strong possibility that it can improve and ease some symptoms related to dementia.
Finite volume simulations of particle-laden viscoelastic fluid flows: application to hydraulic fracture processes

Accurately resolving the coupled momentum transfer between the liquid and solid phases of complex fluids is a fundamental problem in multiphase transport processes, such as hydraulic fracture operations. Specifically, we need to characterize the dependence of the normalized average fluid-particle force $\langle F \rangle$ on the volume fraction of the dispersed solid phase and on the rheology of the complex fluid matrix, parameterized through the Weissenberg number $Wi$ measuring the relative magnitude of elastic to viscous stresses in the fluid. Here we use direct numerical simulations (DNS) to study the creeping flow ($Re \ll 1$) of viscoelastic fluids through static random arrays of monodisperse spherical particles using a finite volume Navier-Stokes/Cauchy momentum solver. The numerical study consists of $N = 150$ different systems, in which the normalized average fluid-particle force $\langle F \rangle$ is obtained as a function of the volume fraction $\phi$ ($0 < \phi \le 0.2$) of the dispersed solid phase and the Weissenberg number $Wi$ ($0 \le Wi \le 4$).
From these predictions a closure law $\langle F(\phi, Wi) \rangle$ for the drag force is derived for the quasi-linear Oldroyd-B viscoelastic fluid model (with fixed retardation ratio $\beta = 0.5$) which is, on average, within $5.7\%$ of the DNS results. In addition, a flow solver able to couple Eulerian and Lagrangian phases (in which the particulate phase is modeled by the discrete particle method (DPM)) is developed, which incorporates the viscoelastic nature of the continuum phase and the closed-form drag law. Two case studies were simulated using this solver, to assess the accuracy and robustness of the newly developed approach for handling particle-laden viscoelastic flow configurations with $O(10^5-10^6)$ rigid spheres that are representative of hydraulic fracture operations. Three-dimensional settling processes in a Newtonian fluid and in a quasi-linear Oldroyd-B viscoelastic fluid are both investigated using a rectangular channel and an annular pipe domain. Good agreement is obtained for the particle distribution measured in a Newtonian fluid when comparing numerical results with experimental data. For the cases in which the continuous fluid phase is viscoelastic, we compute the evolution in the velocity fields, and predicted particle distributions are presented at different elasticity numbers $0 \le El \le 30$ (where $El = Wi/Re$) and for different suspension particle volume fractions.

Introduction
Understanding the force balance that governs the migration of rigid particles suspended in a viscoelastic fluid is fundamental to a wide range of engineering and technology applications. Examples include polymer processing of highly-filled viscoelastic melts and elastomers [1], the processing of semi-solid conductive flow battery slurries [2], the flow-induced migration of circulating cancer cells in biopolymeric media such as blood [3], magma eruption dynamics [4], and hydraulic fracturing operations using solids-filled muds, slurries and foams [5,6].
In many ways, developing robust and accurate tools to simulate such behaviors may be viewed as an unsolved grand challenge in the dynamics of complex fluids, involving the effects of nonlinear material rheology, fluid inertia, elasticity and flow unsteadiness, plus many-body interactions. In a fluid with Newtonian or non-Newtonian rheology, the presence of a cloud of particles dramatically changes the transmission of stress between the two phases (fluid and particles), specifically in terms of the rate at which the constituents of the mixture exchange momentum, known as the hindrance effect [7,8]. When particles are suspended in a viscoelastic fluid (e.g. a polymer solution or polymer melt), the problem becomes even more challenging, because the fluid may shear-thin or shear-thicken as well as exhibit viscoelasticity and yield-stress attributes [9,10]. Therefore, understanding and predicting both the bulk/macro-scale and particle-level response of these complex multiphase suspensions remains an open problem. As a first step, quantifying the momentum exchange between the constituent phases (i.e. a viscoelastic fluid matrix and a suspended phase of rigid spherical particles) remains a challenging and important problem to be solved in non-Newtonian fluid dynamics. Because of the linear response between stress and deformation rate, the hydrodynamic behavior of rigid spheres in a Newtonian fluid has received considerable attention since the pioneering work of Stokes [11] (see for example the monographs by Happel and Brenner [12], Kim and Karrila [13] and Guazzelli and Morris [14]). In the limit of infinite dilution, and when inertial effects can be neglected, the drag force, $F_d$, exerted by the fluid on the solid object takes the Stokes-Einstein form $F_d = 6\pi a \eta_0 u$, where $a$ is the radius of the spherical particle, $\eta_0$ is the fluid shear viscosity and $u$ is the superficial fluid velocity, defined as the fluid velocity averaged over the total volume of the system [15]. Higher-order corrections to the Stokes-Einstein drag acting on a single particle, arising from the presence of neighboring particles, have been evaluated in terms of an expansion in the particle volume fraction $\phi$. For small packing fractions ($\phi < 0.10$), the first few terms can be worked out analytically [16], but for larger packing fractions the drag force has to be estimated from approximate theoretical methods [17,18] or from empirical data via experimental measurements [19]. Additionally, numerical simulations [15,20,21] can provide data to derive drag force expressions for the creeping flow of random arrays of spheres surrounded by a Newtonian fluid. For the creeping flow of a sphere through an unbounded viscoelastic fluid, measurements of the changes to the force acting on the sphere are typically represented in terms of a dimensionless drag correction factor $X(Wi)$, which is the ratio of the measured drag coefficient to the well-known Stokes drag, $X(Wi) \equiv C_D(Wi)/C_D(Wi = 0) = C_D(Wi)/(24/Re)$, where $Re$ and $Wi$ are the dimensionless Reynolds and Weissenberg numbers, respectively. For the inertialess flow of a viscoelastic fluid, perturbation solutions predict the departure of the drag from the Stokes result to be quadratic in the Weissenberg number for spheres [22], and linear in the case of long rod-like particles [23]. Recently, two reviews comparing experimental data with computations for non-Brownian suspension rheology with non-Newtonian matrices have been published [9,10].
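To fix ideas before reviewing those studies, the baseline normalizations introduced above can be summarized in a short Python sketch (the parameter values are illustrative only, not taken from this work):

```python
import math

def stokes_drag(a, eta0, u):
    """Stokes-Einstein drag on an isolated sphere in creeping flow:
    F_d = 6*pi*a*eta0*u (radius a, shear viscosity eta0, superficial velocity u)."""
    return 6.0 * math.pi * a * eta0 * u

def drag_correction(C_D, Re):
    """Dimensionless drag correction factor X = C_D / (24/Re), i.e. the
    measured drag coefficient normalized by its Stokes (Wi = 0) value."""
    return C_D / (24.0 / Re)

# Illustrative values: a 0.5 mm sphere in a 1 Pa.s fluid at 1 mm/s
print(stokes_drag(a=0.5e-3, eta0=1.0, u=1.0e-3))  # ~9.42e-06 N
```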
Tanner [10] compared and contrasted inelastic fluids with rate-dependent viscosity, materials with a yield stress, and viscoelastic fluids, highlighting the need for improved rheological modeling, possibly with multiple relaxation times. Additionally, he concludes that several aspects of suspension rheology, such as roughness, ionic forces, particle shape, and polydispersity, all need to be addressed. Finally, Tanner [10] also reported experimental results for steady viscometric flows, unsteady shear flows and uniaxial elongational flows. However, good agreement between computation and experiment is scarce, because there are, as yet, few computational studies which allow careful comparison with experimental data, further emphasizing that progress in rheological modelling and improved computational methods are needed. In a recent perspective, Shaqfeh [9] notes that the foundations for the development of suspension mechanics in viscoelastic fluids, as well as of computational methods to accurately perform particle-level simulations of non-Brownian suspensions, have been established. Nevertheless, numerous unanswered questions remain, including the rheological behavior of these suspensions for different matrix fluid rheologies, particle shapes, deformability, flow histories, etc. All these questions can be addressed, in principle, by employing theoretical/computational frameworks to systematically explore the coupling between the kinematics and momentum distribution of the fluid phase and the resulting evolution of the dispersed particulate phase. Jain and Shaqfeh [24] performed 3D transient simulations of the bulk shear rheology of particle suspensions in Boger fluids for a range of $Wi \le 6$ and finite strains, and calculated the per-particle extra viscosity of the suspension. They categorize the per-particle viscosity calculations as contributions from either the particle-induced fluid stress (PIFS) or the stresslet. It was concluded that in the dilute limit the PIFS increases monotonically with shear strain; however, the stresslet contribution shows a non-monotonic evolution to steady state at large $Wi$. The total combined per-particle viscosity contribution, however, evolves monotonically to steady state. Additionally, Jain and Shaqfeh [24] performed multiple-particle simulations using the immersed boundary (IB) method to examine the effect of particle-particle hydrodynamic interactions on the per-particle viscosity calculation. It was concluded from transient immersed boundary simulations that the steady values of per-particle viscosity increase with $\phi$, but the per-particle contribution to the primary normal stress coefficient was independent of $\phi$ (up to 10% particle volume fraction) at the two values of Weissenberg number investigated ($Wi = 3$ and 6). The nonlinear interactions of fluid inertia with viscosity and elasticity cause unexpected phenomena (e.g. negative wakes, shear-induced migration/chaining) in the dynamics of particles suspended in a viscoelastic matrix [25,26,27]. These interactions may be expected to change the evolution of the viscoelastic drag correction factor with Weissenberg number. Extensive research efforts over the past 20 years have been directed at elucidating the role of fluid rheology and wall effects on the drag of a sphere and the wake developed behind it. Excellent reviews are available in the literature [28,29,30].
There have been a number of computational studies investigating the effect of fluid rheology on the motion of a sphere; however, a common limitation is that results do not converge for Weissenberg numbers beyond a certain critical limiting value (typically $Wi_c \approx 2$ or 3) [28]. In this work we extend viscoelastic suspension flow calculations up to $Wi = 4$ by employing the log-conformation approach [31,32,33,34]. Our previous work [35] proposed, for the first time, a closure model for the viscoelastic drag coefficient of a single sphere translating in a quasi-linear viscoelastic fluid, which can be well described by the Oldroyd-B constitutive equation. In the present work, we extend the proposed model in order to describe moderate volume fraction viscoelastic suspensions ($\phi \le 0.2$), which are commonly encountered in a wide range of industrial operations. We focus on non-colloidal suspensions with Newtonian and viscoelastic fluid matrices, and the net effect of the other particles in the flow is studied by computing an effective average drag force acting on a particle [36]. For Newtonian matrices, Brinkman [17] presented a modification of Darcy's equation for porous media, in which the viscous force exerted on a dense suspension of rigid particles by the Newtonian fluid is calculated. The main idea is that, from the point of view of a single particle, the other distributed particles effectively act as a porous or Darcy medium. Durlofsky and Brady [37] performed numerical simulations to compare against the predictions of Brinkman's model; however, they only obtained good agreement with the analytical solution of Brinkman for very small volume fractions, up to $\phi = 5\%$, emphasizing the importance of including the configurational effects of more distant particles. The reason that the Brinkman approach starts to break down at what appears to be a rather low volume fraction is that at $\phi = 5\%$ the characteristic inter-particle spacing is only slightly larger than four particle radii, so that neighboring particles are in fact quite close together and interact hydrodynamically (a quick numerical check of this estimate is sketched below). For dense random arrays of spheres ($\phi \ge 40\%$) the empirical Carman-Kozeny (CK) relation [19] is found to describe well the drag force exerted by the fluid flow on the dispersed phase. The idea behind the CK relation is that the suspended medium can be considered as a system of tortuous channels, from which the pressure drop across the porous medium is calculated using the Darcy equation [38]. In the present work, we report results for volume fractions of $\phi = 4\%$, 8%, 12%, 16% and 20%, representative of semi-dilute non-colloidal suspension behaviour [39]. Additionally, we note that fully-resolved particle-laden viscoelastic solvers [40,41] are presently only able to directly resolve $O(10^3)$ particles [42], which limits their application to large industrial case studies [43]. To overcome this limitation we implement an Eulerian-Lagrangian viscoelastic solver (DPMviscoelastic), which employs the closure drag model that we develop here for moderately dense suspensions with a viscoelastic matrix fluid to quantify the momentum exchange between the two constituent phases (a moderate volume fraction of rigid spherical particles and a non-shear-thinning viscoelastic matrix fluid).
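The spacing estimate referenced above can be checked with a one-line cubic-lattice argument (our own back-of-the-envelope sketch, not the paper's): each sphere of radius a occupies a cell of volume (4/3)πa³/φ, so the center-to-center distance in units of a is (4π/3φ)^(1/3):

```python
import math

def center_to_center_spacing(phi):
    """Center-to-center spacing in particle radii for a homogeneous
    suspension, assuming one sphere of radius a per cubic cell:
    s/a = (4*pi / (3*phi))**(1/3)."""
    return (4.0 * math.pi / (3.0 * phi)) ** (1.0 / 3.0)

print(round(center_to_center_spacing(0.05), 2))  # -> 4.38 particle radii at phi = 5%
```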
The present paper describes the simulation method employed to measure the drag force on randomly-dispersed particle arrays immersed in viscoelastic fluids that can be described by the quasi-linear Oldroyd-B constitutive equation, which predicts constant values of the shear viscosity and first normal stress coefficient. The extension of this work to more complex viscoelastic matrix fluids, for example models predicting shear-thinning behavior (such as the Giesekus model), is also currently being studied. In that case the magnitude of the stresses is typically smaller and easier to resolve computationally, but the dimensionality of the problem is higher due to the additional nonlinear model parameter(s) required to characterize the shear-thinning. To accomplish the required developments, an open-source library, OpenFOAM [44], was modified to be able to calculate the average drag force acting on random particle arrays. This information is then used to formulate a new drag force correlation for the creeping flow of an Oldroyd-B fluid through randomly distributed arrays of spherical particles, with solid volume fractions $0 < \phi \le 0.2$, over a range of Weissenberg numbers ($Wi \le 4$). To ensure stability at high $Wi$, the polymer stress contribution is computed using the log-conformation formulation [31,32,33,34]. Finally, to the best of the authors' knowledge, the current work presents for the first time an Eulerian-Lagrangian solver in which the fluid continuum phase has viscoelastic rheology and the dynamics of the particulate phase are computed following a discrete particle method (DPM). The momentum transfer between the phases is calculated using the drag force law proposed in the present work (see Section 4.2). The newly-developed DPMviscoelastic solver is employed to predict particle settling effects in rectangular channels (which mimic a vertical fissure such as those encountered during hydraulic fracturing operations for gas/oil extraction) and in an annular pipe (a model of pumped transport of a suspension along a drill string during horizontal drilling operations). The paper is organized in the following manner: in Section 2 we present the governing equations and numerical method used to compute the solution of the appropriate balance equations for the viscoelastic fluid flows considered in this work. Section 3 provides the details of the simulation methodology used to compute the drag force values exerted by the fluid on the particulate phase. In Section 4 these results are first verified for Stokes flow of a particle suspension dispersed in a viscous Newtonian fluid, and then used to derive a new closure law for the average fluid-particle drag force acting on random arrays of spheres immersed in an Oldroyd-B fluid. Section 5 is dedicated to the development of the DPMviscoelastic solver for up-scaled three-dimensional simulations of particle-laden viscoelastic flows, in which the dispersed phase is modeled by the discrete particle method. Additionally, we illustrate the capability of the newly-developed code to solve challenging physical problems, specifically two canonical proppant transport problems commonly encountered in hydraulic fracturing operations. Finally, in Section 6, we summarize the main conclusions of this work.

Governing equations
Following the work of Faroughi et al. [35],
as a first step, we consider the problem of moderately dense ($0 < \phi \le 0.2$) suspensions constituted from a continuum viscoelastic matrix fluid and a monodisperse static random array of rigid spheres. For the continuum fluid phase the familiar Oldroyd-B constitutive model was chosen, representing an elastic fluid with a constant shear viscosity, which has been shown by Dai and Tanner [45] to describe fairly well the response of highly elastic Boger fluid suspensions in steady shear and uniaxial elongation. By adopting the Oldroyd-B model, we confine the dimensionality of the viscoelastic fluid calculations to only two degrees of freedom (i.e. the relaxation and retardation times or, equivalently, the relaxation time and the retardation ratio). However, the consideration of moderately dense suspensions increases the dimensionality of the problem to four degrees of freedom, due to the addition of two more variables: the particle volume fraction present in the suspension and the number of random particle configurations studied (to obtain statistical significance in the DNS results). Thus, for the current study, we have also fixed the retardation ratio of the fluid at $\beta = \eta_S/(\eta_S + \eta_P) = \eta_S/\eta_0 = 1/2$, where $\eta_0$ is the total matrix fluid viscosity, with $\eta_S$ and $\eta_P$ being the solvent and polymeric viscosities, respectively. Within these constraints, the drag correction expression developed in this study will form a foundation for higher-dimensional parameterizations, which should also consider a range of solvent viscosities as well as the effect of more complex fluid rheology (e.g. shear-thinning) on random arrays of particles in viscoelastic fluids, by using machine learning algorithms such as convolutional neural networks to capture the non-linear effects of all constitutive parameters on the resulting drag coefficient expressions acting on the particle arrays. The dimensionless conservation equations governing transient, incompressible and isothermal laminar flow of an Oldroyd-B fluid are given by

$$Re\left[\frac{\partial \mathbf{u}}{\partial t} + \nabla\cdot(\mathbf{u}\mathbf{u})\right] = -\nabla p + \beta\nabla^2\mathbf{u} + \nabla\cdot\boldsymbol{\tau}_P, \qquad (1)$$

$$\nabla\cdot\mathbf{u} = 0, \qquad (2)$$

$$\boldsymbol{\tau}_P + Wi\left[\frac{\partial \boldsymbol{\tau}_P}{\partial t} + \mathbf{u}\cdot\nabla\boldsymbol{\tau}_P - (\nabla\mathbf{u})^{T}\cdot\boldsymbol{\tau}_P - \boldsymbol{\tau}_P\cdot\nabla\mathbf{u}\right] = (1-\beta)\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{T}\right], \qquad (3)$$

where the following dimensionless quantities are used, $\mathbf{x} \to \mathbf{x}/L$, $\mathbf{u} \to \mathbf{u}/U$, $t \to tU/L$, $p \to pL/(\eta_0 U)$ and $\boldsymbol{\tau}_P \to \boldsymbol{\tau}_P L/(\eta_0 U)$, with $L$ and $U$ being the characteristic length and velocity values, respectively, $\mathbf{x}$ the position vector, $\mathbf{u}$ the velocity vector, $t$ the time, $p$ the pressure and $\boldsymbol{\tau}_P$ the polymeric contribution to the extra-stress tensor. As mentioned above, the retardation ratio is fixed at the value $\beta = 0.5$ for all the calculations performed in this work. For the present problem, with $L = a$, where $a$ is the radius of a single suspended particle, and $U$ the average fluid velocity at the inlet of the channel, we define the Reynolds and Weissenberg numbers as follows,

$$Re = \frac{\rho U L}{\eta_0}, \qquad (5a)$$
$$Wi = \frac{\lambda U}{L}, \qquad (5b)$$

where $\rho$ is the fluid density and $\lambda$ is the relaxation time. Notice that for the case of a Newtonian fluid flow $\lambda = 0$ and $\eta_0 = \eta_S$. To ensure computational stability over a wide range of fluid elasticities, including suspension flows at high Weissenberg number, we incorporate the log-conformation approach for calculating the polymeric extra-stress tensor. In the present work, we follow the implementation of the log-conformation approach in the OpenFOAM computational library [44] presented in Habla et al. [33] and Pimenta and Alves [34]. Details on the mathematical formulation behind the log-conformation approach can be found in the original works of Fattal and Kupferman [31,32].
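For reference, a brief sketch of the change of variables behind the log-conformation approach, written here in its standard literature form rather than copied from this paper's own equations:

```latex
% For Oldroyd-B the polymeric stress follows from the conformation tensor A,
% and the evolution equation is solved for the matrix logarithm of A:
\tau_P \;=\; \frac{\eta_P}{\lambda}\left(\mathbf{A} - \mathbf{I}\right),
\qquad
\boldsymbol{\Theta} \;=\; \log\mathbf{A}
  \;=\; \mathbf{R}\,\operatorname{diag}\!\left(\log\lambda_1,\log\lambda_2,\log\lambda_3\right)\mathbf{R}^{\mathsf{T}},
```

where R and the λᵢ are the eigenvectors and eigenvalues of A. The transport equation is advanced for Θ, and A = exp(Θ) is recovered before computing τ_P, which keeps A positive-definite even at high Wi.

Numerical method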
The equations presented in Section 2.1 are discretized using the finite-volume method (FVM) implemented in the OpenFOAM framework [44]. Pressure-velocity coupling was accomplished using segregated methods, in which the continuity equation is used to formulate an equation for the pressure, using a semi-discretized form of Eq. (1) [46]. The resulting equation set is solved by a segregated approach, using the SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations-Consistent) algorithm [47], which does not require under-relaxation of pressure and velocity (except for non-orthogonal grids, where the pressure needs to be under-relaxed [34]). Additionally, the computational cost per iteration of this algorithm is lower than in the PISO (Pressure-Implicit with Splitting of Operators) algorithm [48], because the pressure equation is only solved once per cycle. The coupling between the stress and velocity fields is established using a special second-order derivative of the velocity field in the explicit diffusive term added by the iBSD (improved both-sides diffusion) technique [49]. The velocity gradient is calculated using a second-order accurate least-squares approach, and the diffusive term in the momentum balance is discretized using second-order accurate linear interpolation. For non-orthogonal meshes the minimum correction approach is used, as explained in Jasak [50], in order to retain second-order accuracy. The advective terms in the momentum and constitutive equations are discretized using the high-resolution CUBISTA scheme [51], following a component-wise and deferred-correction approach that enhances numerical stability. The time derivatives are discretized with the bounded second-order implicit Crank-Nicolson scheme [52]. A Poisson-type equation for the pressure field is solved with a conjugate gradient method with a Cholesky preconditioner, and the linear systems of equations for the velocity and stress are solved using BiCGstab with incomplete lower-upper (ILU) preconditioning [53,54,55]. The absolute tolerance for the pressure, velocity and stress fields was set to $10^{-10}$. The simulations include the transient terms, but time marching is used only for relaxation purposes, as we seek the steady-state solution, i.e., the state at which the drag coefficient ceases to vary in the third decimal place.

Simulation methodology
In order to develop our computational methodology, we address only non-colloidal suspensions with viscoelastic matrices, and focus on both dilute and moderately dense suspensions. Following Housiadas and Tanner [36], we consider the effect of other particles in the flow by assuming that, from the point of view of a single particle at any instant, the remaining particles act effectively like a porous medium [17]. The spheres are randomly placed in the computational domain so that the solid volume fraction

$$\phi = \frac{n_s\left(\tfrac{4}{3}\pi a^3\right)}{L H^2}$$

is as close as possible to the desired packing fraction, where $n_s$ is the number of spheres enclosed in the square duct (whose volume is $LH^2$) and $a$ is the sphere radius. For the purpose of simulating the proppant transport phenomena that occur during hydraulic fracturing operations, we consider in this work moderately dense suspensions in the particle volume fraction range $0 < \phi \le 0.2$ [5,43,56]. Following Hill et al. [20], we note that the system to be studied must include a sufficient number of spheres, $n_s$, to minimize artifacts and statistical oscillations coming from the finite size of the computational domain (a placement sketch is given below).
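The sphere-placement and ensemble-averaging steps described above can be illustrated with a short, hypothetical Python sketch (the rejection-sampling placement and the function names are ours; the paper does not state its own placement algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sphere_array(phi_target, a, L, H, max_tries=200000):
    """Place non-overlapping spheres of radius a in an L x H x H duct so that
    phi = n_s * (4/3) * pi * a**3 / (L * H**2) approaches phi_target.
    Centroids are kept one radius away from the walls (excluded-volume case)."""
    n_s = int(round(phi_target * L * H * H / ((4.0 / 3.0) * np.pi * a**3)))
    centers, tries = [], 0
    while len(centers) < n_s and tries < max_tries:
        tries += 1
        c = rng.uniform([a, a, a], [L - a, H - a, H - a])
        if all(np.linalg.norm(c - p) >= 2.0 * a for p in centers):
            centers.append(c)
    return np.array(centers)

def standard_error(F_configs):
    """Standard error of the ensemble-averaged drag over n_c random
    configurations; ddof=1 supplies the (n_c - 1) factor in the variance."""
    F_configs = np.asarray(F_configs, dtype=float)
    return F_configs.std(ddof=1) / np.sqrt(F_configs.size)

arr = random_sphere_array(phi_target=0.08, a=1.0, L=30.0, H=8.0)
print(len(arr), "spheres placed")
```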
In practice, $n_s$ is chosen to be large enough to avoid periodic artifacts (typically $24 \le n_s \le 122$; see Table 1 in Section 4.1), and statistical uncertainty is reduced by ensemble-averaging the results from $n_c$ random sphere configurations (in this work $n_c = 5$ was found to be sufficiently large to obtain a standard error for the average below 5% of its actual value, as shown in Section 4, Tables 1 and 2). The numerical model employed in this work was comprehensively tested against a similar computational challenge (see Faroughi et al. [35] for the bounded and unbounded flow past a single sphere in a fluid described by the Oldroyd-B constitutive model). The meshes employed in this work have the same level of mesh refinement as the most refined mesh (M1) used by Faroughi et al. [35], which resulted from a grid refinement study. In all cases, the magnitude of the fluid velocity imposed at the inflow was such that the Reynolds number based on the particle diameter $D$, Eq. (5a), was equal to $Re_D = 0.05$, representative of creeping flow conditions. At this point we should note that there exists some ambiguity in the literature [15] on the proper definition of the drag force, specifically whether the pressure gradient term should contribute to the drag force or not. It is known that the two definitions differ by a factor $(1 - \phi)$: the relation between the total average force that the fluid exerts on each particle, $F_t$, and the drag force, $F_d$, which results from the friction between the particle and the fluid at the particle surface, is $F_d = (1 - \phi)F_t$; in some literature (Hill et al. [20]), the total force on the particle is defined as the drag force. In this work, the results will be presented in terms of an average dimensionless drag force, $\langle F \rangle$, defined as

$$\langle F \rangle = \frac{\langle F_t \rangle \cdot \mathbf{e}_x}{6\pi\eta_0 a u}, \qquad (6)$$

where $u = (1 - \phi)U$ is the superficial fluid velocity, $\langle F_t \rangle = \frac{1}{n_s}\sum_{i=1}^{n_s} F_{t,i}$ is the average force on the random array of spheres, with $F_{t,i}$ the force on sphere $i$ from an ensemble of $n_s$ spheres, and $\mathbf{e}_x$ is the unit vector in the x-direction. The denominator on the right-hand side of Eq. (6) is the Stokes drag force, obtained in the limit of infinite dilution and when inertial effects can be neglected [11]. In this work, the uncertainty in the computed average force, referred to as the standard error, is calculated from

$$SE = \sqrt{\frac{1}{n_c(n_c - 1)}\sum_{j=1}^{n_c}\left(\langle F \rangle_j - \overline{\langle F \rangle}\right)^2}. \qquad (7)$$

Note that the factor $n_c - 1$ in the denominator on the right-hand side of Eq. (7) corrects for the fact that there are $n_c - 1$ degrees of freedom, since the average is used to calculate the variance in the numerator [57]. This is important since the number of random configurations, $n_c$, used to calculate the average of $\langle F \rangle$ is small.

Results: simulation of flow in random particle arrays

Verification: Stokes flow of suspensions with Newtonian fluid matrices
In this section the creeping flow of random arrays of spheres surrounded by a Newtonian fluid is studied, so that the simulation methodology presented in Section 3 can be verified against results found in the literature. One of the earliest drag force models for describing Stokes flow through an array of spherical particles is the Carman [19] relation,

$$\langle F \rangle = \frac{10\phi}{(1-\phi)^2}. \qquad (8)$$

This relation is only valid for dense arrays ($1 - \phi \ll 1$), which can be seen from the fact that it does not have the correct limit $\langle F \rangle \to 1$ for $\phi \to 0$. For the limit of dilute systems, Kim and Russel [18] derived a closed-form expression for $\langle F \rangle$:

$$\langle F \rangle = 1 + 3\sqrt{\phi/2} + \frac{135}{64}\phi\ln\phi + 16.456\phi. \qquad (9)$$

The computational results obtained by Hill et al.
[20], using Lattice-Boltzmann simulations, were found to be in very good agreement with Kim and Russel's [18] drag force expression (Eq. (9)) for dilute arrays of particles ($\phi \le 0.1$). Subsequently, several expressions have been developed in search of an accurate drag force model that is valid over the full solid fraction range. Using a modification of the Darcy equation [38], Brinkman [17] derived his well-known drag force model (Eq. (10)). Koch and Sangani [21] proposed a further expression for the drag force (Eq. (11)), which for low solid volume fractions is equal to Eq. (9) to $O(\phi\ln\phi)$, whereas for large solid volume fractions it recovers the drag force given by the Carman expression, Eq. (8). Finally, van der Hoef et al. [15] presented a best fit to simulation data obtained using a Lattice-Boltzmann method (for $\phi \le 0.6$), which takes the simple form

$$\langle F \rangle = \frac{10\phi}{(1-\phi)^2} + (1-\phi)^2\left(1 + 1.5\sqrt{\phi}\right), \qquad (12)$$

which is the Carman expression with a correction term for the limiting case of $\phi \to 0$. Recently, Faroughi and Huber [7] predicted the correction to the drag coefficient for monosized spherical particles (Eq. (13)), in which $\phi_m$ denotes the maximum random close packing fraction, taken here as $\phi_m \approx 0.637$ for monosized spherical particles [58], and $\beta$ is a geometrical proportionality constant related to the shape of the streamtube in the real flow field. The value of $\beta = 0.65$ provides the best fit to the numerical simulations shown in Fig. 2. The Carman [19] (Eq. (8)), Kim and Russel [18] (Eq. (9)), Brinkman [17] (Eq. (10)), Koch and Sangani [21] (Eq. (11)), van der Hoef et al. [15] (Eq. (12)) and Faroughi and Huber [7] (Eq. (13)) expressions are also represented in Fig. 2 for comparison. For all the simulated particle volume fractions, the dimensionless drag force $\langle F \rangle$ obtained from our numerical algorithm is in close agreement with these literature expressions (see Table 1 for $\phi = 0.2$). For a more detailed discussion regarding the excluded volume provided by impenetrable channel side-walls, refer to Appendix A. The sphere volume fractions, the number of spheres used in each simulation, the number of mesh points, the dimensionless average drag force $\langle F \rangle$ on the spheres and the respective standard errors are listed in Table 1. In all cases, the standard errors of $\langle F \rangle$, which measure the statistical accuracy achieved by averaging the results over five random configurations, were below 3.5% of the average. In Fig. 3 we show normalized axial velocity contours obtained from the numerical simulation of random particle arrays in a channel filled with a Newtonian fluid. As can be seen from the velocity contours, as we increase the particle volume fraction more of the fluid is forced to flow through the tortuous paths in the interstitial spaces between the spheres, rather than as a continuous fluid stream that is mildly perturbed by widely separated spheres. This is also visible in the higher magnitude of the fluid velocities near the channel walls, where the fluid is squeezed. Notice that for the highest particle volume fraction employed, $\phi = 0.2$, we have also conducted simulations where particles can be located in the excluded volume region provided by the rigid bounding walls ($\phi^* = 0.2$), and the dimensionless average drag force $\langle F \rangle$ remains similar (approximately 2% higher) to the case with impenetrable side walls, where the particles are located only in the central portion of the channel beyond an excluded volume region of thickness $a$ adjacent to the bounding side walls (see Fig. 2 and Table 1). Placing spheres uniformly throughout the entire domain results in a more uniform velocity profile across the channel cross-section.
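For convenience, a small Python sketch of the Carman and van der Hoef et al. expressions as quoted above (Eqs. (8) and (12)), evaluated over the volume fractions simulated in this work; coefficients should be checked against [15,19] before reuse:

```python
import numpy as np

def F_carman(phi):
    """Carman [19], Eq. (8): valid for dense arrays; does not recover
    F -> 1 as phi -> 0."""
    return 10.0 * phi / (1.0 - phi) ** 2

def F_vdhoef(phi):
    """van der Hoef et al. [15], Eq. (12): Carman term plus a dilute-limit
    correction that restores F -> 1 as phi -> 0."""
    return F_carman(phi) + (1.0 - phi) ** 2 * (1.0 + 1.5 * np.sqrt(phi))

for phi in (0.04, 0.08, 0.12, 0.16, 0.20):
    print(f"phi = {phi:.2f}:  F_vdH = {F_vdhoef(phi):.3f}")
```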
The dimensionless average drag force $\langle F(\phi, Wi) \rangle$ on the spheres (see Eq. (6)) and the respective standard errors are listed in Table 2 for the different kinematic conditions stated above. In all cases, the standard errors of $\langle F(\phi, Wi) \rangle$, which measure the statistical accuracy achieved by averaging the results over five random configurations, were below 4.7% of the average drag force. Additionally, the computed average drag forces are shown in Fig. 4. The numerical results presented in Fig. 5(b) show that the rescaled drag force exhibits no statistically-significant trend with Weissenberg number for the range of solid volume fractions computed in this work; i.e., normalizing the dimensionless average drag force computed in our simulations by $F_0(Wi)$ helps to collapse $\langle F(\phi, Wi) \rangle$ at any given value of $\phi \le 0.20$. In the last section we obtained good agreement between the numerical results for the average drag force exerted on an ensemble of particles immersed in a Newtonian fluid and the model given by van der Hoef et al. [15]; we thus propose fitting an equation of the same form to the computational results obtained in a viscoelastic fluid, i.e.,

$$\langle F(\phi, Wi) \rangle = F_0(Wi)\left[\frac{10\phi}{(1-\phi)^2} + (1-\phi)^2\left(1 + 1.5\sqrt{\phi}\right)\right],$$

where $F_0(Wi)$ is the infinitely dilute ($\phi \to 0$) result for the drag force presented in Faroughi et al. [35] for $\zeta = 0.5$ (cf. Eq. (14)). Notice that we could also have used an expression similar to the drag coefficient correction of Faroughi and Huber [7], Eq. (13), to fit our viscoelastic results; however, due to its additional simplicity, we have chosen the functional form of the van der Hoef et al. [15] model. In Fig. 6 and Fig. 7 we show contours of the dimensionless first normal stress difference, defined as $(\tau_{xx} - \tau_{yy})/(\eta_P U/a)$, obtained from the numerical simulations. The first normal stress difference is mainly generated near the no-slip surfaces and in the wake of each of the spheres. Notice that increasing the fluid elasticity (increasing $Wi$), i.e. moving from the left to the right panels in Fig. 6, promotes a strong elastic wake and an increase in the magnitude of the first normal stress difference. Only by use of the log-conformation approach for computing the polymeric extra-stress tensor components were we able to stabilize the numerical algorithm. Additionally, increasing the particle volume fraction, i.e., moving from the top to the bottom panels, increases the magnitude of the first normal stress difference generated at the front stagnation point of the particles. Finally, from Fig. 7 we see that the magnitude of the first normal stress difference near the rear stagnation point is much larger than that developed upstream near the front stagnation point, and this difference becomes progressively larger for higher $Wi$ (Fig. 7(b)).

Proppant transport during the hydraulic-fracture process
In this section we develop a computational framework, based on the Eulerian-Lagrangian formulation [60], capable of numerically describing, as a proof-of-concept, proppant transport in a viscoelastic matrix-based fluid that can be characterized by the Oldroyd-B constitutive model. The newly-developed algorithm takes into account the effect of the particle volume fraction (the Lagrangian phase) on the viscoelastic fluid phase (the Eulerian phase). For this purpose, we extend the formulation presented in Fernandes et al. [60] (and references therein) to take into account the viscoelastic behavior of the fluid.
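Before detailing the governing equations, a brief sketch of how the proposed closure can be evaluated in practice (Python; F0 is passed in as a callable because the single-sphere correction of Faroughi et al. [35] is not reproduced in this text):

```python
import numpy as np

def F_closure(phi, Wi, F0):
    """Proposed closure: the van der Hoef et al. [15] volume-fraction
    dependence (Eq. (12)) rescaled by the single-sphere viscoelastic drag
    correction F0(Wi) of Faroughi et al. [35]."""
    newtonian = 10.0 * phi / (1.0 - phi) ** 2 \
        + (1.0 - phi) ** 2 * (1.0 + 1.5 * np.sqrt(phi))
    return F0(Wi) * newtonian

# Example with a placeholder F0 (identity at Wi = 0, illustrative only):
print(F_closure(phi=0.1, Wi=0.0, F0=lambda Wi: 1.0))
```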
Consider the motion of an incompressible viscoelastic fluid phase in the presence of a secondary particulate phase, governed by the volume-averaged continuity equation (Eq. (16)) and Cauchy momentum equation (Eq. (17)), in which $\varepsilon_f$ is the fluid porosity field satisfying $\varepsilon_f = 1 - \phi$, $\mathbf{U}_f$ is the fluid velocity, $P$ is the modified pressure ($p/\rho_f$, with $p$ being the dynamic pressure and $\rho_f$ the fluid density), $\mathbf{g}$ is the gravity acceleration vector, and the fluid-phase stress tensor $\boldsymbol{\tau}_f$ is given by the sum of the Newtonian solvent and polymeric contributions,

$$\boldsymbol{\tau}_f = \eta_S\left[\nabla\mathbf{U}_f + (\nabla\mathbf{U}_f)^{T}\right] + \boldsymbol{\tau}_P,$$

where $\boldsymbol{\tau}_P$ is the polymeric extra-stress tensor computed using the Oldroyd-B viscoelastic matrix-based constitutive model given by Eq. (3). The two-way coupling between the fluid phase and the particles is enforced via the source term $\mathbf{S}_p$ in the momentum balance equation, Eq. (17), of the fluid phase. Because the fluid drag force, $\mathbf{F}_{d,i}$, acting on each particle $i$ is known (see Section 4.2), then according to Newton's third law of motion the source term is computed as a volumetric fluid-particle interaction force given by

$$\mathbf{S}_p = -\frac{1}{V_{cell}}\sum_{i=1}^{N_p}\mathbf{F}_{d,i},$$

where $V_{cell}$ is the volume of a computational cell and $N_p$ is the number of particles located in that cell. In this work, we consider two different formulations to describe the contact between particles: the spring-dashpot model [61] and a Multi-Phase Particle-In-Cell (MPPIC) model [62]. The former allows us to handle each contact between two particles explicitly and, therefore, is very computationally intensive. The latter can be used to represent the particle collisions on average, without resolving particle-particle interactions individually. In the MPPIC method the particle-particle interactions are computed by models which utilize mean values calculated on the Eulerian mesh [63]. For that purpose, in the present work we have employed a collision damping model to represent the mean loss in kinetic energy which occurs as particles collide, and which helps to produce physically realistic scattering behaviour [63]. Finally, a collision isotropy model is also employed to spread the particles uniformly across cells [63]. Two case studies were performed to validate the newly-developed DPMviscoelastic solver. In the first case, we study proppant transport and sedimentation in a long conduit of rectangular cross-section, a typical geometry for studying flow in hydraulic fracturing [43]. In the second case, we study the segregation phenomena which occur in cement casing for horizontal wells. Both case studies were performed using Newtonian and viscoelastic carrier fluids. For the first case, particle collisions are modelled using the MPPIC model in order to handle $O(10^6)$ particles, and for the latter, the Hertzian spring-dashpot model [64] is employed with a total of 125,000 particles. The following sections present comparisons between the numerical results obtained for the aforementioned case studies and results found in the scientific literature.
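Before turning to the case studies, a minimal sketch of the cell-wise momentum-exchange term defined above (Python; the array shapes are our assumption):

```python
import numpy as np

def particle_source_term(F_d, V_cell):
    """Two-way coupling source for one cell: Newton's third law applied to
    the N_p particle drag forces F_d (assumed shape (N_p, 3)) in a cell of
    volume V_cell, i.e. S_p = -(1/V_cell) * sum_i F_d[i]."""
    return -np.asarray(F_d, dtype=float).sum(axis=0) / V_cell

# Two particles whose drag is mostly streamwise (+x), in a 1 mm^3 cell:
print(particle_source_term([[1.0e-6, 0.0, 0.0], [2.0e-6, 0.0, 0.0]], V_cell=1.0e-9))
```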
Rectangular channel flow
Despite many advances in hydrocarbon reservoir modeling and technologies [65,66,67,68,69], especially for unconventional resource development, the efficiency of hydrocarbon recovery in shale reservoirs is still very low [70]. One of the leading causes of this inefficiency is the lack of proper proppant placement in the fracture networks. Proppant emplacement within fractures directly impacts productivity because it controls both the short- and long-term conductivities of the fractured wells [71]. Proppant particles must be carried over large distances to ensure successful placement, which requires a spatially homogeneous distribution of particles [6]. However, flows of non-Brownian particles, such as proppant, often result in non-homogeneous patterns, in which particle sedimentation is commonly observed [43]. Table 3 gives the pressure drop results corresponding to each of the mesh refinement levels employed at the prescribed inlet flow rate Q. As noted by Meeker et al. [43], the onset of nonlinear behavior is most likely caused by some deformation of the polydimethylsiloxane elastomer at higher pressures. Therefore, the flow rate in our numerical tests is set at $Q = 100\ \mathrm{cm^3/h}$ in the following studies with suspensions. This results in an average fluid velocity $U = Q/(HW) = 5.6\times10^{-3}$ m/s, which corresponds to $Re_W = 0.06832$, confirming that we are in the creeping flow regime. We also note that $U$ is much greater than the average particle sedimentation velocity $U_{Stokes} = 3\times10^{-5}$ m/s, and thus our simulations are performed under favorable transport conditions with minimal settling at the entrance. In fact, the slope of the trajectory of a sedimenting particle being transported at this average speed, $U_{Stokes}/U \approx 5\times10^{-3}$, is similar to the ratio of channel height to length, $H/L$, meaning that most of the initially suspended particles entering the channel should settle as they approach the channel exit. This is particularly noticeable in the second and third channel observation sections, in which the sediment height increases markedly. The sediment height then ceases to grow quite abruptly, although the particle suspension continues to flow through the channel. This steady-state behavior persists for the duration of the flow and represents a balance between sedimentation and shear-driven resuspension. Upon cessation of flow we immediately observe a collapse in the dense-phase height, i.e. the particle phase settles further to form a more compact final sedimented state. Figure 11 shows the contours of the fluid porosity distribution $\varepsilon_f$. For dense random packings of monodisperse spheres the porosity approaches a minimum of roughly 0.4; this value changes significantly based on the shape of the particles [74] and the size ratio between particles in the pack [75,8]. Contours of $\varepsilon_f \approx 0.4$ thus correspond to effectively solid deposits of particles. Again we can conclude that the steady-state sediment height during suspension flow increases with the initial suspension volume fraction, and that following the cessation of flow the sediment bed compactifies and reduces in height. To quantitatively define the steady-state sediment height, $h$, under flow, and the static sediment height, $h_0$, once the flow of the suspension has ceased, we compute the average fluid porosity $\bar{\varepsilon}_f$ over a local section and define appropriate characteristic values of $\bar{\varepsilon}_f$ to quantify $h$ and $h_0$. For the particular flow conditions of a non-Brownian suspension flowing at $Q = 100\ \mathrm{cm^3/h}$, Meeker et al. [43] observed the buildup of a dense but flowing sediment that rapidly reaches a steady-state height $h$. The existence of this steady-state flowing sediment implies that the proppant flux leaving the channel equals that entering the channel, and thus "efficient" proppant transport occurs. Knowing this, we define the criterion to compute $h$ as $\bar{\varepsilon}_f = 1 - \phi_i$ (see Fig. 12(a), and the height-extraction sketch below).
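The height-extraction step can be illustrated with a short, hypothetical Python sketch (the threshold logic follows the porosity criteria described here; the synthetic profile and function are illustrative, not the paper's code):

```python
import numpy as np

def sediment_height(z, eps_bar, threshold):
    """Height of the sediment bed: the largest elevation z (measured from the
    channel floor) at which the section-averaged porosity eps_bar is still
    below the chosen threshold (1 - phi_i for the flowing bed h; 0.5 for the
    static bed h0)."""
    in_bed = np.asarray(eps_bar) < threshold
    z = np.asarray(z)
    return float(z[in_bed].max()) if in_bed.any() else 0.0

# Synthetic porosity profile: a compact bed below z = 0.004 m
z = np.linspace(0.0, 0.02, 50)
eps = np.where(z < 0.004, 0.45, 0.95)
print(sediment_height(z, eps, threshold=0.5))  # ~0.0037 m
```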
Because the flow is at a low Reynolds number ($Re_W = 0.06832$), the relevant mechanism of sediment transport must be viscous resuspension (flow of an "expanded" sediment at an equilibrium height, and its subsequent "collapse" once the flow ceases [76,77]). To quantify $h_0$, we quote the work of Meeker et al. [43], stating that under quiescent conditions the packing volume fraction when water is the suspending fluid is $\phi_p \approx 0.58$, which is close to $\phi_m$. However, when the 85:15 w/w% glycerol/water mixture (viscosity ≈ 0.1 Pa.s) was employed as the suspending fluid, the packing fraction decreased to $\phi_p \approx 0.5$. Following this, we take the criterion to compute $h_0$ as $\bar{\varepsilon}_f = 0.5$, representing a dense/compact suspension bed (see Fig. 12(a)). An example of these computations is shown in Fig. 12(b), which depicts the evolution of the sediment heights in time. For the Newtonian fluid (Fig. 13(a)) at $El = 0$, when analyzing the particle distribution at $x = L/3$ (top left images), we notice that there is a significant sedimentation layer in which the particle velocity is zero. In the middle and top zones of the channel, both the matrix fluid and the particles flow smoothly and axially along the channel. The particles continue to slowly sediment and eventually join the deposited layer with $(U_x)_p \to 0$. For the quasi-linear Oldroyd-B viscoelastic fluid (Fig. 13(b)) at $El = 30$, when analyzing the particle distribution at $x = L/3$ (top right images), we notice that the distribution of particle velocities is almost uniform along the channel height, and only a thin sedimentation layer is observed. This is in contrast to the Newtonian fluid behavior. At $x/L = 2/3$, the behavior of the particle and fluid constituents is not substantially changed from that at $x/L = 1/3$ and, therefore, there is no significant particle settling zone along the channel floor for an elastic fluid with $El = 30$.

Annular pipe flow
Particle segregation in pumped concrete is one of the big challenges encountered when creating casing for horizontally drilled wells [78,79]. In this case, particulate solids tend to segregate across the pipe cross-section due to differences in the size, density, shape and other properties of the constituent phases. The corresponding increase in the percentage of cementitious particles in the bottom part of the casing increases the chance of shrinkage and the formation of cracks in the upper portion of the cemented casing. These cracks, often large in size, can easily transport hydrocarbons and other toxic chemicals into the formation, which is a concern. Tuning the rheology of the conveying fluids systematically by considering the hindrance effect (i.e., the reduction in the relative settling velocity of a particle due to the presence of other particles) can help minimize this issue. Here we study numerically the particle segregation in a simplified annular pipe geometry. The setup used to study the particle segregation is shown schematically in Fig. 15. The channel interior, which is used to mimic a horizontal well, has an annular cross-section bounded by inner and outer cylinders. The initial setup for this computational study is shown schematically in Fig. 16(a). A total of 125,000 particles, representing 1% of the annular cavity volume, is used in this case study. The particles are distributed evenly throughout the stagnant fluid at time zero (see Fig. 16(a)), and their positions at time t = 0 are generated using a nearest-neighbor algorithm (a simplified seeding sketch is given below).
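As referenced above, a simplified stand-in for the particle seeding (Python; uniform rejection sampling rather than the nearest-neighbor algorithm actually used, and the radii and length are placeholder values):

```python
import numpy as np

rng = np.random.default_rng(1)

def seed_annulus(n, r_in, r_out, length):
    """Uniformly seed n particle centers in an annular pipe of inner radius
    r_in, outer radius r_out and axial length `length`, by rejection
    sampling in the cross-section."""
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-r_out, r_out, 2)
        if r_in <= np.hypot(x, y) <= r_out:
            pts.append((x, y, rng.uniform(0.0, length)))
    return np.array(pts)

centers = seed_annulus(1000, r_in=0.01, r_out=0.02, length=0.5)
print(centers.shape)  # (1000, 3)
```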
Gravity is applied vertically across the thin annular geometry ($\mathbf{g} = -g\mathbf{e}_z$), breaking the azimuthal symmetry and mimicking the onset of concrete particle settlement right after injection is stopped. The goal here is to capture the settling dynamics and to test the ability of our numerical code to reproduce those dynamics. The code can then be used to analyze different rheological tuning mechanisms to minimize settling over the time-scale required for the concrete to harden. For these two elasticity numbers the particle distributions are similar, with a settling zone and avalanche zones at the bottom and lateral walls of the annular pipe domain, respectively. Additionally, in the settling zone, a backflow of particles occurs due to the fluid displaced by the sedimentation and net accumulation of particles in this region. At the north pole (point N in Fig. 17) of the inner cylinder wall the particles have a backflow velocity, which makes them bounce and slide along the inner cylinder wall. Subsequently, the particles approach the most unsteady settling zone of the annular pipe channel, where a mixture of fluid backflow and gravity-induced settling velocities is present, which indicates a migration of the particles to the avalanche zone. In fact, from the particle velocity distributions, we see that the stronger migration of the particles to the avalanche zone at $El = 0.1$ causes an increase in the suspension bed height when compared with the more elastic case at $El = 5$: the calculated final packed bed height for the cases with $El = 0$ or $El = 0.1$ is 4.5 mm, while for $El = 5$ it is 3.5 mm.

Conclusions
Direct numerical simulations (DNS) of random arrays of spherical particles immersed in Newtonian and constant-viscosity viscoelastic fluids were performed using a finite-volume method. The overall procedure solves the equations of motion coupled with the viscoelastic Oldroyd-B constitutive equation using a log-conformation approach, with a SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations-Consistent) method. The drag forces on individual particles were calculated with the aim of providing an approximate closed-form model to describe the numerical simulation data obtained for the unbounded flow of Newtonian and Oldroyd-B fluids past random arrays of spheres. This expression can then be integrated into an Eulerian-Lagrangian solver that enables coupled simulations of the fluid flow and particle migration over a wide range of kinematic conditions. For this purpose, the DNS consisted of a total of 150 different configurations, in which the average fluid-particle drag force was obtained for solid volume fractions $\phi$ ($0 < \phi \le 0.2$) and Weissenberg numbers $Wi$ ($0 \le Wi \le 4$). The proposed DNS methodology was first tested and verified for the creeping flow of random arrays of spheres immersed in a Newtonian fluid. It was found that the numerical results obtained agree with the Lattice-Boltzmann results of Hill et al. [20] and can be described by the best-fit model of van der Hoef et al. [15]. Statistical accuracy was achieved by averaging the DNS results at each value of $\phi$ over five random configurations, resulting in errors below 3.5% of the average drag force. Subsequently, the same DNS methodology was used to perform the viscoelastic drag calculations. In the up-scaled simulations, an Eulerian description is employed together with a viscoelastic constitutive equation to describe the fluid flow, and a discrete particle method is used to update the particle movements. This approach guarantees the coupling between the dynamics of the continuous fluid and the discrete solid phases, by imposing a two-way coupling between the two phases.
The coupling is provided by momentum transfer through the drag force expression proposed here, which is exerted by the fluid on the solid particles. Additionally, we consider two different formulations to describe the contact between particles: the Hertzian spring-dashpot and the Multi-Phase Particle-In-Cell (MPPIC) models. As a proof-of-concept, the newly-developed algorithm was assessed for accuracy in two case studies. First, we studied proppant transport and sedimentation during pumping (a phenomenon typical of hydraulic fracturing operations) in a long channel of rectangular cross-section. For the case in which the fluid matrix is Newtonian, the resulting axial distribution of particle sedimentation profiles was compared with experimental data available in the literature for suspensions formulated with Newtonian matrix fluids and different initial particle volume fractions, and good agreement was obtained. Subsequently, the DPMviscoelastic solver was tested on the same problem using an Oldroyd-B fluid. Analysis of the particle distribution and fluid velocity profiles at an elasticity number of $El = 30$ showed that fluid elasticity inhibits the rate of particle settling and prevents the formation of a dense sedimented layer along the floor of the channel. Subsequently, the segregation phenomena which occur when pumping a casing material along horizontal wells were also studied, in an annular pipe domain. Numerical simulations using a Newtonian fluid were performed, and we were able to capture the avalanche and dome build-up effects seen in experimental observations of the particle distributions [79]. Additionally, a viscoelastic fluid was also employed at two different elasticity numbers, $El = 0.1$ and 5. The particles were found to sediment in two markedly contrasting zones: a highly disordered and unsteady region, where a mixture of fluid backflow and gravity-induced settling velocities is present, and a sedimented zone, where particles are closely packed together and the fluid velocity is almost zero. It was found that the stronger migration of the particles to the avalanche zone at $El = 0.1$ causes an increase in the suspension bed height when compared with the more elastic case at $El = 5$. In summary, the DNS computational methodology presented here allows us to construct closed-form drag laws for particle-laden viscoelastic fluid flows.

Appendix A
In the DNS study presented in Section 4, we considered two different domain configurations: one with spheres having centroids allowed in a wall region of thickness $a$ around all four lateral edges of the flow domain, and another in which the sphere centroids are excluded from this wall region. We refer to these cases as the no-excluded-volume and excluded-volume configurations, respectively. As shown in Fig. 18(a), when spheres are allowed to be located in the wall region (blue color), i.e., when their centroid is located less than one radius from the wall, the boundary acts as a perfectly periodic wall. In the opposite case the boundary walls exclude the spheres and act like rigid, stress-free periodic walls. In fact, based on Fig. 18(b), we can calculate the probability of a single sphere being located in the wall region. Assuming a square cross-section with a width of $8a$, the total cross-sectional area is $64a^2$. The blue annular area, i.e., the wall region which excludes the spheres, has an area of $64a^2 - 36a^2 = 28a^2$.
Hence, the probability of a randomly placed sphere having its centroid in this wall region is 28a^2/64a^2 = 0.4375, and the overall area fraction of spheres (of volume fraction φ) in this region can be as large as 0.4375φ. Therefore, as φ increases, it is important that randomly distributed particles be allowed to have centroids near the walls. In Fig. 19, we show contours of the velocity magnitude in the transverse y−z plane for configurations with an excluded volume and no-excluded volume region. From the distribution of velocity magnitude contours, it can be seen that in the excluded volume configuration (i.e., Fig. 19(a)), the larger local concentration of rigid impenetrable spheres in the middle of the square channel pushes the strongest fluid flow outwards towards the walls, causing a stagnant region near the channel center. By contrast, in the no-excluded volume configuration (i.e., Fig. 19(b)), the fluid flow is more evenly distributed across the entire channel. This affects the average drag force exerted on the spheres, as shown in Table 1 and Fig. 3. Fig. 19: Steady flow field around one representative random array of particles in a channel filled with Newtonian fluid. Contours of the velocity magnitude field u (with the inflow direction pointed out of the plane of the page) are represented for particle volume fraction φ = 0.2, in the y−z plane with (a) excluded volume near the walls and (b) a no-excluded volume configuration. In the latter case the velocity field is more evenly distributed across the entire cross-section of the domain.
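The 0.4375 value above is a purely geometric probability. As a quick sanity check, the following Python sketch (not part of the original study; it only assumes the Appendix A geometry of a square cross-section of width 8a and spheres of radius a) compares the analytical result with a Monte Carlo estimate from uniformly placed centroids.

```python
import random

# Geometry from Appendix A: square cross-section of width 8a, spheres of radius a.
# A sphere centroid lies in the "wall region" when it is closer than one radius
# (a) to any of the four lateral walls.
a = 1.0
width = 8.0 * a

# Analytical probability: (total area - inner area) / total area
# = (64a^2 - 36a^2) / 64a^2
p_analytical = 1.0 - ((width - 2.0 * a) ** 2) / (width ** 2)

# Monte Carlo estimate from uniformly placed centroids.
random.seed(0)
n = 1_000_000
hits = 0
for _ in range(n):
    y = random.uniform(0.0, width)
    z = random.uniform(0.0, width)
    if min(y, width - y) < a or min(z, width - z) < a:
        hits += 1

print(f"analytical : {p_analytical:.4f}")   # 0.4375
print(f"monte carlo: {hits / n:.4f}")       # ~0.4375
```

With volume fraction φ, up to 0.4375φ of the solids can therefore sit in the wall region, which is why excluding centroids from it biases the sampled microstructure at larger φ.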
2021-12-22T02:15:35.215Z
2021-12-20T00:00:00.000
{ "year": 2022, "sha1": "06753c0616456552620187d372e318da214f8bc9", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1009381/latest.pdf", "oa_status": "GREEN", "pdf_src": "ArXiv", "pdf_hash": "06753c0616456552620187d372e318da214f8bc9", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
254339828
pes2o/s2orc
v3-fos-license
A Case Study of Factors That Affect Secondary School Mathematics Achievement: Teacher-Parent Support, Stress Levels, and Students' Well-Being Psychology is one of the many factors that influence students' mathematics achievement, but studies of its influence are still limited. This study analyzes key factors affecting mathematics achievement through teacher-parent support, stress, and students' well-being in learning mathematics. Data were collected via online questionnaires. Participants of the study are 531 students studying at five secondary schools in Bandung, Indonesia. The data were analyzed using the structural equation modeling approach with SMART-PLS 3.0 software. The results showed that interest in learning was the most significant factor affecting students' mathematics achievement. Moreover, teachers' support has a more substantial effect than parents' support, which does not significantly reduce students' stress levels. The academic and emotional support of teachers and parents reduces students' stress levels while increasing their feelings of well-being and interest in learning mathematics. This study provides essential results for school teachers and parents seeking to improve students' mathematics achievement at the secondary school level. Introduction Students' achievement is defined by the extent to which predetermined learning goals are attained, and it is usually measured through test scores and ongoing assessments [1]. Several preliminary studies used the Grade Point Average (GPA) to analyze students' academic achievements [2,3], while this study interpreted it as an indicator of the level of knowledge and understanding of the mathematics material. It is a complex score influenced by learning media, environment, teaching methods, parental support, and personal factors [4,5]. The learning approach teachers use toward mathematics achievement has also been explored [6,7], along with the relationship between parenting style and students' achievement [8,9]. Most studies only used a simple linear relationship to analyze its effect on students' achievement [10,11]. Meanwhile, this study developed a new model from a psychological perspective to analyze factors strongly related to students' mathematics achievements by adding predictors of well-being and stress levels. The psychological factors that influence students' mathematics achievements emotionally and academically are support from parents and teachers. These factors affect their well-being [12,13], interest in learning [14,15], and mathematics achievement [16]. The way teachers and parents support students is a psychological construct that represents their standard strategies for teaching children [17,18]. This support is a phenomenon that is recognized and analyzed professionally to determine its effect on students' positive and negative behavior, subjective well-being, and learning achievement. Unfortunately, parents and teachers in Indonesia are often unaware of the importance of providing academic assistance to students [19,20]; hence, the majority depend more on learning models [21,22]. Mathematics mastery, inseparable from everyday activities, plays an essential role in human life [23]. However, mathematics achievement in Indonesia is still far from expectations, as Indonesia ranked 63rd out of 70 countries in the 2015 PISA [24]. The situation is even more worrying given that students are afraid of the subject, believing it to be difficult [25].
Therefore, the Indonesian government has implemented numerous strategies to increase students' interest in learning mathematics and improve their achievement [22,26]. These include increasing technology-based learning media and teacher training to improve pedagogical and technological skills [27,28]. A few program extension plans have also been implemented to encourage teachers and parents to provide emotional and academic support [29,30]. Therefore, this study aims to investigate the predictors that affect students' mathematics achievements from a psychological perspective. It also examines the predictors of parents' and teachers' support for students' well-being, interest in learning mathematics, stress levels, and achievements. Finally, this research is expected to offer both theoretical and practical contributions. Theoretically, this study will help increase knowledge and literature on research related to student mathematics achievement, especially from the aspects of parent and teacher support, stress levels, and student well-being. Practically, the results of this study can be used by teachers and parents to improve student mathematics achievement at the secondary school level. Literature Review This section discusses the various theories underlying the study and the formulation of the proposed hypotheses. It starts with an elaboration of teacher-parent support in academic and emotional matters, followed by an explanation of stress and well-being theory related to mathematics learning. Teacher-Parent Support Model Empirical studies have examined the relationship between teacher and parent support and student well-being. According to preliminary studies, teacher and parent support have numerous benefits that significantly affect student achievement and emotions [29,31,32]. Meanwhile, limited studies have combined their support regarding student stress, interest in learning, and achievement. This study showed that regardless of where the support comes from, it will always positively affect overall student well-being. However, a model is needed to determine students' well-being, interest in learning, and achievement at the secondary school level. Teachers' Academic and Emotional Support Several existing studies show that teachers' support for students is strongly related to psychological well-being. Ma et al. also stated that teachers' support can foster student academic achievement and enjoyment [33]. Abdullah et al. reported that during the pandemic, teachers' emotional and academic support significantly determined the learning performance of undergraduate students [18]. Therefore, it can be concluded that teachers' academic support also plays an important role in student emotions. Most students in Indonesia stay at school from 7 a.m. to 5 p.m., where they are accompanied and supported by teachers. Therefore, implementing the role of teachers as mentors who assist students academically and emotionally, through fair treatment and the provision of rewards, helps to increase their well-being and reduce stress. It is therefore important to investigate whether teachers' academic and emotional support determines mathematics learning achievement. Parents' Support Studies on parents' support generally analyze the relationship between parents and students' psychological well-being [34,35]. Studies by Geng et al. (2022) and Yuill and Martin (2016) found that parental support directly and indirectly affects students' physical health.
Mata [29,36] stated that parental support for students at K-9 levels significantly affected their motivation and achievement. Interview results with low-socioeconomic-status children illustrated that support from parents is essential [8]. In detail, students' physical changes can be explained by the amount of support provided by the parents. This implies that physical complaints increase in children who lack parental support, and vice versa. On the other hand, students who lack parental support from childhood experience health problems and depression as they approach adulthood [37,38]. Similarly, several studies have been conducted on parental support and its relationship to students' problems, perceived stress, well-being, achievement, and burnout [17]. In the context of this study, most Indonesian parents work hard to earn money, thereby leaving their children with their grandparents, older siblings, or teachers. These circumstances make parents unable to understand what their children feel and need. Therefore, whether parent support has a direct effect on improving students' well-being, decreasing stress, and increasing interest in learning mathematics and achievement needs to be examined. Stress and Well-Being and Mathematics Learning Stress is increasing among secondary school students in Indonesia due to demands from parents, schools, and teachers to achieve the best results. Moreover, students' difficulty in carrying out school assignments, exams, task deadlines, and other obligations also causes stress [39,40]. Stress is the body's response to environmental pressures or demands that can have positive or negative effects on a person [41,42]. Some external factors of demand are friends, situations, the learning environment, and the people around students [18,43]. Stress is a natural feeling that helps individuals to deal with problems or challenges. Thoughts, motivations, and goals are internal factors. As a result of stress, a person responds physiologically and psychologically to various demands [44]. Most parents in Indonesia expect their children to have good mathematics achievements, while few assist them. Several studies show that the higher the level of stress experienced by a person, the lower their achievement and well-being [45,46]. Interest in Learning Mathematics Interest in learning plays a vital role in mathematics teaching activities [47,48]. When students repeatedly engage in learning and doing exercises to develop their mathematical knowledge, their interest can increase. Interest is divided into two senses, namely situational and individual [14]. Situational interest is an affective response caused by environmental stimuli, such as technology-based learning media unfamiliar to students, and does not last long [49]. Individual interest arises from one's perception and knowledge of content, which makes the response longer-lasting. Several factors are related to students' interest in learning mathematics. The first is confidence, the most important factor: students should believe that the effort they make is capable of improving their mathematical abilities [50,51]. The second is depression, which is a major cause of a lack of interest in learning [52,53]. The third is fear of failure, which undermines regular learning and work. The last is an unsupportive environment and a lack of facilities, which prevents students from learning efficiently.
It can be concluded that many factors affect students' interest when learning mathematics. Therefore, an empirical study is needed to test these potential factors. According to preliminary studies, the level of interest affects students' learning motivation [15,54], self-efficacy [55], self-regulation, and overall outcomes [56,57]. Students need to learn and analyze the relationship between stress and teacher and parent support, especially in mathematics. This study also analyzes how interest in learning mathematics, as a mediator, affects students' mathematics achievement. The research model was constructed based on the literature review and is shown in Figure 1. It comprises three independent variables, namely, parents' support and teachers' academic and emotional support. These three variables directly influence students' well-being, interest in learning, stress, and mathematics achievement. The dependent variable is students' mathematics achievement. Purpose of the Study This study aims to determine the relationship between teacher-parent support, stress levels, and student well-being on students' mathematics achievement. Based on the study objectives, the research hypotheses can be stated as follows: Hypothesis 1 (H1). Parents' emotional support significantly and positively affects students' well-being. Hypothesis 3 (H3). Parents' emotional support significantly and positively affects students' mathematics achievements. Hypothesis 4 (H4). Parents' emotional support has a significant and positive effect on interest in learning mathematics. Hypothesis 8 (H8). Teachers' academic support significantly and positively affects interest in learning mathematics. Hypothesis 9 (H9). Parents' support has a significant and positive effect on students' well-being. Hypothesis 10 (H10). Parents' support has a significant and negative effect on students' stress. Hypothesis 12 (H12). Parents' support significantly and positively affects interest in learning mathematics.
Hypothesis 13 (H13). Stress has a significant and negative effect on students' well-being. Hypothesis 14 (H14). Stress has a significant and negative effect on students' mathematics achievements. Hypothesis 15 (H15). Stress has a significant and negative effect on interest in learning mathematics. Hypothesis 16 (H16). Interest in learning mathematics significantly and positively affects students' achievements. Hypothesis 17 (H17). Well-being has a significant and positive effect on students' mathematics achievements. Hypothesis 18 (H18). Well-being has a significant and positive effect on students' interest in learning mathematics. Methodology This is a quantitative study using a correlational method with survey questionnaires. Correlational research is widely used for testing relationships between two or more variables without the researcher controlling any of them [58,59]; here, data were collected from secondary school students in Bandung, Indonesia, on the mathematics achievement variables. Data were collected from 543 respondents from August to September 2022. After obtaining approval from the teachers at the schools, an online questionnaire was given to students. Before its distribution, the contents were explained to the students, who were asked to fill it out honestly, without coercion, and with confidentiality assured. Respondent data were only used for research purposes, and students were not mandated to fill out the online questionnaire. After the initial analysis, only 531 respondents, comprising 174 male and 357 female students, completed the questionnaire with valid data for statistical processing. Of the 531 respondents, 99 were 7th grade students, while the remaining 159 and 293 were 8th and 9th grade students. Furthermore, 220 and 311 students came from private and public junior high schools, respectively, as shown in Table 1. Instruments The instrument in this study was an online questionnaire divided into two parts. The first contains basic information about the participants, and the second contains the questions related to factors that can affect students' mathematics achievement at the secondary school level. The questionnaire items use a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree), which indicates how much students agree with the statements. The original questionnaire has seven latent variables obtained from the literature review. The latent variables include three items on interest in learning, four on mathematics learning achievement, four on well-being, three on teachers' emotional support, three on teachers' academic support, and four on parents' support; together with the stress items, these culminate in 23 items (see Appendix A). Data Analysis The data were analyzed using SPSS 22 and SMART-PLS 3.0 software, which is suitable for testing hypotheses and supporting new research models. Furthermore, the partial least squares structural equation modeling (PLS-SEM) method, a nonparametric approach, can be used to test many variables and path relationships simultaneously [60,61]. More specifically, it is used to visualize and explain the variance that exists in endogenous variables. This software is widely used to test theories predicted from the results of a literature review [61]. Several studies consider the PLS-SEM technique more flexible and accurate for quantifying measurement models [62,63]. This software allows respondent data to be processed without strict requirements on sample size, data normality, or heterogeneity [64,65].
According to [66], with the PLS-SEM approach, the minimum sample should not be less than 52. In the first step, this study used SPSS software to analyze the statistical data from respondents descriptively. Descriptive statistics is a crucial step in quantitative research used to describe and summarize all respondent information in detail. Meanwhile, SMART-PLS is used for data processing by examining the construct measurements, discriminant validity, and structural relationships between constructs [63]. Data reliability is also assessed to determine whether the questionnaire items measure the same construct. According to [67], the CR value should be greater than 0.7 to obtain satisfactory results. Furthermore, Ref. [68] stated that a reliable indicator's statistical reliability value should be greater than 0.6. At the convergent validity analysis stage, external factor loadings should be greater than 0.5, with an AVE value above 0.5. Furthermore, Hair et al. [66] stated that questionnaire items are good estimators when the outer loading is greater than 0.5. Then, the HTMT value was tested to determine the correlation between constructs and analyze discriminant validity [69]. Previous studies suggested that the HTMT value should not be higher than 0.9, with better performance at less than 0.85. Results This study was conducted to determine whether mathematics teachers' and parents' support affects secondary school students' achievement and well-being. This section divides the data processing results obtained with SMART-PLS software into several parts. The first descriptively analyzes the statistics of the research data; the second tests the measurement model; and the last evaluates the structural model to determine the relationships between latent variables. Descriptive Statistics The descriptive statistics in Table 2 show that the lowest and highest item means are 2.793 and 4.333, with an overall average above 3.2. Furthermore, the lowest kurtosis value of −0.910 belongs to stress questionnaire item 2, while the highest, 1.500, belongs to test questionnaire item 1. According to several studies, acceptable kurtosis and skewness values range between −7 and 7 and between −2 and 2, respectively [70,71]. Table 2 also shows that the lowest and highest skewness values, −1.024 and 0.467, belong to items PS2 and stress 3. Therefore, all data items used in this study have acceptable skewness and kurtosis values. Measurement Model Results The first step in the measurement model analysis is to analyze the content validity. The questionnaire of this research (provided in Appendix A) was developed from the literature, and it has relatively good content validity. Furthermore, convergent validity was analyzed by evaluating the loading values, CR, AVE, and Cronbach's alpha. Two questionnaire items were excluded because they have a loading factor of less than 0.7, namely, PS1 (0.69) and TAS2 (0.36). Table 3 shows loading factors with CA and CR values above 0.7 and AVE above 0.5, in line with the recommended standards [72]. Analyzing the rho_A values in Table 3 [73], it is evident that the constructs in this study do not have composite reliability problems. Discriminant Validity Discriminant validity can be tested in two ways. The first is to evaluate the Fornell-Larcker criterion by determining the square root of the AVE for each latent variable [68]. The results show that each bolded diagonal value is greater than the construct's correlations with the other latent variables, as shown in Table 4.
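To make these reliability and validity checks concrete, here is a minimal Python sketch using the standard CR, AVE, and Fornell-Larcker formulas that underlie the thresholds above. The loadings and the inter-construct correlation are hypothetical illustrative numbers, not the study's data.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum l)^2 / [(sum l)^2 + sum(1 - l^2)] for standardized loadings."""
    s = sum(loadings)
    err = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

def ave(loadings):
    """AVE = mean of the squared standardized outer loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical outer loadings for two constructs (items kept if loading > 0.7).
ps = [0.78, 0.82, 0.74]    # parents' support
tas = [0.80, 0.76, 0.85]   # teachers' academic support

for name, lam in [("PS", ps), ("TAS", tas)]:
    print(f"{name}: CR = {composite_reliability(lam):.3f} (threshold > 0.7), "
          f"AVE = {ave(lam):.3f} (threshold > 0.5)")

# Fornell-Larcker: sqrt(AVE) on the diagonal must exceed the construct's
# correlation with every other construct (hypothetical correlation shown).
r_ps_tas = 0.55
print("Fornell-Larcker holds:",
      min(np.sqrt(ave(ps)), np.sqrt(ave(tas))) > r_ps_tas)
```

With these illustrative loadings, both constructs pass the CR and AVE thresholds, and the square roots of the AVEs (about 0.78 and 0.80) exceed the assumed inter-construct correlation, mirroring the pattern reported in Tables 3 and 4.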
Several studies suggest that the discriminant validity test should be strengthened by examining the HTMT value [74,75], which is considered a better benchmark. The HTMT values in Table 5 show that all constructs are below 0.90, which indicates that the model meets the requirements of good discriminant validity. The next stage is to check whether each item has a collinearity problem by analyzing the VIF value [76]. The recommended VIF value is less than 5 [77], and the highest obtained in this study was 3.073. Therefore, it can be ascertained that none of the items have collinearity-related problems. Measurement R2 and Q2 The coefficient of determination (Table 6), commonly known as R2, is used as a reference to assess how well a model explains an outcome [78]. The main objective of this research is to determine the effects of mathematics teachers' and parents' support, stress, and well-being on students' mathematics achievements. Determination coefficient values of 0.25, 0.5, and 0.7 are the limits that describe the quality of the model as weak, medium, and strong [79,80]. The model explains up to 56.4 percent of the variance in students' mathematics achievement. At the same time, it also explains 57.6 percent of the variance in students' interest in learning mathematics, a fairly strong coefficient of determination. Model Fit Model fit in PLS-SEM can be analyzed from the Standardized Root Mean Square Residual (SRMR) and Normed Fit Index (NFI) values [81,82]. SRMR measures the discrepancy between observed and model-implied correlations and is considered a good fit measure in research models using PLS-SEM [62]. SRMR values below 0.10 (and preferably below 0.09) are recommended as indicating a good fit, while an NFI value close to 1 indicates a good fit [83]. The results of the fit model in this study can be seen in Table 7, which shows that this model has a good fit and meets the recommended fit criteria. The path model is divided into three types of effects: direct, indirect, and combined effects. According to [85], effect sizes of 0.1, 0.3, and above 0.5 are considered small, medium, and large. Table 9 shows the standardized direct, indirect, and total effects in detail, with their significance calculated using the bootstrapping technique with 5000 resamples. The most dominant factors in increasing students' interest in learning mathematics are their well-being during teaching activities and teachers' academic support, with total effects of 0.531 and 0.340. This proves that it is important to evaluate the well-being of students and teachers. Furthermore, it should be noted that the stress level students feel when learning mathematics also strongly affects their interest, with a total effect of −0.316. Furthermore, teachers' academic and emotional support significantly reduces stress levels when students learn mathematics.
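The bootstrapped significance testing behind Table 9 can be sketched as follows. This is a deliberately simplified illustration: it uses synthetic data and plain OLS slopes rather than the SMART-PLS estimator, and all variable roles and path strengths are hypothetical; only the resampling logic (5000 resamples, percentile confidence interval) mirrors the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical standardized scores for n = 531 respondents: support (x),
# interest (m), and achievement (y), generated with known path strengths.
n = 531
x = rng.standard_normal(n)
m = 0.4 * x + rng.standard_normal(n)
y = 0.5 * m + 0.2 * x + rng.standard_normal(n)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS slopes (simple-regression sketch)."""
    a = np.polyfit(x, m, 1)[0]
    b = np.polyfit(m, y, 1)[0]
    return a * b

# Percentile bootstrap with 5000 resamples.
boot = np.empty(5000)
idx = np.arange(n)
for i in range(5000):
    s = rng.choice(idx, size=n, replace=True)
    boot[i] = indirect_effect(x[s], m[s], y[s])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(x, m, y):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
# The indirect path is deemed significant when the CI excludes zero.
```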
Finally, the factors with the highest total effect on achievement are students' interest in learning mathematics, followed by the feeling of well-being, teachers' academic support, and parents' support. Their effect values are 0.446, 0.435, 0.344, and 0.235, respectively. The stress factor has an effect of −0.268 on students' mathematics achievement. Discussion and Implications This study was conducted to determine the factors that psychologically influence students' mathematics achievement. It also examines whether mathematics teachers' and parents' support directly influences students' achievement and indirectly affects stress, well-being, and learning interest. The research model was developed, modified, and evaluated using empirical data from the existing teacher-parent support model [18]. These findings may help to explain the role of teachers and parents in students' stress levels, well-being, interest, and mathematics achievement. The results showed that students' well-being was not affected by teachers' emotional support (H1) but by parents' (H5) and teachers' academic (H9) support. This explains why students' feelings of well-being in Indonesia still depend heavily on their parents. The study showed that 1 out of 3 students (39%) with low well-being rarely or never talk about their problems and have a low level of daily communication for sharing their feelings. This study provides new knowledge that enables parents to take on the role of friends or siblings to support their child's well-being. It enables them to communicate positively and pay attention to their children's problems. It also inspires teachers to support students in various activities at school and pay more attention to them, which positively affects their achievement [86,87]. Meanwhile, this study also found that parents' support did not significantly reduce students' stress levels when learning mathematics (H10). Students felt that teachers' academic (H2) and emotional (H6) support can reduce their stress levels when learning mathematics. They felt that stress caused by mathematics lessons is more effectively relieved when conveyed to teachers at school. This finding suggests that teachers should understand that students have diverse abilities. Therefore, varied learning approaches and models are needed to support them in mathematics lessons while simultaneously reducing the stress caused by learning in the classroom. The use of technology-based learning media can reduce students' stress levels. Several studies have shown that using ICT in the classroom improves students' soft skills [88-90]. However, teachers may need more time to understand students' individual characters in order to provide appropriate support. Regarding the factors related to students' interest in learning mathematics, the study found that teachers' academic support (H8), parents' support (H12), and stress levels (H15) are significantly related to learning interest. Meanwhile, the emotional support provided by teachers does not significantly affect students' interest in learning. Several preliminary studies also found that students' stress levels reduce interest in learning [91]. This is because those who feel stressed are usually excessively anxious and therefore lose their sense of interest. Moreover, secondary school students tend to avoid stress rather than tackle it, which makes the role of teachers and parents very important.
Parents can support students by creating a comfortable learning atmosphere and providing adequate supporting learning materials within and outside the school. Meanwhile, teachers can select attractive learning media and provide accessible learning methods capable of increasing students' interest in learning mathematics [92]. The final findings concern the factors related to students' mathematics achievement, including teachers' academic support (H7), parents' support (H11), stress levels (H14), interest in learning (H16), and well-being (H17). However, well-being is not the main factor that significantly affects students' mathematics achievement. This is explained by the fact that schools focus more on cognitive and academic performance. Adler stated that teaching students about well-being increases their academic achievement. These findings are consistent with the results of a meta-analysis [93], which showed only a small relationship between well-being and students' achievement. Increased stress levels can also reduce student achievement. This study provides several theoretical and practical implications. Firstly, it modifies and develops a research model from the teacher-parent support model by adding additional predictors and strengthening its explanatory power. Secondly, it provides theoretical implications for exploring the relationship between mathematics teachers' and parents' support and students' stress levels, well-being, learning interest, and mathematics achievement, fulfilling the conceptual framework, especially in mathematics education, for developing countries such as Indonesia. It is necessary to understand that teacher and parental support at school and home has the same effect on students' well-being, reduces stress, and increases interest in learning. This opens up new knowledge for parents to support children both academically and emotionally. This study provides a deeper understanding of the factors that affect students' mathematics achievement in developing countries, especially Indonesia. The results educate parents, teachers, schools, and students about the importance of teachers' support for students while studying mathematics. This is because it increases their well-being and reduces stress levels despite the difficulty attached to learning this subject. It also indicates that students' mathematics achievement depends not only on the learning media and the teachers' teaching abilities but also on the educational technologies used and the drill-and-practice of mathematical tasks. Moreover, it is associated with psychological factors, namely, stress, well-being, interest in learning, and parental support. Therefore, there are several vital points associated with this study. Firstly, the parents' role is essential to students' mathematics achievement, learning interest, and stress levels. Parents should set aside time to assist their children because it increases their self-confidence and motivation to learn, improving mathematics achievement. Students do not need parents who can teach or solve the mathematics material themselves; rather, parental support itself is important. Secondly, school mathematics teachers need to understand the importance of psychological support for students [94,95]. For instance, K-12 students consider support from people in their environment very important [96,97]. Students are not usually able to motivate themselves independently; hence, they are easily stressed.
Furthermore, due to students' varying abilities, it is imperative for mathematics teachers to provide academic support, which benefits the emotional and educational development of those with lower abilities. Thirdly, schools can provide briefings or training for teachers and parents on the importance of maintaining students' interest in learning and the need to encourage them academically and emotionally continuously. Students' growth, development, and achievement depend not only on the teachers at school but are the shared task of both teachers and parents. Conclusions In conclusion, this study was conducted to determine whether teacher and parent support are related to students' achievement, stress levels, and interest in learning mathematics in Indonesia. Initial hypotheses were developed based on literature reviews and modification of the current teacher-parent support model. This study developed and validated a model of the factors that significantly increase students' mathematics achievement at the secondary school level. The model provides new ideas and knowledge that are important and need to be implemented in Indonesia, significantly changing parents' perspectives on students. Furthermore, it analyzed students' well-being, which is essential in increasing their interest in learning mathematics and achieving success. Reducing stress levels and increasing feelings of well-being affect students' mathematics achievement. Therefore, teachers should not only master how to teach mathematics but also need to have some knowledge of students' psychology. The results also showed that students in Indonesia are highly enthusiastic for their parents and teachers to help reduce their stress levels and increase their overall well-being while learning mathematics. They also think that their mathematics achievement is influenced by the academic and emotional support provided by teachers at school and parents at home. However, more effort is needed to improve teachers' abilities to support students psychologically. This discovery is a new step that requires more effort from parents and teachers. This study opens up initial knowledge about the importance of teacher and parent support for students' mathematics achievement. Further studies need to be conducted to support the results of this study. Limitations Although this study provides several implications and contributes to the mathematics education field, it has some limitations that can be a starting point and provide recommendations for further research. For instance, it uses a correlational design, which makes it prone to bias in data collection and analysis. Furthermore, it was only conducted at the secondary school level; hence, future studies need to test the developed models at other levels. Although the results need to be interpreted carefully, they cannot be generalized because the sample was limited to 531 secondary school students in West Java, Indonesia. Therefore, further studies need to be conducted using a larger scale of respondents to confirm the findings and enable comparison between countries. This study also recommends the development of new research models at other education levels. Similar and different effects on other subjects need to be further investigated. Several factors related to the stress and well-being model may also be added to the research model to investigate whether they relate to students' mathematics achievement.
Institutional Review Board Statement: Ethical approval for the study was obtained verbally from the Beijing Normal University (BNU) ethical review board. All research was performed in accordance with the relevant guidelines and regulations and with the Declaration of Helsinki. Informed Consent Statement: All participants were informed about the purpose of the study as well as the ways the data would be used and were required to consent to participate in the study. Data Availability Statement: The data that support the findings of this study are available on request from the corresponding author.
2022-12-07T16:17:44.481Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "f676def625a06639ddd22c3acacab1c4eb5f9a40", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/23/16247/pdf?version=1670203548", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "03b189adf053a6031aaa107b362cb96ef1c0774d", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
30853226
pes2o/s2orc
v3-fos-license
Screening limited switching performance of multilayer 2D semiconductor FETs: the case for SnS Gate-tunable p-type multilayer tin monosulfide (SnS) field-effect transistor (FET) devices with SnS thickness between 50 and 100 nm were fabricated and studied to understand their performance. The devices showed anisotropic in-plane conductance and room-temperature field-effect mobilities of ~5-10 cm^2/Vs. However, the devices showed appreciable OFF-state conductance and an ON-OFF ratio of ~10 at room temperature. The weak gate tuning behavior in the depletion regime of SnS devices is explained by the finite carrier screening length, which leads to a conductive surface layer of holes induced by intrinsic defects in SnS. Through etching, and through n-type surface doping by Cs2CO3 to reduce or compensate the non-gatable holes near the SnS flake's top surface, the devices gained an order of magnitude improvement in the ON-OFF ratio, and a hole Hall mobility of ~100 cm^2/Vs at room temperature is observed. This work suggests that in order to obtain effective switching and low OFF-state power consumption, two-dimensional (2D) semiconductor based depletion-mode FETs should limit their thickness to within the Debye screening length of carriers in the semiconductor. Introduction Since the work by Novoselov and coworkers on exfoliable 2D van der Waals (vdW) materials 1 and studies that demonstrated Dirac electrons in graphene, 1,2 scientists and engineers have conducted an enormous amount of research on the topic. A wide variety of 2D materials were explored and studied for their exotic electrical and optical properties, in particular transition metal dichalcogenides (TMDs) 3-7 such as MoS2, 8-10 MoSe2, 11,12 WS2 13,14 and WSe2. 15,16 In addition to TMDs, the pursuit of 2D semiconductors for high-performance electronics or optoelectronics has extended to non-TMDs such as phosphorene 17,18 and III-VI materials (e.g. InSe). 19,20 Among the numerous 2D semiconductors that have been explored, many high-quality n-type materials or device structures have been developed. Achieving high-performance p-type devices has been more challenging. Notable recent progress in this area includes black phosphorus devices 17,18 and a novel p-type 2D contact scheme for hole injection into TMD devices. 21 Tin monosulfide (SnS), a IV-VI compound 2D semiconductor, has a layered Pnma crystal structure. In the past, due to its 1.07 eV band gap and its high optical absorption coefficient above the band gap, 22,23 SnS has mainly been studied for solar cell applications. 23 More recently, the electronic and thermal properties of bulk SnS and SnSe have attracted increasing attention for thermoelectric applications. 24-26 So far, transport studies on nanoscale SnS and SnSe crystals are very limited. 27 Given that bulk SnS is p-type in nature (due to tin vacancies) and has a hole mobility of 90 cm^2/Vs, 28 SnS appears to be a good candidate for use in p-type 2D nanoelectronic devices. The anisotropic crystal structure of SnS also suggests that the transport properties of SnS could be strongly anisotropic within the 2D plane, similar to black phosphorus. 26 However, there have been no carrier transport or field-effect device studies on nanoscale SnS. In this work, nanoflakes were exfoliated from a bulk SnS single crystal, and p-type SnS FET devices were successfully fabricated and studied.
We found that due to the high intrinsic p-type doping and strong carrier screening in as-grown SnS, the field effect of the gate is not sufficient to tune the carrier transport throughout the whole thickness (ca. 50-100 nm) of the devices, which renders poor switching behavior. By means of surface doping to compensate the intrinsic p-type holes, or etching to reduce the sample thickness, devices with a hole Hall mobility of ~100 cm^2/Vs (at room temperature) and an improved ON-OFF ratio (~100 at room temperature) were realized. In addition to gaining a general insight into the role of carrier screening as the limiting factor in the switching performance of multilayer 2D semiconductor depletion-mode FETs, this work paves a way towards high-performance 2D electronic devices based on IV-VI semiconductors and the theoretically envisioned novel valleytronics based on single-layer SnS. Results and discussion The layered vdW crystal structure of SnS is illustrated in the scheme shown in Fig. 1a. The structure is highly anisotropic, both along the direction perpendicular to the 2D layer and within the two-atom-thick 2D layer. Within the 2D layer, the atoms are arranged in the armchair fashion along the z (or c) direction (as shown in Fig. 1a), while the y (or b) direction is the zigzag direction. The x-ray diffraction (XRD) data of the bulk SnS crystal used in this work are shown in Fig. 1b, confirming its single-crystalline quality. Fig. 1c shows Raman spectra from 50 nm and 85 nm SnS flakes. The Raman spectrum from each sample shows six well-defined peaks, among which four are Ag modes and two are B3g modes. The frequencies of these modes are in excellent agreement with those reported for bulk SnS, 30,31 confirming that our samples are in the orthorhombic phase. When exfoliated into small multilayer flakes, SnS samples often appear nearly rectangular in shape. To identify the anisotropic electronic properties of SnS, we distinguish the short and long directions of SnS nanoflake samples as the 'A' and 'B' directions, respectively, as noted in the picture of a representative device in Fig. 1d. The shorter flake length and higher mobility measured along the 'A'-direction (to be discussed later) suggest that the 'A'-direction corresponds to principal axis b while B corresponds to principal axis c. 26 Electron-beam lithography was used to fabricate four-probe devices in the van der Pauw configuration (Fig. 1d). Atomic force microscopy (AFM) was used to identify the thickness of the SnS flakes studied. An example of the height profile of a 64 nm thick SnS flake imaged by AFM is given in Fig. 1e. The sample was loaded into a Physical Properties Measurement System (PPMS) for electrical measurements. As demonstrated by the linear relation between the source-drain current Isd and the source-drain voltage Vsd in Fig. 2a, Ohmic contacts were obtained between the Ni source/drain contacts and the SnS nanoflake. Moreover, the four-probe van der Pauw geometry allowed comparison of the SnS nanoflake's intrinsic conductance along two orthogonal crystal directions without the influence of contact resistance. Fig. 2b presents the four-probe conductance of a SnS nanoflake measured along the A-direction or B-direction as a function of back-gate voltage Vg at T = 300 K. The decreasing trend of conductance vs. Vg indicates the p-type nature of the SnS flake. The conductance along the A-direction is generally found to be higher than that measured along the B-direction in our van der Pauw devices. For the device in Fig.
2b, the conductance ratio between these two directions is about 2.5 (Fig. 2b inset). This is consistent with the anisotropic lattice structure and electronic structure in the 2D plane of SnS. 26 It is noteworthy from Fig. 2b that the device could not be completely turned off at high positive gate voltages (i.e. there is a non-negligible conductance in the OFF state of the device at high Vg). Fig. 2c further shows the four-probe conductance G vs. Vg along the B direction at different temperatures. The G(Vg) data at different T show that raising the temperature increases the overall conductance of the device and that the device tends to saturate towards a lower OFF-state conductance at lower T, as indicated by the dashed horizontal lines in Fig. 2c and d. The strong influence of temperature on the device's ON-OFF ratio can be seen more clearly in Fig. 2d, which displays the G(Vg) data in semi-log scale. Fig. 2: a) Isd-Vsd plot of a 60 nm thick multilayer SnS device at 300 K. b) Four-probe conductance vs. Vg along two perpendicular directions in the 2D plane at 300 K (inset: conductance ratio between the two directions). c) Four-probe conductance along the B direction vs. Vg at different temperatures and d) data in c) shown in semi-log scale. At room temperature, the ON-OFF ratio is ~10, and lowering T to 250 K enhances the ON-OFF ratio to ~50. Fig. 2c and d also show a noticeable hysteresis effect in the G vs. Vg sweep loop, indicating the existence of charge trap states. 9,32 From the quasi-linear regime of the G(Vg) plots at negative Vg, the field-effect mobility μFE can be extracted using the relation μFE = f·gm/Cg, where gm is the transconductance (the slope of conductance vs. Vg), Cg is the gate capacitance, and f is the correction factor in the van der Pauw method. μFE can be estimated to be around 4-7 cm^2/Vs along the B direction, with no obvious T-dependence in the temperature range 250-350 K. The μFE along the higher-mobility A direction is around 10 cm^2/Vs. Compared to the ~90 cm^2/Vs hole mobility in bulk SnS, this is almost an order of magnitude smaller. Our gate-dependent carrier density measurement from the Hall effect (Fig. 3a) showed that the gate capacitance is significantly over-estimated in the parallel-plate capacitance model, due to the disregard of the trap-state capacitance. Thus the apparent ~4-10 cm^2/Vs significantly under-estimates the hole mobility in multilayer SnS. (The extracted Hall mobility for the device in Figs. 2 and 3 is ~30 cm^2/Vs along the high-mobility direction and ~10 cm^2/Vs along the low-mobility direction; see Electronic Supplementary Information Fig. S1.) The intriguing behavior of the gate being able to enhance the carriers but being incapable of depleting them was observed in many devices with thickness in the range of 50 to 100 nm. This characteristic is also quite common in other nanoscale semiconductor FET devices of similar thickness (e.g. InAs nanowire FETs 33) and poses a limitation on the performance of semiconductor nanoelectronics. In the following, we will show that this phenomenon originates from the simple fact that the carrier screening length LD in semiconductors is finite, such that semiconductor FETs with body thickness larger than LD have a surface 'dead layer' whose carrier conduction cannot be tuned by the gate. To comprehend the origin of carriers and the underlying gate tuning mechanism of the transport properties of the device, Hall measurements at different temperatures and back-gate voltages were conducted.
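Before turning to the Hall data, the field-effect mobility extraction just described can be illustrated with the short Python sketch below. The G(Vg) values, the oxide stack (300 nm SiO2), and the unit van der Pauw correction factor are assumptions for illustration, not parameters reported for these devices.

```python
import numpy as np

# Hypothetical four-probe sheet conductance G(Vg) (square geometry, in S)
# in the quasi-linear accumulation regime at negative Vg.
vg = np.array([-60.0, -50.0, -40.0, -30.0, -20.0])       # V
g = np.array([8.0e-6, 7.2e-6, 6.4e-6, 5.6e-6, 4.8e-6])   # S

# Transconductance g_m = dG/dVg from a linear fit (negative for p-type).
gm = np.polyfit(vg, g, 1)[0]   # S/V

# Parallel-plate gate capacitance per area for an assumed 300 nm SiO2 gate.
eps0, eps_ox, t_ox = 8.854e-14, 3.9, 300e-7   # F/cm, -, cm
cg = eps0 * eps_ox / t_ox                      # ~1.15e-8 F/cm^2

f = 1.0  # van der Pauw correction factor (geometry dependent; taken as 1)
mu_fe = f * abs(gm) / cg
print(f"mu_FE ~ {mu_fe:.1f} cm^2/Vs")  # a few cm^2/Vs, as quoted in the text
```

Because the parallel-plate Cg ignores the trap-state contribution, this procedure over-estimates Cg and hence, as noted above, under-estimates the true hole mobility.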
By measuring the Hall resistance Rxy at various magnetic fields B, we observed a linear relation between Rxy and B, from whose slope the Hall coefficient RH can be calculated. The hole carrier density can then be extracted through the relation p = 1/(e·RH), with e being the fundamental electron charge. From the hole carrier density p data shown in Fig. 3a, we see that p increases linearly as Vg is reduced towards the negative direction, but it saturates towards a residual value at high positive Vg, in a similar way to the G(Vg) data in Fig. 2. This suggests that there is a limit on the number of holes that can be removed by applying positive Vg in these SnS flakes. Moreover, for a given Vg, there are more carriers at higher temperature, suggesting the thermally excited nature of the carriers. Fig. 3b shows the temperature dependence of the hole density in an Arrhenius plot. The plot highlights that the hole carrier concentration available for electrical transport follows p ∝ exp(−Δ/kBT). Similar thermally activated behavior is seen in the Arrhenius plot for the temperature-dependent conductance in Fig. 3c. The activation energies Δ extracted from fitting the Arrhenius plots of p(T) and G(T) along both the A and B directions of the SnS flake are displayed in Fig. 3d. We see a good agreement between the values of Δ fitted from the thermal activation model for the carrier density and conductance data: the activation energy increases with the gate voltage in the negative Vg regime and stays relatively constant in the positive Vg regime. This trend is again correlated with the gate-dependent conductance and hole density in Fig. 2 and Fig. 3a. The gate-tuned thermally activated transport and the residual conductance in the high-Vg regime in these SnS devices can be naturally explained by a simple model in which the holes originate from acceptors with ionization energy Ea ~ 0.22 eV (Fig. 4a), combined with the carrier screening effect in a semiconductor whose thickness exceeds the carrier screening length (Fig. 4b). Our estimate of the acceptor ionization energy, ~0.22 eV, is based on the activation energy for the carrier density and conductance at Vg = 0 in Fig. 3d. In the negative gate-bias regime, more holes are induced into the bottom surface of the SnS flake. This accumulation of holes enhances the conductivity of a layer near the bottom of the flake, and the measured conductance is dominated by this bottom layer, in which the Fermi energy EF is shifted closer to the valence band. Thus a reduced activation energy and increased conductance are observed. On the other hand, when a positive Vg is applied to deplete the carriers, holes near the bottom surface are depleted first, and the conductance of the sample gradually becomes dominated by the holes near the top surface, where p and EF remain constant. Therefore, the activation energy appears to be pinned at ~0.22 eV by the carriers in the top surface layer. We next discuss the OFF-state performance of depletion-mode FET devices made from 2D semiconductors with appreciable thickness and doping, using the multilayer SnS devices in this work as an example. At small Vg, the depth of the bottom surface layer tuned by the gate is approximately the Debye screening length LD = [ε·kB·T/(e^2·p)]^(1/2), with ε the dielectric constant of the semiconductor and kB the Boltzmann constant (Fig. 4b). At very high Vg towards depletion, one reaches the maximum depletion depth when EF at the bottom surface is tuned to the middle of the bandgap, at which point the depletion thickness is WD = 2·[ε·kB·T·ln(Na/pi)/(e^2·Na)]^(1/2), with pi and Na the intrinsic carrier concentration and acceptor density [34].
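A back-of-the-envelope evaluation of these two length scales is sketched below. The dielectric constant and intrinsic carrier density of SnS are assumed values chosen only for illustration (the paper does not quote them), the acceptor density follows the Hall-based estimate in the text, and the outputs should be read as order-of-magnitude only.

```python
import math

# Physical constants (SI).
e, kB, eps0 = 1.602e-19, 1.381e-23, 8.854e-12

T = 300.0               # K
eps_r = 10.0            # relative permittivity of SnS -- an assumed value
Na = 1e17 * 1e6         # acceptor density from the Hall data, in m^-3
p_i = 1e13 * 1e6        # intrinsic carrier density -- an assumed value, m^-3

eps = eps_r * eps0

# Debye screening length: L_D = sqrt(eps * kB * T / (e^2 * Na))
LD = math.sqrt(eps * kB * T / (e ** 2 * Na))

# Maximum depletion width when E_F at the gated surface reaches midgap:
# W_D = 2 * sqrt(eps * kB * T * ln(Na / p_i) / (e^2 * Na))
WD = 2.0 * math.sqrt(eps * kB * T * math.log(Na / p_i) / (e ** 2 * Na))

print(f"L_D ~ {LD * 1e9:.0f} nm, W_D ~ {WD * 1e9:.0f} nm")
# For these assumptions L_D is ~10 nm and W_D is several tens of nm, i.e.
# below or comparable to the 50-100 nm flake thickness, so the gate cannot
# deplete the full film; both numbers are sensitive to eps_r and p_i.
```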
Based on the hole density at Vg = 0 (Fig. 3a) and the flake thickness, we estimate the ionized acceptor density Na to be about 10^17/cm^3 in our SnS at room temperature. This leads to a maximum depletion width WD of a few tens of nm at 300 K. 34 Therefore, when a large gate voltage is used to deplete carriers in doped 2D semiconductors with thickness h > WD, the conduction through the sample is shunted by a 'dead layer' near the top surface, and the device remains conductive in the OFF state. The thickness of this 'dead layer' is approximately h − WD, and the residual OFF-state conductance in the depletion regime is G_OFF = e·p·μh·(h − WD)·W/L (Eq. 1), where L, W and h are the length, width and thickness of the sample, p is the hole density, and μh is the hole (or majority-carrier) mobility. Based on these discussions and the relatively large thickness of the SnS employed here (50-100 nm), we see that the poor gate tuning performance in the positive gate-voltage regime is caused by the shunt effect from holes in the surface dead layer. Examining Eq. 1, one sees that 2D semiconductor samples with heavier doping and larger thickness h are expected to have a larger residual OFF-state conductance in the depletion regime and a poor ON-OFF ratio. SnS is known to be p-type due to rich intrinsic defects. 22,23,28 Photoluminescence (PL) measurements were performed to gain more insight into the impurity states. Fig. 4c displays PL spectra from the 85 nm SnS sample at variable temperatures. A broad luminescence band centered at ~1.7 eV becomes better defined when the temperature is lowered and is clearly seen at 200 K. As noted in a prior PL study on SnS films, 35 this PL band is likely from radiative defect states, attesting to the existence of rich defect states in the SnS studied here. Although fabricating SnS devices with thickness less than 50 nm turned out to be difficult, we performed a study to investigate the SnS FET device's performance upon successive thinning of the sample by dry etching. As displayed in Fig. 4d, after Ar ion etching at 300 W with an Ar flow of 30 sccm at 0.1 Torr for 180 sec, the OFF-state conductance was greatly suppressed and a higher ON-OFF ratio (red curve vs. black curve in Fig. 4d) was obtained. Unfortunately, further ion etching over a longer period of time likely created severe damage to the sample, resulting in the sample's mobility being greatly reduced, and the ON-OFF ratio could not be further improved (green curve in Fig. 4d). We found that compensating the acceptors with n-type doping on the surface of the SnS flake is an effective way to suppress the conductance of the surface dead layer and improve the ON-OFF ratio of the device. Previously, Cs2CO3 was used to induce n-type doping in MoS2 and black phosphorus. 36,37 We compare the gate-modulated conductance of a SnS device before and after evaporating 5 nm of Cs2CO3 on its surface. As demonstrated in Fig. 5a, at least an order of magnitude increase in the ON-OFF ratio was achieved in the same device after surface n-type doping with Cs2CO3. Hall effect measurements showed that the Cs2CO3-doped sample has a hole density about ten times lower than the pristine samples (10^10-4×10^11/cm^2 at room temperature in Fig. 5b vs. 10^11-10^12/cm^2 in Fig. 3a). The sample also demonstrated a high hole mobility (~100 cm^2/Vs at room temperature), which is comparable to the bulk value (Fig. 5c).
It is also noteworthy that, with the n-type Cs2CO3 doping compensating the p-type acceptors from intrinsic defects in SnS, not only was the hole density suppressed, but there was also an increase in the transconductance and ON-state current (Fig. 5a). This is likely due to the reduced number of charged impurities (and thus less carrier scattering and better mobility) after compensation. Note that the typical ON-OFF ratio of our SnS devices after n-type surface doping reached 100, which is still significantly lower than that of TMD-based (e.g. MoS2) FETs. We believe that more elaborate control of the doping and the use of thinner SnS samples will further improve the switching performance of SnS-based 2D FET devices. Experimental Bulk SnS single crystal growth SnS single crystals were prepared from Sn and S compounds of 5N (99.999%) purity. The synthesis of the compounds was carried out in conical quartz ampoules evacuated to 10^-4 Pa. The homogenization of the batches and synthesis of the compounds were carried out in a horizontal furnace at 900 °C for 48 h. The crystals were grown by a vertical Bridgman method. Before pulling, the ampoules containing the melt were heat-treated at 900 °C for 24 h, and when the melt filled the tip of the ampoule, the ampoules were lowered through the temperature gradient at a rate of 0.1 mm/h. The amount of single-crystalline area was successfully increased by a reduction of the growth velocity from 5 mm/h to 0.5 mm/h. Approximately 2/3 of the whole sample volume is single-crystalline, due to the stress still present during the phase transition. The obtained SnS single crystals were 3 cm long and 1.2 cm in diameter. They exhibited good cleavability in the direction perpendicular to the trigonal axis c, i.e. along the (001) plane. This preparation gave homogeneous single crystals with well-developed cleavage faces. These faces, being perpendicular to the c-axis, were always parallel with the direction of pulling, i.e. with the ampoule axis. Multilayer SnS Device Fabrication The bulk SnS was then exfoliated onto degenerately doped Si substrates with silicon oxide or silicon nitride on the surface. An electron-beam lithography process with a standard PMMA/MMA copolymer bilayer resist and metal deposition of Ni were subsequently used to create contacts for four-probe electrical transport characterization. Each sample is a single-crystal flake with an atomically flat surface and a thickness generally ranging from 40 to 120 nm. Raman and PL characterization Raman and photoluminescence (PL) measurements were conducted using a Horiba Labram HR Raman microscope system. A 532 nm laser was used. The laser was focused on the sample surface using a 50× objective lens (spot diameter of about 2 µm). The instrument resolution was 0.5 cm^-1 for Raman measurements and 2 cm^-1 for PL measurements. Samples were mounted in an optical cryostat for variable-temperature measurements. Conclusions In conclusion, p-type multilayer SnS FET devices were fabricated and studied. To obtain the best ON-OFF ratio, 2D semiconductor FETs should limit their thickness to within the Debye screening length (around 10-50 nm), beyond which there is a residual conductance from the top surface layer and a deteriorated ON-OFF ratio is expected. A similar conclusion regarding the deteriorated ON-OFF ratio with increasing thickness was reached in a recent study on multilayer MoS2 FETs. 38
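As a rough illustration of Eq. 1, the sketch below estimates the dead-layer shunt conductance for a 60 nm flake. The depletion width, aspect ratio, and mobility value are assumptions loosely consistent with the numbers quoted in the text, not measured device parameters.

```python
# Residual dead-layer conductance from Eq. 1:
# G_off = e * p * mu_h * (h - W_D) * W / L, with assumed device numbers.
e = 1.602e-19            # C
p = 1e17                 # cm^-3, hole density from the Hall estimate
mu_h = 30.0              # cm^2/Vs, Hall mobility, high-mobility direction
W_over_L = 1.0           # aspect ratio, assumed ~1 for a van der Pauw flake

h = 60e-7                # cm, flake thickness (60 nm)
WD = 40e-7               # cm, assumed maximum depletion width (40 nm)

g_off = e * p * mu_h * (h - WD) * W_over_L    # S
g_on = e * p * mu_h * h * W_over_L            # S, crude flat-band estimate
print(f"G_off ~ {g_off:.1e} S, geometric ON/OFF ~ {g_on / g_off:.1f}")
# The undepleted 20 nm "dead layer" keeps the device conductive in the OFF
# state; the purely geometric ON/OFF ratio h/(h - W_D) is only ~3, and gate-
# induced accumulation is what raises the measured ratio toward ~10.
```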
Anisotropic Hall mobility of SnS multilayer devices

Anisotropic conductance was observed in SnS nanoflake van der Pauw devices, suggesting that the holes have an anisotropic transport mobility. The direction-dependent Hall mobility for the 60 nm thick SnS device discussed in Figures 2 and 3 of the main manuscript is analyzed and displayed in Figure S1.
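As a reminder of how per-axis Hall mobilities like those in Figure S1 are obtained, the arithmetic is simply mu_H = 1/(q · n_s · R_sheet), evaluated separately along each crystal axis. The sheet densities and resistances below are placeholders for illustration, not the measured values.

```python
q = 1.602e-19                 # elementary charge (C)

def hall_mobility(n_sheet_cm2: float, r_sheet_ohm_sq: float) -> float:
    """Hall mobility (cm^2/Vs) from sheet carrier density and sheet resistance."""
    n_s = n_sheet_cm2 * 1e4                          # cm^-2 -> m^-2
    return 1.0 / (q * n_s * r_sheet_ohm_sq) * 1e4    # m^2/Vs -> cm^2/Vs

# Hypothetical per-axis sheet resistances for an anisotropic flake
print(f"{hall_mobility(3e11, 2.5e5):.0f} cm^2/Vs")   # ~83 along one axis
print(f"{hall_mobility(3e11, 5.0e5):.0f} cm^2/Vs")   # ~42 along the other
```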
The Coatomer-interacting Protein Dsl1p Is Required for Golgi-to-Endoplasmic Reticulum Retrieval in Yeast*

Sec22p is an endoplasmic reticulum (ER)-Golgi v-SNARE protein whose retrieval from the Golgi compartment to the endoplasmic reticulum (ER) is mediated by COPI vesicles. Whether Sec22p exhibits its primary role at the ER or the Golgi apparatus is still a matter of debate. To determine the role of Sec22p in intracellular transport more precisely, we performed a synthetic lethality screen. We isolated mutant yeast strains in which SEC22 gene function, which in a wild type strain background is non-essential for cell viability, has become essential. In this way a novel temperature-sensitive mutant allele, dsl1-22, of the essential gene DSL1 was obtained. The dsl1-22 mutation causes severe defects in Golgi-to-ER retrieval of ER-resident SNARE proteins and integral membrane proteins harboring a C-terminal KKXX retrieval motif, as well as of the soluble ER protein BiP/Kar2p, which utilizes the HDEL receptor, Erd2p, for its recycling to the ER. DSL1 interacts genetically with mutations that affect components of the Golgi-to-ER recycling machinery, namely sec20-1, tip20-5, and COPI-encoding genes. Furthermore, we demonstrate that Dsl1p is a peripheral membrane protein, which in vitro specifically binds to coatomer, the major component of the protein coat of COPI vesicles.

Membrane-bound compartments in eukaryotic cells can fuse directly, as shown for the endoplasmic reticulum (ER)1 and mitotic Golgi fragments as well as endosomal and lysosomal compartments (homotypic fusion; see Ref. 1). However, vectorial transport between distinct compartments mainly involves small coated vesicles whose formation from the donor membrane is mediated by proteinaceous coats, either COPI, COPII, or clathrin. After uncoating, vesicles fuse selectively with an acceptor membrane (heterotypic fusion; see Ref. 2). Both homotypic and heterotypic fusion events rely on specific attachment reactions to guarantee that only appropriate membranes can mix. The membrane attachment itself consists of two steps, tethering and docking, involving different sets of proteins (3,4). Tethering factors are peripherally membrane-associated protein complexes consisting of up to 10 different subunits, which share little sequence similarity. The subsequent docking stage involves specific sets of membrane-anchored proteins, so-called SNARE proteins (SNARE is soluble NSF (for N-ethylmaleimide-sensitive fusion protein) attachment protein receptor) (5-7). SNAREs are inserted into the membrane either by a C-terminal transmembrane domain or through lipid moieties attached to C-terminal cysteine residues. In contrast to the tethering factors, all known SNARE proteins are members of one of three protein families: the syntaxins, the synaptobrevins or VAMPs, and the SNAP-25 family members. To induce membrane fusion, SNARE proteins from apposed membranes must interact in trans. The formation of a stable four-helix bundle may generate enough energy to promote mixing of the lipid bilayers (8-10). Lipid mixing experiments using SNARE complexes reconstituted into lipid bilayer vesicles indicated that only cognate SNARE combinations are able to induce fusion (11). However, SNARE proteins are rather promiscuous when the formation of the tight SDS- or heat-resistant SNARE complexes is analyzed (12,13). Moreover, SNARE proteins can be part of more than one SNARE complex in vivo (14), and some SNARE proteins can functionally replace each other (15,16).
In vitro the synaptobrevin/VAMP homologs Snc1p and Snc2p in yeast can be replaced by two other members of the synaptobrevin family, the ER-Golgi SNARE Sec22p and the vacuolar SNARE Nyv1p (11). However, these SNAREs are unable to replace Snc1/2p in vivo (17), probably because they are retained in their specific compartments. Thus, the targeting of SNAREs to the right compartment is one way to increase the specificity of intracellular membrane attachment/fusion events. We analyzed previously (18) the targeting of the ER-to-Golgi SNARE Sec22p and showed that the correct targeting of Sec22p involves its recycling from the Golgi to the ER via COPI-coated vesicles. In this respect, Sec22p as well as Bos1p (19) behave like ER-resident proteins that carry a KKXX ER-retrieval signal (20). The coat of COPI vesicles in mammalian cells and yeast consists of seven subunits (α-, β-, β′-, γ-, δ-, ε-, and ζ-COP) and the small GTPase, ARF1 (21). The observations made by Letourneur et al. (20) and Cosson et al. (22) that KKXX-tagged proteins require COPI components for retrieval from the Golgi to the ER provided first evidence that COPI vesicles mediate this retrograde transport. The same is true not only for Sec22p but also for other yeast proteins that recycle from the Golgi to the ER, for example, Emp47p, a Golgi lectin-like protein; Erd2p, the HDEL receptor; Sed5p, a Golgi-localized syntaxin homolog; and Mnn1p, a glycosyltransferase (23-26). How Sec22p is sorted into COPI vesicles is currently unknown. Moreover, the function of Sec22p is not entirely understood. The SEC22 gene was first isolated by us as a multicopy suppressor of defects in the small GTPase Ypt1p involved in ER-to-Golgi transport (named SLY2; see Ref. 27). Later SLY2 was found to be identical to SEC22 (28), for which conditional mutant alleles had been identified by Novick et al. (29). Like several other SNARE proteins, Sec22p can be a component of more than just one SNARE complex. Its physical interaction with the SNARE proteins Sed5p, Bos1p, Bet1p, and other Golgi SNARE proteins argues for a role in anterograde traffic from the ER to the Golgi (6,30,31). Sec22p also co-precipitates with the ER proteins Ufe1p and Sec20p that function in retrograde Golgi-ER transport (24,32,33). sec22 mutants show a defect in forward traffic (34-37). However, this defect, as with many other mutants affected in retrograde transport, could be a secondary effect. In vitro assays performed with permeabilized mutant cells showed that the sec22-3 mutation does not slow down forward transport but does inhibit retrograde transport (38,39). Membrane fusion reconstituted with liposomes containing the ER-to-Golgi SNARE Bet1p requires the presence of Sec22p along with Sed5p and Bos1p on the opposing membranes to drive fusion (11,40). In mammalian cells the Sec22p homolog sec22b co-precipitates with syntaxin 5 (≈Sed5p), rbet1 (≈Bet1p), and membrin (≈Bos1p) (41) as well as syntaxin 18, which may be functionally equivalent to Ufe1p (42). Therefore, the dual function of Sec22p may be conserved throughout evolution. To obtain additional clues to the function of Sec22p, we used a genetic approach. SEC22 is not essential for cell viability (27). We tried to find mutant yeast strains in which SEC22 became essential. A new allele of yeast ORF YNL258c showed synthetic lethality with sec22Δ. Mutants in YNL258c were recently shown to be dependent on a dominant allele of SLY1, a suppressor of many ER-to-Golgi transport defects, and the gene was named DSL1 (43).
Evidence was provided for a function of Dsl1p in ER-to-Golgi forward transport. Genetic interaction of some dsl1 mutants with the γ-COP-encoding SEC21 gene also suggested a role of Dsl1p in retrograde Golgi-to-ER traffic (43). We show that a new allele of DSL1, dsl1-22, isolated in our screen indeed affects Golgi-to-ER retrieval of several proteins with only slight effects on forward transport. dsl1-22 interacts genetically with factors required for retrograde traffic, and Dsl1p binds coatomer. Taken together, our data provide strong evidence for a direct role of Dsl1p in Golgi-to-ER traffic.

EXPERIMENTAL PROCEDURES

Yeast Strains, Genetic Techniques, and Plasmids-The Saccharomyces cerevisiae strains used are listed in Table I. Cells were grown in yeast extract/peptone/dextrose or synthetic minimal medium containing galactose (2%) or glucose (2%) as carbon sources and supplemented as necessary with 20 mg/liter tryptophan, histidine, adenine, uracil or 30 mg/liter leucine or lysine. To enhance the visualization of sectoring colonies, plates with a low adenine concentration (10 mg/liter adenine) were prepared. 5-FOA plates were prepared as synthetic minimal medium containing 0.1% 5-FOA. Yeast transformations were performed as described previously (44). Standard techniques were used for mating of haploid strains, complementation analysis, sporulation, and the analysis of tetrads (45). The assay to detect retention defects using Ste2-Wbp1p was described previously (20,46). The analysis of synthetic lethal effects between the dsl1-22 mutation and other ER-Golgi defects was performed with strains derived from the original mutant by three crosses to wild type strains or a dsl1-22-myc::KanMX strain derived from the original transformant by two crosses to wild type strains. When possible, tetrad analysis was performed 2 or 3 days after placing diploid cells on potassium acetate plates. The viability of spores varied considerably. Therefore, the genotype of viable spores was determined by crosses to tester strains (complementation assays, Table II).

Synthetic Lethality Screen-Mutants synthetically lethal with sec22Δ were isolated using the ade2/ade8, red/white sectoring system (48). The SLA28-6C and SUA1-12D strains were red after transformation with pHDS228 on selective plates but gave white sectors under non-selective conditions due to plasmid loss. After mutagenesis with ethyl methane sulfonate, 15 non-sectoring colonies could be identified among 200,000 screened. 10 clones were not able to lose the plasmid (SEC22, URA3) on 5-FOA plates, and 5 of these did not grow at 37°C. Three of these allowed the displacement of SEC22 by sec22-3. They were transformed with a LEU2/CEN-based yeast genomic library, and one strain showed transformants that sectored and did not contain SEC22. The complementing gene was identified by sequencing the ends of the insert and expression of the single ORF.

Protein Extraction and Immunoblotting-Western blotting analysis was performed as described by Boehm et al. (53). Aliquots (1 A600 = 1.7 × 10^7 cells) of transformed cells were lysed in 2 M NaOH, 5% mercaptoethanol, and proteins were precipitated with 10% trichloroacetic acid, neutralized with 1.5 M Tris base, and dissolved in SDS sample buffer. Proteins were resolved on 12% SDS-PAGE.
Purification of Recombinant Proteins and Affinity Binding Assay-E. coli and S. cerevisiae strains expressing GST fusion proteins were lysed, and proteins were solubilized in lysis buffer (20 mM Hepes, pH 6.8, 150 mM KOAc, 5 mM Mg(OAc)2, 1 mM dithiothreitol, 1% Triton X-100, protease inhibitor mix). GST fusion proteins were immobilized on glutathione-Sepharose 4B and washed 5 times with 10 volumes of lysis buffer. Proteins bound to GST fusion proteins expressed in yeast were separated by SDS-PAGE and analyzed by immunoblotting. E. coli proteins immobilized on glutathione-Sepharose 4B were incubated at 4°C for 2 h with the 100,000 × g supernatant of yeast cell lysate. The beads were washed 5 times, and proteins were separated by SDS-PAGE followed by immunoblot analysis.

Subcellular and Sucrose Gradient Fractionation-Yeast cells were harvested at mid-logarithmic phase. The cell pellet was washed twice with water and once with B88 (20 mM Hepes, pH 6.8, 250 mM sorbitol, 150 mM KOAc, 5 mM Mg(OAc)2), resuspended in a minimal volume of B88 containing EDTA-free protease inhibitor mix (Roche Molecular Biochemicals), and pipetted into liquid nitrogen. Cells were ground up in a mortar. The cell powder was resolved in B88 (supplemented with EDTA-free protease inhibitor mix) and centrifuged twice at 500 × g for 5 min to remove cell debris, and the cleared lysate was centrifuged at 10,000 × g for 15 min to obtain the P10 pellet. The S10 fraction was then subjected to centrifugation at 100,000 × g at 4°C for 1 h to obtain P100 and S100. To investigate the membrane localization of Dsl1p, the supernatant of the cell lysate after a 500 × g centrifugation was divided into different portions that were treated for 30 min on ice with either 5 M urea, 1% Triton X-100, or 1 M NaCl. The 500 × g lysate was also subjected to sucrose density gradient centrifugation. For fractionation experiments, lysates were loaded on sucrose density gradients (51) and spun at 4°C in a Beckman SW40 rotor at 37,000 rpm for 2.5 h. 1-ml fractions were taken, and the last fraction was adjusted to 1 ml with B88. Each fraction was mixed with 1 ml of SDS-PAGE sample buffer (8 M urea, 50 mM Tris-HCl, pH 8.0, 2% SDS, 0.1 mg/ml bromphenol blue) and incubated at 50°C for 10 min prior to analysis by SDS-PAGE and immunoblotting.

Protein Labeling, Immunoprecipitation, and Invertase Assay-For detection of CPY processing, cells were shifted to 37°C for the indicated times, pulse-labeled for 5 min with Tran35S-label (ICN), and chased for 30 min. The labeled proteins were immunoprecipitated using specific antibodies and separated by SDS-PAGE. After incubating the gel with Amplify (Amersham Pharmacia Biotech) for 45 min, the proteins were detected by exposing the gels to X-Omat AR film (Eastman Kodak Co.) at −80°C. Invertase activity staining was carried out as described previously (54).

Fluorescence and Electron Microscopy-Indirect immunofluorescence was performed as described by Schröder et al. (51) using rabbit polyclonal anti-Kar2p and monoclonal mouse anti-c-Myc epitope (9E10) antibodies. Cy2-conjugated goat anti-rabbit or anti-mouse F(ab′)2 fragment (Jackson ImmunoResearch) served as secondary antibody. DNA was stained with 4′,6-diamidino-2-phenylindole (DAPI). Cells expressing GFP fusion proteins were grown in SD medium at 25°C to mid-log phase and placed onto a slide. A coverslip was added, and cells were examined immediately. DAPI staining was achieved after fixing cells in methanol at −20°C for 10 min, washing with acetone at −20°C, and washing three times with ice-cold PBS, pH 7.4.
Confocal images were obtained with a TSC SP1 confocal laser-scanning microscope (Leica). For electron microscopy, yeast cells at mid-logarithmic phase were fixed and stained with permanganate to enhance visualization of membrane structures (54).

Identification of Mutants for Which SEC22 Is Essential-To find proteins that can substitute for Sec22p or to identify factors that prevent these proteins from functioning normally, we performed a synthetic lethality screen. Mutants inviable in the absence of SEC22 were isolated by using a colony sectoring assay (48). sec22Δ mutants that carry a functional SEC22 gene on the centromeric plasmid pHDS228 were mutagenized. In addition to SEC22 this plasmid contains the following two markers required for pyrimidine and purine biosynthesis: URA3 as a selectable marker and ADE8, which can serve as a color marker in yeast strains carrying mutated versions of the ADE2 and ADE8 genes on the chromosomes. The ade8 mutation is epistatic to ade2 and prevents the formation of the red color typical for ade2 mutants. Therefore, cells expressing ADE8 from a plasmid are red, whereas those that lost the plasmid turn white. As expected, on rich media sec22Δ, ade8, ade2, ura3 cells containing pHDS228 could form white sectors since neither SEC22, ADE8, nor URA3 is essential. After mutagenesis, we screened for non-sectoring colonies (for details see "Experimental Procedures"). To confirm that the non-sectoring phenotype in fact reflects a positive selection for the presence of the SEC22-carrying plasmid, all mutants were tested for their ability to lose the second plasmid-encoded marker, URA3. This test makes use of the drug 5-FOA (5-fluoroorotic acid), which is toxic to Ura+ cells (55). In fact, most of the non-sectoring mutants were sensitive to 5-FOA, and only these mutants were analyzed further. In addition to these two phenotypes, five mutants obtained in two independent screens were also temperature-sensitive for growth. Genetic analysis showed that the mutations are recessive and that the inability to lose the SEC22 gene is tightly linked to the growth defect at 37°C (see Fig. 1A). They belong to three different complementation groups that we called "LSD1, -2, and -3" (lethal with SEC22 deletion). We tried to clone the "LSD" genes from single copy or multicopy genomic libraries containing LEU2 as a selectable marker (27). To obtain complementing plasmids, we selected transformants on plates lacking leucine and looked for colonies with white sectors. The formation of white sectors indicated that the cells had again acquired the ability to lose the SEC22-carrying plasmid pHDS228. Those transformants, which had simply received an additional copy of SEC22 from the library, were identified by PCR and discarded. So far our attempts to isolate complementing plasmids from a single copy library were successful only for the "lsd1-1" mutant. The library plasmid that we obtained harbored three intact open reading frames. Sequencing and subcloning showed that the presence of YNL258c alone was sufficient to suppress both the non-sectoring phenotype and the temperature sensitivity of the lsd1-1 mutant. The open reading frame YNL258c, located on chromosome XIV, encodes an essential protein with a predicted molecular mass of 88 kDa with no similarity to other proteins in databases (56). The following observations confirmed that defects in YNL258c result in a SEC22-dependent phenotype as well as a conditional lethal phenotype.
Cloning and sequencing of the lsd1-1 mutant allele revealed the presence of a stop codon at position 2173 of the 2265-base pair long reading frame. This would lead to a gene product which is 30 residues shorter than the putative wild type protein. By using a PCR-based method described by De Antoni and Gallwitz (47), we replaced YNL258c either by a full-length or a shortened version, which were fused to sequences encoding a His6 epitope followed by two copies of a c-Myc tag. The KanMX cassette inserted downstream of the c-Myc-tagged YNL258c sequences served as a selectable marker that allows the transformants to grow in the presence of geneticin (G418). Temperature-sensitive transformants were obtained only when the C-terminally truncated version of YNL258c was introduced into wild type cells. Western blotting analysis showed that Ts− transformants in fact encode a shorter c-Myc-tagged YNL258c protein than cells expressing the full-length version (data not shown). Tetrad analysis also confirmed that these mutants need SEC22 for growth (see below). The same results were obtained when N-terminally tagged versions of YNL258c and its mutant variant expressed from a centromeric vector were used to complement the deletion of YNL258c. In summary, these data established that the deletion of 30 C-terminal triplets from the ORF YNL258c results in a conditional lethal phenotype. In cells carrying this mutation the otherwise non-essential SEC22 gene is rendered essential.

FIG. 1. A, sec22::HIS3/sec22::HIS3, DSL1/dsl1-22, ade2/ade2, ade8/ade8 heterozygous diploid cells carrying ADE8 and SEC22 on a plasmid (pHDS228) were sporulated; spores were separated, and the segregants were incubated on rich medium. Tetrads were replica-plated to low adenine plates and incubated either at 25 or 37°C. All red colonies that are not able to lose the plasmid pHDS228 are temperature-sensitive, showing that both defects are closely linked. B, growth of wild type (MSUC-3B) and dsl1-22 (YUA1-9C) cells was monitored by measuring the cell density (A600 nm) during incubation at 25°C. After 5 h of incubation at 25°C an aliquot of each sample was shifted to 37°C for an additional 5 h.

While this work was in progress, Waters and co-workers (43) showed that mutations in YNL258c can make cells dependent on the SLY1-20 mutation. The mutants identified were accordingly named dsl1-1 to dsl1-7 (dependent on SLY1-20). SLY1-20 is a dominant mutation which suppresses the defects in several yeast mutants affected in ER-to-Golgi transport (27,37,57-60). Accordingly, the mutant we obtained was renamed dsl1-22. Consistent with the results obtained by VanRheenen et al. (43), the temperature sensitivity of dsl1-22 is suppressed by the SLY1-20 mutation on a single copy plasmid (data not shown).

Genetic Interaction of dsl1-22 with Other Genes Whose Products Act in ER-Golgi Anterograde and Retrograde Transport-In the process of cloning sequences able to complement the dsl1-22 mutation, we also obtained clones from multicopy libraries. Among these clones were plasmids containing the YKT6 gene. YKT6 encodes a lipid-anchored member of the synaptobrevin family of SNARE proteins (61). This prompted us to test whether the overexpression of other SNARE-encoding genes has similar effects. We found that, similar to the results obtained with YKT6, overexpression of SED5 allowed dsl1-22 mutants to tolerate the loss of SEC22. However, the overexpression of neither YKT6 nor SED5 was able to suppress the Ts− phenotype of dsl1-22 mutants.
Overexpression of the other SNARE-encoding genes specific for ER-Golgi transport, BET1, BOS1 or UFE1, was unable to suppress the non-sectoring phenotype of dsl1-22 mutants. The approach which led to the isolation of dsl1-22 was based on the synthetic lethality of the dsl1-22 mutation when combined with the sec22 deletion. Therefore, we also addressed the question whether dsl1-22 is synthetically lethal with other defects in ER-to-Golgi transport. For this and all subsequent assays we used dsl1-22 mutants expressing SEC22 from its normal locus on chromosome XII: (i) a strain obtained by backcrossing cells derived from the original mutant (Fig. 1A) twice to wild type cells (SEC22), and (ii) a mutant in which we had introduced the dsl1-22-myc construct at the YNL258c locus (see above). The analysis of tetrads was greatly facilitated by the presence of the KanMX cassette closely linked to the dsl1-22-myc allele, which thus allowed us to identify the dsl1-22 mutants by their resistance to G418. Viable double mutants were obtained when we combined the dsl1-22 defect with sec23-1, sec22-3, bet1-1, sed5-1, bos1 (sec31-1), and sec27-1 mutations. The first mutation leads to a block in anterograde ER-to-Golgi transport due to a defect in COPII assembly (62); bet1-1, sec22-3, and sed5-1 are mutations that affect genes encoding SNARE proteins involved in ER-Golgi transport, whereas SEC27 encodes a COPI component (63). The number of viable double mutants obtained differed to a great extent, as determined by complementation assays and by analyzing their resistance to G418 (for details see "Experimental Procedures"). The observation that sec22-3, dsl1-22 double mutants are viable whereas dsl1-22 mutants are inviable in the absence of SEC22 was confirmed by plasmid shuffling experiments using a dsl1-22 mutant and SEC22- or sec22-3-containing plasmids (data not shown). This finding illustrates that this assay is specific for certain alleles. Therefore, missing or weak genetic interactions mentioned above do not rule out that the gene products perform a related function. This may be true at least for BOS1 and DSL1 since all the bos1 (sec31-1), dsl1-22 double mutants that we obtained formed very small colonies. No double mutants were obtained when diploids heterozygous for the dsl1-22 and the sec22Δ, sec21-1, ret1-1, ret2-1, sly1ts, sec20-1, or tip20-5 mutations were subjected to tetrad analysis. The sec21-1 (γ-COP), ret1-1 (α-COP), ret2-1 (δ-COP), sec20-1, and tip20-5 mutants primarily affect the retrograde transport from the Golgi to the ER, and defects in forward transport may be secondary (20,22,24,32,33). The strong genetic interaction between dsl1-22 and these mutations indicates that DSL1 may be required for Golgi-ER retrograde transport. The synthetic lethality of dsl1-22 and sly1ts is consistent with the observation made by VanRheenen et al. (43), who isolated dsl1 mutants that depend on a dominant SLY1 mutation.

The dsl1-22 Mutant Shows Slight Defects in Forward Transport-The dsl1-22 mutant cells gave rise to slightly smaller colonies than wild type cells even at room temperature. Accordingly, the growth rate of dsl1-22 mutant cells is slower when measured in liquid culture (Fig. 1B). Growth of dsl1-22 mutants stops completely 2 h after a shift to 37°C. This Ts− phenotype allowed us to examine the function of Dsl1p in the secretory pathway at restrictive temperatures. First we analyzed the secretion of periplasmic invertase in wild type and dsl1-22 cells at different times after shifting cells to 37°C.
Measuring total invertase activity using intact and permeabilized cells (64) showed that the ratio of secreted to intracellular invertase does not change significantly up to 3 h after the shift to 37°C (data not shown). To detect a possible glycosylation defect due to slower ER-to-Golgi transport in dsl1-22 mutants, intracellular and extracellular fractions of wild type and mutant cells were separated by non-denaturing PAGE. Invertase was visualized by an activity stain. As shown in Fig. 2A, dsl1-22 cells secrete partially underglycosylated invertase even at 25°C. The shift to the restrictive temperature leads to some accumulation of the ER core-glycosylated form inside the cell. For comparison, at restrictive temperature the sec22-3 mutation also leads to the intracellular accumulation of core-glycosylated invertase and secretion of a small amount of underglycosylated enzyme. An incomplete block in anterograde transport also became evident when the maturation of the vacuolar protease CPY was analyzed in dsl1-22 cells (Fig. 2B). In pulse-chase experiments CPY appears first as a p1 precursor in the ER, is then modified to a larger form, p2, in the Golgi, and is transported to the vacuole where it is processed to its mature form (m) by proteolysis. As expected, sec22-3 mutant cells show a complete block in ER-to-Golgi transport 15 min after the shift to 37°C. In this mutant only the ER form (p1) is visible, consistent with a complete block in ER-to-Golgi transport. In dsl1-22 cells about half of CPY is still normally processed even 3.5 h after the shift to 37°C. This corresponds to the results observed with other temperature-sensitive alleles of DSL1 (43).

dsl1-22 Cells Accumulate ER Membranes but Not Vesicles at Restrictive Temperature-The morphology of wild type and dsl1-22 cells incubated at 25 or 37°C was compared by electron microscopy. As shown by Kaiser and Schekman (36), mutants with defects in the budding reaction accumulate membranes, whereas mutants that exhibit defects in fusion of vesicles with target membranes accumulate vesicles. At 25°C the morphology of dsl1-22 cells does not differ significantly from that of wild type cells grown at 37°C (Fig. 3, A and B). Fig. 3C shows a representative micrograph of a dsl1-22 mutant cell after incubation at the nonpermissive temperature for 90 min. Compared with wild type cells (Fig. 3A), dsl1-22 cells show a strong accumulation of membranes, which mainly emerge from the ER contiguous with the nuclear membrane (Fig. 3, C and D, arrow). Similar structures also originate from cortical endoplasmic reticulum close to the plasma membrane (Fig. 3D, arrowhead). No significant increase in the number of small vesicles was observed. Thus, dsl1-22 mutants very much resemble the coatomer mutant sec27-1 (63).

dsl1-22 Mutants Are Defective in the Retrieval of ER Proteins from the Golgi-The strong genetic interaction of the dsl1-22 defect with mutations affecting retrograde Golgi-to-ER transport, and the incomplete block in anterograde transport at a time when growth had already ceased, indicated that the primary function of Dsl1p could be in the retrieval of proteins from the Golgi complex. Therefore, we employed different assays to compare retrograde transport in wild type and dsl1-22 cells. Mutants affecting genes required in retrograde transport like SEC20 and SEC22 secrete large amounts of the soluble ER protein BiP/Kar2p (65). Fig. 4A shows that the same is true for dsl1-22 and dsl1-22-myc mutants.
This defect in BiP/Kar2p localization was also observed by immunofluorescence microscopy using an affinity-purified polyclonal anti-BiP/Kar2p antibody. In wild type cells BiP/Kar2p antibodies stain the nuclear periphery, which is the characteristic ER staining in yeast (66). In contrast to the typical ER staining in wild type cells, we could observe a dot-like pattern in dsl1-22 cells even at permissive temperature (Fig. 4B), similar to "BiP bodies" observed in several ER-to-Golgi mutants at restrictive temperature (67). To examine the defect in retrograde transport more specifically, we focused on the targeting of the SNARE protein Sec22p. As described previously (18,46), α-factor fused to Sec22p through a Kex2p cleavage site and a c-Myc epitope is a suitable tool for analyzing the targeting of Sec22p. Several recycling mutants exhibit mislocalization of Sec22-α (18), resulting in cleavage by the late Golgi protease Kex2p. The removal of the α-factor reporter from Sec22p is easily detected by immunoblot analysis. Fig. 4C shows the steady state processing of Sec22-α in wild type, dsl1-22 and dsl1-22-myc strains incubated at 25°C. About 75% of the Sec22-α protein was cleaved by Kex2p in mutant cells, whereas very little of the reporter was cleaved by Kex2p in wild type cells. Pre-shifting cells to 37°C for 2 h did not result in more efficient cleavage (data not shown). It is unlikely that the more efficient cleavage of Sec22-α in dsl1-22 is due to some Kex2p activity in the ER, since mislocalization of a Sec22p-derived fusion protein was also obvious when we analyzed cells producing a GFP-tagged Sec22 protein (Fig. 4D). This fusion protein is fully functional since it is able to suppress the growth defect of sec22-3 mutants (data not shown). Moreover, GFP-Sec22p behaves like C-terminally tagged Sec22 proteins when analyzed in wild type and ufe1-1 mutant cells (Fig. 4D; see Ref. 18). In wild type cells fluorescence appeared as a ring around the nucleus, which represents the ER, whereas in dsl1-22 cells a punctate staining was detectable, very likely representing Golgi structures (18). As with other recycling mutants, this defect already occurs at 25°C (20). Taken together, both the efficient Kex2p processing of Sec22-α and the localization of GFP-Sec22 indicate that dsl1-22 mutants are defective in ER retention of Sec22p. To examine whether the dsl1-22 mutation also interferes with the ER retention of type I transmembrane proteins carrying the KKXX retrieval signal, we performed the Ste2-Wbp1-dependent mating assay described by Letourneur et al. (20). We introduced the dsl1-22-myc allele into a strain expressing a KKXX-tagged version of the α-factor receptor (Ste2-Wbp1p) instead of the wild type STE2 gene. Wild type cells of mating type a expressing only this receptor cannot mate with cells of mating type α since Ste2-Wbp1p is efficiently retained in the ER due to the KKXX tag fused to the C terminus. Mutants that mislocalize the receptor to the plasma membrane can form diploids with a suitable tester strain. With the Ste2-Wbp1-based assay, efficient mating occurs for instance in sec21-2 (γ-COP) mutants (see Ref. 20; see also Fig. 4E). Fig. 4E shows that dsl1-22-myc cells producing Ste2-Wbp1p can mate as efficiently as the sec21-2 mutants, indicating that targeting of KKXX-tagged ER proteins is impaired already at a permissive temperature of 30°C.
In summary, the data show that dsl1-22 mutants are defective in the ER retention of different types of proteins: soluble HDEL-carrying proteins like BiP/Kar2p, type II transmembrane proteins like the v-SNARE Sec22p, as well as type I transmembrane proteins carrying a KKXX retrieval signal.

FIG. 4E. MATa ste2Δ yeast cells expressing Ste2-Wbp1p were grown on YPD plates and replica-plated to a lawn of MATα cells. After 6 h at 30°C to allow mating, cells were replica-plated to SD plates selective for the growth of diploid cells only. sec21-2 (PC82) and dsl1-22-myc (SUA5) mutants were able to form diploids, whereas wild type (WT) (STE2-4B) cells could not mate with the tester strain (MSUC-2D).

Subcellular Distribution of Dsl1p-According to its primary sequence, Dsl1p contains no putative transmembrane domains. Extracts from Dsl1-myc producing cells (YUA11) were used to examine a possible membrane association of Dsl1p. A 500 × g supernatant of cell lysate was treated either with buffer (B88), 5 M urea, 1% Triton X-100, or 1 M NaCl and subsequently centrifuged at 10,000 × g and 100,000 × g (Fig. 5). When incubated with buffer alone, no Dsl1-myc was detectable in the soluble fraction, whereas both urea and detergent treatment led to solubilization of Dsl1-myc. Less than 5% of the total amount of Dsl1-myc became soluble upon treatment with high salt, suggesting that Dsl1p is a peripherally associated membrane protein. In contrast, the transmembrane protein Sec22p could only be solubilized by detergent. Experiments using a recently obtained Dsl1-specific serum gave identical results (data not shown). Next we performed subcellular fractionation studies using sucrose density gradients to compare the localization of Dsl1p with that of known Golgi- and ER-resident proteins. Cell lysates of strain YUA11 (DSL1-myc) were prepared and loaded on top of sucrose gradients, and fractions were collected after centrifugation as described under "Experimental Procedures." Fig. 6, A and B, shows that Emp47p, a Golgi marker, the ER-resident t-SNARE Ufe1p, as well as the ER marker BiP/Kar2p display characteristic distributions (51,24,66). Like Ufe1p and BiP/Kar2p, the Dsl1-myc protein was detectable exclusively in the dense fractions when using the monoclonal antibody 9E10 directed against the c-Myc epitope, presumably reflecting ER localization (Fig. 6C).

Dsl1p Interacts Physically with Coatomer-To get additional clues about the involvement of Dsl1p in retrograde and/or anterograde ER-to-Golgi transport, we investigated possible interactions of Dsl1p with proteins involved in these trafficking steps. First we tried to address this question by expressing Dsl1p tagged with glutathione S-transferase (GST) in yeast. The 100,000 × g supernatants from detergent-lysed yeast cells (YUA11) expressing GST or GST-Dsl1p were loaded on glutathione-Sepharose 4B to immobilize GST or GST-Dsl1p and associated proteins. After washing the beads to remove unbound proteins, antibodies were used to monitor the binding of several ER/Golgi proteins to Dsl1p. Anti-coatomer antibodies gave very strong signals (data not shown), whereas only weak signals were obtained with Emp47p-specific antibodies. These signals were specific for the Dsl1 part of the fusion protein since no binding was observed when lysates from GST-expressing cells were analyzed. The SNARE proteins Bet1p, Bos1p, Sec22p, and Sed5p as well as the COPII component Sec24p and the Rab-like GTPase Ypt1p were not retained on the affinity matrix in significant amounts.
To verify and extend these findings, we incubated extracts of detergent-lysed yeast cells with different GST fusion proteins purified from E. coli. In line with the results obtained with GST fusion proteins expressed in yeast, coatomer (COPI) showed strong binding to GST-Dsl1p. Notably, coatomer recruitment to GST-Dsl1p from E. coli takes place even at 4°C (see "Experimental Procedures"), a temperature at which enzymatic activities are low. As controls, GST, GST-Sed5p, GST-Bos1p, and GST-Sec22p were not able to recruit coatomer from cell lysates (Fig. 7). Very faint bands representing coatomer were seen when GST-Tip20p was loaded on glutathione-Sepharose 4B (Fig. 7B, lane 5). Dsl1p may mediate this indirect binding between GST-Tip20 and coatomer, because Ito et al. (68) recently showed that Tip20p and Dsl1p interact in two-hybrid assays. However, so far we could not observe direct binding of Dsl1-myc to GST-Tip20p in vitro. In addition, Dsl1-myc did not bind to GST-Bos1p, GST-Sec22p, or GST-Sed5p (data not shown). Likewise, GST-Dsl1p was not able to bind Bet1p, Bos1p, Sec22p, Sed5p, Ypt1p, Sec24p, or Emp47p, suggesting that the weak binding of Emp47p mentioned above could be indirect via coatomer.

FIG. 5. Dsl1p is a peripheral membrane protein. Logarithmically grown cells of YUA11 (DSL1-myc) were disrupted using glass beads. The lysate (500 × g supernatant) was treated as indicated (see "Experimental Procedures") and then centrifuged at 10,000 and 100,000 × g. The resulting pellet (P10 and P100) and supernatant (S100) fractions were resolved on a 12% polyacrylamide gel and immunoblotted with anti-Sec22p and anti-c-Myc antibody (9E10). In contrast to the integral membrane protein Sec22p, Dsl1-myc became soluble after incubation with 5 M urea.

Genetic Analysis Indicates That Dsl1p Is Required for Retrograde Golgi-ER Traffic-In the present study we identified a novel mutation that renders cells dependent on the otherwise dispensable SNARE protein Sec22p. This mutation makes cells temperature-sensitive for growth, allowing us to analyze the function of the affected gene. Cloning and sequencing showed that this mutant is a new allele of the essential open reading frame YNL258c, encoding a truncated protein that lacks its 30 C-terminal residues. Recently, mutant alleles of YNL258c, named dsl1-1 and dsl1-2, were identified by VanRheenen et al. (43) as mutations that make yeast cells dependent on the dominant suppressor mutation of SLY1, SLY1-20. Thus, screening for genetic defects that confer dependence on either Sec22p or on the dominant SLY1-20 mutation led to the identification of the same gene, DSL1. Comparing the results, the Sec22p-dependent dsl1-22 mutant has properties similar to the PCR-generated temperature-sensitive alleles dsl1-5 and dsl1-6 obtained by VanRheenen et al. (43). They show a slight defect in ER-to-Golgi transport of CPY, and their growth defect at 37°C can be suppressed by the SLY1-20 mutation. However, the mutants obtained using the two approaches differ in several other phenotypes. The SEC22-dependent dsl1-22 mutant is temperature-sensitive, and defects in vesicular transport could thus be analyzed directly. The SLY1-20-dependent mutants dsl1-1 and dsl1-2 are not Ts− and display secretory defects only after expression of the SLY1-20 allele is shut off. Another difference concerns the suppression of the non-sectoring phenotype.
The dependence of the dsl1-1 mutant on SLY1-20 could not be suppressed by overexpression of the t-SNARE-encoding SED5 gene (43), whereas SED5 overexpression in dsl1-22 cells could eliminate the requirement for Sec22p. This observation may imply a direct functional link between Dsl1p and SNARE proteins like Sec22p and Sed5p. Both Sec22p and Sed5p show strong genetic interactions with genes encoding proteins involved in Golgi-ER retrograde transport (37,69,70). Double mutant analysis revealed that the same is true for dsl1-22. During our attempts to create double mutants harboring the dsl1-22 mutation combined with additional mutations affecting the ER-Golgi transport cycle, we observed the strongest genetic interactions of dsl1-22 with mutations affecting retrograde Golgi-to-ER transport. No double mutants were obtained when we crossed dsl1-22 strains with mutants affected in coatomer subunit-encoding genes like RET1 (α-COP), RET2 (δ-COP), and SEC21 (γ-COP) or with mutants in SEC20 and TIP20, which are important for fusion of Golgi-derived vesicles with the ER (32,33). In accordance with this, VanRheenen et al. (43) found that overexpression of the γ-COP-encoding SEC21 gene partially suppresses the Ts− defect of dsl1 mutants. Together these results strongly suggest that Dsl1p may play a role in Golgi-ER retrograde traffic. One could speculate that the need for Sec22p displayed by the dsl1-22 mutant may be due to the mislocalization of SNARE proteins that can functionally replace Sec22p. This is also indicated by the fact that the requirement for Sec22p, at least at room temperature, can be alleviated either by excess Ykt6p or Sed5p, two other SNARE proteins. As discussed below, Sec22p as well as Bos1p are in fact mislocalized in dsl1-22 cells. Unexpectedly, SEC22 can be replaced in dsl1-22 mutants by the sec22-3 allele. This was surprising since the sec22-3 point mutation has stronger effects on the growth of certain strains than the deletion of SEC22 (sec22Δ cells that are not Ts− can become temperature-sensitive after introducing a sec22-3-containing plasmid).2

dsl1-22 Mutants Have a Strong ER Retention Defect-In dsl1-22 cells maturation of the vacuolar hydrolase CPY is only partially inhibited, similar to what has been described for the PCR-generated dsl1-5 and dsl1-6 mutants (43). Invertase secretion is almost normal in dsl1-22 mutants, and a slight inhibition of anterograde transport is indicated by the accumulation of a small amount of core-glycosylated invertase. Electron microscopy analysis of mutant cells reveals a severe accumulation of membranes emerging from the ER after the shift to nonpermissive temperature. Similar structures were observed in a β′-COP mutant, sec27-1 (63). Since the morphology of dsl1-22 mutant cells is almost normal at 25°C, a temperature at which retrograde transport is already affected (see below), this EM phenotype at restrictive temperature is likely to be a more indirect effect due to perturbed forward transport. The weak inhibitory effect on forward transport appears to be a result of a strong defect in retrograde transport back to the ER. In dsl1-22 cells this block is already seen at permissive temperature, consistent with what has been seen with other recycling mutants (18,20,22). The dsl1-22 mutant allele affects the retrieval of recycling SNARE proteins, proteins sorted by their C-terminal KKXX motif, and the soluble ER protein BiP/Kar2p, whose recycling depends on the HDEL receptor Erd2p (65).
How can retrograde transport defects have an effect on forward transport? Obviously, one possibility is that components of the vesicle budding and fusion machineries may become limiting due to their mislocalization. In addition, it is known that exit from the ER requires the proper folding of cargo molecules, and this in turn depends on chaperones like BiP/Kar2p or PDI (71,72). These chaperones carry a C-terminal HDEL signal that mediates their retention in the ER. In dsl1-22 mutant cells, BiP/Kar2p and very likely PDI are not properly retained in the ER. Insufficient amounts of BiP/Kar2p and PDI in the ER could retard the exit of cargo molecules (71,72). The following results demonstrated that dsl1-22 cells are defective in Golgi-to-ER retrieval of Sec22p. A GFP-tagged version of Sec22p localizes to the ER in wild type cells, whereas in dsl1-22 cells GFP-Sec22p displays a punctate staining pattern typical for Golgi markers. The Sec22-α fusion protein reaches the late Golgi apparatus in dsl1-22 cells but not in wild type cells, as indicated by its Kex2p-dependent cleavage. Fractionation studies with sucrose density gradients showed that Bos1p exhibits a shift from ER to Golgi fractions in dsl1-22 cells compared with wild type cells (data not shown). Mislocalization of the soluble ER marker BiP/Kar2p was also demonstrated using immunofluorescence and a secretion assay. BiP/Kar2p fluorescence in dsl1-22 mutant cells shows a punctate pattern. Similar, more randomly distributed structures were described previously for several sec mutants and were named BiP bodies (67). These authors suggested that BiP bodies could be exit sites where leaving proteins accumulate in different mutant strains due to the low efficiency of Golgi-to-ER retrieval. Some mutants even secrete Kar2p into the medium. Indeed, this phenomenon can be observed with dsl1-22 mutant cells. The level of Kar2p secretion by these cells is comparable to that of sec22-3, sec22Δ, and sec20-1 cells (65).3 Besides mislocalization of SNARE proteins and of the luminal ER protein BiP/Kar2p, dsl1-22 cells also exhibit defects in the retrieval of proteins sorted by their C-terminal KKXX motif. In this study we used Ste2-Wbp1p as a marker protein (20). Our results implicate Dsl1p in retrograde transport of dilysine-tagged proteins from the Golgi compartment to the ER. We also analyzed the localization of Emp47p, a Golgi protein carrying a variant of the dilysine motif, KXKXX (51). Unlike Ste2-Wbp1p, the localization of Emp47p is unaffected in dsl1-22 cells. This is indicated by the results of gradient fractionation and immunofluorescence experiments (data not shown). In this respect, dsl1-22 mutants resemble ret1 (α-COP) mutants that also mislocalize KKXX-tagged proteins of the ER but not the KXKXX-tagged Emp47p (23).

2 T. Neumann, unpublished results.

FIG. 7. Specific binding of coatomer subunits to GST-Dsl1p. Proteins from detergent-lysed yeast cells were incubated at 4°C for 2 h with GST alone or GST fusion proteins purified from E. coli and immobilized on glutathione-Sepharose 4B. Beads were washed 5 times (see "Experimental Procedures"), and the proteins bound were analyzed by SDS-PAGE followed by Coomassie Blue staining (A) and immunoblot analysis using a polyclonal antibody against coatomer (B). The positions of molecular weight markers and the different COPI subunits are indicated.
The Localization of Dsl1p Is Still Unclear-Dsl1p is a peripheral membrane protein that can be solubilized with 5 M urea and colocalizes with ER marker proteins in sucrose density gradients. Fractionation experiments were performed with a c-Myc-tagged Dsl1 protein expressed at wild type levels. These results were later confirmed using antibodies raised against bacterially produced Dsl1 protein. We also tried to determine the localization of Dsl1p by immunofluorescence. Unfortunately, affinity-purified polyclonal antibodies against Dsl1p still exhibited strong cross-reactivities and were thus not helpful for immunofluorescence analysis. Specific signals were only obtained when tagged versions of Dsl1p were overproduced. The expression of GFP-Dsl1p led to fluorescence patterns varying from a Golgi-like staining consisting of a few large dots in cells from early logarithmic growth phase to nuclear staining at A600 nm > 1. Overexpression of c-Myc-tagged Dsl1p led to diffuse punctate fluorescence. A cytoplasmic staining consisting of many small dots was also observed for Tip20p, which is cytoplasmic when overproduced. Tip20p could be recruited to the ER when Sec20p was overproduced as well (73). Given the tight genetic (see above) and direct interactions (68) between Tip20p and Dsl1p, they may behave similarly. Unlike Tip20p, overproduced Dsl1p does not localize to the ER when SEC20 is overexpressed simultaneously (data not shown).

Dsl1p Interacts Strongly with Coatomer-As mentioned above, a recent systematic yeast two-hybrid study revealed direct interactions of Dsl1p with Tip20p (68). Dsl1p showed interactions with several other proteins. However, only in the case of Dsl1p and Tip20p was the interaction observed with Dsl1p as bait as well as prey, i.e. in both fusion orientations. This is consistent with the genetic data, since the tip20-5 defect is synthetically lethal in combination with dsl1-22 (this study). The genetic as well as physical interaction between DSL1 and TIP20 and their gene products suggests that both proteins could be involved in the same transport step. Tip20p is able to bind to the cytosolic region of Sec20p (73). Together they form a complex with the SNARE proteins Ufe1p and Sec22p (32). This unconventional SNARE complex is involved in the retrieval of dilysine-tagged proteins from the Golgi to the ER (33). In summary, Dsl1p interacts directly with Tip20p (68), and the dsl1-22 mutation exhibits synthetic lethality in combination with sec22Δ, sec20-1, and tip20-5. Synthetic lethal genetic interactions between mutations in SEC22, SEC20, TIP20, as well as UFE1 and mutations affecting coatomer subunits were established previously (69). As expected, dsl1 mutants also exhibit genetic interactions with coatomer mutants (see Ref. 43; this study). Final evidence for Dsl1p playing an important role in retrograde Golgi-ER traffic is our finding that Dsl1p interacts physically with coatomer. Coatomer could be copurified with GST-Dsl1p from yeast cells, and it could be recruited from yeast lysates to recombinant GST-Dsl1p purified from E. coli. No additional factors present in the cell extracts were required for this interaction, since purified coatomer can also bind to GST-Dsl1p (data not shown). Interestingly, the C-terminally truncated mutant protein, Dsl1-22p, which leads to a defect in retrograde transport, is still able to bind all coatomer subunits with an affinity comparable to the full-length protein (data not shown).
Thus the C terminus of Dsl1p is not essential for the binding of coatomer but could perhaps represent a binding region for other proteins involved in these transport steps. Given that Dsl1p binds coatomer as well as Tip20p, a component of the putative docking complex at the ER, we suggest that Dsl1p is involved in a step between uncoating and docking. It will be important to determine whether Dsl1p can bind both coatomer and Tip20p at the same time or whether the interaction is sequential.
Detection of Larch Forest Stress from Jas's Larch Inchworm (Erannis jacobsoni Djak) Attack Using Hyperspectral Remote Sensing

Detection of forest pest outbreaks can help in controlling outbreaks and provide accurate information for forest management decision-making. Although some needle injuries occur at the beginning of an attack, the appearance of the trees does not change significantly from their condition before the attack. These subtle changes cannot be observed with the naked eye, but usually manifest as small changes in leaf reflectance. Therefore, hyperspectral remote sensing can be used to detect the different stages of pest infection, as it offers high-resolution reflectance. Accordingly, this study investigated the response of a larch forest to Jas's Larch Inchworm (Erannis jacobsoni Djak) and detected and identified the different infection stages using ground hyperspectral data and data on forest biochemical components (chlorophyll content, fresh weight moisture content and dry weight moisture content). A total of 80 sample trees were selected from the test area, covering the following three stages: before attack, early-stage infection and middle- to late-stage infection. Combined with the Findpeaks-SPA function, the response relationship between biochemical components and spectral continuous wavelet coefficients was analyzed. The support vector machine classification algorithm was used for infection detection. The results showed that there was no significant difference in biochemical composition between healthy and early-stage samples, but the spectral continuous wavelet coefficients could reflect these subtle changes with varying degrees of sensitivity. The continuous wavelet coefficients corresponding to these stresses may therefore have high potential for infection detection. Meanwhile, the highest overall accuracies of the models based on chlorophyll content, fresh weight moisture content and dry weight moisture content were 90.48%, 85.71% and 90.48%, respectively, and the Kappa coefficients were 0.85, 0.79 and 0.86, respectively.
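The overall accuracy and Kappa coefficient quoted above are both derived from a classifier's confusion matrix. The snippet below shows the computation on a made-up three-class matrix (healthy / early-stage / middle-to-late-stage); the counts are illustrative, not the study's actual results.

```python
import numpy as np

# Illustrative 3-class confusion matrix (rows = true class, cols = predicted class)
cm = np.array([[6, 1, 0],
               [1, 5, 1],
               [0, 1, 6]])

n = cm.sum()
po = np.trace(cm) / n                           # overall accuracy (observed agreement)
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # expected chance agreement
kappa = (po - pe) / (1 - pe)
print(f"overall accuracy = {po:.4f}, kappa = {kappa:.4f}")
```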
Introduction

Jas's Larch Inchworm (Erannis jacobsoni Djak) is a forest pest that is mainly distributed in northern and northeastern Mongolia. As the main defoliator of coniferous forests in Khentii Province, Mongolia, Jas's Larch Inchworm (JLI) poses a serious threat to the ecological security of the Siberian larch (Larix sibirica Ledeb) forest [1]. Mongolia is rich in forest resources dominated by coniferous forests, which make up 76.6% of the total forest area and provide good habitat for insects. According to statistics from the Mongolian Ministry of Forestry, the area of larch forest threatened by JLI increased from 46,838 hm² to 292,833 hm² between 2010 and 2017, making it the most serious pest in Mongolia. Forest destruction increases the likelihood of forest fires and poses a serious threat to the forest ecosystem [2,3]. According to reports, as of 17 April 2020, a total of 31 forest and grassland fires had occurred in 23 counties of seven Mongolian provinces, including Arhangei, Bulgan, Donald, Serenge, Sukhbaatar, Kenti and Huwengur, burning an area of 9.82 hm² and causing severe economic damage. With global warming, longer dry spells and longer summers have created conditions for harmful forest pests to survive, which has increased the pest pressure on forests [4,5]. We reviewed the literature on forest pests and found that there are few studies on JLI worldwide. In particular, there are few experimental studies using remote sensing technology, and it is difficult to control the spread of JLI in larch forests.

Timely detection of pests and diseases is an important link for foresters in controlling the spread of pests [6]. Traditional JLI detection methods rely heavily on the visual recognition and empirical analysis of local national experts. This method is time-consuming, labor-intensive, and error-prone [7][8][9]. Even in the early stages of an attack, JLI does not produce any obvious symptoms on the forest canopy, making timely detection difficult. Hence, it is very important to develop an effective method for detecting JLI infection in Siberian larch forests. The pest feeds on needle leaves and twigs, altering the content of biochemical components (such as chlorophyll content and water content) in leaves from late May to June (larval stage) every year [10]. As the severity of pest infestation increases, the loss rate of needles increases, and the color of the Siberian larch forest canopy changes from green ("green attack") to yellow ("yellow attack") to red ("red attack"), and finally to gray ("gray attack") [10]. The transition period from the green canopy to the yellow canopy is called the green attack stage, which is the early stage of pest infestation [11,12]. Many studies on plant diseases and insect pests show that the detection rates for the yellow, red and gray attack stages are high [13][14][15]. However, the detection rate for the "green attack" stage is relatively low, even though the stress on biochemical components such as leaf chlorophyll content and water content is clearly measurable at this stage [16,17]. Studies have also shown that hyperspectral techniques can be used to estimate the chlorophyll and water content of forest leaves under pest stress.
For instance, RL et al. [18] analyzed the sensitivity between absorption characteristics, three-band ratio indices of spectra, and the corresponding relative water content of oak leaves. They concluded that the relative moisture content of oak leaves and the absorption characteristic parameters exhibit linear relationships at 975 nm, 1200 nm, and 1750 nm, which indicates that hyperspectral data can capture changes in moisture content. Zhang et al. [19] used spectral continuous wavelet coefficients to estimate the chlorophyll content of pest-stressed corn, and the results confirmed the potential of hyperspectral inversion for determining chlorophyll content. Asner et al. [20] developed a spectral feature of Rapid Ohia Death (ROD) and found that 80% of plants infected with fungal pathogens had reduced water content and chlorophyll content. From the above, it can be seen that hyperspectral data are very sensitive to these small changes and can offer technical assistance in the detection of pest-infested forests.

The hyperspectral continuous wavelet transform detects small changes in forest tree infestation status; it can enhance the weak or insignificant spectral signals caused by the infestation and highlight characteristic information, such as the shape and position of the spectral absorption and reflection features of the canopy or leaves [21][22][23]. Therefore, the use of continuous wavelet coefficients is very important for detecting forest pests at different stages of infestation. This experiment aims to explore a new method for extracting continuous wavelet spectral features sensitive to chlorophyll and moisture content. Pearson correlation analysis is usually used to extract sensitive spectral features [19,24-26]. The more sensitive spectral bands are then further screened out by statistical analysis. In the correlation analysis of the entire waveband, a correlation value is obtained for each band, so that the obtained values are continuous and form a curve with irregular fluctuations. In a review article, Peck et al. [27] summarized several peak-extraction algorithms, from which we found that the Findpeaks function can effectively locate the highest peak within an interval. This is an effective algorithm for extracting sensitive bands and is worth using in this research. Since the sensitive bands extracted by the Findpeaks function may not correctly explain the contribution of the relevant bands to target detection, we introduce the successive projections algorithm (SPA). If the number of original bands is large, the process takes a long time, but the RMSE value can quantify the whole process. This feature makes SPA suitable for practical applications [28].
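As a sketch of how spectral continuous wavelet coefficients are obtained, the snippet below applies a continuous wavelet transform to a synthetic reflectance spectrum. The Mexican-hat mother wavelet and dyadic scales are common choices in the vegetation literature and are assumed here; the exact settings used in this study are not specified in this excerpt.

```python
import numpy as np
import pywt

# Synthetic stand-in for a 350-2500 nm reflectance spectrum at 1 nm resolution
rng = np.random.default_rng(1)
wavelengths = np.arange(350, 2501)
reflectance = (0.3 + 0.1 * np.sin(wavelengths / 150.0)
               + 0.01 * rng.normal(size=wavelengths.size))

# Continuous wavelet transform: one coefficient per band per scale
scales = [2, 4, 8, 16, 32]                    # dyadic scales (assumption)
coeffs, _ = pywt.cwt(reflectance, scales, "mexh")
print(coeffs.shape)                           # (5, 2151): scales x bands
```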
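And a minimal sketch of the band-selection idea just described: correlate each wavelet-coefficient band with the variable of interest, keep local maxima of |r| with scipy's find_peaks (standing in for the Findpeaks function), greedily drop collinear candidates as a much-simplified stand-in for SPA, and hand the surviving bands to the SVM classifier discussed below. All data, thresholds, and parameters here are synthetic illustrations, not the study's configuration.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 80 samples x 500 wavelet-coefficient "bands",
# three classes (0 = healthy, 1 = early-stage, 2 = middle-to-late-stage)
X = rng.normal(size=(80, 500))
y = np.repeat([0, 1, 2], [19, 21, 40])
X[y == 1, 100:110] += 0.8      # plant a weak class signal in a band region
X[y == 2, 100:110] += 2.0

# Step 1 (Findpeaks): correlate each band with the label and keep
# local maxima of |r| as candidate sensitive bands
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
peaks, _ = find_peaks(np.abs(r), distance=10, height=0.2)

# Step 2 (simplified SPA stand-in): greedily keep candidates that are
# not highly collinear with the bands already selected
selected = []
for j in peaks[np.argsort(-np.abs(r[peaks]))]:
    if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.9 for k in selected):
        selected.append(j)

# Step 3 (SVM): classify on the selected bands
Xtr, Xte, ytr, yte = train_test_split(X[:, selected], y, test_size=0.25,
                                      stratify=y, random_state=0)
pred = SVC(kernel="rbf", C=10, gamma="scale").fit(Xtr, ytr).predict(Xte)
print(f"bands kept: {len(selected)}, "
      f"OA = {accuracy_score(yte, pred):.2f}, "
      f"kappa = {cohen_kappa_score(yte, pred):.2f}")
```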
These studies show that the support vector machine algorithm has great potential for monitoring plant symptoms. The support vector machine is a non-parametric method that attempts to fit an optimal separating hyperplane to the training data in a multidimensional feature space. Therefore, when data from multiple sources are classified, it can achieve better classification accuracy than parametric methods such as maximum likelihood classification. At present, remote sensing research using hyperspectral continuous wavelet coefficients to detect pest infection status is still immature. In particular, the hyperspectral classification of asymptomatic, early-stage, and infected larch under the stress of JLI infestation has, to our knowledge, not been attempted. In response to these problems, the overall goal of this research is to establish a classification method based on Findpeaks-SPA-SVM that provides essential experimental data and a theoretical basis for the large-scale detection of JLI and the identification of asymptomatic, early-stage, and infected larch trees. The specific objectives are: (1) to explore the potential of different biochemical components for detecting pest-infected forests by analyzing differences in their sensitivity to hyperspectral continuous wavelet coefficients; and (2) to evaluate the classification accuracy of the Findpeaks-SPA-SVM algorithm in the asymptomatic, early, and infected stages and to identify the best combination of hyperspectral features.

Study Area

The study site is a larch forest in northeastern Mongolia (110°46′1.2″E to 110°46′33.6″E, 48°26′13.2″N to 48°26′34.8″N) (Figure 1). It is located 100 km from Khentii Province, with a total area of 16.75 hm² and an average altitude of 1330 m. The study area has a continental climate, with an average temperature of 20 °C in June and July and an average annual precipitation of 200-300 mm [31]. The forest consists of a single tree species (larch) and is characterized by poor forest quality and susceptibility to JLI outbreaks. Larch trees in the study area exhibit different levels of damage, representing different stages of pest infestation, so the site meets the needs of this research.

Selection of Sample Trees

JLI damages larch forests mainly from late May to mid-July (larval stage). During this time, the larvae eat large numbers of healthy needles and twigs, seriously damaging the trees and changing the color of the leaves [1,10]. Therefore, sample trees were selected on the basis of leaf loss rate and canopy color. Canopy color was determined through a combination of visual discrimination in the field and indoor photo identification (using the eyedropper tool of Adobe Photoshop to obtain the RGB information of larch canopy photos) (Figure 2). For the leaf loss rate, typical branches were selected from the upper, middle, and lower levels of each sample tree, for a total of six typical branches per tree. The numbers of damaged and healthy needles were then counted, and the leaf loss rate of each sample tree was calculated using Equation (1):

$$LLR_i = \frac{N_{d,i}}{N_{h,i} + N_{d,i}} \times 100\%, \quad (1)$$

where $LLR_i$ is the leaf loss rate of the $i$-th (i = 1, 2, 3, ..., 79, 80) sample tree, taking values in the range of 0% to 100%, and $N_{h,i}$ and $N_{d,i}$ are the numbers of healthy and damaged needles, respectively, counted on the sampled branches of the $i$-th tree.
This sampling approach provides a fairly accurate and feasible basis for the spectral identification of forest canopies affected by insects. The JLI attack is highly likely to begin at the upper crown, so sampling from the upper to the middle and lower layers is a scalable way of investigating an irregular insect invasion. As shown in Table 1, sample larch trees with a leaf loss rate of 0%-5% and a green canopy were defined as healthy trees (before attack); trees with a leaf loss rate of 5%-15% and a green canopy were defined as early damaged trees (early-stage infection); and trees with a leaf loss rate of 15%-100% and yellow, red, or gray canopies were defined as damaged trees (middle- to late-stage infection). In this manner, 19 healthy trees, 21 early damaged trees, and 40 damaged trees were selected from the test area as the basic data for the hyperspectral identification model (Table 1).

In the experiment, a field spectrometer (SVC HR-102) with a spectral range of 350-2500 nm was used. During the collection of hyperspectral reflectance data, the same distance (approximately 20 cm) was maintained between the instrument probe and the needles of the sample tree and for the white-reference calibration; the field of view was 25°, pointed vertically downward, and the data were collected under fine weather conditions between 10:30 and 14:30. A typical branch was cut from the upper, middle, and lower layers of each sample canopy. Hyperspectral data for each layer were collected five times, and the white reference was recalibrated for each sample tree to ensure the accuracy and reliability of the data.

Data Collection and Preprocessing of Biochemical Components

(1) Chlorophyll content data

The SPAD-502 portable chlorophyll meter was used to measure the relative chlorophyll content of larch needles. When determining leaf SPAD values, leaf thickness, developmental stage, and environmental conditions all influence the actual chlorophyll content and need to be strictly controlled [32]. In this study, only one test area was considered, and its developmental stage and environmental conditions were relatively consistent. Therefore, to improve the comparability of the experimental data, we ensured that the needles of each sample tree were clamped as consistently as possible: the needle samples completely covered the instrument's receiving window and were arranged in a single layer. Following this procedure, at least three repeated measurements were performed on each tree, and the average value was taken to represent the relative chlorophyll content of the sample tree. The absolute chlorophyll content was then calculated from the SPAD reading by the calibration relationship in Equation (4), where y is the absolute chlorophyll content (µg/cm²) and x is the relative (SPAD) chlorophyll content.
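Equation (1) and the Table 1 class thresholds above can be illustrated with a few lines of Python. This is a minimal sketch with hypothetical needle counts, not the authors' code; note that the actual class assignment in Table 1 additionally uses canopy color, which is omitted here.

```python
# Leaf loss rate per tree (Equation (1)) and Table 1 damage-class assignment.
def leaf_loss_rate(n_healthy: int, n_damaged: int) -> float:
    """Leaf loss rate in percent: damaged needles over all counted needles."""
    return 100.0 * n_damaged / (n_healthy + n_damaged)

def damage_class(llr: float) -> str:
    """Table 1 thresholds (canopy color, also used in the paper, is omitted)."""
    if llr <= 5.0:
        return "healthy (before attack)"
    if llr <= 15.0:
        return "early damaged (early-stage infection)"
    return "damaged (middle- to late-stage infection)"

llr = leaf_loss_rate(n_healthy=920, n_damaged=80)   # hypothetical branch counts
print(f"{llr:.1f}% -> {damage_class(llr)}")          # 8.0% -> early damaged ...
```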
(2) Water content data

First, three twigs of similar size were cut from each sample tree and weighed in the field, and their weights were averaged. They were then sealed and taken to the laboratory for drying and weighing. The average weight of the branches weighed in the field is called the fresh weight, and the average weight of the branches dried in the laboratory is called the dry weight. It should be noted that the fresh weight and dry weight alone cannot truly represent the moisture content of each sample tree; many researchers instead use the fresh-weight moisture content and dry-weight moisture content to express the moisture content of plant tissues [33-35]. These were calculated by Equations (5) and (6):

$$LWC_F = \frac{FW - DW}{FW} \times 100\%, \quad (5)$$

$$LWC_D = \frac{FW - DW}{DW} \times 100\%, \quad (6)$$

where FW is the fresh weight, DW is the dry weight, and $LWC_F$ and $LWC_D$ are the fresh-weight and dry-weight moisture contents of the twigs, respectively.

Method

The overall technical route of the hyperspectral identification of JLI-infected forest at different stages is shown in Figure 3 (biochemical components and hyperspectral data acquisition of the larch forest attacked by JLI).

2.3.1. Sensitivity Analysis

Before conducting a sensitivity analysis, the significance of the differences in the biochemical composition of healthy, early damaged, and damaged trees should be assessed. We calculated the mean, maximum, minimum, and standard deviation (std) of the various sample trees (Table 2). Field measurements of biochemical composition accumulate errors from the environment, the instruments, and human operation; therefore, abnormal and non-compliant sample trees were eliminated. In addition, to preserve the authenticity of the measured biochemical values, the data quality of the selected sample trees had to meet the experimental requirements. The Pearson correlation analysis method was used to determine significant correlations between the biochemical components and the continuous wavelet coefficients. The p-value and the correlation coefficient (r) determine the correlation between them: the smaller the p-value, the more significant the correlation between variables, and the closer the value of r is to 1, the better the sensitivity. This study analyzed the correlation between forest biochemical composition and spectral continuous wavelet coefficients to understand the distribution of sensitive spectral bands under JLI attack.

The number of peak points in a waveform and the corresponding peak values can be rapidly extracted using the Findpeaks function in MATLAB 2021b [36-38]. A point is defined as a peak when the value of the function is greater than those of its left and right neighbors. In this study, the Findpeaks function was used to automatically find the R² peaks between the biochemical components and the hyperspectral features and then extract the corresponding sensitive hyperspectral features. The Findpeaks function has two important parameters: mpd, the minimum distance between adjacent peaks in the R² curve, and mph, the minimum height of adjacent R² peaks. There is no fixed standard for the choice of mpd and mph, which depends on the requirements of the experiment. After inspecting the R² results and considering the band range, we set mpd and mph to 20 and 0.25, respectively. This parameter choice keeps the number of extracted bands below 90 (for the band range 350-1800 nm) and ensures that their R² values are greater than 0.25 (p < 0.05), for the purpose of extracting highly sensitive hyperspectral features.
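The sensitivity analysis and peak extraction just described can be sketched compactly in Python. This is an illustration, not the authors' MATLAB code: it computes continuous wavelet coefficients of placeholder spectra, correlates them band by band with a biochemical trait, and extracts peak bands with scipy.signal.find_peaks, whose distance and height arguments play the roles of mpd and mph. PyWavelets' CWT supports continuous mother wavelets such as 'morl'; the bior/coif/sym/db bases analyzed in the paper were handled in MATLAB, so 'morl' here is only a substitute, and all data, scales, and names are assumptions.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
wavelengths = np.arange(350, 1801, 2)          # assumed 2 nm sampling
n_trees, n_bands = 80, wavelengths.size
spectra = rng.random((n_trees, n_bands))       # placeholder canopy reflectance
chlorophyll = rng.random(n_trees)              # placeholder SPAD-derived values
spectra[:, 500] += 2.0 * chlorophyll           # plant one informative band (demo)

# Continuous wavelet transform of each spectrum at a single scale, for brevity;
# the paper evaluates 36 mother wavelet bases.
coeffs = np.empty_like(spectra)
for i, s in enumerate(spectra):
    c, _ = pywt.cwt(s, scales=[8], wavelet="morl")
    coeffs[i] = c[0]

# Band-wise squared Pearson correlation (R^2) with the biochemical component.
r2 = np.array([pearsonr(coeffs[:, b], chlorophyll)[0] ** 2
               for b in range(n_bands)])

# Peak extraction: 'distance' plays the role of mpd = 20, 'height' of mph = 0.25.
peak_idx, props = find_peaks(r2, distance=20, height=0.25)
print("candidate sensitive bands (nm):", wavelengths[peak_idx])
```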
The successive projections algorithm (SPA) has been successfully applied in many studies to reduce the dimensionality of vegetation hyperspectral features [39-42]. SPA can overcome collinearity between sensitive bands, select important wavelengths, and support reliable models. Its selection principle is that each newly selected band is the one, among all remaining candidates, with the largest projection onto the orthogonal subspace of the previously selected bands, and the root mean square error (RMSE) is used as the scoring standard to determine the optimal band subset. The principle and steps of the SPA algorithm are explained in many related articles, so they are not described further here. Because the Findpeaks function extracts many sensitive hyperspectral features, including a large amount of low-sensitivity spectral information, the SPA is used to further process the Findpeaks results and obtain highly sensitive hyperspectral features, thereby improving the stability and accuracy of the detection model (the combined algorithm is denoted Findpeaks-SPA).

Model Establishment and Evaluation

From the data of all larch sample trees, 59 trees (70%) were selected as training samples, and the remaining 21 trees (30%) were used as validation samples. The training data comprise 14 healthy trees, 15 early damaged trees, and 29 damaged trees; the rest are validation data. The sensitive spectral features of chlorophyll content, fresh-weight moisture content, and dry-weight moisture content extracted by Findpeaks-SPA were taken as the independent variables of the model, and the sample trees at different stages of damage were taken as the dependent variable. A hyperspectral recognition model was established using the support vector machine (SVM) algorithm, and its accuracy was evaluated. SVM algorithms have been widely used in many fields and have achieved good results in various applications [43-46]; it is therefore worthwhile to apply them in this research.

Support vector machines are non-parametric supervised classifiers. They follow the strategy of structural risk minimization, constructing an optimal separating hyperplane that maximizes the margin between classes using a small number of support vectors. Compared with traditional training methods, this achieves accurate classification on data structures with fewer training samples and stronger aggregation [47-49]. We used the free LibSVM library with a radial basis function (RBF) kernel to perform the support vector machine tasks. Classification accuracy and precision are mainly controlled by the parameters c and γ: γ controls the width of the Gaussian kernel, and c controls the penalty for training samples that fall on the wrong side of the decision boundary. The value of c determines the number of support vectors obtained; the smaller the value of c, the fewer the support vectors and the greater the classification error, whereas a very large number of support vectors raises the problem of overfitting. To choose reasonable values of c and γ, we used the LibSVM library for a grid search with five-fold cross-validation.
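The Findpeaks-SPA-SVM chain described in this section can be sketched as follows, again in Python rather than the MATLAB/LibSVM tooling the authors used, and with placeholder data. The SPA step greedily adds the candidate band whose column has the largest norm after projection onto the orthogonal complement of the bands already chosen; the RMSE-based choice of subset size mentioned above is omitted for brevity, so the subset size here is an arbitrary assumption. The SVM step runs an RBF-kernel grid search over C and gamma with five-fold cross-validation.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def spa(X: np.ndarray, n_select: int, first: int = 0) -> list:
    """Successive projections on X (samples x candidate bands); returns indices."""
    Xc = X - X.mean(axis=0)                  # center the columns
    selected = [first]
    P = np.eye(Xc.shape[0])                  # projector onto orthogonal complement
    for _ in range(n_select - 1):
        v = P @ Xc[:, selected[-1]]
        P = P - np.outer(v, v) / (v @ v)     # deflate by the newest direction
        norms = np.linalg.norm(P @ Xc, axis=0)
        norms[selected] = -np.inf            # never re-select a band
        selected.append(int(norms.argmax()))
    return selected

rng = np.random.default_rng(1)
X_train = rng.random((59, 40))               # 59 training trees x 40 Findpeaks bands
y_train = rng.integers(0, 3, 59)             # 0 healthy, 1 early damaged, 2 damaged

bands = spa(X_train, n_select=8)             # SPA-thinned feature subset (size assumed)
grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
clf = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
clf.fit(X_train[:, bands], y_train)
print("selected bands:", bands, "best params:", clf.best_params_)
```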
To evaluate the accuracy of the classification model, we used MATLAB 2021b to construct a confusion matrix for the classification results, from which four model evaluation indexes can be obtained: user's accuracy (UA), producer's accuracy (PA), overall accuracy (OA), and the Kappa coefficient. The meaning of these indicators has been explained in many articles [50,51], so it is not repeated here.

Sensitivity Analysis of Hyperspectral Features to Biochemical Components

Given that the spectral continuous wavelet coefficients appear to be related to JLI outbreaks, the squared correlation coefficients (R²) between the spectral continuous wavelet coefficients and the biochemical components (chlorophyll content, fresh-weight moisture content, and dry-weight moisture content) were analyzed, and differences in the responses of the biochemical components to different wavelengths were identified. A significant correlation was observed where the pest outbreak significantly influenced the biochemical components, indicating that the outbreak causes varying degrees of change in the biochemical components of the larch. Consequently, the components exhibited different sensitivities to different bands of the spectral continuous wavelet coefficients. The R² values between the continuous wavelet coefficients and the chlorophyll content, fresh-weight moisture content, and dry-weight moisture content at each wavelength are shown in Figure 4. The continuous wavelet coefficients exhibited varying degrees of sensitivity to the biochemical components, although the sensitivities were very similar, with subtle differences. In relative terms, the spectral continuous wavelet coefficients were most sensitive to fresh-weight moisture content, followed by chlorophyll content and dry-weight moisture content, which indicates that the pest outbreak had a stronger effect on fresh-weight moisture content than on the other two components. This finding suggests that the continuous wavelet coefficients can capture these subtle changes, thus facilitating the early identification of pest outbreaks.

The results show that the continuous wavelet coefficients can capture the spectral absorption and reflection characteristics caused by chlorophyll and moisture. As the degree of damage by JLI increases, the rate of leaf loss gradually increases; at the same time, the chlorophyll and water contents gradually decrease, leading to clear responses in the spectral reflectance of the forest canopy [52]. Therefore, the spectral continuous wavelet coefficients, together with the chlorophyll content, fresh-weight moisture content, and dry-weight moisture content of the needles, have high potential for application to the detection of JLI outbreaks.
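The four evaluation indexes named at the start of this section follow directly from the confusion matrix. A minimal Python sketch with a hypothetical 3x3 matrix for 21 validation trees (rows are the reference classes, columns the predicted classes):

```python
import numpy as np

# Hypothetical confusion matrix: rows = reference class, columns = predicted.
cm = np.array([[5, 1, 0],
               [1, 4, 0],
               [0, 1, 9]], dtype=float)

n = cm.sum()
oa = np.trace(cm) / n                                 # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (oa - pe) / (1 - pe)                          # Cohen's kappa
pa = np.diag(cm) / cm.sum(axis=1)                     # producer's accuracy per class
ua = np.diag(cm) / cm.sum(axis=0)                     # user's accuracy per class
print(f"OA = {oa:.3f}, kappa = {kappa:.3f}")
print("PA:", pa.round(3), "UA:", ua.round(3))
```

Producer's accuracy is the per-class recall with respect to the reference data, and user's accuracy is the per-class precision of the predictions; OA and kappa summarize the whole matrix.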
Extraction of Sensitive Hyperspectral Feature Bands

In this study, the Findpeaks function and the combined Findpeaks-SPA function were used to process the sensitivity function between the continuous wavelet coefficients and the biochemical components and to extract sensitive hyperspectral feature bands (Figure 5). As shown in Figure 5a, the sensitive hyperspectral features corresponding to chlorophyll content lie mainly in the ranges 360-522 nm, 772-972 nm, 1140-1433 nm, and 1536-1791 nm. This shows that the blue absorption bands (403, 427, 434, 450, 453, 471, and 484 nm), the red absorption bands (614 and 654 nm), and the green reflection peaks (506 and 542 nm) of chlorophyll were well captured by the Findpeaks-SPA function. Figure 5b,c show that the sensitive hyperspectral bands corresponding to fresh-weight moisture content lie mainly in the ranges 953-962 nm, 967-1010 nm, and 1404-1468 nm, while those corresponding to dry-weight moisture content lie mainly in the ranges 978-1007 nm, 1168-1195 nm, and 1439-1519 nm. This indicates that the Findpeaks-SPA function captures the absorption bands of water well (964, 983, 991, 1099, 1184, 1421, 1493, and 1502 nm). In general, the continuous wavelet coefficients in the bands 360-522 nm, 772-1010 nm, 1140-1433 nm, 1404-1519 nm, and 1536-1791 nm were highly sensitive to the degree of damage of the larch. This can be explained by the serious decrease in the chlorophyll and water contents of the larch needles due to the JLI attack (Table 2). The 360-522 nm band mainly reflects the reflection characteristics of chlorophyll, while the 772-1010 nm, 1140-1433 nm, 1404-1519 nm, and 1536-1791 nm bands mainly show the water absorption characteristics of the needles. As the leaf loss rate caused by JLI increases and the larch canopy changes from green to yellow to red to gray, the chlorophyll and water contents of the needles gradually decrease, leading to a gradual increase in spectral reflectance. The experimental results are therefore in line with the biological characteristics of plants and satisfactorily demonstrate that the Findpeaks-SPA function can effectively extract sensitive hyperspectral features of chlorophyll and water content.
Model Results

Figure 6 shows the overall accuracy (OA) and Kappa coefficient (K) of the SVM classifiers based on chlorophyll content, fresh-weight moisture content, and dry-weight moisture content for the different trees. Chlorophyll content shows high recognition accuracy on bior2.8, coif3, and sym3 (OA: 0.81-0.90; K: 0.72-0.85), and the accuracy on these mother wavelet bases is higher than that of fresh-weight and dry-weight moisture content, except that the OA and K on sym3 (0.81, 0.74) are similar to those for dry-weight moisture content. Chlorophyll content performs best on coif3, with an overall accuracy of 0.90 and a Kappa coefficient of 0.85 (Figure 6a). The fresh-weight moisture content classification accuracy is highest on coif4, sym2, and sym7, with overall accuracies of 0.86, 0.86, and 0.86 and Kappa coefficients of 0.79, 0.79, and 0.80, respectively (Figure 6b). The dry-weight moisture content yields high classification accuracy on bior4.4, db2, db4, db5, and sym3; in particular, bior4.4 shows the highest overall accuracy and Kappa coefficient (0.90, 0.86) (Figure 6c). It should be noted that chlorophyll content, fresh-weight moisture content, and dry-weight moisture content produce the lowest classification accuracies on bior3.3, bior2.2, and sym2, with overall accuracies of 0.52, 0.43, and 0.38 and Kappa coefficients of 0.34, 0.26, and 0.21, respectively. These results show that the models constructed from the different biochemical components on the various mother wavelet bases have different accuracies. The chlorophyll content data produced a higher and more stable overall accuracy (0.52-0.90) than the fresh-weight (0.43-0.86) and dry-weight (0.39-0.90) moisture content data. On the whole, the larch damage identification model constructed on each biochemical component can classify healthy trees, early damaged trees, and damaged trees well, meeting the requirements for classification accuracy.

Since our model has many outcomes, the confusion matrix of each outcome can only be provided as reference material in the supplementary data. The confusion matrices and evaluation indexes of the optimal SVM classifiers based on chlorophyll content, fresh-weight moisture content, and dry-weight moisture content are shown in Tables 3-5. The overall accuracies were 90.48%, 85.71%, and 90.48%, and the Kappa coefficients were 0.85, 0.79, and 0.86, respectively, which meet the requirements of this study. Based on chlorophyll content, the user's and producer's accuracies of the healthy trees and the early damaged trees were consistent, at 100% and 80%, respectively, and the user's and producer's accuracies of the damaged trees were 91.68% and 100%. Based on fresh-weight moisture content, the user's accuracies were 83.33%, 66.67%, and 100%, and the producer's accuracies were 100%, 80%, and 81.81%, respectively. The user's accuracies of the various sample trees based on dry-weight moisture content were 100%, 71.43%, and 100%, and the producer's accuracies were 80%, 100%, and 90.91%, respectively. It can be seen that the identification rate of damaged trees was higher than that of healthy and early damaged trees, while the identification rate of early damaged trees was relatively low; in particular, the user's accuracy for early damaged trees based on dry-weight moisture content was only 71.43%.
At the same time, there were relatively few cases of damaged trees being misclassified into other categories, whereas there were relatively more cases of healthy trees being misclassified as early damaged trees, or early damaged trees as healthy trees. This is because there is no obvious quantitative difference in the biochemical components between our healthy and early damaged sample trees, whereas the gap between healthy and damaged sample trees is large, which makes the classification between early damaged and healthy trees more difficult. This is consistent with our experimental design. From the accuracy analysis of the above models, we can see that this experimental approach is workable for detecting the damage status of larch forests in the early stage of inchworm invasion.

Sensitive Hyperspectral Feature Bands

This study demonstrated good sensitivity between the biochemical components and the hyperspectral continuous wavelet coefficients of larch forest under JLI stress across the whole hyperspectral range (Table 6). In some specific band ranges, the spectral continuous wavelet coefficients capture spectral absorption and reflection characteristics corresponding to changes in the chlorophyll and water content of the leaves. Hence, when we used Pearson correlation to analyze the relationship between the hyperspectral continuous wavelet coefficients and the chlorophyll and water content of the leaves, we found many sensitive band regions of differing degrees. Recent studies have also confirmed that hyperspectral continuous wavelet coefficients are sensitive to changes in plant biochemical parameters [19,53,54]. Such a large number of extracted sensitive bands would affect the stability and accuracy of later modeling, because it contains many contiguous bands. We therefore selected the sensitive bands corresponding to the peaks of the correlation coefficient by introducing the Findpeaks function; the selected bands have high sensitivity while remaining well dispersed. We identified the different sample tree classes based on the sensitive bands extracted by the Findpeaks function alone, but the results were not ideal. In addition, we found that the sensitive bands extracted by the Findpeaks function suffer from multicollinearity, which reduces the stability of the model to some extent. To solve this problem, we used the SPA function to further screen the highly sensitive spectral features, which improves the speed and stability of later modeling; the SPA function is widely used in sensitive hyperspectral feature extraction [55-57]. After the Pearson-Findpeaks-SPA process, we extracted sensitive hyperspectral bands that met our experimental requirements. These bands can effectively capture the spectral reflection and absorption characteristics caused by differences in the biochemical components, but they cannot correctly explain the contribution of the relevant bands to target detection; this is a shortcoming of this experiment. The main purpose of our experiment, however, was to explore a new method of extracting sensitive hyperspectral features and to establish an effective model for identifying different degrees of damage in JLI-infected larch forest. The Pearson-Findpeaks-SPA approach to sensitive hyperspectral feature extraction therefore has practical value.
Future Trends and Prospects of Remote Sensing Monitoring of JLI Outbreaks

In this study, hyperspectral data were used to detect the damage to larch forest under JLI infection, and the method showed good stability in identifying stressed forest. However, several shortcomings remain. For example, we cannot determine an optimal pest index, because our sensitivity analysis showed that, although there was good sensitivity between the biochemical components affected by JLI and the spectral continuous wavelet coefficients, the differences in sensitivity between them were small. In the early stage of larch forest destruction, the canopy is green, and pathogens or other climatic conditions may also cause the chlorophyll and water content of the leaves to decrease; our experiment therefore cannot accurately determine the external cause of forest destruction. Pathogen infection or climatic conditions may have damaged some trees in our study area to some extent before our data collection, and such damage is likely to act together with the JLI pest. Accordingly, detecting pest stress only through the spectral difference between asymptomatic and infected samples is often not comprehensive enough, although the method remains especially useful for pest control. If the aim is to trace the cause of the symptoms, we suggest that future studies consider forest conditions prior to pest attack: a certain number of sample trees should be selected, and the gradual changes in forest spectral reflectance and in some biochemical components should be recorded in a time series from the pupal stage to the adult stage. Such spectral and biochemical data could then also be used to explore the cause of symptoms with the method proposed in this study.

At present, this inchworm pest is only distributed in the larch forests of Mongolia, but its prediction and control should be pursued further. If the pest is left unchecked, it is highly likely to break out and spread to neighboring countries, causing unpredictable damage. We hope that more international researchers will pay attention to and take part in forest remote sensing research on JLI infection. Since we only worked under the condition of a single pest (JLI) infecting a single forest type (larch forest), the applicability of the method to other forest pests needs further verification. In addition, remote sensing monitoring of this pest has not yet received wide international attention, and articles supported by experimental evidence are lacking; the technical theories and methods of this study are therefore drawn from articles on other plant diseases and insect pests. Of course, this study was designed using only ground-based non-imaging hyperspectral remote sensing, and scaling up the technology is the key focus of future development.
Conclusions

This study showed that hyperspectral features based on chlorophyll content, fresh-weight water content, and dry-weight water content can detect JLI-infected larch forests. Non-destructive identification of healthy, early damaged, and damaged trees is important for controlling JLI outbreaks. We made full use of the full-band spectra of all sample trees and studied the spectral characteristics of 36 mother wavelet bases obtained by the continuous wavelet transform. A total of 11 mother wavelet bases (bior2.8, coif3, sym3, coif4, sym2, sym7, bior4.4, db2, db4, db5, and sym3) were found to reveal important characteristics of JLI infection. In addition, coif3, coif4, and bior4.4 provided the optimal hyperspectral features for chlorophyll content, fresh-weight moisture content, and dry-weight moisture content, in that order. The overall accuracy of the models based on these mother wavelet bases was 0.86-0.90, the Kappa coefficient was 0.79-0.86, and the identification accuracy for early damaged trees was above 80%. These results demonstrate the feasibility of extracting sensitive bands by Pearson-Findpeaks-SPA and the practicality of the SVM classification algorithm. This design is helpful for early warning of JLI stress in larch forests and provides an important theoretical basis and technical guidance for the detection of other large-scale pest outbreaks.

Figure 1. (a) Study area in Mongolia: elevation; (b) drone photograph of the study site and distribution of sample trees; (c) larch forest in the study area; (d) Jas's Larch Inchworm (Erannis jacobsoni Djak).

Figure 4. Variation of R² (the squared correlation coefficient) with wavelength between the spectral continuous wavelet coefficients and (a) chlorophyll content, (b) fresh-weight moisture content, and (c) dry-weight moisture content.

Figure 6. Accuracies of early pest discrimination models based on (a) chlorophyll content, (b) fresh-weight moisture content, and (c) dry-weight moisture content.

Table 1. Sample tree evaluation table.

Table 2. Descriptive statistics of biochemical components of different types of sample trees.

Table 3. Confusion matrix and evaluation indexes of the optimal mother wavelet basis (coif3) model of chlorophyll content.

Table 4. Confusion matrix and evaluation indexes of the optimal mother wavelet basis (coif4) model of fresh-weight moisture content.

Table 5. Confusion matrix and evaluation indexes of the optimal mother wavelet basis (bior4.4) model of dry-weight moisture content.

Table 6. Sensitivity analysis results obtained using Pearson-Findpeaks-SPA.
Alcohol does not influence trust in others or oxytocin, but increases positive affect and risk-taking: a randomized, controlled, within-subject trial

Background: Alcohol consumption to facilitate social interaction is an important drinking motive. Here, we tested whether alcohol influences trust in others via modulation of oxytocin and/or androgens. We also aimed to confirm previously shown alcohol effects on positive affect and risk-taking, because of their role in facilitating social interaction.

Methods: This randomized, controlled, within-subject, parallel-group, alcohol-challenge experiment investigated the effects of alcohol (versus water, both mixed with orange juice) on perceived trustworthiness via salivary oxytocin (primary and secondary endpoints) as well as testosterone, dihydrotestosterone, positive affect, and risk-taking (additional endpoints). We compared 56 male participants in the alcohol condition (1.07 ± 0.18 per mille blood alcohol concentration) with 20 in the control condition.

Results: The group (alcohol versus control condition) × time (before [versus during] versus after drinking) interactions were not significantly associated with perceived trustworthiness (η² < 0.001) or oxytocin (η² = 0.003). Bayes factors also provided substantial evidence for the absence of these effects (BF01 = 3.65; BF01 = 7.53). The group × time interactions were related to dihydrotestosterone (η² = 0.018, with an increase in the control condition) as well as to positive affect and risk-taking (η² = 0.027 and 0.007, with increases in the alcohol condition), but not significantly to testosterone.

Discussion: The results do not verify alcohol effects on perceived trustworthiness or oxytocin in male individuals. However, they indicate that alcohol (versus control) might inhibit an increase in dihydrotestosterone, and they confirm that alcohol amplifies positive affect and risk-taking. This provides novel mechanistic insight into social facilitation as an alcohol-drinking motive.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00406-023-01676-w.

Introduction

Alcohol is among the most culturally meaningful substances that people have used throughout history to induce specific bodily states [1,2]. However, since consuming alcohol is a major health risk, it is necessary to consider the relevant drinking motives [3]. An important reason for alcohol consumption is the facilitation of social interaction, especially in males [4].

Alcohol may exert its socially facilitating effects by increasing the perceived trustworthiness of others. Trustworthiness determines the degree of trust that is placed in other individuals [5]. Trust involves the expectation of mutually benevolent interaction and forms a central precondition for the emergence of social interactions and relationships [6-8]; consequently, a lack of trust in the persons present hinders social interaction. An influence of alcohol consumption on perceived trustworthiness is suggested by the fact that social anxiety is both associated with an increased prevalence of alcohol use disorder (AUD) and negatively associated with trust and with the perceived trustworthiness of presented faces [9-13]. It is therefore assumed that socially anxious individuals consume alcohol partly because it increases their trust in interaction partners and, as a result, facilitates social interaction. To our knowledge, however, there is a gap in research on whether alcohol influences perceived trust in others.
It is possible that the prosocial hormone oxytocin [14-17] mediates the hypothesized alcohol-induced increase in perceived trustworthiness, since intranasal administration of oxytocin increases both the perceived trustworthiness of presented faces and interpersonal trust [18,19]. Previous studies have associated oxytocin concentrations with alcohol consumption and AUD [20,21]. In this regard, a previous study [22] identified higher oxytocin blood concentrations in patients with AUD (than in same-sex controls) at the time of hospital admission for detoxification; these concentrations decreased by the time of the follow-up survey (approximately 5 days later) and then no longer differed significantly from those of same-sex controls. This observation may suggest that elevated oxytocin concentrations in early abstinent (defined as 24-72 h of abstinence) patients with AUD result from acute alcohol intoxication, an assumption supported by a positive correlation between blood alcohol and oxytocin concentrations in the males of that study. Apart from such associative findings, however, the literature cannot conclusively answer the question of whether alcohol consumption in an experimental setting leads to an increase in oxytocin concentration. Although some experimental studies have failed to demonstrate a significant effect of alcohol consumption on blood oxytocin concentration [23-26], these few studies are subject to several limitations. First, the participants did not reach blood alcohol concentrations of more than about 0.9 per mille, and often much less, which might have been too low to induce a significant increase in oxytocin concentration [22,27]. Also, many previous studies did not control for food and fluid intake or for sexual or intense physical activity prior to the experiment, which may have reduced the impact of alcohol and influenced the oxytocin concentration [28-30]. Moreover, some previous studies were limited by small sample sizes.

Main aims of the study: The goal of this study was to establish that, in male social drinkers, an alcohol challenge (versus water; both mixed with orange juice) increases the behavioral endpoint of perceived trustworthiness, and that the hypothesized association between alcohol concentration and trustworthiness is mediated by salivary oxytocin concentrations. We aimed to overcome the limitations of the literature reported above. We used a male sample for several reasons: central preliminary findings, such as the association between alcohol and oxytocin blood concentrations, emerged exclusively in males [22], and, in line with this, studies further indicate that oxytocin is more relevant to alcohol use in males than in females [31,32].
Additional aims of the study: In addition, administration of testosterone has been shown to reduce the perceived trustworthiness of others [33,34], and alcohol intake has been demonstrated to decrease testosterone concentrations [35,36]. However, experimental data are lacking on how alcohol intake influences dihydrotestosterone (DHT) concentrations; DHT is a metabolite of testosterone with a higher affinity for the androgen receptor than testosterone itself [37]. Therefore, we also explored the effects of alcohol on testosterone and DHT concentrations, as well as a potential mediation effect regarding perceived trustworthiness. Finally, we aimed to confirm previous findings on how alcohol influences positive affect and risk-taking, since multiple studies have demonstrated that alcohol administration increases both [2,38-41]; both exert prosocial effects [42-48] and may thus be involved in the alcohol-induced facilitation of social interaction. We did not examine negative affect, since we did not sample depressed participants and potential floor effects would therefore have prevented the detection of an alcohol-related decrease.

Study description

The study was conducted at the Central Institute of Mental Health (CIMH) Mannheim, Germany. The Ethics Committee II of Heidelberg University approved the project (ID: 2021-608), and all participants provided written informed consent and received 50 euros each for their participation. The study, with its primary and secondary endpoints of perceived trustworthiness and oxytocin concentration, was preregistered in the German Clinical Trials Register (DRKS00026599). Of 79 participants recruited via the CIMH website and social media, 76 were analyzed after randomization to the experimental (n = 56) and control (n = 20) conditions. The randomization was based on a single sequence of random numbers. Participants in the experimental and control conditions did not differ significantly in any sociodemographic characteristic (Table 1).

Inclusion criteria were male sex, a minimum age of 18 years, and being a social drinker, defined as regularly consuming alcohol in social contexts with blood alcohol concentrations of approximately 1.5 per mille [49]. To overcome limitations of previous studies, further criteria were defined (for an overview, see Supplementary Appendix SA1). Among others, these included abstaining from drinking more than 0.5 L of fluid, from sexual activity, and from intense physical activity before the assessment on the day of the experiment. Also, subjects were to eat their last meal no later than 3 h before the start of the experiment. During a screening interview prior to the experiment, subjects were instructed to adhere to these guidelines. However, on the day of the experiment, only 41 participants (53.95%) reported actually having complied; we therefore conducted sensitivity analyses.

In a randomized, controlled, within-subject, parallel-group, alcohol-challenge experiment, the study subjects consumed an alcohol (vodka) and orange juice mix in the alcohol condition and a water and orange juice mix in the control condition (for further details, see the study flow diagram in Fig. 1).
The conditions were identical apart from the consumed beverage. The subjects were surveyed between 2 and 4 pm. There were three time points of measurement: perceived trustworthiness, positive affect, and risk-taking were measured at the first and third time points, while salivary oxytocin, testosterone, and DHT, as well as breath alcohol concentration, were measured at all three time points. The mean time span from the start of the first to the end of the third time point was M = 101.64 min. The total mass of consumed liquid was kept equal at 1200 g between the participants of the alcohol and control conditions; between the first and second as well as the second and third time points, participants consumed 600 g of liquid each. Based on previous findings [22], the amount of vodka was calculated individually in grams to theoretically evoke blood alcohol concentrations of 1.5 per mille [50] (for details, see Supplementary Appendix SA2).

Table 1. Sociodemographic characteristics of the study participants in the alcohol and control conditions. The table shows the valid number of subjects analyzed (N), means (M) or relative frequencies (F), standard deviations (SD), and the results of # t, § Welch, and + χ² tests. AUDIT, Alcohol Use Disorders Identification Test; BMI, Body Mass Index. a The reported amount of pure alcohol corresponds to a total liquor mass (g) of M = 274.37 (SD = 29.86) and a total liquor volume (mL) of M = 293.39.

Survey of problematic alcohol consumption and alcohol expectancies

Problematic alcohol consumption was surveyed using the German version of the Alcohol Use Disorders Identification Test (AUDIT) [51]. The extent to which people expect socially beneficial effects from alcohol consumption was captured using a self-created measure that presented the participants with three self-descriptive statements. For each item, the subjects indicated on a five-point Likert scale how much the given statement applied to them, and the individual item responses were summed to a total score (for details, see Supplementary Appendix SA3).

Survey of perceived trustworthiness, positive affect, and risk-taking

The paradigm for assessing perceived trustworthiness was adopted from Theodoridou et al. [19]. The subjects were each presented with the same 30 pictures of people with neutral facial expressions in random order (see https://www.kdef.se/download-2/). For each picture, they indicated on a five-point Likert scale how trustworthy they considered the person depicted, and the individual item responses were summed to a total score. Positive affect was measured using the Positive and Negative Affect Schedule (PANAS) [52] and risk-taking using the Expected Involvement subscale of an adapted and translated version of the Cognitive Appraisal of Risky Events questionnaire (CARE) [53].
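The individualized vodka dose described at the start of this section is detailed only in the paper's Supplementary Appendix SA2, and the formula used is not named. A common way to convert a target blood alcohol concentration (BAC) into a dose is the Widmark relation; the sketch below is therefore purely an assumption shown for illustration, with textbook default parameters rather than the authors' values.

```python
# Hypothetical Widmark-style dose calculation (NOT the authors' procedure).
def vodka_grams(target_permille: float, body_weight_kg: float,
                r: float = 0.68, abv: float = 0.40) -> float:
    """Grams of vodka for a target BAC (per mille), ignoring elimination.

    r: Widmark distribution factor (~0.68 for men); abv: alcohol by volume.
    Pure ethanol mass (g) = target * r * weight; the vodka mass scales this up
    by the ethanol mass fraction (abv * 0.789 g/mL ethanol / ~0.95 g/mL vodka).
    """
    ethanol_g = target_permille * r * body_weight_kg
    ethanol_mass_fraction = abv * 0.789 / 0.95   # approximate
    return ethanol_g / ethanol_mass_fraction

# ~246 g for an 80 kg man, of the same order as the Table 1 mean of 274 g.
print(round(vodka_grams(1.5, 80.0), 1), "g of vodka")
```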
Quantification of blood alcohol and salivary oxytocin, testosterone, and DHT concentrations

The blood alcohol concentration was calculated from the breath alcohol content using the AlcoTrue® M device (5040112002) by bluepoint MEDICAL (Selmsdorf, Germany). Saliva was collected using Salivettes according to the instructions (e.g., abstaining from eating 1 h prior to saliva collection). Two saliva samples of 2 min each were collected at each of the three time points and stored at −20 °C for up to six months. After thawing, the samples were centrifuged at 1000 g for 2 min, and the supernatant was used for hormone quantification. Salivary oxytocin, testosterone, and DHT concentrations were quantified using the Cayman Chemical Oxytocin ELISA kit (500440, Cayman Chemical, Ann Arbor, MI, USA), the Demeditec Diagnostics salivary free testosterone ELISA kit (DES6622, Demeditec Diagnostics GmbH, Kiel, Schleswig-Holstein, Germany), and the Tecan 5-alpha dihydrotestosterone (DHT) ELISA kit (DB52021, IBL International GmbH, a Tecan Group company, Hamburg, Germany), respectively, according to the manufacturers' instructions. For oxytocin, 50 µL were applied in parallel to a standard curve ranging from 6 to 7,500 pg/mL; testosterone was quantified in 100 µL with a standard curve from 5 to 1,000 pg/mL; and for DHT, 50 µL and a standard curve from 12.5 to 2,500 pg/mL were used. All measurements were performed blinded and within one assay run.

Data preparation and statistical analyses

After completion of the data collection, the final dataset contained 79 cases. After removal of all cases that had to be excluded for various reasons (e.g., persons who arrived already intoxicated or persons who indicated that their data should not be used), N = 76 cases (alcohol condition, n = 56; control condition, n = 20) remained in the dataset. Alcohol-induced changes in perceived trustworthiness, positive affect, and risk-taking, as well as in oxytocin, testosterone, and DHT concentrations, were analyzed using two-factorial analyses of covariance (ANCOVA) with the within-subjects factor time (1 versus 3, or 1 versus 2 versus 3), the between-subjects factor group (alcohol versus control condition), and the AUDIT score as a covariate. For models with a significant group × time interaction, paired t-tests were calculated separately for the alcohol and control conditions to compare the first and third time points. Because it was not possible to blind the alcohol administration, correlations between the respective change from the first to the third time point in the alcohol condition and the participants' alcohol expectancies were calculated to control for potential expectancy effects. For models with a non-significant group × time interaction, Bayes factors were calculated to further evaluate the apparent absence of an effect. Structural equation modeling was used to assess the mediation hypothesis. Data were analyzed using RStudio 2021.09.1 Build 372 (Posit PBC, Boston, MA, USA) and visualized using GraphPad Prism 8.4.3 (GraphPad Software Inc., San Diego, CA, USA).

Main aims: no significant alcohol-related changes in perceived trustworthiness or salivary oxytocin concentration

Means and standard deviations for perceived trustworthiness and oxytocin at the three time points within the experimental and control groups are displayed in Table 2.
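The mediation test reported next estimates the indirect effect of the change in blood alcohol concentration (X) on the change in perceived trustworthiness (Y) via the change in oxytocin (M). As a rough analog of the structural equation model the authors fit in R, the sketch below computes the a*b indirect effect with a percentile bootstrap on placeholder data; it does not reproduce the authors' full SEM.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: a = slope of M ~ X; b = slope of Y ~ M controlling for X."""
    a = np.polyfit(x, m, 1)[0]
    Z = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Z, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(3)
n = 56                                   # alcohol-condition sample size
x = rng.normal(size=n)                   # placeholder change scores
m = rng.normal(size=n)
y = rng.normal(size=n)

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)          # resample subjects with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))

print("indirect effect:", indirect_effect(x, m, y))
print("95% percentile CI:", np.percentile(boot, [2.5, 97.5]))
```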
A structural equation model was specified within the alcohol condition that included the indirect effect of blood alcohol concentration on perceived trustworthiness via oxytocin. For each variable, the difference value from the first to the third time point was used. The oxytocin model revealed no significant indirect effect of alcohol concentration on perceived trustworthiness via oxytocin concentration (z = −0.09, p = 0.933, β = −0.001, CI [−0.014, 0.044]).

Group-dependent changes in salivary DHT concentration, without alcohol-related changes in salivary testosterone concentration

Means, standard deviations, and standard errors for salivary testosterone and DHT concentrations at the three time points within the experimental and control groups are displayed in Table 2. Structural equation models were specified within the alcohol condition that included the indirect effect of blood alcohol concentration on perceived trustworthiness via testosterone or DHT. For each variable, the difference value from the first to the third time point was used. The testosterone model revealed no significant indirect effect of alcohol concentration on perceived trustworthiness via testosterone concentration (z = −0.885, p = 0.376, β = −0.028, CI [−0.106, 0.015]). Similarly, the DHT model revealed no significant indirect effect of alcohol concentration on perceived trustworthiness via DHT concentration (z = −0.830, p = 0.407, β = −0.035, CI [−0.156, 0.011]).

Alcohol-related changes in positive affect and risk-taking

Means and standard deviations for positive affect and risk-taking at the first and third time points within the experimental and control groups are displayed in Table 2.

Sensitivity analyses

Since several participants indicated that they had not adhered to the guidelines on the day of the experiment, all analyses were recalculated excluding these subjects. In these sensitivity analyses, the significant findings of the whole sample persisted (for details, see Supplementary Table ST2).
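The Bayes factors used in this study (e.g., BF01 = 3.65 for trustworthiness) quantify evidence for the absence of the group × time effects. As a simple analog, assuming the pingouin package is available, a JZS Bayes factor can be computed for a plain two-group contrast of change scores; the paper's BF01 values come from the full interaction models, so this is illustrative only.

```python
# Illustrative JZS Bayes factor for a two-group contrast (placeholder data).
import numpy as np
import pingouin as pg
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
alcohol = rng.normal(0.0, 1.0, 56)   # e.g., change scores, alcohol condition
control = rng.normal(0.0, 1.0, 20)   # e.g., change scores, control condition

t, p = ttest_ind(alcohol, control)
bf10 = float(pg.bayesfactor_ttest(t, nx=56, ny=20))   # evidence for H1
print(f"t = {t:.2f}, p = {p:.3f}, BF01 = {1.0 / bf10:.2f}")  # BF01 = 1/BF10
```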
Table 2. Means and standard deviations at the different time points in the alcohol and control conditions. EP = endpoints; n = number of participants with available data; M = mean; SD = standard deviation; possible range of instruments: perceived trustworthiness (30-150), positive affect (1-5), and risk-taking. For the analysis of the oxytocin concentration and of the testosterone and DHT concentrations, three or one additional subject(s), respectively, were excluded because of missing values for at least one of the three time points.

Discussion

The present study examined the effects of alcohol administration on perceived trustworthiness and oxytocin concentration (primary and secondary endpoints), as well as on testosterone and DHT concentrations, positive affect, and risk-taking (additional endpoints). We were not able to verify our main hypothesis that alcohol (versus water) increases perceived trustworthiness via modulation of oxytocin. However, we detected an overall decrease in perceived trustworthiness over time in both the alcohol and the control condition. This might be explained by boredom that worsened the subjects' stimulus-related attitudes after repeated presentation of the same stimuli [54]. We are also tempted to speculate that the study setting might have produced feelings of competition and stress in the participants that could have deteriorated the trustworthiness ratings. Together with the results of the Bayesian analysis, this finding suggests that alcohol consumption does not facilitate social interaction through an increase in trust in others. There were also no indications of methodological artifacts such as a ceiling effect.

An alternative explanation for the null finding emerges when one considers that alcohol has a stronger positive-euphoric effect in naturalistic-social than in artificially isolated drinking contexts [55]. The alcohol effect on perceived trustworthiness might manifest particularly in naturalistic-social drinking contexts, whereas in the present study the participants consumed alcohol in an artificially isolated laboratory setting. In the trustworthiness task, pictures of individuals with neutral facial expressions were presented, and the participants were asked to judge them without interaction. Future research is therefore needed to translate this project into a more real-life experiment (e.g., through virtual reality [56] or ecological momentary assessments [57]).

Oxytocin concentrations also did not increase significantly in the alcohol versus the control condition. The absence of alcohol-induced effects was supported by the results of the Bayesian analyses, which provided substantial evidence for the lack of alcohol-related changes in oxytocin. In the present study, we addressed a population of social drinkers and quantified oxytocin in saliva samples, in contrast to the patients with AUD and the blood serum sampling of oxytocin in the Lenz et al. investigation [22], which might explain the discrepancy concerning oxytocin. Moreover, dysregulated concentrations of the soluble blood oxytocin receptor have recently been reported in patients with AUD [58]; it will therefore be interesting to include this receptor in future studies on the effects of alcohol on the oxytocin system.
Similar to oxytocin, testosterone concentrations did not change significantly in the alcohol versus the control condition, which was again supported by the results of the Bayesian analyses. For both oxytocin and testosterone, the lack of alcohol-related effects might, among other things, be due to the alcohol concentration induced in the present study (M = 1.07 per mille). Regarding oxytocin, previous work suggests that alcohol-induced increases might occur only at higher alcohol concentrations (well above 1 per mille [22]), indicating that the alcohol concentrations in the present study might have been too low to induce changes in oxytocin. Regarding testosterone, recent studies indicate that while low to moderate amounts of alcohol increase testosterone, high amounts can be associated with decreased testosterone concentrations [59]. Given these findings, the present study might have failed to demonstrate an alcohol-induced change in testosterone because the induced alcohol concentration fell between these opposing effects.

Interestingly, DHT did not change over time in the alcohol condition, while it increased in the control condition. We are tempted to speculate that the study setting might have produced a feeling of competition in the participants, which is supported by the observed reduction in perceived trustworthiness over time and which can increase androgen concentrations [60,61]. The results of this study suggest that such environmental effects on androgens might be inhibited by acute alcohol consumption. Future studies should implement alcohol challenges in other study settings to validate these assumptions. It should also be examined whether the potential alcohol effects are due to alcohol-related decreases in DHT, as might be suggested by various non-experimental findings [62,63].

This study verified our predefined hypotheses that alcohol increases positive affect and risk-taking. The results agree with the findings of McCollam et al. [40] and Lane et al. [41]. With the PANAS and the CARE, the present study used alternative instruments and thereby added external validity to the earlier findings.
Strengths and limitations

The present study eliminated several limitations of previous studies on the effect of alcohol consumption on oxytocin concentration: we provided a larger sample size and controlled for food and water intake as well as for sexual and intense physical activity prior to the experiment. Even though several subjects did not adhere to the guidelines on the day of the experiment, the results persisted when these subjects were excluded. To our knowledge, this study was the first experiment to investigate the effects of a mean blood alcohol concentration above one per mille on perceived trustworthiness and oxytocin. However, the overall findings are limited to the range of blood alcohol concentrations evoked, which lay between 0.6 and 1.48 per mille (M = 1.07). Since blood alcohol concentrations were not determined from plasma samples but inferred from breath alcohol content, the findings are also limited in this regard. Owing to the specific taste of alcohol and its typical physiological effects after high doses, this study could not be blinded; however, additional analyses suggested that the findings regarding positive affect and risk-taking were not due to expectancy effects. For the reasons explained earlier, the experiment was limited to males; future studies are needed to investigate whether alcohol challenges influence perceived trustworthiness, oxytocin, testosterone, and DHT in females. The same applies to people of older age, since the current sample was rather young (M = 23.51 years). It should also be examined whether the present results transfer to plasma oxytocin measurements, since salivary oxytocin might be a weak surrogate for plasma oxytocin [64]. Moreover, it remains to be shown whether measures of trust in others other than self-reports (e.g., economic games [18]) are also unresponsive to alcohol consumption.

Conclusion

The present randomized, controlled, within-subject, parallel-group, alcohol-challenge experiment contributes to a better understanding of how alcohol may facilitate social interaction. A blood alcohol concentration of 1.07 per mille increased positive affect and risk-taking but did not significantly influence the perceived trustworthiness of others or oxytocin and testosterone concentrations. To our knowledge, this study is the first to suggest that alcohol may inhibit environmentally induced DHT increases. The results provide further insight into the role of social facilitation as an alcohol-drinking motive, which might contribute to the development of problematic alcohol use.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Fig. 2 Group × time interaction on positive affect (A) and risk-taking (B). The alcohol condition versus control condition × time interactions were qualified by a significant increase in positive affect and risk-taking in the alcohol condition. PANAS, Positive and Negative Affect Schedule; CARE, Cognitive Appraisal of Risky Events
Chondroprotective Effects of 4,5-Dicaffeoylquinic Acid in Osteoarthritis through NF-κB Signaling Inhibition

Osteoarthritis (OA) is characterized by cartilage degradation, inflammation, and pain. The dicaffeoylquinic acid (diCQA) isomer, 4,5-diCQA, exhibits antioxidant activity and various other health-promoting benefits, but its chondroprotective effects have yet to be elucidated. In this study, we aimed to investigate the chondroprotective effects of 4,5-diCQA on OA both in vitro and in vivo. Primary rat chondrocytes were pre-treated with 4,5-diCQA for 1 h before stimulation with interleukin (IL)-1β (5 ng/mL). The accumulation of nitrite, PGE2, and aggrecan was observed using the Griess reagent and ELISA. The protein levels of iNOS, COX-2, MMP-3, MMP-13, ADAMTS-4, MAPKs, and the NF-κB p65 subunit were measured by Western blotting. In vivo, the effects of 4,5-diCQA were evaluated for 2 weeks in a destabilization of the medial meniscus (DMM)-surgery-induced OA rat model. 4,5-diCQA significantly inhibited the IL-1β-induced expression of nitrite, iNOS, PGE2, COX-2, MMP-3, MMP-13, and ADAMTS-4. 4,5-diCQA also decreased the IL-1β-induced degradation of aggrecan, and it suppressed the IL-1β-induced phosphorylation of MAPKs and the translocation of the NF-κB p65 subunit to the nucleus. These findings indicate that 4,5-diCQA inhibits DMM-surgery-induced cartilage destruction and proteoglycan loss in vivo. 4,5-diCQA may be a potential therapeutic agent for the alleviation of OA progression. Although diclofenac was administered only once every two days in this study, it still showed an effect on OA; these results may serve as basic data for proposing a new dosing regimen for diclofenac.

Introduction

Osteoarthritis (OA) occurs due to cartilage wear associated with the long-term use of joints and is present in people over the age of 60 years [1]. OA is characterized by subchondral bone remodeling and osteophytes, as cartilage is damaged by abrasion and inflammation, ultimately interfering with the overall quality of life [2]. From a molecular biology perspective, OA is caused by an increase in extracellular matrix (ECM) degradation in chondrocytes by cartilage-destroying factors such as oxidative stress and the overexpression of inflammatory mediators. Among them, interleukin 1 beta (IL-1β) and tumor necrosis factor-alpha (TNF-α) are the main factors that accelerate degenerative arthritis by inducing the expression of other cartilage ECM-degrading factors (iNOS, PGE2, MMPs, and ADAMTS-4) [3].

Cell Culture

To isolate chondrocytes, we used a slightly modified method of Kim et al. (2013) [29]. In brief, five-day-old Sprague-Dawley rats were purchased from Damool Science (Daejeon, Korea) for rat primary chondrocyte isolation, and articular cartilage was digested using 0.3% (w/v) collagenase type II dissolved in DMEM/F12 at 37 °C overnight. All animal management and procedures were approved by the Chosun University Institutional Animal Care and Use Committee (CIACUC2021-S0013). The cells and debris were filtered through a cell strainer (0.45 µm). Approximately 4.5 million chondrocytes were collected from eleven rats killed at the same time. Chondrocytes were seeded at 2 × 10⁶ cells/mL into 6-well cell culture plates with DMEM/F12 containing 10% FBS and 1% penicillin/streptomycin in a humidified incubator with 5% CO2 at 37 °C. The chondrocytes were cultured up to 90% confluency and were not passaged during the experiment.
Cell Viability

The viability of chondrocytes exposed to 4,5-diCQA was assessed using the MTT assay following the Sigma-Aldrich manufacturer's protocol. The chondrocytes were treated with 4,5-diCQA (10, 20, 40, 100, and 200 µM) for 24 h. Post-incubation, the MTT solution (5 mg/mL) was added to each well (100 µL/well), and the cells were incubated for another 2 h at 37 °C. Next, the cell culture medium, including the MTT solution, was removed, DMSO (1 mL/well) was added to each well, and the absorbance was measured at 565 nm.

Measurement of Nitric Oxide and PGE2 Production

The chondrocytes were pretreated with 4,5-diCQA (10, 20, and 40 µM) for 1 h and then stimulated with IL-1β (5 ng/mL) for 24 h without removing 4,5-diCQA. Nitric oxide production was determined by measuring the accumulation of nitrite in the culture medium. In brief, the culture medium (100 µL) was mixed with 100 µL of the Griess reagent (1% sulfanilamide in 5% phosphoric acid and 0.1% α-naphthylamide in H2O), and the absorbance was measured at 540 nm using a microplate reader (Epoch, BioTek Instruments Inc., Winooski, VT, USA). PGE2 production was measured using a Parameter™ prostaglandin E2 assay kit, according to the manufacturer's protocol.

Western Blotting Analysis

Chondrocytes were pretreated with 4,5-diCQA (10, 20, and 40 µM) for 1 h and then stimulated with IL-1β (5 ng/mL) for 30 min or 24 h without removing 4,5-diCQA. The cells were then washed with 1× phosphate-buffered saline (PBS) and lysed on ice for 30 min with PRO-PREP protein extraction solution (iNtRON Biotechnology, Seongnam-si, Korea) to isolate whole protein. To isolate cytoplasmic and nuclear proteins, NE-PER™ Nuclear and Cytoplasmic Extraction Reagents (Thermo Fisher Scientific, Waltham, MA, USA) were used according to the manufacturer's instructions. Additionally, articular cartilage was cut from the explant organ using a blade, and the articular cartilage slices were extracted with PRO-PREP protein extraction solution to harvest the protein. The cartilage pieces were homogenized in lysis buffer, incubated on ice for 30 min, and centrifuged at 14,000× g at 4 °C for 15 min. Protein concentrations were determined using a bicinchoninic acid (BCA) protein assay kit (Pierce, Rockford, IL, USA). Equivalent amounts of lysate protein (10 or 20 µg) were separated on 6, 8, 10, or 12% sodium dodecyl sulfate polyacrylamide (SDS) gels and then transferred to a polyvinylidene difluoride membrane (Bio-Rad Laboratories, Hercules, CA, USA). The blotted membranes were blocked with 5% bovine serum albumin in Tris-buffered saline containing 0.1% Tween 20 (TBST) at room temperature for 1 h and then incubated with primary antibodies (1:1000) at 4 °C overnight. The membranes were rinsed three times with TBST and incubated with horseradish peroxidase (HRP)-conjugated secondary antibody (1:10,000) at 25 °C for 1 h. The immunoreactive bands were detected with an enhanced chemiluminescence (ECL) kit (Millipore, Bedford, MA, USA) and visualized using a MicroChemi 4.2 imager (DNR Bioimaging Systems, Jerusalem, Israel).

Aggrecan ELISA Analysis

The chondrocytes were pretreated with 4,5-diCQA (10, 20, and 40 µM) for 1 h and then stimulated with IL-1β (5 ng/mL) for 24 h without removing 4,5-diCQA. The culture medium was collected, and aggrecan content was assessed using aggrecan ELISA kits. All assays were performed in duplicate.
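Converting the A540 readout of the Griess reaction into a nitrite concentration requires a sodium nitrite standard curve, a step the text takes for granted. Below is a minimal Python sketch with illustrative standard-curve values; these are not the paper's calibration data.

```python
import numpy as np

# Illustrative sodium nitrite standard curve: concentration (µM) vs. A540.
std_conc = np.array([0.0, 6.25, 12.5, 25.0, 50.0, 100.0])
std_a540 = np.array([0.05, 0.09, 0.14, 0.23, 0.42, 0.80])

slope, intercept = np.polyfit(std_conc, std_a540, 1)  # linear calibration fit

def nitrite_uM(a540: float) -> float:
    """Invert the calibration line to get nitrite concentration in µM."""
    return (a540 - intercept) / slope

print(round(nitrite_uM(0.31), 1))  # ~35 µM on this illustrative curve
```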
Animals

Male, specific-pathogen-free Sprague-Dawley rats (8 weeks old), weighing 300 g, with four in each group for a total of twenty-eight, were purchased from Damool Science (Daejeon, Korea). The animals were kept in a controlled environment (temperature: 21 ± 1 °C; humidity: 55 ± 5%; 12-h light/dark cycle) and were allowed free access to commercial feed. All animals were handled according to the procedures of the National Institutes of Health Guide for the Care and Use of Laboratory Animals [30]. The surgical model of destabilization of the medial meniscus (DMM) was established in Sprague-Dawley rats to induce OA. Prior to the experiment, the rats were acclimatized to the environment for 1 week. Approval to perform the DMM surgery was provided by the Chosun University Institutional Animal Care and Use Committee (CIACUC2021-S0015).

DMM-Induced OA Model in Rats

The rats were divided into seven groups of four rats each: Group 1 (normal), Group 2 (sham, 0.9% saline), Group 3 (DMM, 0.9% saline), and Groups 4-7 (5, 10, and 20 mg/kg 4,5-diCQA, and 10 mg/kg diclofenac, respectively). DMM surgery was performed by incising the medial meniscotibial ligament (MMTL) above the right and left knees to induce OA [18,31]. For this operation, the rats were anesthetized with 2.5% isoflurane, and then the MMTL was incised. In the sham group, only the skin was incised and the MMTL was left intact. Two weeks after DMM surgery, the 4,5-diCQA groups were orally administered 4,5-diCQA (5, 10, or 20 mg/kg) and the diclofenac group was orally administered diclofenac (10 mg/kg). 4,5-diCQA was dissolved in DMSO at a high concentration and then diluted with saline to the target concentration. Diclofenac was dissolved in saline. The drugs were orally administered once every 2 days for 2 weeks. The sham and DMM-only groups were orally administered saline for the same period. Subsequently, all the rats were sacrificed on the same day.

Histology Analysis and Staining

Extracted articular cartilage was fixed in 10% neutral-buffered formalin for one day at 4 °C and then decalcified with 0.5 M EDTA (pH 7.4) for seven days at 4 °C. Following these steps, the articular cartilage was dehydrated through a series of ethanol solutions and embedded in paraffin blocks. Lateral serial sections of 4 µm thickness were then sliced and stained with Safranin O/Fast Green. An EVOS Core microscope (Thermo Fisher Scientific, Waltham, MA, USA) was used to digitally photograph the stained sections. The stained sections were scored according to the Osteoarthritis Research Society International (OARSI) advanced Osteoarthritis Cartilage Histopathology Assessment System (0-6.5), and a summed OARSI score was used to analyze the degree of articular cartilage destruction [32].

Statistical Analysis

All data were obtained from independent experiments. The results are expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) with Dunnett's test was employed for multiple comparisons using GraphPad Prism 5.0 software (GraphPad Software Inc., San Diego, CA, USA). Statistical significance was set at ### p < 0.005 compared with the control group, and * p < 0.5, ** p < 0.05, *** p < 0.005 compared with the IL-1β-treated group.
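The ANOVA-plus-Dunnett procedure described above was run in GraphPad Prism; as a hedged illustration, the same comparison can be scripted in Python (SciPy ≥ 1.11 ships stats.dunnett). The nitrite readouts below are invented, not the paper's data.

```python
import numpy as np
from scipy import stats

# Invented nitrite readouts (µM), three replicates per group.
untreated = np.array([4.1, 3.8, 4.5])    # unstimulated chondrocytes
il1b      = np.array([18.2, 19.5, 17.8]) # IL-1β (5 ng/mL) only
dicqa40   = np.array([7.9, 8.4, 7.1])    # IL-1β + 40 µM 4,5-diCQA

# Global one-way ANOVA across the three groups
f_stat, p_global = stats.f_oneway(untreated, il1b, dicqa40)

# Dunnett's test: each group compared against the IL-1β-treated reference
res = stats.dunnett(untreated, dicqa40, control=il1b)
print(f_stat, p_global, res.pvalue)  # one Dunnett p-value per comparison
```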
Effects of 4,5-diCQA on IL-1β-Induced Nitrite and PGE2 Expression in Rat Primary Chondrocytes

Inflammation is the main cause of OA exacerbation. Therefore, the expression levels of nitrite and PGE2 were first examined in the supernatant of IL-1β-treated rat primary chondrocytes. First, the rat primary chondrocytes were pre-treated with 4,5-diCQA (10, 20, and 40 µM) for 1 h and then treated with IL-1β (5 ng/mL) for 24 h. In the group subjected to IL-1β treatment only, the expression levels of nitrite and PGE2 were significantly increased (Figure 2A,B). However, in the group pre-treated with 4,5-diCQA, the expression levels of nitrite and PGE2 decreased in a concentration-dependent manner, even when treated with IL-1β. In addition, the levels of inflammatory mediators and inflammatory cytokines, such as iNOS, COX-2, and TNF-α, were increased only in IL-1β-treated rat primary chondrocytes, but not in the group pre-treated with 4,5-diCQA (Figure 2C). These findings indicate that 4,5-diCQA has potential anti-inflammatory effects by suppressing the IL-1β-induced inflammatory response.

Figure 1 (cell viability; caption beginning truncated): ... (10, 20, 40, 100, and 200 µM) for 24 h, and viability was determined by MTT assay. Cells incubated without 4,5-diCQA were used as controls and were considered 100% viable. Data are represented as mean ± SD of three independent experiments.
Figure 2. Inhibitory effects of 4,5-diCQA on IL-1β-induced nitrite, PGE2, iNOS, COX-2, and TNF-α in rat primary chondrocytes. Cells were pre-treated with 4,5-diCQA (10, 20, and 40 µM) for 1 h, followed by IL-1β (5 ng/mL) stimulation for 24 h. (A) Nitrite production was determined in the cultured medium using the Griess reagent. (B) PGE2 production was determined in the cultured medium using an ELISA kit after 24 h. (C) Expression of iNOS, COX-2, and TNF-α was determined using Western blot analysis. α-Tubulin served as an internal control. (D-F) Quantitative data of (C) were analyzed using ImageJ (bundled with Java 1.8.0_172). n = 5 per group. Data are represented as mean ± SD of three independent experiments. ### p < 0.005 vs. control group; ** p < 0.05 and *** p < 0.005 compared with the IL-1β-treated group.

Effects of 4,5-diCQA on IL-1β-Induced Expression of Matrix-Degrading Enzymes in Rat Primary Chondrocytes

Inflammatory mediators such as nitric oxide and PGE2 promote the secretion of matrix-degrading enzymes such as MMPs and ADAMTS-4. MMPs and ADAMTS-4 are enzymes that degrade aggrecan (ACAN), and ECM degradation is a prominent feature of OA. Therefore, the efficacy of 4,5-diCQA was examined in IL-1β-treated rat primary chondrocytes by evaluating the expression levels of MMP-1, -3, -13, and ADAMTS-4. The expression levels of MMP-1, MMP-3, MMP-13, and ADAMTS-4 increased in the group treated with IL-1β alone. However, the expression levels of these enzymes were significantly reduced in the group pre-treated with 4,5-diCQA before IL-1β (Figure 3A,B).
Moreover, MMP expression levels were increased in the IL-1β-only group in gelatin zymography using the supernatant of the culture medium (Figure 3C). These results indicate that 4,5-diCQA inhibited cartilage-degrading enzymes under IL-1β treatment.

Figure 3 (caption; beginning truncated): ... were determined using Western blot analysis. α-Tubulin served as an internal control. (B-D) Quantitative data of (A) were analyzed using ImageJ (bundled with Java 1.8.0_172). (F) Quantitative data of (E) were analyzed using ImageJ (bundled with Java 1.8.0_172). (G) Activity of MMPs was measured in conditioned medium using gelatin zymography. Data are represented as mean ± SD of three independent experiments. n = 5 per group. ### p < 0.005 vs. control group; ** p < 0.05 and *** p < 0.005 compared with the IL-1β-treated group.

Effects of 4,5-diCQA on IL-1β-Induced ACAN Degradation in Rat Primary Chondrocytes

ACAN is a component of the cartilage ECM, but with IL-1β treatment, degradation of ACAN is promoted by inflammation. This experiment was performed to confirm whether 4,5-diCQA prevents the degradation of ACAN in rat primary chondrocytes. The expression level of ACAN was measured by ELISA using the supernatant of the cultured media and by Western blot using the cell lysates. The expression level of ACAN was decreased in the group treated only with IL-1β, but it was significantly increased in the group pre-treated with 4,5-diCQA (Figure 4). These findings suggest that 4,5-diCQA has a potential chondroprotective effect by suppressing the degradation of ACAN in the IL-1β-treated group.
Figure 4 (caption; beginning truncated): ... (B) Protein levels of ACAN were determined using Western blot analysis. α-Tubulin served as an internal control. (C) Quantitative data of (B) were analyzed using ImageJ (bundled with Java 1.8.0_172). n = 5 per group. ANOVA and Dunnett tests were used to evaluate the significance of the results. Data are represented as mean ± SD of three independent experiments. ## p < 0.05 and ### p < 0.005 vs. control group; ** p < 0.05 compared with the IL-1β-treated group.

Effects of 4,5-diCQA on the NF-κB Signaling Pathway in IL-1β-Treated Rat Primary Chondrocytes

NF-κB is an important transcription factor that regulates the transcription of cartilage-degrading enzymes, such as MMPs and the ADAMTS family, and of inflammatory mediators. Therefore, NF-κB activity in IL-1β-treated rat primary chondrocytes was examined to determine the efficacy of 4,5-diCQA. With 30 min of IL-1β treatment, the NF-κB p65 subunit translocated from the cytoplasm to the nucleus, and its expression level increased (Figure 5A). The phosphorylation and degradation of IκB occurred simultaneously in the cytoplasm. However, in the group pre-treated with 4,5-diCQA for 1 h before IL-1β, the phosphorylation and degradation of IκB were suppressed, and the translocation of the NF-κB p65 subunit from the cytoplasm to the nucleus was also suppressed (Figure 5). These findings suggest that the transcriptional activity of NF-κB is regulated by the chondroprotective effects of 4,5-diCQA.

Figure 5 (caption): Cells were pre-treated with 4,5-diCQA (10, 20, and 40 µM) for 1 h, followed by IL-1β (5 ng/mL) stimulation for 1 h. (A) Phosphorylation levels of IκB-α and NF-κB p65 translocation to the nucleus were determined using Western blot analysis. α-Tubulin and PCNA were used as cytosolic and nuclear internal controls, respectively. (B,C) Quantitative data of (A) were analyzed using ImageJ (bundled with Java 1.8.0_172). (E) Quantitative data of (D) were analyzed using ImageJ (bundled with Java 1.8.0_172). n = 5 per group. Data are represented as mean ± SD of three independent experiments. ## p < 0.05 and ### p < 0.005 vs. control group; * p < 0.5, ** p < 0.05, and *** p < 0.005 compared with the IL-1β-treated group.
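The figure captions above repeatedly reference ImageJ quantification normalized to α-tubulin (or PCNA); the underlying arithmetic is a ratio of ratios. A minimal sketch with invented band intensities, not the paper's measurements:

```python
# Hypothetical ImageJ band intensities (arbitrary units); not the paper's data.
target  = {"control": 1200, "IL-1b": 5400, "IL-1b + diCQA 40": 2100}   # protein of interest
loading = {"control": 9800, "IL-1b": 9650, "IL-1b + diCQA 40": 9900}   # α-tubulin control

norm = {k: target[k] / loading[k] for k in target}                 # loading-corrected
fold = {k: round(v / norm["control"], 2) for k, v in norm.items()} # fold change vs. control
print(fold)  # {'control': 1.0, 'IL-1b': 4.57, 'IL-1b + diCQA 40': 1.73}
```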
Effects of 4,5-diCQA Administration on Macroscopic and Histological Parameters in the Articular Cartilage of the Rat OA Model

DMM is a widely used surgical technique in which the medial meniscus is incised; the model has a pathology similar to that of human OA. The effects of 4,5-diCQA on the DMM-altered OA cartilage structure were determined by Western blot, confirming the MMP families, and by Safranin O/Fast Green staining scored with the OARSI system (Table 1) [32]. The expression of MMP-1, -3, and -13 significantly increased in DMM-only-induced rats, but the expression levels were reduced in DMM-induced rats orally administered 4,5-diCQA. 4,5-diCQA markedly suppressed the protein expression of DMM-induced MMP-1, -3, and -13, which coincides with the in vitro results (Figure 6A,B). Cartilage damage was observed only in the OA group, whereas it was not observed in DMM rats that were orally administered 4,5-diCQA (Figure 6C). The DMM-induced OA group had an OARSI score of 16 ± 0.57, which indicated cartilage destruction and erosion. However, in the 5, 10, and 20 mg/kg 4,5-diCQA and 10 mg/kg diclofenac-treated OA groups, the OARSI scores were 10 ± 0.76, 5 ± 0.57, 4 ± 0.57, and 4 ± 0.57, respectively, indicating a significant reduction in cartilage destruction (Figure 6D). These results suggest that 4,5-diCQA alleviates OA in vivo.
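For readers who want the effect sizes implied by these OARSI scores, the percent reduction relative to the DMM-only group is simple arithmetic; the group means below are taken from the text above.

```python
# Mean summed OARSI scores reported in the text.
oarsi_mean = {"DMM": 16, "diCQA 5 mg/kg": 10, "diCQA 10 mg/kg": 5,
              "diCQA 20 mg/kg": 4, "diclofenac 10 mg/kg": 4}

baseline = oarsi_mean["DMM"]
reduction = {grp: 100 * (baseline - score) / baseline
             for grp, score in oarsi_mean.items() if grp != "DMM"}
print(reduction)  # 37.5 %, 68.75 %, 75 %, and 75 % lower than DMM alone
```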
Table 1 (OARSI cartilage histopathology grades, fragment):
Grade 3: erosion, with matrix cracks extending into the cartilage, over <25% of the articular surface
Grade 4: erosion over 25-50% of the articular surface
Grade 5: erosion over 50-75% of the articular surface
Grade 6: erosion over >75% of the articular surface

Discussion

Osteoarthritis, a debilitating degenerative joint disease found primarily in people over the age of 60 years, is a major cause of disability that increases medical costs and reduces the quality of life [33]. The pathogenic mechanism of OA has not been elucidated to date, but accumulated research indicates that inflammation plays a vital role in the initiation and development of OA [33-35]. The imbalance in chondrocyte metabolism due to inflammation causes an overall shift toward catabolism over anabolism by excessively increasing the expression of inflammatory cytokines and substrate-degrading enzymes, eventually leading to apoptosis and cartilage destruction [36,37]. Therefore, research on the mechanisms protecting chondrocytes may be a strategy to delay or ameliorate the development of OA, and plant-derived components with fewer side effects and excellent pharmacological effects are attracting attention as ideal drugs for OA [38-40]. In our previous study, A. sylvestris leaf water extract (AELAS) significantly suppressed the IL-1β-induced expression of OA catabolic factors (nitric oxide, iNOS, COX-2, PGE2, MMP-3, -13, and ADAMTS-4) and the degradation of ACAN, collagen type II, and proteoglycan in rat primary chondrocytes [17]. In addition, AELAS inhibited DMM-surgery-induced cartilage destruction and proteoglycan loss [17]. Component analysis to identify the active ingredients of AELAS confirmed that a large number of CQA-derived ingredients were present. Therefore, this study suggests that 4,5-diCQA exhibits beneficial chondroprotective effects by suppressing the expression of pathological factors that influence OA, including the degradation of the articular ECM, nitrosative damage, and the expression of proinflammatory cytokines and mediators, via the NF-κB signaling pathway in vitro and in vivo. IL-1β is a potent catabolic factor involved in the pathogenesis of OA. It induces the expression of other OA catabolic factors, such as iNOS, nitric oxide, COX-2, PGE2, TNF-α, MMPs, and ADAMTSs, which relate to chondrocyte dysfunction and ultimately accelerate the initiation and progression of ECM degradation in chondrocytes [35]. In particular, nitric oxide and PGE2, which are highly expressed in OA patients, are early mediators of inflammation and inhibit the synthesis of collagen type II by inducing the expression of other catabolic factors [41,42].
Therefore, the inhibition of IL-1β-induced inflammatory mediators (iNOS, nitric oxide, COX-2, and PGE2) can alleviate OA pathogenesis and reduce pain, inflammation, and proteoglycan loss [35,43]. In this study, the expression of nitric oxide, PGE2, iNOS, and COX-2 increased upon IL-1β treatment; however, pre-treatment with 4,5-diCQA alleviated this effect. These results are in line with those of previous studies on the amelioration of OA; Liu et al. reported that treatment with the CQA-rich fraction of Periploca forrestii (CQAF) significantly blocked the IL-1β-induced expression of nitric oxide, PGE2, COX-2, and iNOS in MH7A cells (a human rheumatoid arthritis synovial cell line) [44]. MMPs are a family of proteinases that contribute to the degradation of collagen type II and proteoglycan, and elevated expression of MMP-13 is characteristic of OA chondrocytes [45,46]. ADAMTS is also deeply involved in OA pathogenesis, and ADAMTS-4 is considered the primary aggrecanase [44]. In this study, we noted that IL-1β treatment increased the expression and activity of MMP-1, MMP-3, MMP-13, and ADAMTS-4. In addition, IL-1β treatment induced the degradation of ACAN. However, pretreatment with 4,5-diCQA inhibited the degradation of ACAN by suppressing the IL-1β-induced activity of MMP-1, -3, and -13 and ADAMTS-4. Effects similar to those of 4,5-diCQA were observed in previous studies on the anti-arthritic effects of natural products: Tran et al. reported that avenanthramide-C (Avn-C) isolated from oats suppressed the IL-1β-induced expression of MMP-3, -12, and -13 in mouse articular chondrocytes, and Feng et al. reported that oleuropein inhibited the IL-1β-induced expression of inflammatory mediators (nitric oxide, PGE2, COX-2, and iNOS) and ECM proteinases (MMP-1, MMP-3, MMP-13, and ADAMTS-5) [23,43]. These results clearly suggest that 4,5-diCQA has a chondroprotective effect against IL-1β-related induction and the development of OA.

The MAPK and NF-κB pathways are critical in certain chronic inflammatory diseases, such as OA [17,47-49]. The MAPK family includes various extracellular signal-regulated kinases, of which ERK modulates chondrocyte proliferation and gene expression, while p38 and JNK affect the inflammation and destruction of articular cartilage [50]. Phosphorylated MAPKs (p-ERK1/2, p-JNK, and p-p38) are related to MMP expression and cartilage degradation [50]. The NF-κB pathway is modulated by MAPK phosphorylation and is involved in the regulation of inflammatory mediators as well as OA progression [51,52]. Normally, NF-κB is localized to the cytoplasm with its inhibitor subunit IκB-α, but IL-1β induces the phosphorylation and degradation of IκB, which results in the translocation of the NF-κB p65 subunit into the nucleus and the induction of inflammatory mediators [51,53,54]. Therefore, repression of these pathways is crucial in suppressing inflammation, and several studies have reported that certain plants and naturally derived compounds, such as Caragana sinica root extract, Punica granatum extract, oleuropein, and CQAF, show anti-arthritic activity by regulating the MAPK and NF-κB pathways [34,51,52,55]. In this study, 4,5-diCQA treatment inhibited the IL-1β-induced phosphorylation of MAPKs (ERK1/2, JNK, and p38), the degradation of IκB-α, and the translocation of the NF-κB p65 subunit. These results suggest that the chondroprotective effect of 4,5-diCQA may be mediated by the MAPK and NF-κB signaling pathways.
In vivo, we established a rat OA model through surgical destabilization of the medial meniscus (DMM) to evaluate the protective effect of 4,5-diCQA on cartilage degradation. The DMM model is induced by dissection and degeneration of the meniscus, and it is widely used to evaluate drug efficacy because it resembles the development of osteoarthritis associated with human aging [56]. Furthermore, histological staining provides specific information about the pathological condition of articular cartilage, such as changes in chondrocytes and matrix components, and is therefore commonly used to evaluate the improvement of arthritis [57]. After oral administration of 4,5-diCQA once every 2 days for 2 weeks, the severity of cartilage degradation in the DMM-induced OA model was alleviated through suppressed catabolic activity and reduced chondrocyte damage, a result consistent with the OARSI score. Diclofenac, used as a positive control, is usually administered two or three times a day, yet it was shown to be effective for OA even when administered once every two days; this can be presented as basic data for a new dosing regimen of diclofenac. Further evidence of the efficacy of this diclofenac regimen for OA can therefore be expected from additional pharmacological experiments in the future.

Conclusions

In conclusion, pre-treatment with 4,5-diCQA effectively inhibits the expression of IL-1β-induced inflammatory factors (nitric oxide, PGE2, iNOS, and COX-2) and cartilage-degrading enzymes (MMP-1, -3, -13, and ADAMTS-4). Furthermore, 4,5-diCQA protects ACAN, a component of the chondrocyte ECM, from degradation due to IL-1β treatment and DMM surgery. These results suggest that 4,5-diCQA is the active ingredient of AELAS, with OA-ameliorating and chondroprotective effects. Like 4,5-diCQA, diclofenac is considered potentially effective for OA even when given to rats at a dose of 10 mg/kg once every 2 days for 2 weeks.

Institutional Review Board Statement: The study was conducted according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and approved by the Chosun University Institutional Animal Care and Use Committee with approval numbers CIACUC2021-S0013 and CIACUC2021-S0015.
Recurrence of Graves' Disease: What Genetics of HLA and PTPN22 Can Tell Us

Background: Approximately half of patients diagnosed with Graves' disease (GD) relapse within two years of thyreostatic drug withdrawal. It is then necessary to decide whether to reintroduce conservative treatment, which can have serious side effects, or to choose a radical approach. Familial forms of GD indicate a significant genetic component. Our aim was to evaluate the practical benefits of HLA and PTPN22 genetic testing for the assessment of disease recurrence risk in the Czech population. Methods: In 206 patients with GD, exon 2 of the HLA genes DRB1, DQA1, and DQB1 and rs2476601 in the gene PTPN22 were sequenced. Results: The risk HLA haplotype DRB1*03-DQA1*05-DQB1*02 was more frequent in our GD patients than in the general European population. During long-term retrospective follow-up (a many-year to lifelong perspective), 87 patients relapsed and 26 achieved remission lasting over 2 years, indicating a 23% success rate for conservative treatment of the disease. In 93 people, the success of conservative treatment could not be evaluated (thyroidectomy immediately after the first attack or ongoing antithyroid therapy). Of the examined genes, the HLA-DQA1*05 variant reached statistical significance in terms of the ability to predict relapse (p=0.03). Combinations with either both other HLA risk genes forming the risk haplotype DRB1*03-DQA1*05-DQB1*02 or with the PTPN22 SNP did not improve the predictive value. Conclusion: The DQA1*05 variant may be a useful prognostic marker in patients with an unclear choice of treatment strategy.

INTRODUCTION

Graves' disease (GD) is the most common cause of hyperthyroidism, affecting approximately 0.5% of men and 3% of women. In Europe, the first-choice treatment is most often the administration of antithyroid drugs (1, 2). However, approximately half of patients relapse within two years of drug withdrawal. It is then necessary to decide whether to reintroduce thyreostatic therapy, which may have serious side effects, or to choose a radical approach: total thyroidectomy (TTE) or radioiodine treatment. A significant genetic component is evident from the familial occurrence of the disease; studies on twin pairs have shown that the contribution of genetic factors can be as high as 70-80% (3). The autoimmune nature of GD has been associated with the human leukocyte antigen complex (HLA) as well as with the gene PTPN22 (protein tyrosine phosphatase, non-receptor type 22) on chromosome 1p13.3-13.1, encoding protein tyrosine phosphatase-22, a powerful inhibitor of T-cell activation. Within the HLA, the DRB1*03, DQA1*05, and DQB1*02 allelic groups have proven to be promising predictors of the development and recurrence of GD in some studies (4). Outside the HLA region, the PTPN22 genetic variant rs2476601 has also been shown to be a potential Graves' disease predictor and, together with the HLA variants, was included in the Graves' Events After Therapy + (GREAT+) score (5). When individuals with a sensitive genetic background are exposed to certain environmental risk factors, such as stress (6-8), smoking (9), iodine overdose (10), the postpartum period in women (11), microbiome-associated immunological changes (12), or possibly their combination, the production of autoantibodies against the TSH receptor is triggered and the disease begins to develop or relapse.
In addition to hyperthyroidism, extrathyroidal manifestations such as Graves' orbitopathy, thyroid dermatopathy, and, rarely, acropachy may be present. The long-term conservative treatment of persistent and recurrent hyperthyroidism entails considerable medical expenses and may have serious side effects in some cases, notably liver disorders and agranulocytosis. In this retrospective study, we followed up on a pilot study published three years ago (13) with the aim of evaluating the practical benefits of HLA and PTPN22 genetic testing for the assessment of disease recurrence risk in the Czech population. Taking genetics into account as part of long-term follow-up would make it easier for physicians and patients to consider the suitability and optimal timing of radical and definitive approaches to treatment.

Participants

We analyzed the HLA-DRB1, HLA-DQA1, and HLA-DQB1 allelic groups as well as the PTPN22 polymorphism rs2476601 (also referred to as 1858C/T) in 206 retrospectively observed Czech patients who had been diagnosed with Graves' disease according to hormonal profile [low/suppressed TSH with simultaneously elevated free T4 and free T3, and presence of thyrotropin receptor antibodies (TRAK)] as well as sonographic examination of the thyroid gland. Orbit ultrasound had been performed in patients with present and active thyroid eye disease as a complementary exam while considering corticosteroids. Patients were recruited during the years 2017-2020, mainly from the outpatients of the Institute of Endocrinology in Prague. These patients are continuously monitored at the Institute, which gives us detailed long-term or even lifelong retrospective information on the status of their remission/relapse. An additional 16 patients were recruited from the outpatient Clinic of Endocrinology and Diabetology, Mcentrum, Chotebor, 3 patients were treated in the outpatient Clinic of Endocrinology in Říčany, and 3 patients were from the 3rd Medical Department, 1st Faculty of Medicine, Charles University and General Faculty Hospital in Prague. All participants gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Institute of Endocrinology (MH CR-RVO 00023761, date of approval 26. 6. 2017). There were 171 women and 35 men in our cohort; the mean age of patients at the time of diagnosis was 42.0 ± 14.54 years, with a median of 41 years and an age range spanning from 16 to 78 years. The mean duration of thyreostatic administration was 36.3 ± 42.47 months. Methimazole or, in cases of intolerance, propylthiouracil was prescribed. Systemic treatment of orbitopathy with peroral prednisone and/or intravenous methylprednisolone pulses was indicated in 22.9% of the patients. Details regarding sex differences at the time of diagnosis, or in chronically ill patients at the time of the last relapse of GD, as well as a retrospective specification of treatment, are given in Table 1.

The polymorphism rs2476601 (c.1858C>T, p.(Arg620Trp)) was analyzed by sequencing exon 14 of the PTPN22 gene (protein tyrosine phosphatase, non-receptor type 22) by next-generation sequencing on a MiSeq sequencer (Illumina), using the Nextera XT kit (Illumina) to prepare the libraries.

Statistical Analysis

Statistical analyses were performed using the NCSS/Pass 2004 software (NCSS, LLC, Kaysville, Utah, USA). Data are presented as means ± SD or as percentages.
The Chi-square test was used to compare the distribution of allelic groups in the particular cohorts. Odds ratios, relative risks, and 95% confidence intervals were calculated in MedCalc Software. Differences in anamnestic data between men and women were tested by the non-parametric Mann-Whitney test. All tests were two-tailed (both positive and negative differences were considered). A p-value < 0.05 threshold was used to suggest statistically significant differences.

Evaluation of the Success of Conservative Treatment

As regards the distinction between successfully and unsuccessfully treated patients, we followed these criteria: patients remaining in remission for at least 2 years since the discontinuation of thyreostatic treatment and with no relapse throughout life were considered successfully treated. On the contrary, those who had relapsed at least once during monitoring at our institution (a many-year to lifelong perspective), as well as patients requiring thyreostatics for more than 5 years, were considered unsuccessfully treated. Of the 206 patients examined, 87 met the criteria for unsuccessful treatment and 26 patients met the criteria for successful treatment, indicating a 23% success rate for conservative treatment of the disease in long-term follow-up. In 93 people, the success of treatment could not be evaluated because they either underwent TTE immediately after the first attack (so it was not possible to determine whether the disease would relapse), are still on antithyroid therapy (for a period of less than 5 years, so they cannot yet be classified as unsuccessfully treated), or their remission has not yet reached two years (so they cannot yet be classified as successfully treated). A comparison of patients with successful and unsuccessful conservative treatment is given in Table 2. A significant difference was observed in fT3 levels: unsuccessfully treated patients showed concentrations twice as high as those of patients in remission. For fT4 levels, such differences between the two groups were not apparent.

Frequencies of the HLA Allelic Groups

The risk haplotype DRB1*03-DQA1*05-DQB1*02 was present in 35% of the entire cohort of 206 GD patients; 5 patients were homozygous for this haplotype and 67 were heterozygous. The haplotype frequency in the whole cohort was 18.7%. This highly exceeds the frequencies in non-GD populations as given on the Allele Frequencies website. For the individual HLA genes and the proportion of allelic groups in the whole cohort of GD patients, see Table 3. For the frequencies of the HLA allelic groups analyzed separately in the unsuccessfully and successfully treated cohorts, as well as in the patients whose success of treatment could not be evaluated, see Tables 4-6, respectively. Evaluated in the overall cohort of 206 participants, the allele DRB1*03 was more frequent in GD patients (21%) compared to the Czech bone marrow donors (12%) we used as a reference (see details in Methods). On the contrary, the DRB1*07 allele was less frequent in our GD group (6%) in comparison with bone marrow donors (14%), indicating its protective effect in terms of disease development. It is noteworthy that, in the unsuccessfully treated subgroup, the DRB1*07 allele frequency reached only 3%, while in the successfully treated subgroup it reached 10% (p=0.05), indicating a protective effect of the *07 variant even in terms of disease relapse.
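The frequency and odds ratio arithmetic described in the Statistical Analysis section reduces to a few lines of code. As a hedged cross-check in Python, the haplotype frequency above and the DQA1*05 odds ratio reported below (Table 7) can be reproduced from the stated counts:

```python
import math

# Haplotype frequency from the genotype counts given above: 5 homozygotes and
# 67 heterozygotes for DRB1*03-DQA1*05-DQB1*02 among 206 patients.
hap_freq = (2 * 5 + 67) / (2 * 206)
print(round(100 * hap_freq, 1))  # 18.7 (%) as reported

# Odds ratio with a Woolf (log) 95% CI for DQA1*05 carriage vs. treatment
# failure; the 2x2 counts follow from the Table 7 summary quoted below:
# unsuccessful: 66 carriers, 21 non-carriers; successful: 14 carriers, 12 non-carriers.
a, b, c, d = 66, 14, 21, 12
odds_ratio = (a * d) / (b * c)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log)
print(round(odds_ratio, 1), round(ci_low, 2), round(ci_high, 2))  # 2.7 (1.08-6.72)
```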
The risk variant DQA1*05 was present in 67.0% of our GD patients, with an allele frequency of 42%; 17.5% of the patients (36 individuals) were homozygous for this allelic group. The proportion of these homozygotes reached 18.1% in unsuccessfully treated patients, whereas it was only 12.0% in successfully treated patients. (Table note: data are given as %; frequencies of the risk variants of the particular genes are in bold. The asterisks represent an established form of HLA gene nomenclature, in accordance with the current HLA nomenclature, http://hla.alleles.org/nomenclature/nomenc_updates.html.)

Success of Conservative Treatment in Relation to Genotype

An evaluation of the three HLA genes and the PTPN22 SNP in terms of their association with treatment success showed that only the HLA-DQA1*05 variant reached statistical significance (p=0.03), as detailed in Table 7: there were 14 successfully treated patients among the 80 DQA1*05 carriers (17.5%), while there were 66 DQA1*05 carriers among the 87 unsuccessfully treated patients (75.9%). The combination of the DQA1*05 variant with the two other risk HLA variants forming the risk haplotype DRB1*03-DQA1*05-DQB1*02, or with the PTPN22 SNP, did not improve the ability to predict relapse. The odds ratio (OR) for relapse (or, more precisely, for unsuccessful conservative treatment, that is, relapse of the disease or prolonged inconclusive treatment lasting more than 5 years, which we observed in several cases), calculated as the ratio of the odds of unsuccessful treatment in the group of risk allele DQA1*05 carriers to the odds in the group of non-carriers, was 2.7 (95% CI: 1.08-6.72); p=0.05. The relative risk was 1.3 (95% CI: 0.98-1.71); p=0.07.

As regards rs2476601 (1858C/T) in the PTPN22 gene, the minor allele was present in 27% of patients, with an allele frequency of 14.1%. The allele frequency in unsuccessfully treated GD patients was 16.7% and in successfully treated patients 12.0% (Chi2 test = 0.45; p=0.50). In patients in whom treatment success could not be evaluated, the allele frequency was 12.4%. The OR calculated for relapse was 1.99 (95% CI: 0.68-5.83); p=0.30, and the relative risk was 1.15 (95% CI: 0.95-1.40); p=0.16. Although this polymorphism, causing the substitution of arginine to tryptophan at codon 620, was found to be associated with GD in many European studies, its association with disease relapse was not apparent in our data.

DISCUSSION

Studies in the past two decades have clearly shown that multiple factors are involved in the development and recurrence of GD. The interaction of susceptibility genes with variable environmental factors, mediated through very complex endogenous communication including dynamic epigenetic modulation and gene expression changes, may lead to impaired immunological tolerance and the outbreak of the disease (18-20). In addition to HLA variants, many non-HLA genes have been confirmed to be involved in the etiology of GD; some are unique to the recurrence of GD (21,22), while others are common to both the disease onset and recurrence, or pose a risk for the development of an even wider range of autoimmune diseases (23). According to our data, the risk HLA haplotype DRB1*03-DQA1*05-DQB1*02 was more frequent in GD patients than in the general European population, as given on the Allele Frequencies website, with the two exceptions of Sardinia and Slovenia, which are represented by small groups (n ≤ 140).
For the relatively small Czech control population sample on this website (n=180), a frequency of 9.2% is given for the haplotype DRB1*03-DQA1*05:01-DQB1*02:01. The haplotype frequency observed in our cohort of patients (18.7%) corresponds with a meta-analysis of GD patients by Gough (4). The risk allelic group DRB1*03 was more frequent in the GD group compared to the Czech bone marrow donors, which is in agreement with previous findings in other populations (24). On the other hand, the DRB1*07 allele was less frequent in our GD group in comparison with bone marrow donors, and its frequency was clearly the lowest in unsuccessfully treated patients. The DRB1*07 allele has been reported to be protective for GD in UK Caucasians (25); however, due to its low incidence, a larger cohort of patients would be needed to demonstrate its significant protective role in the Czech population. The allele frequency of the risk variant DQA1*05 was markedly higher in our study (42%) than in other European populations, which range from 16.6% in France to 30.3% in Belgium according to the Allele Frequencies website. The DQA1*05 variant showed the best, though borderline, predictive ability in terms of disease relapse, with an OR of 2.7. Although it cannot be considered a reliable recurrence predictor, its evaluation may be useful in patients with a complex clinical picture and an unclear treatment strategy. The tyrosine phosphatase-22 protein encoded by the PTPN22 gene inhibits T-cell activation. The substitution of arginine to tryptophan at codon 620 (rs2476601) disrupts an interaction motif in the protein (26). This SNP has been associated with GD in many studies. These associations have shown high reproducibility in Caucasian populations, with an OR of 1.9 in British Caucasians (27), 1.7 in a Polish population (28), and even 4.2 in a Russian population (29); however, in Asian and African populations the minor allele was sporadic or absent (30). The concept of our study does not allow the calculation of an OR in this sense, because we worked only with a group of patients and have no comparison with healthy Czech controls for this SNP. However, the OR calculated for relapse did not show statistical significance. A comparison with some reference European populations suggests that there is a higher minor allele frequency in our GD patients (14.1%). For example, the minor allele frequency in controls was reported as 10.4% in the United Kingdom (31), 10.0% in Germany (32), and 9% in Italy (33). However, the northernmost and easternmost countries of the continent show minor allele frequencies in the general population similar to our GD cohort (15% in Finland and 14.1% in Ukraine) (34). Either way, our study did not demonstrate the predictive potential of the PTPN22 variant in terms of disease recurrence. It is worth noting the observation of significantly higher fT3 values, but not fT4 values, in patients whose conservative treatment has not been successful in the long term. This is consistent with the conclusions of original (35) and review articles (36,37) on this topic, which describe a higher fT3 to fT4 ratio in patients who failed to respond to antithyroid drug therapy. This association may be mediated through variability in the deiodinase type 2 gene, which affects the activity of type 2 deiodinase, the enzyme regulating the conversion of T4 into T3 (38). The absence of a control group of healthy Czech volunteers, ideally without a history of autoimmune disorders, is one limitation of the interpretation of the data presented here.
Furthermore, the size of a cohort suitable for assessing the predictive ability in terms of disease relapse should be larger, considering that only 113 people could be included in the statistical evaluation. On the other hand, an advantage of our study is the availability of medical history data and the retrospective course of the disease for almost all patients in the sample, as they are persons receiving long-term and often lifelong care in our institution. This approach indicates a 23% success rate for conservative treatment. The available medical history will also allow for a re-evaluation after several years, when it will be interesting to verify how many participants now belonging to the group of successfully treated patients in remission will move to the group of those who relapsed, and how these potential changes will affect the statistical results. In conclusion, we analyzed the HLA DRB1, DQA1, and DQB1 allelic groups and rs2476601 in the PTPN22 gene in 206 patients diagnosed with GD. According to our data, the proportions of the risk variant HLA-DQA1*05 as well as the risk HLA haplotype DRB1*03-DQA1*05-DQB1*02 were significantly higher in GD patients than in the general European population as given on the Allele Frequencies website. Although the tested genes, individually or in combination, cannot be considered reliable relapse predictors in the Czech population, the HLA-DQA1*05 allelic group showed a statistically significant association with relapse or long-term unsuccessful conservative treatment. Therefore, this variant may be a useful prognostic marker in GD patients with a difficult-to-interpret clinical picture, when treatment strategies remain unclear.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI SRA BioProject, accession no: PRJNA776790.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the Institute of Endocrinology. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

Conceptualization and design of the work, formal analysis, project administration, original draft preparation and writing, DV. Methodology of genetic analyses, JVc and EV. Review and editing, statistics, MV. Data curation, critical revision of the work, KZ, JVr, MD, PP, and ZN. Supervision, BB. All authors contributed to the article and approved the submitted version.

FUNDING

The study was supported by the Ministry of Health of the Czech Republic, grant RVO 00023761.
CONTINUOUS CREATION IN THE PROBABILISTIC WORLD OF THE THEOLOGY OF CHANCE

Christian theism is a creation ex nihilo view and is based on the claim that God is the only Governor and Lord of all created and existing beings. If God the Creator is the only Lord of all creatures, it follows that He is at every moment in time the Lord of the existence of all that exists (CC) [1]. In theistic metaphysics, continuous governance of all existing beings is called "conservation" or "continuous creation". If there were no conservation, then all created beings would cease to exist, because they could not continue to exist by themselves. This thesis, which can be called the conservation principle (formulated here in a very general way), is based, I would like to claim, on another and even more fundamental principle, which I call the principle of divine control. The principle of divine control says that all that exists and happens is willed by God or permitted by Him [2].

The principle of divine control is very important in discussions concerning the relation between the Creator and His creatures. These seem to be based on two assumptions. The first assumption is that God can achieve all His purposes in the created world (divine providence) if and only if He controls every existing being. Therefore, divine control must be perfect and unrestricted (divine volitions must be determined in every respect). Maximal possible control consists in the fact that God creates every being ex nihilo and subsequently conserves it. The second assumption is Anselmian: God is the greatest possible being one can conceive. A perfect being has everything under its control, and a perfect being controls everything in the most perfect way possible. Furthermore, the best way to control everything is to create every being out of nothing and to create it as absolutely dependent in existence and nature upon God's will. Omnipotence thus means conserving continuously all created beings. Continuous creation is the best way to express divine perfection: perfect power and perfect will. Therefore, all contingent beings exist this way or that as long as divine power is acting and divine will wills itself to act upon a given being.
However, the justification of the principle of divine control by means of the ideas of divine providence and divine perfection is not convincing enough. The problem is that we do not know, at least when we are speculating metaphysicians, what divine creative aims are like. Nor do we know whether it is really necessary for God to control absolutely everything to achieve all the purposes He wants to achieve in the created world. Nor do we know whether divine omnipotent control is compatible with the aims He had in mind in creating our universe. Therefore, we may consider yet another possible metaphysical principle, the principle of creaturely independence: created beings (contingent things, whatever their specific metaphysical nature may be, be they simple or composed, material or immaterial) may be created as independent beings. A being is independent in relation to God if, after having been created by God ex nihilo, it can continue to exist by itself or by cooperation with other created beings; furthermore, it is at least possible that all of the properties of the independent being are causally independent of any direct divine action. Thus, it is at least possible that there exist beings created ex nihilo by God which, after having been created, continue to exist and maintain all the properties they had at the moment of creation without continuous causal divine action. The principle of independence does not entail that God cannot control the created world in the most perfect possible way, because we do not know which way of control is the best for the most perfect being.

Concluding this part of our considerations, one should say that the definition (principle) of conservation (CON), formulated as follows: God conserves x at t = def. God's willing that x exists at t brings about x's existence at t, and there is some t' prior to t such that x exists at t' (Quinn 1993, p. 598) [3], is not obviously true (because there might be no such divine action), and it is at least metaphysically possible that another principle, the principle of independence, is true.

However, it is not only possible that the principle of creaturely independence is true: it is probable, in a sense. It is reasonable to believe that the principle is true if we consider the metaphysical consequences resulting from the idea of continuous conservation. It seems that the conservation principle leads, if not to occasionalism, as Malebranche argued, then at least to a strong or weak concurrentism [4].

Weak concurrentism is the view that God continuously conserves every created contingent being, that God brings about their existence at every moment of divine action, and that this type of divine causation is the only act God performs in the world, perhaps apart from special divine actions such as miracles. Therefore, there is room for secondary causation in the world. Secondary causes can bring about changes in other contingent beings even though they cannot be directly responsible for their existence. According to this view, God brings about the existence of sufficient causal power in secondary causes. God, however, is not directly responsible for the existence of causal relations between secondary causes; they are natural causes which produce their own effects. Divine continuous conservation is compatible with the existence of secondary causes in the world.
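For readers who prefer notation: letting E(x, t) abbreviate "x exists at t" and W_G(p) "God wills that p", one possible symbolic rendering of Quinn's definition (CON) above is given below. The rendering is my own shorthand, not Quinn's formulation, and "brings about" is left as an unanalysed causal connective rather than material implication.

\[
\mathrm{CON}(x,t) \;\Longleftrightarrow\;
\big[\, W_G\!\big(E(x,t)\big) \text{ brings about } E(x,t) \,\big]
\;\wedge\; \exists t'\, \big( t' < t \,\wedge\, E(x,t') \big)
\]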
I doubt, however, that this position is tenable. In order to demonstrate the weakness of weak concurrentism, we must have a theory of contingent beings and a theory of causation. Yet, if we could demonstrate that the very theory of contingent beings implies difficulties, or perhaps is even inconsistent with other assumptions, then it would be unnecessary to consider a theory of causation.

There is only one promising metaphysics of contingent beings which could be useful in the debate over the compatibility of divine continuous conservation and secondary causation in the world: the Aristotelian theory of substance, according to which a substance is a whole composed of material and formal parts (constituents). Formal parts of a substance are responsible for the internal structure of the whole as well as for the functions of particular material parts of the substance. The Aristotelian theory of substance also says that a given substance has essential constituents (parts/properties), determined by the kind to which it belongs, and accidental or non-essential constituents, which are not strictly determined by any kind of substance (Loux 2002, pp. 96-137). Other theories of contingent beings, e.g., the bundle theory and the theory of bare substratum, cannot help us in solving the problem of divine continuous creation and secondary causation.

Now, let us suppose that x stands for a substance in the Aristotelian sense. Thus, if God brings about that x exists either at the moment t (creation) or at any subsequent moment t' (conservation), then He brings about the existence of all its parts (constituents), essential and accidental parts (qualitative, relational and quantitative properties) included. In order to be a substance, a being has to possess all its properties; it must be determined in every respect. Hence it must be the case that x is F or x is not F. Let y be an effect produced by x. Then x has the property (let it be G) of producing the effect y. If God brings about G, then God brings about both the existence of y and of x, since He brings about the existence of all the properties of x. Generalized to all substances: if God directly brings about the existence of all substances and all their material and formal parts (constituents), then He is the cause of all effects produced by every existent substance. Therefore, He is directly responsible for x's being the cause of y; however, if God is the cause of y, then x cannot be the cause of y, or at least not the only direct cause of y (as causal overdeterminism and strong concurrentism claim) [5].

[5] Causal overdeterminism is the view that "even though God's causal contributions are not entirely exclusive, they are still characterized by totality - that is, God's contributions alone are sufficient for every effect" (Miller 2007, p. 140).

It seems that there may be two possible ways to avoid occasionalism and strong concurrentism without rejecting the principle of conservation (CON).

I.1. The Meinongian approach

The first way is Meinongian [6]: God solely brings about the existence of x and not any of its properties. Properties are effects of secondary causes acting upon substances. This solution, however, is internally inconsistent: to be a substance means to belong to a certain kind, which entails having some essential properties or constituents. Thus, if God brings about the existence of x, which belongs to a certain kind K, then He brings about the existence of all its essential properties as determined by K. What is more, God brings about the existence of the kind K itself [7].

[6] Meinong's idea is that the properties of an object are independent of its existence or non-existence.

[7] I suggest here that kinds and all abstract entities were created by God, but they could nonetheless be regarded as divine thoughts and not as external objects. For an abstract entity to be created by God does not entail having any exemplifications. Thus, my suggestion here is not strictly the Aristotelian doctrine of universals. It is possible, however, to maintain the Platonist theory of universals and the Aristotelian theory of substance. The Aristotelian theory of substance is neither a version of a bundle theory nor a bare substratum theory.

Perhaps we can better understand that the Meinongian way is incoherent if we restrict our consideration to creation: that is, to the first moment of the existence of any substance. Following the Aristotelian theory of substance, every substance (including those created by God) has to belong to a certain kind. However, it is impossible that any other contingent beings (substances) could in any way determine the essential properties of any other substances, because they also have to be created ex nihilo by God as substances of a certain kind. Thus, if God creates every substance (meaning that He brings about the existence of this and not that substance at the moment t), then He also brings about the existence of all of its essential properties at t.

I.2. Essentialist approach

The second way is more promising. It consists in the claim that divine conservation has a restricted (essential) range and concerns only the existence of x and all its essential properties, and not any of its accidental properties or constituents. All accidental parts of the substance x are produced by external agents (secondary causes). In this way, room is made for non-divine agency in the world of substances created and continuously conserved by God. Thus, God brings about the existence of x but not the existence of all its parts. At least some of them can be produced by chance in a sense (for example, via the cooperative actions of many external agents).
Let us suppose that such a scenario is true. We must note at the very beginning that God, when He brings about the existence of x and its essential parts, determines the range and kind of its possible accidental properties as well as its substantial changes. For example, a table cannot sing and a human cannot fly (as a bird can). Thus, if x belongs to a kind K (x is K), then no other contingent being (substance) can bring it about that x is F, if F is incompatible with K. But if it is true for any substance x that x is F or x is not F, meaning that x is determinate in every respect, and F is not essential for x, then it must be the case that if God brought about the existence of x, then He brought about that x is F or that x is non-F. If x has been created by God, then x must be determinate in every respect, since x is a substance. Therefore x is F or x is non-F. It is also impossible that any non-essential properties of x could be (directly and totally) caused by other created substances, because every other substance distinct from x has to have all its own properties, including all its accidental properties. It must be so because every substance, in order to be a substance, must have all its properties, both essential and accidental. Thus it is not possible that any substance created by God (ex nihilo) could bring about the existence of any accidental properties of any other substance, because all its properties (parts or constituents) are determined directly (intimately) and totally by God.

It seems that this trouble, if it is any trouble, could easily be avoided by the hypothesis that a substance x created by God at the moment t, or conserved by God at any subsequent moment t', can itself determine ("decide") whether to be F or non-F at t or t'. This process of partial self-determination could concern all substances created by God ex nihilo. It also seems possible that accidental properties of x which are produced by it at the first moment of its existence can be replaced by other properties compatible with a given kind K and produced by agents distinct from x and from God (say, by y). But if x brings about that x is F at t, then x creates F ex nihilo. The reason is that if God creates x and God does not bring about F (or that x is F), then either x is doing it or another causal agent distinct from x and from God is doing it. Whatever that being might be, it would have to create F ex nihilo. This is impossible, because only God can do that. If x brings about that x is F at t', then either the principle of divine control (at least in its unrestricted "all-form") has to be rejected, or x's self-determination is an illusion.

If this line of reasoning is correct, then all substances must be totally and directly determined (created and caused) by God. They must be determined by God "from the bottom up". Therefore, it is metaphysically impossible that God created x and conserved it at the moment t' without also conserving all its essential and accidental properties at the moment t'.

There is, of course, another important aspect of the problem of divine creation and conservation. If an omnipotent God wills something to exist or happen, then it must exist or happen, and if He does not will something to exist or happen, it cannot exist or happen. So, if x is F, then x cannot be non-F, because God wills that x is F.
Perhaps there are some indeterminate divine volitions, and therefore God wills only that (x is F or x is non-F); but by willing that (x is F or x is non-F), He wills neither that x is F nor that x is non-F (van Inwagen 1988). Thus, if there are such indeterminate divine volitions, not all properties are necessarily determined by divine will. This is an important suggestion, but it does not solve the problem discussed above: if God wills x to exist, then x must be either F or non-F, and only God can bring it about that x is F or that x is non-F. However, the idea of indeterminate or indifferent divine volitions can still be useful in a probabilistic approach to the problem of continuous creation.

Summing up our considerations in the preceding part of the paper, we should say that if God creates ex nihilo and continuously conserves all contingent beings, then He determines not only their existence (brings about their existence), but also the existence of all their constituents (parts), both essential and accidental.

Mere conservationism, or as some people say, "weak concurrentism" (Miller 2007, p. 158), is untenable. I do not think that strong concurrentism can be an alternative to weak concurrentism. Strong concurrentism is the view that God not only continuously conserves all contingent things created ex nihilo, but also makes a direct (intimate) though not exclusive causal contribution to every causal action of every created contingent thing (substance). This view, in spite of some interesting advantages (primarily the explanation of contra naturam miracles), ultimately leads either to occasionalism or to deism. The argument for the latter has been formulated by Timothy Miller in his dissertation from 2007 (Miller 2007, pp. 143-158).

I.3. Probabilistic metaphysics

There is still another option left open for theism. God creates ex nihilo a set of substances {x, y, z, ...}, every element of which is completely determined from the bottom up, including all essential and accidental properties {P, Q, F, G, ...}, and has a common and compound property: "being unconserved by God and existent" (the "SS property") [8]. There is no reason to think that it is impossible for an omnipotent God to create substances which have such a property. Substances created by God can act upon one another and bring about effects of different kinds: they can produce substantial and accidental changes, and they can even "produce" new kinds of substances and properties as a result of perhaps long-standing and numerous transformations and changes of the initially created set [9]. The substances and properties emerging in this way can be more complex and organized than the substances and properties at the very beginning of the universe. It is also possible that God did not determine in His creative volition what kinds of substances, and which of them, will exist (indeterminate divine volitions) [10]. It is also possible that at least some of the changes and transformations in the created universe are purposeless, meaning that they are not intended by any mind, the divine mind included, and do not play an important role in the world. It is also possible that some of them are unpredictable even for the perfect mind [11]. God could issue a command: let there be something unpredictable for my mind in the universe I decide to create ex nihilo. Thus, it is at least possible that there is no causal explanation for some events in the world. Such events or beings are simply chance events or chance beings. The crucial point is that chance events in the latter sense cannot exist in a world conserved by God. Divine conservation and chance exclude each other, but chance is not outside divine control and providence, because chance has a mathematical measure called probability. Chance events, which are more or less probable events, although not conserved by God, are part of His creative volition and a tool of His providence. Such a view of creation is called the "theology of chance" or "theology of risk" (Bartholomew 2008). I prefer the label "probabilistic theism" (Łukasiewicz 2014).

[8] J. Kvanvig and H. McCann called such a property "a self-sustaining feature" (Kvanvig, McCann 1988).

[9] By "production" here, I mean that contingent beings can bring it about that a certain kind K which had not been exemplified before a given moment t has some exemplifications at a subsequent moment t'.

[10] This claim amounts to the rejection of the principle of divine control.

[11] The fact that some events are unpredictable for God does not mean that it is not logically or metaphysically possible for God to know them, but rather that God does not need to know all future events in advance to realize all His aims. Perhaps even all events are known to God, not by prediction but by a kind of divine (timeless) contemplation or eternal perception (Heller 2011).

II. Inductive approach

Speculative metaphysics is one way of considering divine creation and conservation, but there is another way, less speculative and more empirical, in the metaphysics of God. Doing theology in this empirical way, we can think about the possibilities God had before the creation of our world. Such an empirical, say, inductive approach to the metaphysics of God is typical of the theology of chance. The result of this empirical method is a metaphysics of God based on scientific knowledge of the mechanics of our world. An important assumption of this probabilistic metaphysics is that knowledge about the work (the created world) can help us better understand the nature of the Creator.

II.1. Basic empirical data for the probabilistic approach

The most fundamental facts or scientific theories used by probabilistic theism are the following: cosmic and biological evolution, quantum mechanics, the biographies of individual human beings, and the known history of humankind. In regard to cosmic evolution, theologians of chance point out unintended coincidences of basic cosmological constants which have enabled the universe to develop in such a way that galaxies, stars and habitable planets could emerge (Bartholomew 2008, pp. 176-180). In regard to biological evolution, theologians of chance stress the purposelessness of many events, such as chance mutations, blind routes of evolution, the large number of species, and natural catastrophes, such as the extinction of 96% of living species hundreds of millions of years ago (Haught 2007). In regard to quantum mechanics, theologians of chance point to the indeterminacy of some quantum objects (Polkinghorne 2007, p. 257).
The metaphysicians of chance also point to the probabilistic nature of scientific laws. Such probabilistic laws assert certain dependencies and enable us to predict (with a given probability) the future of aggregates or collectives, but not the future of their individual parts. We also meet this kind of unpredictability in the case of human behaviour, individual as well as social. All these data give us evidence that our universe has not been created according to a very detailed and precise plan encompassing all substances and all of their properties. Protons, electrons, and genes, but also species, kinds, and particular human beings, are not part of a divine plan and creative volition (Bartholomew 1984, p. 145). How could it be that God brings about the existence of beings which are purposeless, unpredictable and, as such, not determined by His creative volition? If our non-deterministic universe has a Creator, He does not control every substance and every property; de facto, He is not the Creator of all contingent entities in our world. Thus, divine action consists in the creation of the universe in its initial stage, and the world is such that God need neither act continuously upon it nor intervene from time to time in order to achieve His aims. God created the world in such a way that His providence does not have to control absolutely every contingent substance at every moment of its existence in order to realize all that the divine will wills to be realized.

Supposing that the above story told by a probabilistic theist is true, or at least neither contradictory nor fantastic, many questions arise. Let us consider two of them. The first problem is whether probabilistic theism is not a version of radical (pure) deism. The second problem concerns the risk for which God is responsible if divine creation is without divine conservation.

II.2. The problem of deism

An answer to the first question could be that probabilistic theism is not a form of radical deism because, according to its proponents, God is continuously acting on the level of the human mind, and if there are any other minds in the created world, then He is acting on their minds as well. This divine continuous action manifests divine care for every sentient being in the universe. This divine action, however, is very delicate and subtle, untainted by physical or metaphysical "compulsion". It is not even the kind of persuasion to which process theologians refer when they speak of God's involvement in the world [12]. It is rather a discreet fellowship and participation in all of our joys and sorrows. Perhaps it is sometimes an inspiring illumination opening us to some unknown moral or intellectual possibilities and horizons. Acting in such a way, God is involved in the existence and fate of every being which needs divine involvement and is able to respond to it. Acting in such a way, God can directly influence individuals, groups, and all humankind, or even all existing species (by influencing the most developed species, whose behaviour determines the rest of creation, or at least a part of it). The Creator can do all these things without continuous control and determination of everything and everyone. Probabilistic theism is not a radical deism and does not maintain, as the Epicurean school in ancient Greece did, that God, if He exists at all, is not interested in the world and human life.
II.3. The problem of excessive risk

If divine action in the world is so soft and minimized, then it is at least possible, if not inevitable, that God is a great risk-taker and may fail to realize His plan. The latter could undermine His omnipotence and providential care for the created universe and every being in it. If cosmic and biological evolution, as well as the whole history of humankind and of every human being, always depend on countless and uncontrolled chance events and circumstances, then the possibility of God's failure is real, and even probable. Can the most perfect and omnipotent being take such a great risk? I think we can answer this question in the affirmative: yes, He can, because He is the most perfect being and His omnipotence is absolutely unlimited. A very important premise underlying this answer is that the risk is not so great; indeed, it is very small. This is because the nature and mechanism of the created world ensure with a very high probability that all purposes intended by God will be attained without His causal action in the processes occurring in the world. The emergence of life in the universe is almost inevitable, because the universe is large and old enough, and biochemical mechanisms are very effective. The emergence of sentient beings was also almost inevitable because of the long-standing and countless mutations and adaptations of living organisms to their environment. All this was very probable and hence, in a sense, necessary (inevitable). The great advantage of the non-deterministic world is its own creativity, which is possible because of the chance events happening in a way restricted only by the laws of nature. Thus, if one evolutionary path fails, another one is opened. Perhaps a mutation suitable for the growth and development of a given species happened by chance and enabled it to survive in hard conditions and develop further. Elasticity and redundancy are very typical of the world of chance, but because of these properties this world has a large number of possibilities and abilities to develop and to regenerate after various natural catastrophes (Łukasiewicz 2006).

III.3. The problem of evil

Even if all the above scenarios are convincing, or at least not incoherent, there is still another big question: whether God takes an excessive risk in a much more crucial matter, that is, in the matter of the salvation and condemnation of human beings. There remains the problem of evil and suffering in the non-deterministic and probabilistic world. All sensitive beings are at risk of experiencing undeserved physical pain and spiritual suffering. Furthermore, there seems to be enormous, undeserved and pointless pain and suffering in the universe. How could it be that an omnipotent and morally perfect being allows sensitive creatures to exist in the world of chance? Only God, who has absolutely everything under His divine control, would be morally justified in creating and conserving a world containing seemingly pointless and horrifying evils. Yet, in answer to the last question, a proponent of the theology of chance can ask another question in return: how could it be that an omnipotent, omniscient and morally perfect being created all contingent things and states of affairs, allowing all suffering and evil to happen? It is God who not only created absolutely everything, but continuously conserves absolutely everything, and thus causally contributes to every suffering and evil of our world. It seems that the way in which a probabilistic theologian can cope with the problem of evil is more promising or, to put it better, less repugnant than any way accessible to a defender of continuous creation.

In conclusion, it is metaphysically and physically possible that probabilistic theism (a weak version of Christian deism) is right as to the true nature of divine action in our world. What is more, probabilistic theology is in a better position because its metaphysical propositions find some empirical support in the evidence provided by contemporary science and, what is even more important, in our moral sensitivity.

Summary

The aim of the paper is to present and analyse the doctrine of continuous creation typical of theism. Continuous creation is conceived of as divine causal action consisting in God's bringing about the existence of any being at every moment of its existence. Such a definition of divine action, as N. Malebranche argued, leads to occasionalism, that is, to the view that God is the only cause in the world. In the first part of the paper, an attempt is made to demonstrate that Malebranche's conclusion is valid and that two alternative views, weak and strong concurrentism, are not tenable. In the second part of the article, the idea of continuous creation is discussed as it can be formulated from the point of view of probabilistic theism.
Assessing Corporate Financial Health. Evidence from the Agricultural Sector in the Republic of Serbia

A large number of studies have been conducted examining certain aspects of the financial situation of agricultural enterprises in the Republic of Serbia. However, the overall situation of these companies has rarely been the subject of analysis in previous studies. This paper therefore attempts to comprehensively assess the financial condition of large and medium-sized agricultural enterprises in the Republic of Serbia. Three models (Emerging Market Scoring, the DF Indicator and the G-Index) were used to analyse the key areas of financial security (liquidity and debt) and the business success of 38 agricultural enterprises in the period from 2017 to 2021. By classifying the total values recorded into zones of financial health, an assessment of their overall position was made. The companies studied showed a satisfactory financial situation at the group level, owing to favourable performance in individual cases, while the majority of enterprises were at risk in all business dimensions studied, with the exception of debt. During the analysis period, most of the companies under review recorded low liquidity, which was often accompanied by low profitability. The results of this work provide insight into the key specifics of the activity of agricultural companies from the point of view of financial analysis and, in this sense, represent an important additional tool in the process of their management; they also contribute to creating an adequate basis for comparison with other business activities in the Republic of Serbia.

Introduction

A review of the relevant literature revealed that comprehensive assessments of the financial situation of agricultural enterprises in the Republic of Serbia have rarely been performed; the subject of previous studies has more often been individual aspects of the financial situation of enterprises, primarily liquidity, followed by debt and business performance. The following review first presents studies in which the overall financial position of companies from the agricultural sector was analysed. Based on a sample of 50 financial reports of Serbian agricultural companies in 2008 and 2009, it was found that about 70% of companies had a poor financial position in both years (Jakšić et al., 2011). In that study, various financial indicators were used to evaluate the debt level, solvency, the ability to maintain the real value of equity, and the reproduction value of the company. The impact of the 2008 financial crisis on the activity of agricultural enterprises was studied by Andrić et al.
(2011). Ultimately, the creditworthiness of the subjects in that sample of 30 observations was rated as poor, while a comparison of performance across the different agricultural activities in the sample led to the conclusion that some enterprises recorded significantly less favourable performance, which was attributed to worse production trends and insufficient support from the state. It was also pointed out that the creditworthiness of agricultural enterprises deteriorated in 2010 due to the lower volume and quality of sowing. Applying a set of commonly used financial indicators to a sample of active agricultural companies and enterprises undergoing restructuring (30 and 18 financial statements, respectively), it was found that both groups were characterized by a poor financial situation in the period from 2010 to 2012, i.e., impaired financial stability, threatened liquidity, acceptable debt and a favourable state of solvency (Tomašević et al., 2014). The results of a study on the financial situation and profitability of 25 medium-sized agricultural enterprises from the Zapadnobački district, which provided an overview of the main characteristics of profitable and unprofitable enterprises, are also considered an important contribution to the financial analysis of agricultural enterprises in the Republic of Serbia (Vučković, 2016). Among other things, profitable companies were found to have a similar speed of inventory turnover and similar values of the current and reduced (quick) liquidity ratios, which at the same time were significantly higher than the established reference values. However, the percentage of these companies varied from only 17% to 26% of the units in the sample, with a decreasing trend during the analysis period. According to the results of this research, unprofitable companies were also frequently insolvent.

As mentioned above, the study of specific business dimensions, especially liquidity, has more frequently been the subject of research than the general assessment of the financial condition of agricultural enterprises. The state of liquidity of enterprises from the Autonomous Province of Vojvodina in the period from 2011 to 2015 was assessed as poor in 70% of enterprises, despite a satisfactory industry average (Vuković et al., 2018). The authors of this paper also conducted additional analyses of the differences in the recorded results by year of analysis (using the Friedman test) and concluded that they were not statistically significant, i.e.
that the liquidity levels were relatively stable in the years under consideration (Vuković et al., 2018). According to the results of this research, between 2006 and 2015, based on the recorded values of the current liquidity ratio, the percentage of companies with liquidity at risk (a value less than two) ranged from 67.44% to 82.93% (Vuković et al., 2018). The analysis of the liquidity of 18 of the 21 agricultural companies listed on the Belgrade Stock Exchange in the period from 2016 to 2019 leads to similar conclusions. The satisfactory level of liquidity of the studied group was accompanied by a large number of companies with unfavourable results, especially when the sources of total working capital were reduced to more liquid assets, as well as in the case of ratios based on net cash flow (Milašinović et al., 2021). A study of the structure of financing sources of three medium-sized agricultural companies in 2013-2015 highlighted the importance of accurately defining and ensuring a sufficient volume of permanent working capital, as these companies were often unable to cover fixed assets, long-term placements, and parts of inventories (Vučković et al., 2017). An analysis of the profitability and indebtedness of agricultural enterprises from AP Vojvodina, covering the period from 2006 to 2015, found that the observed changes in the level of debt were not statistically significant and that debt was generally relatively low (Mirović et al., 2019). It was also found that the companies under consideration were characterized by very low profitability, which increased significantly after 2013.

Building on previous research, this paper also examines the state of the elements of the financial situation considered so far. The object of research is therefore the financial position of agricultural enterprises in the Republic of Serbia, with the aim of its complete evaluation, i.e. taking into account the state of financial security (liquidity and debt) and business performance. In order to achieve the defined research goal, three models for assessing the financial health of companies (Emerging Market Scoring, the DF Indicator and the G-Index) are applied, which allow the overall position of a given company to be assessed on the basis of predefined indicators describing specific company dimensions. By analysing the partial measurements that make up each model, an overview of the situation by individual company dimension is provided. The study refers to the activity of agricultural enterprises in the Republic of Serbia in the period from 2017 to 2021.

Company Financial Health Assessment

Financial health assessment and bankruptcy prediction models, which quantify the overall financial health of a given company on the basis of a certain number of weighted indicators, are often used for this purpose. The best-known models of this type were created by the procedure of discriminant analysis and are the most widely used in business and research practice: the models of Altman (Edward Altman), developed from an analysis of the business activity of industrial companies in the United States (Rodić et al., 2017); of Kralicek (Peter Kralicek), based on financial data of companies from Germany, Switzerland and Austria (Vlaović-Begošević, 2020); and of Beaver (William Beaver), based on companies of different sizes and activities (Beaver, 1966).
In the late 1960s, Edward Altman developed the Z-score model to predict the bankruptcy of industrial firms using multiple discriminant analysis (Rodić et al., 2017). The selected firms differed in size (measured by the value of total assets), with the smallest and largest firms excluded (Altman, 1968). The exclusive representation of industrial firms in the sample studied meant that the original model was not commonly used to analyse the situation of non-manufacturing firms. In addition, the Z-score was intended only for publicly traded companies. Models developed later aimed to address these shortcomings and to extend coverage to companies outside the United States. In 1983, Altman developed the new Z'-score model with different weightings and a modified fourth indicator: the market value of capital in indicator X4 was replaced by the book value of capital, adapting the model to companies whose shares are not traded on the stock market (Altman, 1983).

In 2005, Altman defined a special model that, in contrast to the original, allows bankruptcy scoring for non-industrial firms in developing countries: EMS, Emerging Market Scoring (Altman, 2005). This model is considered the most appropriate alternative for assessing the financial condition of companies in the Republic of Serbia. Numerous studies have been conducted with the aim of updating the original model and developing new models adapted to the specifics of companies in other countries (Bod'a, 2019). Following the example of Altman, who developed a model for predicting corporate insolvency based on data from companies on the American market, Kralicek developed a model for European companies in the form of a discriminant function: the DF Indicator (Rodić et al., 2017). Specialized models based on samples of companies in a particular industry have also been developed. Using a sample of 60 Slovak agricultural companies, Gurčík Lubomír developed in 2002 a specialized model for assessing the financial health of agricultural companies, the G-Index (Gurčik, 2002). It can also be said that there are no better-known or more frequently used models for this purpose in the agricultural sector (Bod'a, 2019).

However, the results of a large number of authors point to the limited applicability of these models under significantly different conditions, whether due to differences between business activities or to changes over time (Kovacova and Tomas, 2017). Consequently, models based on financial indicators are not exempt from the general limitations related to the mutual (in)comparability of ratio analysis results. Particular emphasis is placed on the differences between companies' accounting data resulting from the use of different depreciation methods, the valuation of inventories and other assets, and the effects of price changes and inflation (Engler, 1978). The way this information is presented and later interpreted critically influences the managerial decisions made by current and potential investors (Ionescu and Alin Haralambie, 2023). Part of the impact of these factors can be reduced by a longer period of analysis, as well as by comparing the individual performances recorded with the trends at the industry level. In this context, their correct evaluation is the starting point to which the results of this analysis will contribute.
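All of the models discussed here share the general shape of a linear discriminant function. In generic notation (introduced here only for exposition, not taken from the cited sources):

\[
Z \;=\; a_0 \;+\; \sum_{i=1}^{n} a_i X_i ,
\]

where the X_i are financial ratios, the weights a_i are estimated by discriminant analysis on samples of healthy and failed firms, and the resulting score Z is compared against empirically derived cut-off values to assign a company to a financial health zone.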
Methodology

The study sample consists of 38 companies from the agricultural sector in the Republic of Serbia. The analysis includes all large enterprises and about 35% of the medium-sized enterprises in the sector. The financial situation is assessed for the period from 2017 to 2021. In terms of enterprise size, the Serbian Accounting Law distinguishes between micro, small, medium and large companies (legal entities and entrepreneurs), with the classification made by taking into account the average number of employees, the value of business income and total assets in a given year (Official Gazette of the Republic of Serbia, 2021). The average number of employees that a company must have in order to be classified as a small, medium or large company is 10, 50 and 250, respectively. The annual business income for each of these groups must be at least 700,000, 8,000,000 and 40,000,000 euros, respectively, while the book value of total assets must exceed 350,000, 4,000,000 and 20,000,000 euros, respectively. A company that meets the thresholds for at least two of the three criteria is classified in the corresponding size group. For example, a company whose business income and balance sheet total exceed eight and four million euros, respectively, on the balance sheet date is classified as a medium-sized company. The average number of employees is estimated on the basis of the mean number of employees in the company during each month (Official Gazette of the Republic of Serbia, 2021).

Depending on their size, companies are subject to different accounting obligations relating to the application of international accounting standards, the scope of regular financial reporting, the obligation to audit financial statements, and non-financial reporting. Large and medium-sized companies bear the greatest responsibility, and their financial reports are therefore assumed to be more reliable, which is why micro and small companies were not included in the sample for this study. The Altman model for emerging markets, the DF Indicator, and the G-Index are presented in more detail below and then applied.

The Emerging Market Scoring consists of four indicators, each representing the share of balance sheet and income statement items in total assets. The first indicator (X1) is the share of working capital (net current assets) financed from long-term sources in total assets, while the other indicators include in the numerator retained earnings (X2), profit before tax (X3), and sales (X4).

The DF Indicator is composed of six ratios. The X1 indicator represents the inverse of the debt factor, i.e. the ratio of free cash flow to total liabilities. The second indicator (X2) is the ratio of total assets to total liabilities and also relates to the company's leverage, i.e.
a higher value indicates lower debt, as the company's total assets form a larger base to cover its liabilities. The X3 ratio is an indicator of the company's return on assets. The ratio of profit before tax to total revenues (X4) is an indicator of the company's success. The ratio of inventories to total revenues (X5) is a measure of liquidity, as inventories are an item of working capital that may be difficult to convert into cash if the production cycle stalls (Krasulja and Ivanišević, 2005). The last ratio included in Kralicek's DF Indicator is the ratio of business revenues to total assets, a measure of the profitability of the company (X6).

The structure of the G-Index is as follows. The first (X1) and second (X2) indicators measure the profitability of the company through the share of retained earnings and of profit before tax, respectively, in total operating assets. The third indicator (X3) shows the profitability achieved, as a short-term performance measure representing the share of profit before tax in total sales. The fourth ratio (X4) can be classified as a dynamic liquidity measure, as it relates the cash flow generated in the current year to the sources of financing of total assets. The common feature of these four indicators is their positive effect on the financial health of the company, i.e. growth in the indicator value contributes to an increase in the Index, and vice versa. The share of inventories in total sales (X5) should only be minimized, as a high share indicates a standstill in the production or business cycle; hence its negative weighting (Krasulja and Ivanišević, 2005).

The calculated overall measure of each model is compared with reference values to make a final statement about the state of financial health (Table 1). If the value of EMS is below 4.50, the company is considered to be ripe for bankruptcy. Values between 4.50 and 5.85 constitute the gray zone, i.e. there is a financial threat and the company is at risk of bankruptcy but can recover, while a value above 5.85 is characteristic of prosperous companies (Altman, 2005). However, some authors point out that the application of this model may or may not indicate the presence of structural dysfunction in the organization (Rodić et al., 2017). The DF Indicator distinguishes eight different grades, which are combined here into three larger intervals for further analysis, mainly for reasons of comparability of the assessment results with the Altman model and the G-Index. A DF Indicator value of more than 1.5 is interpreted as a sign of good financial stability, while a negative value indicates impaired stability, i.e. insolvency. Values between these limits are interpreted as a state of satisfactory stability. The calculated value of the G-Index can also be divided into three zones of financial health: companies whose index value is below -0.60 are classified in the red (poor) zone, in the gray (medium) zone if the value lies in the interval from -0.60 to 1.80, and in the green (good) zone if the Index is above 1.80 (Gurčik, 2002). An overview of the defined zones of financial health is given in Table 2.

The assessment of the general situation in the companies under consideration represents the final step in the evaluation of their financial health. By comparing the results between the models, a better insight is gained into the weaknesses and strengths manifested in the individual dimensions that characterize the financial situation of agricultural enterprises in the Republic of Serbia.
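For reference, the weighted equations of the three models are given below in the coefficient forms commonly quoted in the literature. These forms are assumptions on my part, reconstructed from the sources cited above (Altman, 2005; Rodić et al., 2017; Gurčik, 2002), and should be verified against them; the X_i are the ratios defined for each model in the preceding paragraphs.

\begin{aligned}
\mathrm{EMS} &= 3.25 + 6.56\,X_1 + 3.26\,X_2 + 6.72\,X_3 + 1.05\,X_4 ,\\
\mathrm{DF}  &= 1.5\,X_1 + 0.08\,X_2 + 10\,X_3 + 5\,X_4 + 0.3\,X_5 + 0.1\,X_6 ,\\
G            &= 3.412\,X_1 + 2.226\,X_2 + 3.277\,X_3 + 3.149\,X_4 - 2.063\,X_5 .
\end{aligned}

Note that the negative weight on X5 in the G-Index matches the remark above that the share of inventories in total sales should be minimized.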
Results and Discussion

The results of the analysis were considered at the level of the overall model values and of the sub-indicators. The results of the application of the EMS model are presented first (Table 3). EMS consists of one indicator of static liquidity (X1) and three indicators of company profitability (X2, X3 and X4). The results show that net working capital represents 12-14% of total assets (median 10-15%). The recorded values of the static liquidity ratio (X1) are a consequence of low debt. The ratio of retained earnings to total assets (X2) ranges from 30% to 32% (indicator values from 0.30 to 0.32) during the period of analysis, with a significantly lower median of 23% to 24%, which is due to the influence of individual companies that achieve higher business performance than the rest of the sample. The same is observed when analysing the distribution of the values of the other business success indicators that make up the EMS model; it is also found that in 25 of 190 cases (13.2%) the companies considered had a negative financial result. The percentage of profit before tax (indicator X3) varies significantly across individual units of the sample, ranging on average between 3% and 5% (median between 2% and 4%). Relatively high values are also observed for the indicator measuring the share of sales in total assets, with a slight downward tendency, as the recorded values range from 0.74 in 2020 and 2021 to 0.84 in 2017. The instability of income in enterprises from the agricultural sector can be attributed to the seasonal nature of agricultural production, but also to the strong dependence of yield levels on the climatic conditions in a given year, especially under open-field farming. The findings of authors who studied the impact of seasonality on the prices of agricultural products confirm this, which is an additional constraint in planning the long-term development of these enterprises (Ionuț et al., 2022).

Apart from the aforementioned decrease in the value of the X4 indicator, the values of the other sub-measures remained stable over the period of analysis. Satisfactory liquidity and relatively high values of the short-term profitability indicators resulted in approximately equal contributions of indicators X1, X2 and X4 to the final value of EMS, which follows the dynamics of changes in indicator X4 (on average from 6.41 in 2017 to 6.21 in 2021).

In contrast to Altman's model, Kralicek's DF Indicator also includes measures of capital structure (X1 and X2) and profitability (X4). In addition to indicator X3, which is identical to the third measure of the Altman model, and X6, which uses business income instead of the sales revenue in the numerator of the fourth Altman indicator, the DF Indicator also includes the ratio of inventory value to total revenue as a measure of liquidity. The recorded values of ratios X1 and X2 confirm the low indebtedness of the studied companies (Table 4). Total assets are 5.37 to 6.58 times higher than total liabilities (X2), which corresponds to a debt share of 15.2% to 18.6% of total assets. The relatively low debt level also influenced the higher value of the ratio of total profit before tax and depreciation, from 0.34 in 2019 to 0.41 in 2017. The recorded values of both indicators are characterised by a pronounced deviation of the average from the median, which is due to the presence of several companies with a particularly high proportion of debt (over 70%).
In addition to the X3 indicator, which also appears in the previous model, a particularly high share of income in total sources is observed (from 0.97 in 2017 to 0.87 in 2021), likewise with a slight downward trend, which corresponds to the dynamics of the X4 indicator of the Altman model. Profit before tax as a percentage of total revenues (X4), as a measure of profitability, takes rather low values, ranging from 6% to 10% over the period of analysis, owing to the companies' weak performance. The recorded value of inventories accounts for 38% to 41% of total revenues.

The first three financial indicators that make up the G-Index are also included in the previous two models: the X1 indicator as the second indicator in the EMS model, X2 as the third indicator in both models, and X3 as the fourth indicator of the DF Indicator (Table 5). The ratio of net cash flow to total assets (X4), as a dynamic measure of liquidity, shows negligible average values.

Once the total value of each model had been calculated for every company, a comparison was made with the previously established reference values, on the basis of which the companies were assigned to individual financial health zones (Table 6). The companies under consideration were assessed using the three models in each of the five years of analysis.

The results for each year show a changing structure of representation of the individual zones, most favourable in the first year (2017). In the following two years, the number of companies in the gray and red zones increased, with fewer cases in the green zone of financial health. From 2017 to 2019, the percentage of cases in the green zone decreased from 45.6% to 35.1%, while the percentage in the gray zone increased from 42.1% to 50.9%. Thereafter, by 2021, the overall result improved to a level similar to that of 2017, albeit with an increase in the percentage of cases in the red zone (12.3% in 2017 and 19.3% in 2021).
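To make the zone-assignment step concrete, the minimal sketch below maps an overall model score to one of the three financial health zones using the cut-off values quoted in the methodology. The function name and the example scores are illustrative only (the EMS value 6.21 echoes the 2021 sample average reported above; the DF and G values are hypothetical), and boundary values are assigned to the gray zone here by convention.

```python
def health_zone(model: str, score: float) -> str:
    """Map an overall model score to a financial health zone,
    using the cut-off values quoted in the text."""
    cutoffs = {
        "EMS": (4.50, 5.85),   # < 4.50: bankruptcy zone; > 5.85: prosperous
        "DF": (0.0, 1.5),      # < 0: insolvency; > 1.5: good stability
        "G": (-0.60, 1.80),    # < -0.60: red zone; > 1.80: green zone
    }
    low, high = cutoffs[model]
    if score < low:
        return "red"
    if score > high:
        return "green"
    return "gray"


# Illustrative scores for one company-year
for model, score in [("EMS", 6.21), ("DF", 1.2), ("G", 0.4)]:
    print(model, "->", health_zone(model, score))
```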
Conclusions

The analysis of the assessment results for the individual models, as well as of the values of the sub-indicators, leads to the following conclusions about the financial situation of agricultural companies in the Republic of Serbia:

o The largest numbers of cases in the green and red zones were recorded when assessing the value of the EMS model (105 cases or 55.3% and 50 cases or 26.3% of the sample, respectively). Favourable values of the aggregated function are mostly a consequence of the high value of net working capital, which is largely determined by the low share of current liabilities and contributes to higher values of the liquidity indicator, as well as of high business income. In the cases where the value of the function fell in the red zone, extremely low profitability was observed, which, together with often low business income, reduces the values of the three indicators of business success.

o The results of the evaluation according to the DF Indicator are characterized by the lowest percentage of cases in the red zone (3.2%), with almost equal participation in the green and gray zones (50.0% and 46.8%). Favourable ratings under this model are in most cases due to low debt and a high value of inventories, which increase the contributions of the first two and the fifth sub-indicator. Extremely high values of the indicators using these items ultimately led to positive ratings, while the measures of corporate success contributed little.

o When applying the G-Index, the largest number of cases was found in the gray zone (66.4%), with a roughly equal share of companies in the green and red zones (18.9% and 14.7%, respectively). Most of the value of the Index comes from the first indicator, the ratio of retained earnings to total assets. In individual cases, the value of inventories significantly reduced the value of the model through the fifth indicator, while the measure of dynamic liquidity had practically no effect on the model value due to the relatively low net cash flow. The performance indicators based on profit before tax also had a negligible impact.

The results of this analysis also show that there are significant differences in financial situation between the individual large and medium-sized agricultural enterprises considered. A small number of companies combine high business success with high liquidity. The largest number of cases show extremely low profitability ratios and low liquidity. Most of the analysed companies recorded low levels of indebtedness, especially long-term debt. In summary, the financial situation of most of the companies studied is at risk, even though the results at the group level are mostly satisfactory.

Table 2. Reference values of the individual zones of the financial health of the company.

Table 3. Application of the EMS model to Serbian agricultural enterprises in the period 2017 to 2021. Source: Author's own calculation based on data from publicly available financial statements of the selected agricultural companies published by the Serbian Business Registers Agency.

Table 4. Application of the DF Indicator to Serbian agricultural enterprises in the period 2017 to 2021. Source: Author's own calculation based on data from publicly available financial statements of the selected agricultural companies published by the Serbian Business Registers Agency.

Table 5.
Application of the G-Index for Serbian agricultural enterprises in the period 2017 to 2021 Author's own calculation based on the data from publically available financial statements of the selected agricultural companies published by the Serbian Business Registers Agency. Table 6 . Comparison of model application results by caseload in specific financial health zones from 2017 to 2021 Source: Author's own systematization.
Reporting of harms outcomes: a comparison of journal publications with unpublished clinical study reports of orlistat trials Background The quality of harms reporting in journal publications is often poor, which can impede the risk-benefit interpretation of a clinical trial. Clinical study reports can provide more reliable, complete, and informative data on harms compared to the corresponding journal publication. This case study compares the quality and quantity of harms data reported in journal publications and clinical study reports of orlistat trials. Methods Publications related to clinical trials of orlistat were identified through comprehensive literature searches. A request was made to Roche (Genentech; South San Francisco, CA, USA) for clinical study reports related to the orlistat trials identified in our search. We compared adverse events, serious adverse events, and the reporting of 15 harms criteria in both document types and compared meta-analytic results using data from the clinical study reports against the journal publications. Results Five journal publications with matching clinical study reports were available for five independent clinical trials. Journal publications did not always report the complete list of identified adverse events and serious adverse events. We found some differences in the magnitude of the pooled risk difference between both document types with a statistically significant risk difference for three adverse events and two serious adverse events using data reported in the clinical study reports; these events were of mild intensity and unrelated to the orlistat. The CONSORT harms reporting criteria were often satisfied in the methods section of the clinical study reports (70–90 % of the methods section criteria satisfied in the clinical study reports compared to 10–50 % in the journal publications), but both document types satisfied 80–100 % of the results section criteria, albeit with greater detail being provided in the clinical study reports. Conclusions In this case study, journal publications provided insufficient information on harms outcomes of clinical trials and did not specify that a subset of harms data were being presented. Clinical study reports often present data on harms, including serious adverse events, which are not reported or mentioned in the journal publications. Therefore, clinical study reports could support a more complete, accurate, and reliable investigation, and researchers undertaking evidence synthesis of harm outcomes should not rely only on incomplete published data that are presented in the journal publications. Electronic supplementary material The online version of this article (doi:10.1186/s13063-016-1327-z) contains supplementary material, which is available to authorized users. Background There are two driving concerns that continue to grow when relying on published medical research to reflect the truth [1]. First, trials often remain unpublished years after completion, and the results are, therefore, unavailable to the public. Second, trials often display a distorted representation, where publications present a biased or misleading description of the design, conduct, or results of a trial [2,3]. Journal publications and registry reports currently represent the main information source for obtaining summaries of clinical trial data for the purposes of clinical and health policy decision-making [4]. 
Results in the past have found reporting in journal publications to be inadequate and inconsistent [5], and although clinical trial registries have been responsible for making major strides in improving the transparency of trial data, a recent study suggested that the results from trial registries often remain unavailable [6]. The clinical study report (CSR) is a structured document that summarises the analysis methods and results of a clinical trial submitted for marketing authorization of an investigational medicinal product in the European Union, Japan, or the United States. CSRs are 'integrated' full reports, which can be up to a thousand pages in length, and include extensive detailed information on the efficacy and harms of interventions. The information in these documents relating to harms is usually separated individually by adverse event (AE) and serious adverse event (SAE) terms in summary tables and listings. In the past, researchers have made major efforts to gain access to CSRs, with the intention of informing regulatory decision-making [7]. The information contained in the CSRs has proved vital when evaluating both the efficacy [8] and safety [9] of clinical interventions. Evidence from journal publications has previously been questioned, and even overturned, by findings from unpublished information reported in the CSR [10]. In December 2009, Roche was the first global healthcare company to release clinical study reports, after growing concerns over their product Tamiflu [8]. Their policy now allows researchers to access the CSRs and summary reports used for regulatory purposes since 1 January 1999. In 2010, the European Medicines Agency (EMA) [11] became the first major regulatory agency to agree to an open-access policy for confidential documents, including CSRs. However, in 2013, the EMA was forced to step backwards when the general court of the European Union (EU) ordered it to limit access to its reports due to legal cases brought by two drug companies [12]. In October 2014, the EMA published its final policy on access to documents and CSRs [13]. Orlistat (trade name: Xenical) is marketed by Roche in most countries. It is used in the treatment of obesity as a selective inhibitor of gastric and pancreatic lipase [14]. Mild, but unpleasant, gastrointestinal (GI) side effects are commonly reported with orlistat use. A recent review [15], including 16 randomized placebo-controlled trials of orlistat, estimated an increased risk of discontinuations due to AEs of 3 % (95 % CI 1-4 %) with orlistat. The most common AEs leading to withdrawal were GI (40 %); only eight (50 %) trials specified the number of AEs due to GI problems. Another study [16] of 29 trials of orlistat indicated an increase in the risk of diarrhoea, flatulence, abdominal pain, and dyspepsia in orlistat-treated patients compared with placebo. No SAEs were reported in these reviews. Concern exists that there may also be an associated increased risk of serious hepatic events, as indicated in a case series study using primary care data from the Clinical Practice Research Datalink (CPRD) [17]. We aim to carry out an exploratory review consisting of two main analyses: (1) a comparison of the numbers of reported harms (AEs and SAEs) and (2) a comparison of the structured reporting of harms. Both analyses will be assessed between CSRs and journal publications using a case study of Roche-sponsored orlistat trials to provide a summary of the added value, if any, from the CSRs.
To our knowledge, an in-depth exploration that includes a detailed meta-analysis of this type has not been published in previous CSR-related research.

Methods We planned to identify independent trials, each of which was reported within two different trial summary reports: CSRs and publicly available journal publications. The aim was to compare these document types and determine whether there were inconsistencies in the quality and quantity of reporting of harms. The CSRs were released by Roche (Genentech; South San Francisco, CA, USA).

Identifying the studies A search was implemented by one researcher (AH) in the Cochrane Central register (final search 6 July 2013) and Ovid MEDLINE (final search 2 July 2013) to obtain all relevant published, randomised, controlled trials comparing orlistat against a placebo for obesity treatment. The search strategies are provided in Additional file 1. Each full article was assessed independently by one reviewer (AH) to determine eligibility. We included published and unpublished RCTs investigating the use of orlistat. No restriction was placed on the clinical area. Observational studies and studies that did not specify orlistat as their primary intervention were excluded.

Data collection and extraction Roche was contacted and asked to provide the corresponding CSRs for each of the publications identified. A Roche CSR consists of the following five modules of information:
Module I, the 'core report': background and rationale, objectives, materials and methods, efficacy results, safety results, discussion, conclusion, and appendices
Module II, 'study documents': protocol and amendment history, blank case report forms (CRFs), subject information sheet and consent form, glossaries of original and preferred terms, randomization list, reporting analysis plan (RAP), certificates of analysis, list of investigators, and list of ethics committee members
Module III: listings of demographic and efficacy data
Module IV: listing of safety data
Module V, 'statistical report and appendices': statistical analysis and efficacy results

For each matching document pair (CSR and journal publication), the following data were extracted:
- Content and characteristics of both document types, including whether a clear primary objective of safety was defined, and a word count of the information relating to harms in both the journal publication (including any online supplementary material) and the CSR documents, text only (word count performed using the software AnyCount version 7.0 [18]). Missing pages relating to safety due to redactions were noted in the results; we managed to obtain these on further request.
- The name of each reported AE and SAE term recorded for both placebo and orlistat, with the overall number of patients in the safety population, as defined in the respective document. The intensity grading (i.e. mild, moderate, or severe), relationship to orlistat, and definition of the SAEs were also observed where possible. SAEs were defined as any event that was fatal or life-threatening, required hospitalization or prolongation of hospitalization, or was an overdose. The AE coding system was also detailed.
- The reporting structure of harms (with CONSORT-harms [19] used as a benchmark). The CONSORT extension for reporting harm outcomes extends ten checklist items of the CONSORT (2001) checklist to help support the reporting of harms-related data from RCTs.
This includes guidance on how to report harms in the title and abstract, introduction, methods (definitions, collection, and analysis), results (withdrawals, denominators, and type), and the discussion. One researcher (AH) extracted the data, and a second reviewer (CTS) checked the extraction. Discrepancies were resolved through consensus or recourse to a third reviewer (CG), where necessary. As there were no disagreements in the data extraction for the first three trials (NM16189, M37013, and M37002), extraction for the final two trials was carried out by one reviewer only (AH).

AEs and SAEs For a particular trial, all harms (AEs and SAEs) reported in either the journal publication or the CSR were extracted and compared across the two document types. The clinically validated medical terminology dictionary MedDRA is commonly used during the regulatory process by all stakeholders in healthcare; it is used for coding harm outcomes. These reported outcomes were then organized into each of the five levels of the MedDRA dictionary: the system organ class, high-level group term, high-level term, preferred term, and lowest level term. Outcomes are usually reported in the journal publications and CSRs as MedDRA preferred term level events. Therefore, we compared the total number of reported MedDRA preferred terms, and if a preferred term was reported in both the CSR and journal publication, the numerical data were compared and any discrepancies noted. For each MedDRA preferred term (AE and SAE), the data extracted from the CSRs were used to estimate risk differences, which were pooled across trials using fixed-effect meta-analysis. A corresponding meta-analysis was performed using the data extracted from the journal publications wherever relevant. The pooled risk difference (RD) with 95 % confidence interval [20] and the I² statistic [21] were compared between the CSR-based and the journal publication-based analyses. As the SAE data were sparse, a sensitivity analysis was undertaken to pool the relative risk (RR). We stress that these meta-analysis results are based on a subset of the eligible trials of orlistat and are presented for the purpose of methodological comparison rather than as definitive clinical results.

Structured reporting of harms Using the CONSORT-harms extension [19] as a benchmark for reporting harms data from a randomised controlled trial, documents were assessed across 15 adapted criteria (see Table 1) that focus on the methods and results. Each trial was classified as follows for each individual criterion:
BOTH: both documents report the criterion
CSR: the criterion is reported only in the clinical study report
Pub: the criterion is reported only in the trial publication
NR: the criterion is not reported in either document
The total number of criteria satisfied in each CSR and journal publication for a particular trial was calculated and expressed as a percentage of the 15 criteria. When both document types reported on any particular criterion (i.e. BOTH), the reported information was compared and classified as follows:
CSR (+): the CSR provides more information than the journal publication
Similar (O): both document types provide equal and similar information
CSR (-): the journal publication provides more information than the CSR

Results Thirty-one journal publications related to 31 randomised controlled trials of orlistat were identified in our search (Fig. 1). We requested access to the full CSRs from Roche corresponding to each of these trials.
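As an illustration of the pooling described in the Methods above, the following minimal sketch implements inverse-variance fixed-effect pooling of risk differences with a 95 % confidence interval and the I² statistic; the trial counts are invented for illustration and are not the orlistat data.

```python
# Minimal sketch of inverse-variance fixed-effect pooling of risk differences,
# with a 95 % confidence interval and the I^2 heterogeneity statistic.
# Trial counts are invented: (events_drug, n_drug, events_placebo, n_placebo).
from math import sqrt

trials = [(12, 100, 5, 100), (20, 150, 9, 150), (7, 80, 6, 80)]

weights, rds = [], []
for e1, n1, e0, n0 in trials:
    p1, p0 = e1 / n1, e0 / n0
    rd = p1 - p0                                   # per-trial risk difference
    var = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0  # variance of the RD
    rds.append(rd)
    weights.append(1.0 / var)                      # inverse-variance weight

pooled = sum(w * rd for w, rd in zip(weights, rds)) / sum(weights)
se = sqrt(1.0 / sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

q = sum(w * (rd - pooled) ** 2 for w, rd in zip(weights, rds))  # Cochran's Q
i2 = max(0.0, (q - (len(trials) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled RD = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), I2 = {i2:.1f}%")
```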
The CSRs could not be provided for 26 of these trials. Of the 26 trials, 17 were not Roche-sponsored, and therefore the CSRs were not held by Roche. Nine trials pre-dated Roche's policy extension, which only allows access to trials dating back to 1 January 1999. CSRs were obtained and matched with the corresponding journal publication for five trials (NM16189 [17], M37013 [18], M37002 [19], M37047 [20], and BM15421 [21]). Module I of the CSR was provided for all trials. Module II was not provided for one trial (BM15421), and module V was not provided for one trial (NM16189). We contacted Roche for the reasons behind these missing modules and the missing pages. Roche informed us that these sections contained confidential information and had to be removed. Modules III and IV were not provided for any of the trial CSRs because they contained individual patient data listings. Table 2 shows the content and characteristics for each trial document pair. Safety was not the primary objective for any of the five trial journal publications but was defined as a secondary objective in three journal publications [22-24] and was not specified in two journal publications [25, 26]. Two trials [23, 25] were published in the journal Diabetes, Obesity and Metabolism; two trials [24, 26], in the journal Diabetes Care; and one trial [22], in the Journal of the American Medical Association (JAMA). The mean word count across the five trial journal publications was 7,265 (standard deviation (sd) 1,894), with an average of 10 % of words (mean (sd) 757 (287)) dedicated to safety. The CSRs had a mean (sd) of 163,411 (96,872) words across all trials, with approximately 3 % (mean (sd) 4,663 (1,446)) related to safety. The mean difference between the CSR and journal publication was 3,906 (95 % CI (1,756; 6,056)) words.

Table 2 footnotes: CSR = clinical study report; Pub = journal publication. † Safety was a secondary objective in both the CSR and journal publication. ¥ Objective to assess improvements in glycaemic control and cardiovascular disease risk in both the CSR and journal publication. ɸ Modules: I = core report (background and rationale, objectives, materials and methods, efficacy results, safety results, discussion, conclusion and appendices); II = study documents (protocol and amendment history, blank case report forms (CRFs), subject information sheet and consent form, glossaries of original and preferred terms, randomization list, reporting analysis plan (RAP), certificates of analysis, list of investigators, list of ethics committee members); III = listing of demographic and efficacy data; IV = listing of safety data; V = statistical reports and appendices (statistical analysis, efficacy results). ✓ Module provided in CSR. * Roche did not provide these modules, since they contained individual patient data listings, and they were therefore deleted. ϵ We could only count words for modules that were made available by Roche, so the actual number would be greater than this; the percentage of words relating to harms would therefore differ. Π The CSRs each had one missing page in module I, which Roche provided upon further request; any additional information from these pages was used in the results.

Comparison of reported AE and SAE event data MedDRA version 2.3 had been used to code AEs and SAEs in all five trials.

Adverse events The total number of MedDRA preferred terms for AEs varied across trials (Fig. 2) (forest plots are provided in Additional file 1).
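As context for the term-level counts reported next, a minimal sketch of the document-pair comparison described in the Methods is given below: for each MedDRA preferred term, record whether it appears in the CSR, the journal publication, or both, and flag numerical discrepancies where both report it. The terms and counts are invented for illustration.

```python
# Sketch of the term-level comparison between document types. Counts are
# (orlistat, placebo) patient numbers and are invented for illustration.

csr = {"diarrhoea": (15, 4), "oily spotting": (22, 1), "headache": (9, 8)}
pub = {"diarrhoea": (15, 4), "headache": (12, 8)}

for term in sorted(set(csr) | set(pub)):
    if term in csr and term in pub:
        status = "BOTH" if csr[term] == pub[term] else "BOTH (counts differ)"
    elif term in csr:
        status = "CSR only"
    else:
        status = "Publication only"
    print(f"{term}: {status}")
```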
The journal publications did not always report the complete list of terms identified in the corresponding CSR, but all of these 'missing' AEs were of mild to moderate intensity and were unrelated to the intervention. For instance, in one trial (M37013), very good consistency in reporting was observed between the CSR and journal publication, with 18 AEs reported in total, 18 (100 %) of which were listed in the CSR and 17 (94 %) in the journal publication. However, very poor consistency was observed for three trials (M37002, M37047, and BM15421), with 5 % or fewer of the total AEs being reported in the journal publication (M37002, one (5 %); M37047, one (4 %); BM15421, 0 (0 %)). When a MedDRA preferred term was listed in both the CSR and journal publication, complete agreement was observed in the numerical results (Additional file 2), except for one trial (M37013), where three additional patients with 'abdominal pain' on orlistat were identified within the journal publication. In the meta-analysis (MA) for the AEs (Table 3), 61 individual MedDRA preferred terms were reported in either the CSR or journal publication across the five trials (Additional file 1). Thirty (49 %) of these terms were reported in the CSR and corresponding journal publication for at least one trial, thereby allowing a comparison of the pooled results. In six (20 %) of the 30 MA comparisons, the magnitude of the effect differed (the 95 % CI for the pooled risk difference (RD) did not overlap between the CSR and the journal publication results). These include the AE terms 'increased defecation', 'oily spotting', 'oily evacuation', 'faecal incontinence', 'soft stools', and 'faecal urgency'. For the 31 AE terms that had only been reported in a CSR, 23 (74 %) analyses suggested an increased risk of an AE on orlistat, two (6 %) of which were statistically significant (faeces discolouration and dry skin); these AEs were mild and were unrelated to treatment. For four (13 %) terms, an increased risk of an event occurred with the placebo, one (3 %) of which was statistically significant (haemorrhoids) and of a mild grade.

Serious adverse events The total number of MedDRA preferred terms for SAEs was generally poorly reported in journal publications (Fig. 3; Additional file 3). In four trials (M37013, M37002, M37047, and BM15421), only 11 % or fewer of the total SAE terms were reported in the journal publication (11 %, 0 %, 0 %, and 0 %, respectively). All SAEs that were reported only in the CSR were of mild intensity grading and were unrelated to the treatment. In trial NM16189, 19 SAE terms were reported across the CSR and journal publication. Thirteen of these were reported in both documents, either with full numerical agreement (12 SAE terms) or with disagreement in numerical results (one depression SAE on orlistat reported in the CSR, and two depression SAEs reported in the journal publication) (Additional file 3). Five SAE terms were only reported in the CSR (demyelination (one) and bronchospasm aggravated (one) on placebo, and convulsions (one), suicidal ideation (one), and liquid stools (one) on orlistat). Encephalomyelitis as an SAE was reported for placebo in the publication but not the CSR. Trial M37013 reports nine SAEs, with only 'diarrhoea and dehydration' on orlistat reported in both documents.
The remaining eight SAEs were only reported in the CSR: death (one), diabetes mellitus (one), hysterectomy and perineoplasty (one), and mitral lesion (one) on placebo, and chronic cholecystitis (one), nephrectomy due to previous renal carcinoma (one), nephrectomy and lithotripsy due to previous nephrolithiasis (one), and ovary carcinoma and ascites (one) on orlistat. The three remaining trials (M37002, M37047, and BM15421) report a high number of SAEs (40, 53, and 255) within the CSR that have not been reported in the corresponding journal publication. In the MA for the SAEs (Table 4), 326 individual terms were reported in either the CSR or journal publication across the five trials (Additional file 4). Fourteen (4 %) of these terms were reported in the CSR and corresponding journal publication for at least one trial, allowing a comparison of the pooled results. For the 311 (95 %) terms that had only been reported in a CSR, 16 (5 %) analyses suggested an increased risk of an SAE on orlistat, two (13 %) of which were statistically significant (carotid artery stenosis and varicose veins), but all were mild and unrelated. In the sensitivity analysis, pooling relative risks rather than risk differences, no SAEs were found to be statistically significant. However, we were unable to estimate the pooled relative risk for ten AEs (including carotid artery stenosis and varicose veins), as they included multiple studies reporting no events in the placebo group.

[Fig. 3 caption: The total number of serious adverse events reported in the clinical study reports (CSRs) and journal publications across all five trials. Total = total number of individual MedDRA preferred terms related to SAEs reported across the CSR and journal publication for a trial.]

Structured reporting of harms The quality of reporting of harms-related information, as assessed against the 15 criteria adapted from the CONSORT-harms checklist, is displayed in Table 5. The CSRs satisfied 70-90 % of the methods-related criteria across the five trials, compared to the journal publications, which satisfied between 10 % and 50 %. The CSRs consistently provided much greater detail regarding planned analyses than the journal publications, and on only one occasion did the journal publication provide greater detail than the CSR (trial M37013; item 3, timing and time frame of surveillance for AEs). Both the CSRs and the journal publications satisfied 80-100 % of the criteria in their results sections, but greater detail was generally provided in the CSR. This included full summary tables of the AE and SAE data, including withdrawals due to harm, severity grading, and denominators for the numbers included in the safety population.

Discussion This case study has shown differences in the completeness and quality of reporting of harms-related information between journal publications and CSRs for five orlistat trials. Information on patient-relevant harm outcomes, including SAEs, which is required for unbiased trial evaluation, was missing from the publicly available journal articles. Including these missing data from the CSRs altered the magnitude of the pooled risk difference estimates in a few cases and even resulted in five statistically significant differences (including three AEs and two SAEs).
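The zero-event problem noted above can be illustrated with a short sketch: when the comparator arm reports zero events, the relative risk is undefined while the risk difference remains estimable. The counts below are hypothetical, and the 0.5 continuity correction shown is a common workaround that, per the text, was not applied in this study.

```python
# Hypothetical 2x2 counts: 3/120 events on treatment, 0/118 on placebo.
def relative_risk(e1, n1, e0, n0, cc=0.0):
    """Relative risk; cc is an optional continuity correction per cell."""
    p1 = (e1 + cc) / (n1 + 2 * cc)
    p0 = (e0 + cc) / (n0 + 2 * cc)
    return p1 / p0  # division by zero when e0 == 0 and cc == 0

def risk_difference(e1, n1, e0, n0):
    return e1 / n1 - e0 / n0  # always defined, even with zero events

print(risk_difference(3, 120, 0, 118))        # estimable without correction
print(relative_risk(3, 120, 0, 118, cc=0.5))  # estimable only with correction
# relative_risk(3, 120, 0, 118)               # raises ZeroDivisionError
```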
The statistically significant risk differences for AEs were faeces discolouration, dry skin, and haemorrhoids, and for SAEs, carotid artery stenosis and varicose veins. However, the statistical significance of these SAEs could not be confirmed in a sensitivity analysis pooling relative risks [27, 28], due to zero events. The events were graded as mild and were classified as unrelated to treatment. Overall, the results from the journal publications in this study follow findings from past studies [15, 16], with a more detailed meta-analysis showing predominantly mild gastrointestinal harm outcomes. The quality of reporting between journal publications and CSRs showed inconsistencies when assessed by the CONSORT-harms reporting criteria. At 70-90 %, the methods section criteria were more often satisfied in the CSRs, compared to only 10-50 % of the criteria in the journal publications. However, both document types satisfied 80-100 % of the results section criteria, albeit with greater detail being provided in the CSR. The journal publication was often incomplete when reporting planned analyses and summary tables of AEs and SAEs, which were missing information on withdrawals, severity grading, and numbers in the safety population. Journal publications are often impeded by word count restrictions. However, inadequate reporting of harms is still noticeable, even after the release of the CONSORT-harms extension [19], as the findings from our recent review [29] suggest. In contrast, CSRs have no such word restrictions imposed, and theoretically all relevant information should be included. An alternative and more viable solution appears to be that journals should require more thorough reporting of harms via online supplements (e.g. de-identified CSRs, study protocols, and complete tables of AE-related information) [30]. In a recent study [4], findings on harms information obtained from the CSRs were found to be more complete and robust compared with the corresponding publicly available sources (journal publications and registry reports). More than 86 % of all harm outcomes (AEs and SAEs) were available from the CSRs, compared to only 26 % from the journal publications. Combining harms data from registry reports and journal publications increased the proportion of outcomes to 43 %. Furthermore, withdrawals due to AEs were detailed completely in 91 % of the CSRs, with only 51 % of the journal publications providing complete information. In another study [31], inadequate reporting of harms was shown for the Medtronic-manufactured product recombinant human bone morphogenetic protein 2 (rhBMP-2), used in spinal fusion surgery. As in our investigation, harms data were found to be missing from the publications, with considerably more data found in the corresponding trial CSRs. Further evidence of poor reporting of benefits and harms was found in a recent investigation of the product duloxetine in patients with major depressive disorder [32]. The CSRs contained extensive data on major harms that were unavailable in the journal publications and in trial registry reports. Restricting evidence synthesis to journal publications would effectively miss these important harms. Further empirical comparisons such as ours, in different clinical areas, would be valuable. The drive to make clinical trial data more accessible has garnered widespread international support, with funders, academics, the pharmaceutical industry, publishers, and regulators supporting the move towards greater transparency.
For example, the BMJ recently stated that it will no longer publish trials of drugs or devices where the authors do not commit to making the relevant anonymised patient-level data available; this was to be extended to all submitted clinical trials beginning 1 July 2015. In addition, the EMA has now adopted its new policy, making clinical trial data more accessible [13], including access to full CSRs. Roche should also be commended for voluntarily submitting their data and allowing further access to their CSRs. The new EU clinical trial regulation [33], published on 27 May 2014, also states under section (67) that 'trial data should be publicly accessible and presented in an easily searchable format, with related data and documents (including trial protocol and CSR) linked together by the EU trial number'. Our study has a number of limitations. First of all, the meta-analysis results do not provide comprehensive, unbiased clinical results, as they are based on only five of the 31 eligible orlistat trials, due to the inability to obtain CSRs for the remaining 26 identified trials, which were not Roche-sponsored or pre-dated Roche's policy (which covers trials dating back to 1 January 1999). The meta-analyses were conducted without any adjustment for multiplicity, meaning that there is an increased chance of a false positive result, and the results should be interpreted with caution. In addition, for the five CSRs obtained from Roche in this study, some of the reports failed to include any information from modules II, III, IV, and V, and some had missing pages. Individual participant-level data and potentially other important information on harms are often presented in Roche's CSR modules III-V. Access to these modules and confidential patient listings may have been restricted due to privacy concerns, and these missing sections could present a possible cause of bias in the results. In a recent study [34], reviewers re-analysed one of SmithKline Beecham's studies by requesting and accessing the full individual participant-level data sets to compare the efficacy and safety of paroxetine. The findings from this study support the necessity of making trial individual participant-level data and protocols available to support evidence-based decisions. Module I of the CSRs also detailed that only commonly observed AEs (defined as those events with an incidence rate in the orlistat group of ≥ 5 %) were summarized, indicating that there are potentially more unreported AEs missing from the primary trial data. Therefore, the results in this study were based only on the information available. Conclusions This case study confirms that CSRs can provide more complete and robust information on harms data collected in clinical trials, compared to publicly available journal publications. CSRs often provide extensive information about the study methods, including the design, conduct, and analysis of the trial. In addition, these reports can supplement journal publications to help facilitate the assessment of risk of bias in evidence synthesis of harm outcomes. Consequently, restricting an evidence synthesis to journal publications could have implications for systematic reviewers and other stakeholders involved in healthcare research when reaching reliable conclusions about the harmful effects of medical interventions.
The Relationship between the Perception of Meritocracy and Productivity of Education Staff of West Tehran

Background/Objectives: This study was conducted with the aim of investigating the relationship between the perception of meritocracy and the productivity of education staff of West Tehran. Methods/Statistical Analysis: The study population consisted of 516 education staff of West Tehran. According to the Krejcie and Morgan table, the sample size was 217 education staff of West Tehran, and the sampling method was stratified sampling. Findings: To measure the perception of meritocracy, the standard meritocracy questionnaire of Moslehi was used, which contains 51 items and eight components (communication skills, decision-making, encouragement of innovation and change, working relationships, leadership skills, professional skills, use of the positive capabilities of self and others, and development of team activities); to measure productivity, the questionnaire of Hersey was used. The validity of the questionnaires was approved according to the supervisor's view, and reliability, estimated by Cronbach's alpha, was 0.715 for the perception of meritocracy and 0.771 for staff productivity. Data were described at the descriptive level using frequencies, means, and standard deviations, presented in tables and diagrams. Application/Improvement: Pearson's correlation coefficient was used to investigate the hypotheses. The results showed that there is a significant positive relationship between the perception of meritocracy and the productivity of education staff in West Tehran.

Hamid Taboli1* (Payam Noor University, Department of Public Administration, Tehran, Iran; htaboli@yahoo.com), Ehsan Mahdavi2 and Taghva Khosravi2 (Public Administration Decision-making and Policy, Islamic Azad University, Kerman, Iran; Mahdaviehsan7@gmail.com, taghva_khosravi@yahoo.com)

Introduction Productivity improvement is one of the most important strategies for economic and social development; it can follow from reforming and improving processes, improving working relationships, reforming individual and group behaviors, increasing motivation, increasing quality of life, increasing prosperity, increasing employment, and increasing salary and wage levels (due to improvements in production and profit in the organization). Countries of the world, whether undeveloped, developing, or developed, realized the importance of productivity improvement when they were damaged by economic problems such as inflation and economic recession. Competency and skills depend heavily on the efforts of the organization to empower its workforce in order to enhance competitive advantage, innovation, and effectiveness 1. Meritocracy is the process of creating attitudes, behaviors, and moral patterns with stable values through the application of methods and standards for absorbing, using, and fostering human resources, in such a manner that the competencies, talents, and capabilities of human resources match the future needs of the organization; meritocracy is also a dynamic process that must continually be investigated and that always evolves to fit the needs of the organization 2. Competencies are defined as the skills, knowledge, abilities, and other characteristics that a person needs to do a job effectively 3.
Productivity is always affected by interfering variables that tarnish its brightness: factors such as the low quality of performed work, the inefficiency of the structure, weak management systems, lack of meritocracy in management, lack of appropriate cultural contexts for the implementation of productivity-related projects, job dissatisfaction of staff, lack of job stability among managers and staff, lack of trust between managers and staff, lack of proper training and updating in the field of productivity systems 4, lack of a long-range vision in management, lack of work ethic among staff, the low quality of the management system, and other reasons have made productivity a complex variable 5.

Work communication should be effective in order for the organization and its management to be effective and play their key role. In fact, effective communication can be considered the foundation of modern organizations. Effective business communication means that everything the sender has sent in the message, verbal or nonverbal, is received by the destination or recipient, that the recipient interprets the message as the sender intended, and that the sender's expectation and the recipient's response match each other. Communication is like the blood that flows in the veins of the organization, and a lack of communication will cause a breakdown in the heart of the organization 9.

Decision-making is the process of thinking that is carried out around a problem and leads to a choice or judgment. Therefore, a decision appears after a period of discussion as an option for completing a task or action. Hard decisions surround all our social issues 10. One of the main activities of management is decision-making. Decision-making deals with recognizing issues, determining alternatives for solving a problem, choosing among them, and implementing the chosen solution 11. Decision-making is the first duty of any director. The act of decision-making in administering an organization's affairs is so important that some writers define the organization as a network of decisions and management as the act of decision-making 12. Decision-making is selecting a solution from among the solutions that are at the center of the organization's planning. Decision-making can be imagined as a perfectly rational process in which goals are created, the issue is expressed, solutions are identified and evaluated, a selection is carried out and implemented, and supervision is applied. Cyert and March believe that search in the decision-making process usually leads to exploration in the vicinity of obvious solutions; of course, broad searches for new and innovative solutions should not be forgotten 13.

Productivity is a comprehensive concept, one of the most important indicators of the efficiency of various economic sectors and activities, and a suitable criterion for evaluating the performance of firms and organizations and determining how successful they are in achieving their goals 14. Productivity is among the factors that ensure the durability and survival of organizations in the current competitive world. A prevailing culture of productivity leads to the efficient use of all the spiritual and material resources of organizations, so that the powers, talents, and potential possibilities of the organization constantly flourish; without adding new technology and manpower, the organization can utilize its possibilities, conditions, capacities, and manpower capabilities with regenerative vitality and creativity towards realizing organizational goals. Optimal productivity is not achieved by changing structures, adding technology, setting agendas, and issuing circulars; the human being is the center of any personal, social, and organizational productivity. Therefore, the most attention should be paid to human factors in organizational productivity, and in this field motivation is considered one of the important factors.
The most important aspects affecting productivity include meaningful and challenging work, self-management, supportive leadership, multi-dimensional skills, and individual- and group-based reward systems. So, given the importance of meritocracy in labor productivity, it is necessary to investigate the relationship between the perception of meritocracy and the productivity of education staff of West Tehran.

Competence, Communication, Decision Making, Productivity Mohammadi 6 defines competency as a group of knowledge, skills, and attitudes that affect a major part of a person's job and are correlated with job performance 7. Reference 5 also looks at competency as a distinguishing feature, offering a definition that was implied in McClelland's works and defining competency as the features that are related to superior performance or effectiveness in the job considered. In other words, in the view of the experts, competencies are evidence that a person has the features required for superior performance or effectiveness. Competencies can be among the motivations, behaviors, skills, social roles, or knowledge that one uses. Literally, competency means being worthy of merit, deserving, adequate, acceptable, and capable, and having enough readiness to enter a certain profession, and it has a direct relationship with holding a certificate in that profession; the word competency is derived from the Latin word «competere», which means appropriateness. The concept of competency first developed in psychology and refers to an individual's ability to meet specific environmental demands 7. The National Park Service Institute considers competency a set of knowledge, skills, and abilities in a particular job that allows the individual to achieve success in accomplishing tasks. As can be seen, this definition adds the component of ability to the components of competency 8.

Sub Hypotheses
- There is a relationship between communication skills and the productivity of staff.
- There is a relationship between decision-making and the productivity of staff.
- There is a relationship between encouragement of innovation and change and the productivity of staff.
- There is a relationship between working communications and the productivity of staff.
- There is a relationship between leadership skills and the productivity of staff.
- There is a relationship between professional skills and the productivity of staff.
- There is a relationship between the use of the positive capabilities of self and others and the productivity of staff.
- There is a relationship between the development of team work and the productivity of staff.

Validity, Reliability, and Scale of the Measuring Questionnaire The questionnaires of the present research are standard, because they are taken from other research. It is worth noting that the researcher gave the questionnaires to a group of experts, including university professors and directors in the studied organizations, and applied their comments. Therefore, the questionnaires used in this research have the required validity. Cronbach's alpha coefficient, computed with the software SPSS 21, was used to measure the reliability of the questionnaires. The Cronbach's alpha values obtained for the variables, reported in Table 1, are above 0.7, which indicates that the research questionnaires have high reliability (Table 1).
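For reference, a minimal sketch of the Cronbach's alpha calculation underlying the reliability check is given below (the study itself used SPSS 21); the simulated responses merely exercise the formula, so the resulting alpha will be near zero rather than the reported 0.715 and 0.771.

```python
# Minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item
# variances / variance of total scores). Random responses only exercise
# the formula; real correlated Likert items would give a positive alpha.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(217, 51))  # 217 respondents, 51 items
print(round(cronbach_alpha(responses), 3))
```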
Population and Statistical Sample The study population consisted of 516 staff of the departments of education of West Tehran. According to 9, the sample size was 217 education staff of West Tehran, and the sampling method was stratified sampling.

The technical definition of productivity is simple: it is solely the relationship between the output obtained and the amount of input used to produce that output; in other words, productivity = output / input tells us how much output can be obtained from one or more units of input 15. The Productivity Institute of Europe defines productivity as the degree and intensity of effective use of each of the factors of production, and claims that 'productivity is a kind of thinking and vision whereby each person can perform his tasks better every day than the day before; belief in improving productivity means having faith in human progress'. Davis has defined productivity as 'the change obtained in the amount of product relative to the consumed resources'. Mandel considers productivity 'the ratio between production output per unit of consumed resources, compared to the base year'. Labor productivity is the actual rate of output (per hours worked) provided by the staff of an organization. In most organizations, in order to measure the productivity of human resources, the physical quantity of produced goods, the monetary value of goods and services, or in some cases the added value, is divided by the number of employees, because it is difficult to measure actual working hours. If, for the calculation of labor productivity, added value is divided by the number of staff, the index shows on average how much added value each employee created 16.

Method The method used in this research is applied in terms of its objective, and descriptive and correlational in terms of data collection.

Hypotheses Based on the theoretical foundations of the research, the hypotheses are as follows. The Main Hypothesis: There is a relationship between the perception of meritocracy and the productivity of staff.
To measure the perception of meritocracy, the standard meritocracy questionnaire of 15 was used, which contains 51 items and eight components (communication skills, decision-making, encouragement of innovation and change, working relationships, leadership skills, professional skills, use of the positive capabilities of self and others, and development of team activities), and to measure productivity, the questionnaire of 9 was used, which contains 26 items with a 5-point Likert scale (very low, low, somewhat, high, and very high) (Table 2).

Results and Discussion Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between communication skills and the productivity of staff. The obtained correlation coefficient (R = 0.623) indicates a strong and direct correlation between the two variables (Table 3). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between decision-making and the productivity of staff. The obtained correlation coefficient (R = 0.527) indicates a strong and direct correlation between the two variables (Table 4). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between encouragement of innovation and change and the productivity of staff. The obtained correlation coefficient (R = 0.801) indicates a strong and direct correlation between the two variables (Table 5). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between working communications and the productivity of staff. The obtained correlation coefficient (R = 0.781) indicates a strong and direct correlation between the two variables (Table 6). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between leadership skills and the productivity of staff. The obtained correlation coefficient (R = 0.605) indicates a strong and direct correlation between the two variables (Table 7). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between professional skills and the productivity of staff. The obtained correlation coefficient (R = 0.627) indicates a strong and direct correlation between the two variables (Table 8). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between the use of the positive capabilities of self and others and the productivity of staff. The obtained correlation coefficient (R = 0.329) indicates an average and direct correlation between the two variables (Table 9). Since the significance level is smaller than the considered significance level (α = 0.05), there is sufficient reason to reject the null hypothesis. Thus, we conclude that there is a significant relationship between the development of team work and the productivity of staff. The obtained correlation coefficient (R = 0.215) indicates a weak and direct correlation between the two variables (Table 10).
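Each of the tests above follows the same pattern: a Pearson correlation between one meritocracy component and productivity, with the null hypothesis of zero correlation rejected when the significance level falls below 0.05. A minimal sketch with invented data is given below.

```python
# Minimal sketch of the repeated test: Pearson correlation between one
# meritocracy component and productivity; H0 (zero correlation) is rejected
# when p < 0.05. Data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
component = rng.normal(size=217)                       # e.g. leadership skills
productivity = 0.6 * component + rng.normal(size=217)  # correlated outcome

r, p = stats.pearsonr(component, productivity)
print(f"R = {r:.3f}, p = {p:.4f}, reject H0: {p < 0.05}")
```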
According to the table and the value of t, the regression equation with all eight predictors of the perception of meritocracy is significantly associated with staff productivity. Given the slope coefficients (column B), the regression equation is as follows: Y = a + b1·x1 + b2·x2 + … + b8·x8. By substituting the coefficients into this formula, the equation predicting staff productivity from the components of the perception of meritocracy is obtained. Given the slopes, the higher the values of the components of the perception of meritocracy, the higher the predicted staff productivity. As a result, there is a positive and meaningful relationship between the perception of meritocracy and the productivity of education staff of West Tehran; this result is consistent with the results of 10 and suggests that the greater the perception of meritocracy, the greater the staff productivity. Based on this analysis, it can be concluded that the most useful predictor is working communications.
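A minimal sketch of fitting such a prediction equation by ordinary least squares is given below; the data and coefficients are invented, whereas the study reports its own slopes in column B.

```python
# Minimal sketch of fitting Y = a + b1*x1 + ... + b8*x8 by ordinary least
# squares over the eight meritocracy components. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(217, 8))                 # eight predictor components
true_b = np.array([0.5, 0.3, 0.8, 0.7, 0.6, 0.6, 0.3, 0.2])
y = 1.0 + X @ true_b + rng.normal(size=217)   # simulated productivity

A = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # solve for [a, b1..b8]
a, b = coef[0], coef[1:]
predicted = a + X @ b                         # predicted staff productivity
print("intercept:", round(a, 2), "slopes:", np.round(b, 2))
```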
Conclusion The obtained correlation coefficient (R = 0.623) indicates a direct and strong correlation between the two variables, so it can be said that there is a relationship between communication skills and the productivity of staff. This result shows that the more communication skills the staff have, the better they can perform. The result of this hypothesis is consistent with the results of 9. Communication is a vital and dynamic process in the organization. An organization whose staff do not communicate effectively with each other, with clients, and with other organizations cannot acquire the capabilities necessary to perform its duties, and in any case the staff's motivation also gradually declines, because communication provides a proper context for the exchange of information, knowledge, and experiences. The more communication skills the staff have, the better they will be able to communicate with each other, with clients, and with other organizations, and the more they will enhance their productivity.

The obtained correlation coefficient (R = 0.527) indicates a direct and strong correlation between the two variables, so it can be said that there is a relationship between decision-making and the productivity of staff. The better the decision-making skills of the staff, the higher their productivity. The result of this hypothesis is consistent with the results of 15. A review of expert writings in management reflects the fact that decision-making is very close to the issue of management, and in some places equal to it; in this context, management means the decision-making process for meeting organizational objectives appropriately through the effective use of scarce resources in a changing environment. The better the decision-making skills of the staff, the more and better they will be able to achieve organizational goals and increase their productivity.

The obtained correlation coefficient (R = 0.801) indicates a direct and strong correlation between the two variables, so it can be said that there is a relationship between encouragement of innovation and change and the productivity of staff. The result of this hypothesis is consistent with the results of 15. Increased innovation in organizations can improve the quantity and quality of services, reduce costs, avoid waste of resources, reduce bureaucracy, and enhance efficiency, productivity, motivation, and job satisfaction among staff. Since the major part of work and human activity is done in organizations, the encouragement of innovation and change falls within the art of management. Fostering creativity, along with innovation as the product of this process, increases the effectiveness and efficiency of staff, especially in educational institutions, and ultimately increases staff productivity.

The obtained correlation coefficient (R = 0.781) indicates a direct and strong correlation between the two variables, so it can be said that there is a relationship between working communications and the productivity of staff. The result of this hypothesis is consistent with the results of 9. The more the staff improve their working communications with each other at the horizontal and vertical levels of the organization, the better they will be able to do their jobs. Having effective communication with other staff in the workplace enables staff to get help from others when they encounter a problem in their work and to perform their job duties better; this in turn increases their productivity.
-With regard to the relationship between the development of team activities and staff productivity, it is proposed to be promoted teamwork among employees. -With regard to the relationship between the use of capabilities of the team and staff productivity, it is proposed to be given more authority to staff to realize their capacities and abilities. -With regard to the relationship between encourage and staff productivity, it is recommended to be appreciated the staff who have achieved high score in productivity and given them career points. -With regard to the relationship between professional skills and productivity of staff, it is proposed to be placed on the agenda the presenting in-service trainings fit to organizational post for each staff.
FACE RECOGNITION WITH HYBRID TECHNIQUES : Face recognition systems are still challenged by numerous applications, particularly in close surveillance and in security systems. Most applications of face recognition use very large datasets, which creates difficulties for real-time processing and efficiency. This paper presents a framework to enhance a face recognition system, which has several stages; for good results, improvements are needed at each stage. A novel scheme is presented in this paper that gives better performance for a face recognition system. This scheme includes augmenting the datasets, especially the large datasets required for deep learning; changing the image contrast ratio and rotating the image through several angles, which can improve recognition accuracy; then cropping the appropriate face region for feature extraction; and finally obtaining the best feature vector for face recognition. The end result of this scheme demonstrates that the given framework is capable of detecting and recognizing faces with various poses, backgrounds, and appearances in real time. Introduction Face recognition is one of the best biometric recognition techniques and has been used widely in business and daily life. It has also been one of the most popular research directions in the field of computer vision and a classification problem in machine learning. Face recognition has many advantages, such as being contactless, more user-friendly, real-time, and more acceptable compared with other biometric recognition techniques such as fingerprint, iris, or gait recognition. Research on face recognition has been under development for around 50 years, yet it still has high research value. The development of face recognition can be viewed as two stages. The first stage used traditional methods such as Eigenfaces, Fisherfaces, and Local Binary Pattern Histograms (LBPH) for face recognition, while the second stage uses deep learning methods that have become very popular recently. The traditional methods are used for small datasets and can only overcome some issues. Taking Eigenfaces as an example, the fundamental idea of this method is Principal Component Analysis (PCA); it is straightforward and achieves a decent performance on small datasets. However, it is sensitive to illumination, since faces under the same lighting can be viewed as the same person. Compared with the traditional face recognition techniques, deep learning appears to have an advantage on large datasets, can cope with diverse conditions, and can extract the ideal face features. It is reported that the best performance on the Labeled Faces in the Wild (LFW) test dataset is around 60% accuracy using traditional methods, while deep learning achieves 99.47% accuracy. Some prominent reasons why face recognition technology is not used widely today are speed and accuracy. Even though deep learning gives good accuracy, the large amount of computation makes it hard to use on an ordinary PC or on embedded devices, and the speed of recognition is also a big issue. Moreover, learning a deep network for face recognition requires a huge dataset, which is also difficult to obtain in ordinary research work.
To address such issues, we have proposed a novel technique for dataset augmentation and a streamlined face recognition framework that can improve the accuracy and speed of face recognition. The rest of this paper is organized as follows: in section 2, we first present the pipeline of the face recognition system and then apply some image preprocessing methods prior to face recognition. In section 3, we describe the face recognition framework and the method for dataset augmentation; a deep learning model called the Convolutional Neural Network (CNN) is introduced afterwards. In section 4, we give the results for face recognition using the CNN algorithm. The Face Recognition System The face recognition system has the following stages: image capture, image preprocessing, face detection, and face recognition. As shown in Figure 1, we first capture an image, then preprocess it to enhance its quality and detect whether it contains a face. If one face has been detected, we crop it and send it to the deep network to extract the feature vector for recognition. Finally, we compare the feature vector with the feature dataset that was built previously. The common techniques for calculating the similarity of two feature vectors are the Euclidean distance and the angle-cosine method. In this paper we use the angle cosine to calculate the similarity of two feature vectors. We view two faces as the same person if the similarity of their feature vectors is higher than the threshold.
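The following is a minimal sketch of this pipeline in Python. The helpers preprocess, detect_face, and extract_feature are hypothetical placeholders for the stages developed in the following sections; only the angle-cosine comparison against a stored feature dataset is spelled out.

```python
# A minimal sketch of the four-stage pipeline described above
# (capture -> preprocess -> detect -> recognize).
import numpy as np

def recognize(image, known_features, threshold=0.4):
    gray = preprocess(image)          # gray conversion + linear stretch (hypothetical helper)
    face = detect_face(gray)          # Haar-based detector with rotation retries (hypothetical)
    if face is None:
        return None
    feature = extract_feature(face)   # CNN (VGG-style) embedding (hypothetical)
    # angle-cosine similarity against the stored feature dataset
    best_id, best_sim = None, 0.0
    for person_id, ref in known_features.items():
        sim = np.dot(feature, ref) / (np.linalg.norm(feature) * np.linalg.norm(ref))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None
```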
Image Capture and Preprocessing Captured images often contain a lot of random noise. If such an image is sent to the face detection and recognition systems directly, the results are always very poor, while sensible preprocessing of the images improves the results greatly. A novel algorithm for the illumination effect on faces, an image preprocessing algorithm for the multi-pose effect in face recognition, and a cross-age preprocessing technique for face recognition frameworks have been proposed to overcome these problems. Here, we use the fastest face detection algorithm, which is based on Haar features. As Haar features can only be used in the gray space of images, we first convert the image from RGB space to gray space. In order to strengthen the contrast ratio and improve the quality of the images, we choose the simplest image enhancement algorithm: the linear gray transform. In general, if the image was affected by the outside environment, the image quality deteriorates and the range of the gray space becomes very narrow. In order to expand the range of the gray space, we use the linear gray transform. Assume the pixel value of the original image is f(x,y) and the pixel value of the target image is g(x,y). The gray range of the original image is (fmin, fmax) and the target range is (gmin, gmax); the relational diagram is shown in Figure 2. The interpolation transform function is given in equation (1):

g(x,y) = ((gmax - gmin) / (fmax - fmin)) * (f(x,y) - fmin) + gmin,   (1)

where (x,y) is the location of a pixel, fmin and fmax are the minimum and maximum pixel values of the original image, and gmin and gmax are the minimum and maximum pixel values of the target image. The images we use are ordinarily 8 bit, giving 2^8 = 256 gray levels, so the range of the target image is [0, 255]; that is, gmin = 0 and gmax = 255. Equation (1) can therefore be rewritten in the form of equation (2):

g(x,y) = 255 * (f(x,y) - fmin) / (fmax - fmin).   (2)

As shown in Figure 3, the quality of the image is enhanced and the range of the histogram becomes considerably wider. The Improvement of the Face Detection Algorithm In this paper, we still use the face detection framework based on Haar features. In general, a face detector trained on Haar features always performs well on frontal faces and poorly on faces rotated through some angle. The Haar features we commonly use are shown in Figure 4. Another issue is that the face only occupies a small area of an image, so it takes a long time for the detector to scan the whole area of the image. In order to overcome these issues, we propose a new detection pipeline, shown in Figure 5. For an input image, after preprocessing the original image, the face detector scans the image; if no face is detected, the image is rotated (±10 deg., ±20 deg.) and detection is attempted again, so as to handle rotated faces that cannot otherwise be detected. As shown in Figure 6, the face in the original image is not frontal; through these transformations, we can see that the detector detects the face in the original image and in the images rotated by 10 and 20 degrees, while it fails in the images rotated by -10 and -20 degrees. For an input image, we preprocess it, converting from RGB space to gray space and enhancing the quality. Then we extract the skin area of the image, and the detector only scans the skin region, which is the possible face region. Through this operation, the detection speed improves greatly, as the region to scan becomes very small compared with the original image. For skin extraction, we convert the RGB space to the YCrCb space; the conversion function is given in equation (3). The skin extraction sketch is shown in Figure 7. Through skin extraction and rotation, the accuracy and speed of face detection are improved simultaneously. The discussion of speed and accuracy appears in part IV. Face Cropping After face detection, we crop the face area and feed it to the face recognition network for feature extraction; we then compare the result with the feature dataset and identify the face. There are two plans that can be chosen. One is simply cropping the face area and resizing it to the predefined size; the other is to take the eyes as the center and expand to the predefined size in the upward, downward, leftward, and rightward directions. As shown in Figure 8, (a) shows the original images, (b) the cropped images that only contain face areas, and (c) the cropped images obtained using the second method.
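As a concrete illustration of the preprocessing stage described earlier, the following NumPy sketch implements the linear gray-level stretch of equations (1)-(2); it assumes a grayscale input array and an 8-bit target range.

```python
import numpy as np

def linear_stretch(f, g_min=0, g_max=255):
    """Linear gray transform of equation (1); equation (2) is the special
    case g_min = 0, g_max = 255 used for 8-bit images."""
    f = f.astype(np.float64)
    f_min, f_max = f.min(), f.max()
    if f_max == f_min:                       # flat image: nothing to stretch
        return np.full(f.shape, g_min, dtype=np.uint8)
    g = (g_max - g_min) * (f - f_min) / (f_max - f_min) + g_min
    return g.astype(np.uint8)
```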
Big Dataset Augmentation and Deep Face Recognition Deep face recognition has become very popular recently thanks to its advances in pattern recognition and big data processing. One basic requirement for deep learning is a large dataset. Researchers have observed that using more data when training the recognition model yields better performance. Facebook used 4.4M face images for training, and the VGG model likewise used 2.6M images for training; these are all large datasets. Web pioneers such as Google and Facebook own huge datasets but do not share them. VGG proposed a method for data collection, but it also costs substantial human labor and takes a long time. Table 1 shows the status of the big face datasets: they are all large face datasets, but few of them share their resources. Deep Face Recognition Algorithm Deep learning is the most popular machine learning method recently used in face recognition challenges. In this paper, we use a Convolutional Neural Network (CNN) for face recognition, with reference to the VGG network. We use this VGG-Net to extract the face features and compare them with the existing dataset. The main layers of a CNN are convolutional layers (CL), pooling layers, and fully connected layers. In the first experiment, we find that including the face skin extraction contributes little to the detection accuracy and, furthermore, wastes extra time, even though detection already costs a great deal of time. In the second experiment, we extract the face features based on the VGG-Net. We select two images that belong to the same person and one image that belongs to another, and we examine the statistics of the feature vector in the FC7 layer. As shown in Fig. 10, the statistical profiles of the second and third images are very similar to each other, while they differ from the first image. In this paper, we use the angle-cosine value to measure the similarity of two vectors. Assume x and y are two vectors; the angle-cosine function is shown in equation (16):

cos(x, y) = (x · y) / (||x|| ||y||).   (16)

The larger the value is, the more similar the two vectors are. Using equation (16), we calculate the similarity of the three images shown in Figure 10. The similarity between the first image and the second image is 0.4256, and between the first and third it is 0.3652. Here, we take the input data, data layer, conv1-1 layer, conv1-2 layer, pool 1 layer, pool 2 layer, conv 5-1 layer, and FC7 layer as examples; the features are shown in Figure 12. In the third experiment, we fine-tune the VGG-Net on our own dataset. We choose four famous stars and select 50 images of each of them from the internet. After using the dataset augmentation algorithm we described before, we obtain 200 images per star. Here, we choose 150 images per person for training and 50 images for testing. We set the initial learning rate to 0.001 and decrease the learning rate by a factor of 10 after 100 iterations. As shown in Fig. 13, the loss value decreases steadily and the accuracy reaches about 0.9.
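A minimal sketch of the angle-cosine similarity of equation (16), as used above to compare FC7 feature vectors, follows.

```python
import numpy as np

def cosine_similarity(x, y):
    """Angle-cosine of equation (16): larger values mean more similar vectors."""
    x, y = np.asarray(x, dtype=np.float64), np.asarray(y, dtype=np.float64)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Two faces are taken to be the same person when the similarity of their
# feature vectors exceeds the chosen threshold (cf. the values 0.4256 and
# 0.3652 reported for the three test images above).
```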
Conclusion This paper proposed a hybrid preprocessing of images, which improved the accuracy of face detection and recognition to some degree. The method for augmenting big datasets and the use of VGG-Net for fine-tuning give a good performance. However, an ideal face recognition system should achieve higher accuracy and speed: the large amount of computation means the framework cannot be applied to embedded systems. In addition, a good framework should be robust to illumination, multiple poses, expression, and cross-age variation. We believe that, with the development of hardware and improved algorithms, closing the gap between human and machine is not a fantasy.
Learning and coordinating in a multilayer network We introduce a two-layer network model for social coordination incorporating two relevant ingredients: a) different networks of interaction to learn and to obtain a pay-off, and b) decision-making processes based both on social and strategic motivations. Two populations of agents are distributed in two layers, with intralayer learning processes and an interlayer coordination game. We find that skepticism about the wisdom of crowd and the local connectivity are the driving forces to accomplish full coordination of the two populations, while polarized coordinated layers are only possible for all-to-all interactions. Local interactions also allow for full coordination in the socially efficient Pareto-dominant strategy in spite of it being the riskier one. Several mechanisms and models have been implemented to explain the collective social behavior that arises from the interactions among individuals. One's own experience and the experiences of others play an important role in determining people's choices in almost all human interactions. Imitation has been a widespread mechanism of human decision-making. Imitation of common behavior reflects social influence on the individual, while imitation by others of a successful individual is of a strategic nature [1][2][3][4]. Strategic interactions are often modeled by Game Theory. A relevant game-theoretical model that describes many real-life interactions in which the best course of action is to conform to a consensus is the coordination game. The challenge of such a model is how to coordinate among its multiple Nash equilibria [5]. This issue has been addressed in several works focusing on coordination games in a network framework [6][7][8][9]. However, two relevant aspects of this context have been largely unexplored. First, the study of a kind of interaction in which individuals distinguish, according to their roles, between people with whom they play to obtain a payoff and those from whom they learn to update their strategies. An appropriate framework is needed to deal with the possibility that people may identify the kind of interaction they have with their partners. Such situations are very common and pertinent in real-life interactions: for example, the interactions between and within firms and consumers, employers and employees, governments and citizens, teachers and students, parents and children, medical doctors and patients. There, individuals interact across groups and receive a payoff for such interactions (for instance, parents with children) and look inside their group to learn and update their strategies (for instance, parents learn from other parents and children learn from other children). What we have are situations in which two populations are differentiated by the role that their individuals perform. In simple models of social networks, individuals are unable to encompass different types of relationships: they play with and learn from the same set of neighbors. A different class of networks, which have layers in addition to nodes and links, has been growing in popularity for being a better description of a real networked society. The study and analysis of multilayer networks is relatively recent, even though layered systems were examined decades ago in disciplines like sociology and engineering [10][11][12]; for a complete review see [13].
Here we propose a two-layer network in which, inside each layer, individuals update their strategies by a learning rule and, across layers, individuals receive an aggregate payoff by playing a coordination game. Most previous studies of games in multilayer networks [14][15][16][17] consider playing the game inside the layers, while we consider game-theoretic interactions across layers. In a recent work [18] the authors consider a two-layer network wherein one network layer is used for the accumulation of payoffs by playing a social dilemma game and the other is used for strategy updating. There, each agent is simultaneously located on both layers. In contrast, in our two-layer network, each agent is located in just one layer. Therefore, there are two learning networks, one in each layer, and a playing network across the two layers. The second aspect refers to elucidating what happens when people make decisions heeding simultaneously social and strategic motivations [4]. In situations that call for accomplishing social efficiency and consensus, two forces influence agents' choices: strategic reasoning and the social pressure of the environment. In the sociological context, Granovetter [19] proposed a model in which a certain amount of social pressure is necessary for a person to adopt a new idea, product, or technology. Opinion, innovation-spreading, and social learning models have been dealing with this issue, measuring the social pressure as the number of contacts that have already adopted the novelty [19][20][21][22]. Here, we consider that the influence of social pressure is related to the degree of doubts about the strategies currently being played. Traditionally, the degree of doubts is measured as the subjective belief about the consequences of a certain action [23]. However, we treat doubts as a social factor influencing choices in strategic environments. Then, the doubts of an agent about how well she is playing depend on the popularity of her current strategy in her learning network. Our approach to doubts is inspired by the work of [24], who introduce an evolutionary model of doubt-based selection dynamics. Like [24], we assume that the agents measure their doubts by observing the choices made by their fellow agents. Real-life interactions and laboratory experiments [25][26][27] provide clear evidence of the importance of analyzing evolutionary dynamics based on social and strategic factors. For instance, in [4,28] the authors explore the interplay between strategic and social imitative behaviors in a coordination problem on a social network and in a networked Prisoner's Dilemma, respectively. In these works agents can evolve by a mixed dynamics of the voter model [22,29] and unconditional imitation. One of the main results in coordination games on complex networks is that the interplay of social and strategic imitation drives the system towards global consensus, while neither social nor strategic imitation alone does. Our approach aims to deal with these two important aspects mentioned above and verify the circumstances in which the complexity of such social and strategic behavior leads to consensus in the whole society. Results Model description. In this paper we consider a two-layer network in which each individual is connected to two different social networks, the interlayer network or playing network, and the intralayer network or learning network, see Fig. 1.
In the playing network, each player interacts according to a coordination game with each of her neighbors, using the same action for all those games. A normal-form representation of this two-person, two-strategy coordination game is shown in Table 1. We focus our analysis on two parametric settings: a pure or symmetric coordination game in which a = d = 1 and b = 0, and a general or asymmetric coordination game in which a = 1, d = 2, and b > 0. The profiles (L, L) and (R, R) are the two Nash equilibria in pure strategies in both settings. In the general coordination game the agents get a higher payoff by playing (R, R), the Pareto (payoff) dominant equilibrium, while for b > 1 they risk less by coordinating on (L, L), called the risk-dominant equilibrium. Games of this type are more interesting than their fully symmetric versions, since a confidence problem is added when the socially efficient solution is also the riskier one. Doubts and the parameter T. In the learning network, we propose an evolutionary update rule that heeds strategic thinking and the doubts that are generated by the popularity of the strategies. In order to describe this aspect in detail we provide some definitions. Like Cabrales and Uriarte [24], we assume that the doubts felt by an agent are related to the proportion of individuals with whom they interact who are using the same strategy. Our approach differs from [24]: while those authors assume that the agents are endowed with a doubt function, we assume that they are endowed with a quantity T that calibrates their level of doubts about the collective wisdom of crowd, T ∈ [0, 1]. This parameter T is in the same spirit as the threshold value in [19]. Just as in [24], we may distinguish two broad types of population, each corresponding to a doubtful behavior. A herding population, for T < 0.5, is a population in which agents rely on the wisdom of crowd; as a consequence, they are strongly influenced by the popularity of the current strategies of their partners. A skeptical population, for T ≥ 0.5, is a population in which agents are very suspicious of the wisdom of crowd: they are only slightly influenced by the popularity of the current strategies of their partners. In the updating process, each player i observes the proportion of agents, d_i, who are playing the opposite strategy to hers in her learning neighborhood. Then, she measures how popular her strategy is by comparing d_i with T. For instance, when d_i > T, player i has doubts about the popularity of the strategy she is currently playing. The degree of dissatisfaction. The evolution in time of the strategies derives from the levels of dissatisfaction felt by the agents. The criterion that defines the level of satisfaction of an agent is based on two key points: how well she is doing in terms of the payoff obtained in her playing network, and how popular her current strategy is in her learning network. Our approach to satisfaction is quite different from [24], where an index of agent dissatisfaction is justified via a model of (correlated) similarity relations, and from [9], which defines a quantity called satisfaction based on the strengths of the links. In our approach we distinguish four categories of agents, as described in Table 2, where p_i is the aggregate payoff of agent i, n_i is her degree in the playing network, and β is a benchmark payoff whose value is derived from the parametric setting of the class of coordination game played.
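A minimal sketch of the pairwise payoffs of Table 1 follows. The off-diagonal assignment (the L-player receives b on miscoordination, the R-player 0) is our reading of the text and the usual convention for this game, not something stated explicitly above.

```python
def payoff(mine, other, a=1.0, d=2.0, b=1.1):
    """Two-player coordination payoffs: (L,L) -> a, (R,R) -> d,
    miscoordination -> b for the L-player, 0 for the R-player (assumed)."""
    if mine == other:
        return a if mine == "L" else d
    return b if mine == "L" else 0.0

# a = d = 1, b = 0 gives the pure (symmetric) coordination game;
# a = 1, d = 2, b > 1 makes (L, L) risk dominant while (R, R) remains
# Pareto dominant, which is the confidence problem discussed above.
```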
Since in the pairwise interaction of pure coordination games each player gets a payoff of 1 by coordinating and 0 otherwise, β = 1 for such a game. The equality p_i = n_i means that player i coordinates with all her neighbors in the playing network; then we say that agent i is strategically satisfied. In the case of a general coordination game, β = 2, and an agent is strategically unsatisfied when she fails to coordinate with all her neighbors on the socially efficient solution, i.e. the Pareto-dominant strategy. This happens when, in a time step, p_i < 2n_i. (Figure 1 caption: The nodes are connected to each other in a pairwise manner, both inside the layers and between the layers, for the two populations A and B. Dotted lines describe the playing network (i.e. interlayer edges) and solid lines describe the learning network (intralayer edges). Black nodes describe the agents playing strategy L and white nodes the agents playing strategy R in a coordination game.) When d_i < T, the proportion of neighbors in her learning network who play the same strategy as she does is high enough that player i feels socially satisfied with her current strategy. Then, the level of satisfaction of an agent i is: S (satisfied) when she is both socially (d_i < T) and strategically (p_i = βn_i) satisfied; P1 or P2 (partially satisfied) when she is either socially (d_i > T) or strategically (p_i < βn_i) unsatisfied; and U (unsatisfied) when she is both socially (d_i > T) and strategically (p_i < βn_i) unsatisfied. The strategic update rule. We propose a synchronous update rule in which each player can change her current strategy according to her level of satisfaction. Namely,
1. If her level of satisfaction is S, she remains with the same strategy.
2. If her level of satisfaction is P1 or P2, she imitates the strategy of her best-performing neighbor in her learning network when such a neighbor has received a larger payoff than the player herself; otherwise she remains with the same strategy.
3. If her level of satisfaction is U, she changes her current strategy.
This rule might resemble the well-known unconditional imitation (UI) update rule introduced in [30]. When agents follow the (UI) update rule, they seek to maximize their payoffs by imitating the most successful individuals. However, the first important difference in our update rule is that individuals change their strategies conditional on their social or strategic dissatisfaction. Some experimental results show evidence of the use of the (UI) rule by individuals, but also provide evidence that other social factors influence the updating process [25][26][27]. Another important difference is the environment in which learning takes place: since individuals discriminate between those from whom they learn and those with whom they play, this update rule only takes place in the learning networks. The proposed update rule aims to capture individual behavior in a complex real-life situation. Having set out our strategic and social framework, we now turn to describe the evolutionary dynamics. At each elementary time step, each player plays the coordination game with each one of her interlayer neighbors. Once the game is over and a payoff is assigned to each player, each agent, observing her intralayer neighbors, might change her strategy according to her level of dissatisfaction. The process is then repeated after resetting payoffs to zero.
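The satisfaction levels of Table 2 and the synchronous update rule can be summarized in the following sketch; d_i, p_i, n_i, T, and β are as defined above, and the agent objects are hypothetical stand-ins for whatever data structure a simulation would use.

```python
def satisfaction_level(d_i, p_i, n_i, T, beta):
    """Classify agent i as S, partially satisfied (P1/P2), or U (Table 2)."""
    socially_ok = d_i < T                  # her strategy is popular enough locally
    strategically_ok = p_i == beta * n_i   # coordinated with all playing neighbors
    if socially_ok and strategically_ok:
        return "S"
    if socially_ok or strategically_ok:
        return "P"                         # P1 or P2
    return "U"

def next_strategy(agent, best_neighbor, level):
    """Synchronous update rule described above."""
    if level == "S":
        return agent.strategy
    if level == "P":                       # imitate only a strictly better neighbor
        if best_neighbor.payoff > agent.payoff:
            return best_neighbor.strategy
        return agent.strategy
    return "R" if agent.strategy == "L" else "L"   # U: switch unconditionally
```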
Simulation settings. The size of populations A and B during the simulations is N_A = N_B = 1000. The numerical results are obtained for random (Erdös-Rényi, ER) networks and fully connected networks. In the learning networks, k_AA and k_BB represent the mean degree (average number of links per node) for populations A and B, respectively. In the playing network, the mean degree k_AB corresponds to the average number of links per node across populations A and B. The two strategies of the coordination games are L and R, which are initially uniformly randomly distributed with proportion 0.5. Results for pure coordination games. As a benchmark, it is helpful to recall the final configuration of a structured population playing a pure coordination game with (UI) as the update rule. The topology defines the outcome for such a population. For instance, for a complete network, also referred to as a fully connected network, in which each agent interacts with every other agent, full coordination is reached in one time step, while for a social network displaying local connectivity, such as the random (ER) network, the system evolves to a non-coordinated frozen state. For the study of our model we focus on these two network topologies. Our simulation results show that the combination of strategic and social factors in a multilayer network drives the system to quite different outcomes. Before displaying the results, we need to clarify what a complete network means in our multilayer context. A complete network here implies that every agent plays with every other agent in the playing network and learns from every other agent in the learning network; agents still discriminate between those with whom they play and those from whom they learn. Moreover, an absorbing state in this framework is a state of intralayer coordination. In this state the agents are socially satisfied, since inside each layer the same strategy has spread all over the network. A state of interlayer coordination is a state of intralayer coordination in which the strategy displayed in one layer coincides with the strategy reached in the other layer: agents are socially and strategically satisfied. However, when the strategy in one layer is the opposite of the one in the other layer, the social satisfaction of agents makes the strategies remain unchanged, and the configuration of a polarized two-layer network is an absorbing state of the dynamics. In summary, a state of interlayer coordination implies a state of intralayer coordination, but the converse is not necessarily fulfilled. Both interlayer coordination (or full coordination) and intralayer coordination are absorbing states of the dynamics. The final configurations of the system can be described by the inter (intra) active links, defined as the number of links connecting agents with different choices in the playing (learning) network.
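The following is a minimal sketch of how these active-link densities can be measured on either the playing or the learning network; the edge list and the strategy map are hypothetical inputs.

```python
def active_link_density(edges, strategy):
    """Fraction of links joining agents with different current strategies.

    edges:    iterable of (i, j) pairs from the playing or learning network
    strategy: dict mapping each node to 'L' or 'R'
    """
    edges = list(edges)
    if not edges:
        return 0.0
    active = sum(1 for i, j in edges if strategy[i] != strategy[j])
    return active / len(edges)   # 0.0 at full coordination in that network
```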
Figure 2 shows the average proportions of active links n_A(T), interlayer between populations A and B and intralayer for each population A and B, for T ∈ [0.4, 1], in the fully connected network (left panel) and in the random (ER) network (right panel). We find that for herding populations, T < 0.5, the final configuration of the system is a state of non-coordination in both the learning network (intralayer) and the playing network (interlayer), for the fully connected network and the random (ER) network alike. Too much sensitivity to the social pressure plays against intralayer, and therefore interlayer, coordination in either of these two network topologies. Such a non-coordination state is one in which the proportions of the strategies in populations A and B fluctuate around 0.5, see the left panel of Fig. 4. In the case of skeptical populations, T ≥ 0.5, the system always reaches intralayer coordination, both in the fully connected and in the random (ER) networks. For interlayer coordination, we observe coordination in all realizations of the process in the case of random (ER) networks, while interlayer coordination is only reached in half of the realizations in the fully connected network. Figure 3 shows the number of realizations in which the system reaches a state of interlayer coordination on strategy L or R, and an interlayer non-coordination state, for T ∈ [0.4, 1]. For T > 0.5, we observe that in the fully connected network (left panel) agents fully coordinate either on L or on R in half of the realizations. The steady state of interlayer non-coordination is a completely polarized multilayer network in which all agents in population A play the opposite strategy of all agents in B, see the right panel of Fig. 4. In the case of random (ER) networks (right panel of Fig. 3), a state of interlayer coordination either on L or on R is always reached for T > 0.5. Comparison of this result with the one for fully connected networks highlights the role of local interactions in reaching consensus or full coordination: while with all-to-all interactions (fully connected networks) interlayer coordination is only reached in half of the realizations, the presence of local interactions (ER networks) always leads to full (interlayer) coordination for skeptical populations (T > 0.5). Results for general coordination games. To gain a better understanding of this multilayer model, we extend our analysis to a general coordination game setup whose normal-form representation is shown in Table 1, with a = 1, d = 2, and b > 0. Due to their social and strategic implications, this class of games has been studied analytically in an evolutionary framework [31,32] and by numerical simulations on several network topologies [6][7][8]. Previous numerical results have shown that in a fully connected network the agents using the (UI) update rule tend to coordinate on (L, L), the risk-dominant equilibrium, whenever b > 1, and that in the case of a complex network the (UI) update rule leads to frozen disordered configurations. In our multilayer model, with the dynamic update rule based on social and strategic considerations, our numerical results are again quite different from these previous results and are also determined by the doubtful behavior of the populations. The same analysis made in the last section for pure coordination games leads to the same conclusion that states of intralayer coordination are absorbing states of the dynamics. The state of interlayer coordination is another absorbing state, and it implies intralayer coordination. As already seen in the previous section on pure coordination games, in the general coordination games too the herding populations are not able to reach intralayer coordination, either for fully connected or for random (ER) networks. In contrast to Ref. 33, where the "wisdom of groups" promotes cooperative behavior in social dilemmas, in coordination games the sensitivity to the social pressure is a detrimental factor in either of the two network topologies.
Similarly, for skeptical populations, the final configuration of intralayer coordination is always reached and, depending on the network topology, the state of interlayer coordination is also accomplished. As an example, Fig. 5 shows the densities of intralayer and interlayer active links for a general coordination game with b = 1.1. For T > 0.5, the system reaches interlayer coordination in almost 70% of the realizations in the fully connected network (left panel of Fig. 5). This proportion is higher than the 50% observed in the case of the pure coordination games. In the random (ER) network, the final configuration of the system is always one of interlayer coordination, see the right panel of Fig. 5. The main point at issue here is whether the Pareto-dominant equilibrium can be coordinated on by the agents. In the game-theoretical approach, coordination on the risk-dominant equilibrium (L, L) is unavoidable whenever b > 1. In our framework, skeptical individuals are those able to reach intralayer or interlayer coordination; however, the key point is to find out whether such coordination favors the desirable socially efficient outcome, that is, the (R, R) Pareto-dominant coordination. First, let us analyze what happens in the complete multilayer network. As the initial strategies are uniformly randomly distributed with proportion 0.5, almost all individuals are at least strategically unsatisfied and willing to change their strategies. According to the update rule, an unsatisfied agent who is playing L in a fully connected network will change her strategy to R only when b < (2 − 3p_L)/(1 − p_L), where p_L is the proportion of agents playing L in her learning network. Due to the initial condition p_L ≈ 0.5, the parameter b must be approximately less than 1 for agents who are playing L to change to R. Panel (a) of Fig. 6 shows, for a fully connected network, the number of realizations in which the system reaches interlayer coordination on L, on R, and intra- but not interlayer coordination, as a function of b. We observe that as b increases, the number of realizations reaching interlayer coordination on L increases. As a consequence, the rate of coordination on the Pareto-dominant equilibrium (R, R) decreases with b, with the most likely coordination shifting from Pareto dominance to risk dominance around b* = 1, as expected. It is noteworthy that the range of values of b in which the state of polarized layers can be reached is also around b = 1, where the two Nash equilibria have the same expected payoff. In panels (b) and (c) of Fig. 6, for ER networks, we show that the threshold b* at which the chance of coordination on R starts to decrease is higher the lower the average number of links per node. The effect of locality not only favors interlayer coordination over only intralayer coordination (polarized layers) but also favors Pareto-dominant coordination. In our numerical simulations (not shown) we find that already for k_AA = k_BB = k_AB = 10 the agents manage to coordinate on the Pareto-dominant equilibrium (R, R) for any value of b ∈ [0.5, 2], overcoming the frozen disordered configurations reported in previous works. The strong effect of locality is due to the possibility that p_R > T for an agent who is playing L. In such a case she will be totally unsatisfied and will switch her strategy to R. In our multilayer model, locality for skeptical populations is the driving force that favors interlayer coordination on the socially efficient outcome, that is, the Pareto-dominant strategy.
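The switching threshold quoted above can be recovered with a short calculation. The derivation below relies on the same assumed off-diagonal payoff convention as the earlier payoff sketch (b to the L-player on miscoordination, 0 to the R-player), so it should be read as a consistency check rather than the authors' own algebra.

```latex
% Per-neighbor payoffs in a fully connected network with a fraction p_L of
% L-players (a = 1, d = 2; miscoordination: b to L, 0 to R -- assumed):
\pi_L = p_L \cdot 1 + (1 - p_L)\, b, \qquad \pi_R = (1 - p_L) \cdot 2 .
% The best R-neighbor outperforms an L-player, triggering imitation, when
\pi_R > \pi_L \;\Longleftrightarrow\; b < \frac{2 - 3 p_L}{1 - p_L},
% which at the initial condition p_L \approx 0.5 reduces to b < 1.
```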
Discussion In this paper we have introduced a multilayer network model in which agents of two populations play and learn in two disaggregated networks and update their strategies heeding social and strategic motivations. A network between the two populations is for playing according to a coordination game. There, each agent receives an aggregate payoff as a result of her interaction with each of her playing neighbors. The other network is for learning: in it, each agent can update her game strategy motivated by a feeling of social or strategic dissatisfaction. When an agent is unsatisfied either socially or strategically, she can update her strategy by imitating the strategy of her most successful neighbor; the agent searches for such a neighbor inside her own population. We have shown that the degree of social pressure, calibrated by the level of doubts, plays an important role in the network topologies considered. The skepticism about the wisdom of crowd and the locality of interactions are the driving forces for collaboration and social efficiency in both pure and general coordination games. For pure coordination games in a skeptical environment, each population evolves towards a coordinated state in both fully connected and random (ER) networks. However, in fully connected networks (non-local interactions) the populations may eventually each coordinate on the opposite strategy, leading to a polarized multilayer network. In the case of general coordination games, the challenge is to elucidate whether the Pareto-dominant strategy, the socially efficient outcome, can be established in the populations. Previous results in well-mixed and structured populations tend to favor the risk-dominant equilibrium in the parametric setting in which the Pareto-dominant equilibrium is also the riskier one. In contrast, our simulation results show that skepticism and local connectivity allow the populations to coordinate on the Pareto-dominant equilibrium even in the riskier setting.
Effects of bacterial translocation on hemodynamic and coagulation parameters during living-donor liver transplant Background Bacterial translocation (BT) has been proposed as a trigger for stimulation of the immune system, with consequent hemodynamic alteration, in patients with liver cirrhosis. However, no information is available regarding its hemodynamic and coagulation consequences during liver transplantation. Methods We screened 30 consecutive adult patients undergoing living-donor liver transplant for the presence of BT. Bacterial DNA, anti-factor Xa (aFXa), thromboelastometry, tumor necrosis factor-α (TNF-α), and interleukin-17 (IL-17) values were measured in sera before induction of anesthesia. Systemic hemodynamic data were recorded throughout the procedures. Results Bacterial DNA was detected in 10 patients (33%) (bactDNA(+)). Demographic, clinical, and hemodynamic data were similar in patients with and without bacterial DNA. BactDNA(+) patients showed significantly higher circulating values of TNF-α and IL-17 and had significantly longer clotting times and clot formation times, as well as significantly lower alpha angle and maximal clot firmness, than bactDNA(−) patients, P < 0.05. We found no statistically significant difference in aFXa between the groups, P = 0.4. Additionally, 4 patients in each group needed vasopressor agents, P = 0.2, and the amounts of transfused blood and blood products used were similar between both groups. Conclusion Bacterial translocation was found in one-third of patients at the time of transplantation and was largely associated with increased markers of inflammation along with decreased activity of coagulation factors. Trial registration Trial Registration Number: NCT03230214 (retrospectively registered). Initial registration date was 20/7/2017. Background Bacterial translocation (BT) is defined as the translocation of bacteria and/or bacterial products from the gut to the mesenteric lymph nodes [1]. Although BT is a physiologically controlled process in healthy subjects, it is considered pathological in patients with liver cirrhosis, who sustain increased BT events [2]. The clinical significance of diagnosing BT in patients with liver cirrhosis has been addressed [1][2][3][4]. Most studies have found that the presence of BT in cirrhotic patients is associated with significant hemodynamic changes, even in the absence of clinical infection, due to the release of inflammatory mediators like tumor necrosis factor-α (TNF-α) [2,3]. The effects of BT on coagulation abnormalities in patients with liver cirrhosis have not been investigated. Studies examining the relationship between true bacterial infection and coagulopathy have found that the presence of infection increases the incidence of bleeding in patients with liver cirrhosis [5,6]. The mechanism of this infection-induced coagulopathy remains poorly understood, but one postulated mechanism is that bacterial infection creates heparinoid-like substances [6]. These endogenous anticoagulants have been confirmed by thromboelastography and by the presence of anti-factor Xa activity in the blood of infected patients [5,6]. The aim of the present study was primarily to explore the incidence of BT in cirrhotic patients at the time of liver transplantation, and secondarily to investigate the effect of BT on hemodynamic, inflammatory, and coagulation parameters during living-donor liver transplantation.
Methods Thirty consecutive adult patients with grade C liver cirrhosis undergoing living-donor liver transplant were enrolled in the study. The Research Ethics Committee approved the study protocol, and written informed consent was obtained from all participating patients. Patients under 18 years, those who had positive blood or ascitic fluid cultures or who had undergone treatment with antibiotics in the preceding 2 weeks, and those with fulminant liver failure were all excluded from the study. A standardized anesthetic protocol was used [7]. Anesthesia was induced with intravenous propofol, fentanyl, and atracurium. Anesthesia was maintained with sevoflurane adjusted between 1 and 2% in an oxygen/air mixture, a fentanyl infusion at 1-2 μg/kg/hr, and an atracurium infusion at 0.5 mg/kg/hr. Mechanical ventilation was provided by a Primus anesthesia machine (Dräger, Germany) using a tidal volume of 8 mL/kg, with the respiratory rate adjusted to maintain PaCO2 between 30 and 35 mmHg. All patients were monitored for temperature, noninvasive and invasive arterial blood pressure, 5-lead electrocardiogram, peripheral oxygen saturation, end-tidal carbon dioxide tension, hourly urinary output, central venous pressure (CVP), and pulmonary artery occlusion pressure (PAOP). A pulmonary artery catheter (PAC) (OPTIQ SVO2/CCO, Abbott Laboratories, North Chicago, IL, USA) was inserted into the right internal jugular vein. All the patients received 6 ml/kg crystalloids as maintenance intraoperative fluid. Fluid resuscitation was guided by the pulse pressure variations (PPVs) through a Philips Intellivue MP 70 monitor (Philips, Suresnes, France). A PPV of more than 13% indicated that patients were fluid-responsive and that cardiac output could be increased by additional intravenous fluid administration. The patients received 250 ml boluses of 5% albumin as needed to maintain a PPV < 13%. Blood transfusions were administered based on the hemoglobin level (< 7 g/dl), and thromboelastometry was used to choose blood transfusion products (platelets, fresh frozen plasma (FFP), and cryoprecipitate). Transfusion of FFP was required when the EXTEM clotting time (CT) was > 80 s. Transfusion of cryoprecipitate was indicated if the EXTEM maximal clot firmness (MCF) was < 35 mm and the FIBTEM MCF < 8 mm; if the EXTEM MCF was < 35 mm and the FIBTEM MCF > 8 mm, this indicated the need for platelet transfusion [8]. In all cases, the decision to transfuse depended on the results of thromboelastometry and the presence of clinically significant bleeding. We typically transfused FFP at a dose of 10-15 ml/kg, in 2-unit increments, until the bleeding ceased. Norepinephrine was administered if the mean arterial pressure was < 70 mmHg despite adequate volume resuscitation. Hemodynamic variables Heart rate, mean arterial blood pressure, PAOP, CVP, and cardiac output (using a pulmonary artery catheter) were monitored. Hemodynamic data were recorded after induction of anesthesia, at the end of the preanhepatic phase, at the end of the anhepatic phase, and at the end of the surgery. Laboratory data Whole blood samples were taken from the patients before induction of anesthesia to perform the necessary tests. Thromboelastometry EXTEM, INTEM, and HEPTEM tests were performed with ROTEM delta (ROTEM®). The following four variables were recorded for each test: CT, clot formation time (CFT), alpha angle (α angle), and MCF. For the FIBTEM test, only the MCF was documented.
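The transfusion algorithm above can be summarized as simple decision logic. The following Python sketch restates the stated thresholds for illustration only; it is not a clinical tool.

```python
def rotem_guided_products(extem_ct_s, extem_mcf_mm, fibtem_mcf_mm):
    """ROTEM-guided product selection as described in the protocol above."""
    products = []
    if extem_ct_s > 80:                      # prolonged EXTEM CT -> FFP
        products.append("FFP 10-15 ml/kg, in 2-unit increments")
    if extem_mcf_mm < 35:                    # weak clot firmness
        if fibtem_mcf_mm < 8:
            products.append("cryoprecipitate")   # fibrinogen deficit
        else:
            products.append("platelets")         # platelet deficit
    return products
```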
Cytokine levels Serum levels of IL-17A and TNF-α were determined using the enzyme-linked immunosorbent assay (ELISA) kits of Euroclone (Wetherby, Yorkshire, UK) for IL-6 and TNF, and the R&D Systems kit (Wiesbaden, Germany) for IL-17, according to the manufacturers' instructions. Anti-factor Xa (aFXa) The level of aFXa activity was determined using a validated chromogenic assay kit (COAMATIC Heparin; Chromogenix, Instrumentation Laboratory Company, Lexington, KE, USA) with the substrate S-2732 and the recommended apparatus (STA-R Evolution; Diagnostica Stago, Asnières, France). The test was considered positive when the level of anti-Xa was > 0.2 units/ml. Bacterial blood culture and DNA extraction We incubated 5-10 ml (optimally 8-10 ml) of blood in a BACTEC 9120 system (Becton-Dickinson). All blood culture bottles (BACTEC™ Plus Aerobic/F and BACTEC™ Plus Anaerobic/F, Becton-Dickinson) containing resins were incubated for a minimum of 5 days according to the manufacturer's instructions. When a positive signal was detected, bottles were removed and an aliquot of the broth was Gram-stained and processed by a range of routine biochemical test methods. Bacterial DNA was extracted from blood culture samples using the QIAmp DNA Minikit (Qiagen) according to the protocols in the manufacturer's instructions. The extracted DNA was stored at 4°C until required for PCR. We used the Dream Taq™ PCR Master Mix 2X (Fermentas) (#K1071), containing Dream Taq™ DNA Polymerase, Dream Taq™ PCR Buffer, 4 mM MgCl2, and dNTPs, for the PCRs. Other data collection We also kept records of Child-Pugh (CTP) scores, Model for End Stage Liver Disease (MELD) scores, graft weight ratios (GWRs), and the use of intravascular volume replacement therapy [including colloid infusion and transfusions of packed red blood cells (PRBCs) and FFP]. All complications, including rejection episodes, graft dysfunction, renal replacement therapy, nosocomial infections, hospitalization length, and ICU length of stay, were documented. Statistical analysis Sample size estimation was based on the presence of anti-Xa activity because it is the main outcome variable. A previous study found that anti-Xa activity was present in 6.7 and 60% of non-infected and infected cirrhotic patients, respectively [5]. Considering that the incidence of bacterial translocation is 30%, we estimated the sample size to be 30 patients, with a power of 0.8 and an alpha error of 0.05 [2]. Descriptive statistics of the baseline characteristics, ROTEM, cytokine, and anti-Xa values are expressed as median (interquartile range (IQR)). The Mann-Whitney rank-sum test (two-tailed) was used for comparison of continuous variables between bacterial DNA(+) and bacterial DNA(−) cases. For categorical data, Fisher exact or chi-square tests were used for comparison, as appropriate. A P value ≤0.05 was considered statistically significant. Results Thirty patients were enrolled in the study. Bacterial DNA (bactDNA) was detected in only 10 patients (33%). Patients were divided into two groups according to the presence or absence of bacterial DNA. There were no significant differences between the two studied groups in terms of age, gender, body mass index (BMI), MELD, or CTP scores. Also, we found no significant differences in terms of the GWR, ICU length of stay, hospitalization length, or the mortality rates (Table 1). The use of vasopressors, PRBCs, and FFP did not differ between the two groups either (Table 4).
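As an illustration of the statistical comparison described above, the following sketch applies the two-tailed Mann-Whitney rank-sum test to two hypothetical sets of values; the study's raw data are not reproduced here.

```python
from scipy.stats import mannwhitneyu

# Hypothetical TNF-a values for the two groups, for illustration only
tnf_bactdna_pos = [18.2, 22.5, 19.7, 25.1, 21.0]
tnf_bactdna_neg = [10.4, 12.1, 9.8, 11.5, 13.0]

stat, p = mannwhitneyu(tnf_bactdna_pos, tnf_bactdna_neg, alternative="two-sided")
print(f"U = {stat}, P = {p:.4f}")   # P <= 0.05 taken as statistically significant
```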
Discussion The main finding of this study was that cirrhotic bactDNA(+) patients who underwent liver transplant showed marked hypocoagulability on the thromboelastometric analysis, without evidence of increased endogenous heparin-like substance activity. Moreover, the presence of bacterial DNA was associated with a more pronounced systemic inflammatory response, as suggested by the greater increases in TNF-α and IL-17. One-third of our patients had bacterial translocation, as evidenced by the presence of bacterial DNA in their serum at the time of liver transplant. The incidence of bacterial translocation among cirrhotic patients had been addressed previously and was found to be 38% [2]. To the best of our knowledge, this is the first study to examine its hemodynamic and coagulation consequences during liver transplantation. According to our findings, the bactDNA(+) patients exhibited a significant increase in proinflammatory mediators, as represented by increased levels of IL-17 and TNF-α. Consistent with this, studies have shown increased levels of inflammatory cytokines in cirrhotic patients with bacterial translocation [2,9]. The association between high IL-17 levels and the presence of bacterial translocation remains unclear, but increased intestinal bacterial colonization can stimulate the Paneth cells to secrete IL-17 [10]. IL-17 has been linked to the severity of inflammation in tissues through its induction of the production of other proinflammatory mediators such as IL-1, TNF, IL-6, IL-8, CCL20, and G-CSF, collectively resulting in an influx of neutrophils [11]. With ROTEM, defects of the extrinsic or intrinsic pathways may be evaluated through EXTEM and INTEM, respectively. Generally, a prolongation of CT is due to a coagulation initiation defect. An isolated prolongation of CT in INTEM may reflect an intrinsic pathway defect (factors XII, XI, IX, VIII), while an isolated prolongation of CT in EXTEM may reflect an extrinsic pathway defect (factor VII plus tissue factor). On the other hand, prolongation of CFT and reduction of MCF are mainly due to a substrate deficit (e.g. fibrinogen and platelets) [12]. In the present study, bactDNA(+) patients had a significantly hypocoagulable state, as suggested by prolongation of CT in EXTEM, prolongation of CFT in INTEM and EXTEM, and reduction of the MCF amplitude in INTEM, EXTEM, and FIBTEM. No previous studies have examined the effect of bacterial translocation on the coagulation state of cirrhotic patients. Circulating endotoxins seem to be an important predisposing factor for clotting abnormalities because of endothelial dysfunction and nitric oxide dysregulation. On the other hand, several studies have shown increases in the incidence of coagulopathy in cirrhotic patients with active bacterial infections due to the presence of heparin-like substances [5,6]. This is why it is possible to have both bleeding and thrombosis in sequential fashion within a short time frame [13]. Anti-Xa concentrations can be measured to detect heparin activity in infected cirrhotics [5]. In our study, anti-Xa activity was comparable among patients of both groups; moreover, no differences in the clotting time were noted between the HEPTEM and the INTEM tests. This suggests that the hypocoagulable state in this group of patients cannot be explained by the presence of heparin-like substances. Plausible explanations include a sustained exposure of bactDNA(+) patients to exaggerated inflammatory responses leading to inappropriate activation and consumption of coagulation factors.
A similar finding is seen in patients with sepsis, in whom activation of coagulation is associated with an initial hypercoagulable state that can develop into hypocoagulation as the coagulation factors become depleted [14]. In this study, the average numbers of transfused PRBCs were similar between groups; however, we observed a trend toward higher FFP transfusion among bactDNA(+) patients. (Table abbreviations: HR, heart rate; MAP, mean arterial pressure; CVP, central venous pressure; PAOP, pulmonary artery occlusion pressure; CO, cardiac output.) Improvements in anesthetic and surgical practices have led to an increasing number of patients being able to undergo liver transplants without the need for transfusion of red blood cells or blood products [15]. The use of a cell saver, a restrictive fluid strategy, lower transfusion triggers, and splanchnic vasoconstrictors has contributed effectively to the minimization of transfusions during liver transplants [7,16]. This is the reason why the presence of other factors that impair coagulation does not appear to contribute significantly to the bleeding risk [17]. Another study found that cirrhotic bactDNA(+) patients had a lower mean arterial pressure and lower systemic vascular resistance than bactDNA(−) patients [2], a difference in hemodynamic profiles that should be related to increased nitric oxide levels [18]. However, in the present study we could not find any significant difference between patients with and without bacterial translocation, although we saw a trend toward higher use of vasopressors in the bactDNA(+) patients. The postoperative course, nosocomial infection rate, and incidence of mortality were comparable between both groups of patients. However, we are aware that our study population was not big enough to detect all the significant differences between the two groups. Given the observational nature of our study, we could not infer a cause-effect relationship between the presence of bacterial DNA and the changes in thromboelastometric parameters. Also, because of the small sample size, we cannot draw any conclusions regarding the effect of bacterial translocation on either transfusion requirements or the development of postoperative organ dysfunction. Conclusion Our data suggest that bacterial translocation occurs in one-third of patients at the time of transplantation and is associated with increases in inflammation markers, along with a decreased activity of coagulation factors. Further larger studies are warranted to explore the relevance of these findings with regard to transfusion requirements and postoperative outcomes. Availability of data and materials Data are available from the authors upon reasonable request after permission of Alexandria University. Authors' contributions HAM conceived the study and participated in its design. FAF participated in the design of the study. HMD participated in data collection. DG participated in data collection. ME participated in data collection and drafted the manuscript. AH participated in data collection. RH participated in data collection. MMK participated in the design of the study. AA performed statistical analyses. ME helped draft the manuscript. AM participated in the design of the study and data interpretation, and helped draft the manuscript. All the authors read and approved the final manuscript.
Ethics approval and consent to participate: This study was approved by the Ethics Committee of the Faculty of Medicine at Alexandria University, with approval number 020583. Written informed consent to participate in the study was obtained from all participants.
KnoE: A Web Mining Tool to Validate Previously Discovered Semantic Correspondences

The problem of matching schemas or ontologies consists of providing corresponding entities in two or more knowledge models that belong to the same domain but have been developed separately. Nowadays there are many techniques and tools for addressing this problem; however, the complex nature of the matching problem makes existing solutions not fully satisfactory in real situations. The Google Similarity Distance has appeared recently. Its purpose is to mine knowledge from the Web using the Google search engine in order to semantically compare text expressions. Our work consists of developing a software application for validating results discovered by schema and ontology matching tools using the philosophy behind this distance. Moreover, we are interested in using not only Google but also other popular search engines with this similarity distance. The results reveal three main facts. Firstly, some web search engines can help us to validate semantic correspondences satisfactorily. Secondly, there are significant differences among the web search engines. And thirdly, the best results are obtained when using combinations of the web search engines that we have studied.

Introduction

The Semantic Web is a new paradigm for the Web in which the semantics of information is defined, making it possible for the Web to understand and satisfy the requests of people and machines wishing to use web resources. Therefore, most authors consider it a vision of the Web as a universal medium for data, information, and knowledge exchange [1]. In relation to knowledge, the notion of an ontology as a form of representing a particular universe of discourse, or some part of it, is very important. Schema and ontology matching is a key aspect for making knowledge exchange in this extension of the Web a reality [2]; it allows organizations to model their own knowledge without having to stick to a specific standard. In fact, there are two good reasons why most organizations are not interested in working with a standard for modeling their own knowledge: (a) it is very difficult or expensive for many organizations to reach an agreement about a common standard, and (b) these standards often do not fit the specific needs of all the participants in the standardization process.

Although ontology matching is perhaps the most valuable way to solve the problems of heterogeneity between information systems, and there are many techniques for matching ontologies very accurately, experience tells us that the complex nature of the problem makes it difficult for these techniques to operate satisfactorily for all kinds of data, in all domains, and as all users expect. (The terms alignment and matching are often confused. In this work, we will call matching the task of finding correspondences between knowledge models, and alignment the output of the matching task.) Moreover, the heterogeneity and ambiguity of data descriptions make it unlikely that optimal mappings for many pairs of entities will be considered as best mappings by any of the existing matching algorithms.
Our opinion is shared by other colleagues who have also experienced this problem. In this way, experience tells us that obtaining such a function is far from trivial. As we commented earlier, for example, "finding good similarity functions is, data-, context-, and sometimes even user-dependent, and needs to be reconsidered every time new data or a new task is inspected" or "dealing with natural language often leads to a significant error rate" [3]. Figure 1 shows an example of matching between two ontologies developed from two different perspectives. Matching is possible because they belong to a common domain that we could name "world of transport"; however, it is difficult to find a function able to discover all possible correspondences. As a result, new mechanisms have been developed, from customized similarity measures [4,5] to hybrid ontology matchers [6,7], meta-matching systems [8,9], or even soft computing techniques [10,11]. However, results are still not entirely satisfactory, and we consider that web knowledge could be the solution. Our idea is not entirely original; for example, web knowledge has already been used by Ernandes et al. [12] for solving crosswords automatically in the past.

We think that this is a very promising research line. In fact, we are interested in three characteristics of the World Wide Web (WWW):

1. It is one of the biggest and most heterogeneous databases in the world, and possibly the most valuable source of general knowledge. Therefore, the Web fulfills the properties of Domain Independence, Universality and Maximum Coverage proposed by Gracia and Mena [13].
2. It is close to human language, and therefore can help to address problems related to natural language processing.
3. It provides mechanisms to separate relevant from non-relevant information, or rather the search engines do so. We will use these search engines to our benefit.

In this way, we believe that the most outstanding contribution of this work is the foundation of a new technique which can help to identify the best web knowledge sources for validating semantic correspondences, so that knowledge models can be matched satisfactorily. In fact, in [14], the authors state: "We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we used the World Wide Web (WWW) as the database, and Google as the search engine. The method is also applicable to other search engines and databases". Our work is about those search engines.

Therefore, in this work we are going to mine the Web, using search engines to decide whether semantic correspondences previously discovered by a schema or ontology matching tool could be true. It should be taken into account that under no circumstances can this work be considered a demonstration that one particular web search engine is better than another, or that the information it provides is, in general, more accurate.
The rest of this article is organized as follows. Section 2 describes the problem statement related to the schema and ontology alignment problem and reviews some of the most outstanding matching approaches. Section 3 describes the preliminary definitions that are necessary for understanding our proposal. Section 4 deals with the details of KnoE, the tool we have built in order to test our hypothesis. Section 5 shows the empirical data that we have obtained from several experiments using the tool. Section 6 discusses the related works presented in the past, and finally, Section 7 describes the conclusions and future lines of research.

Problem Statement

The process of matching schemas and ontologies can be expressed as a function where, given a couple of models of this kind, an optional input alignment, a set of configuration settings and a set of resources, a result is returned. The result returned by the function is called an alignment. An alignment is a set of semantic correspondences (also called mappings), which are tuples consisting of a unique identifier of the correspondence, entities belonging to each of the respective ontologies, the type of correspondence R (equality, generalization, specialization, etc.) between the entities, and a real number between 0 and 1 representing the mathematical probability that the relationship described by R may be true. The entities that can be related are concepts, object properties, data properties, and even instances belonging to the models which are going to be matched.

According to the literature, we can group the subproblems related to schema and ontology matching into seven different categories:

1. How to obtain high quality alignments automatically.
2. How to obtain alignments in the shortest possible time.
3. How to identify the differences between matching strategies and determine how good each is according to the problem to be solved.
4. How to align very large models.
5. How to interact with the user during the process.
6. How to configure the parameters of the tools in an automatic and intelligent way.
7. How to explain to the user why an alignment was generated.

Most researchers work on some of these subproblems. Our work does not fit perfectly with any of them; rather, it identifies a new one: how to validate previously discovered semantic correspondences. Therefore, we work with the output from existing matching tools (preferably cutting-edge tools). There are many outstanding approaches for implementing this kind of tool: [15,16,17,18,19,20,21]. They often use one or more of the following matching strategies:

1. String normalization. This consists of methods such as removing unnecessary words or symbols. Moreover, strings can be processed to detect plural nouns or to take into account common prefixes or suffixes as well as other natural language features.
2. String similarity. Text similarity is a string-based method for identifying similar elements. For example, it may be used to identify identical concepts of two ontologies based on their having a similar name [22].
3. Data type comparison. These methods compare the data types of the ontology elements. Similar concept attributes have to be of the same data type.
4. Linguistic methods. This consists of the inclusion of linguistic resources such as lexicons and thesauri to identify possible similarities. The most popular linguistic method is to use WordNet [23] to identify some kinds of relationships between entities.
5. Inheritance analysis.
These kinds of methods take into account the inheritance between concepts to identify relationships. The most popular method is the analysis that tries to identify subsumptions between concepts.
6. Data analysis. These kinds of methods are based on the rule: if two concepts have the same instances, they will probably be similar. Sometimes, it is possible to identify the meaning of an upper-level entity by looking at one of a lower level.
7. Graph mapping. This consists of identifying similar graph structures in two ontologies. These methods use known graph algorithms. Mostly this involves computing and comparing paths, children and taxonomy leaves [4].
8. Statistical analysis. This consists of extracting keywords and textual descriptions to detect the meaning of one entity in relation to others [24].
9. Taxonomic analysis. This tries to identify similar concepts or properties by looking at their related entities. The main idea behind this analysis is that two concepts belonging to different ontologies have a certain degree of probability of being identical if they have the same neighborhood [25].
10. Semantic analysis. According to [2], semantic algorithms handle the input based on its semantic interpretation. One supposes that if two entities are the same, then they share the same interpretations. Thus, they are deductive methods. The most outstanding approaches are propositional satisfiability and description logics reasoning techniques.

Most of these strategies have proved their effectiveness when used with synthetic benchmarks like the one offered by the Ontology Alignment Evaluation Initiative (OAEI) [26]. However, when they process real ontologies, their results are worse [27]. For this reason, we propose to use a kind of linguistic resource which has not been studied in depth in this field. Our approach consists of mining knowledge from the Web with the help of web search engines; in this way, we propose to benefit from the fact that this kind of knowledge is able to support the process of validating the set of correspondences belonging to a schema or ontology alignment.

On the other hand, several authors have used web knowledge in their respective works, or have used a generalization: background knowledge [28,29,30,31]. This uses all kinds of knowledge sources to extract information: dictionaries, thesauri, document collections, search engines and so on. For this reason, web knowledge is often considered a more specific subtype.

The classical approach to this problem has been addressed in the literature with the use of a tool called WordNet [23]. In relation to this approach, the proposal presented in [15] is the most remarkable. The advantage that our proposal presents in relation to the use of WordNet [23] is that it reflects more closely the language used by people to create their content on the Internet; therefore, it is much closer to everyday terms. Thus, if two words appear very often on the same websites, we believe that there is some probability that a semantic relationship exists between them.

There are other works about web measures. For instance, Gracia and Mena [13] try to formalize a measure for comparing the relatedness of two terms using several search engines. Our work differs from theirs in several key points. Firstly, they use Yahoo!
as a search engine in their experiment, arguing its balance between good correlation with human judgment and fast response time. Instead, we prefer to determine the best source by means of an empirical study. Secondly, the authors say they can perform ontology matching tasks with their measure. Based on our experience, this is not a great idea: they need to launch many thousands of queries in a search engine in order to align two small ontologies and to lower the tolerance threshold [27]; therefore, they obtain a lot of false positives. Instead, we propose to use a cutting-edge tool [21] to match schemas or ontologies and use web knowledge to validate these previously discovered correspondences. For the same ontologies, we need a thousand times fewer queries and we do not incur any additional false positives.

Technical Preliminaries

In this section, we are going to explain some technical details which are necessary to understand our proposal.

Definition 1 (Similarity measure). A similarity measure sm is a function sm : µ1 × µ2 → R that associates the similarity between two entities µ1 and µ2 to a similarity score sc in the range [0, 1]. A similarity score of 0 stands for complete inequality and 1 for equality of the entities µ1 and µ2.

Definition 2 (Alignment). An alignment a is a set of tuples {(id, e, e', n, R)}, where id is an identifier of the mapping, e and e' are entities belonging to two different models, R is the relation of correspondence between these entities, and n is a real number between 0 and 1 that represents the probability that R may be true.

Definition 3 (Matching function). A matching function mf is a function that associates two input knowledge models km1 and km2 to an alignment a using a similarity measure. There are many matching techniques for implementing this kind of function, as we showed in Section 2.

Definition 4 (Alignment evaluation). An alignment evaluation ae is a function ae that associates an alignment a and a reference alignment aR to two real numbers stating the precision and recall of a in relation to aR. Precision states the fraction of retrieved correspondences that are relevant for a matching task. Recall is the fraction of the relevant mappings that are obtained successfully in a matching task. In this way, precision is a measure of exactness and recall a measure of completeness. The problem here is that techniques can be optimized either to obtain high precision at the cost of the recall or, alternatively, recall can be optimized at the cost of the precision. For this reason a measure, called f-measure, is defined as a weighting factor between precision and recall. For the rest of this work, we use the most common configuration, which consists of weighting precision and recall equally.

Notions of similarity and relatedness seem to be very similar, but they are not. Similarity expresses equivalence, while relatedness expresses membership in a common domain of discourse. For example, similarity between car and wheel is low since they are not equivalent at all, while relatedness between car and wheel is high. We can express the differences more formally:

Theorem 1 (Similarity involves relatedness). Let µ1 and µ2 be two entities belonging to different knowledge models. If µ1 and µ2 are similar, then µ1 and µ2 are related.

Theorem 2 (Relatedness does not involve similarity). Let µ1 and µ2 be two related entities belonging to different knowledge models. If µ1 and µ2 are related, then we cannot guarantee that they are similar.
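The alignment evaluation of Definition 4 can be made concrete with a short sketch. The following Python fragment is illustrative only: the correspondence tuples are hypothetical, and the identifiers and confidence values of Definition 2 are omitted for brevity.

# Minimal sketch of the alignment evaluation from Definition 4.
# Alignments are represented as sets of (entity, entity', relation) triples.

def evaluate(alignment, reference):
    """Return (precision, recall, f_measure) of alignment w.r.t. reference."""
    found = alignment & reference              # relevant retrieved mappings
    precision = len(found) / len(alignment) if alignment else 0.0
    recall = len(found) / len(reference) if reference else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    # Equally weighted harmonic mean of precision and recall.
    return precision, recall, 2 * precision * recall / (precision + recall)

# Hypothetical example: 2 of the 3 retrieved mappings are correct,
# and 2 of the 4 reference mappings were found.
a = {("car", "automobile", "="), ("wheel", "tyre", "="), ("bucks", "bank", "=")}
r = {("car", "automobile", "="), ("wheel", "tyre", "="),
     ("road", "highway", "="), ("van", "truck", "=")}
print(evaluate(a, r))  # -> approximately (0.667, 0.5, 0.571)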
Definition 5 (Relatedness distance). A relatedness distance is a metric function that states how related two or more entities belonging to different models are, and meets the following axioms:
1. relatedness(a, b) ≤ 1
2. relatedness(a, b) = 1 if and only if a = b
3. relatedness(a, b) = relatedness(b, a)
4. relatedness(a, c) ≤ relatedness(a, b) + relatedness(b, c)

Lemma 1 (About the validation of semantic correspondences). Let S be the set of semantic correspondences generated using a specific technique. If the entities of any of these correspondences are not related, then those correspondences are false positives.

Example 1 (About Lemma 1). Let (bucks, bank, =, 0.8) be a mapping automatically detected by a matching tool. If we use a relatedness distance which, for example, tells us that bucks and bank do not co-occur in the same websites frequently, then the matching tool has generated a false positive. Otherwise, if bucks and bank co-occur very often on the Web, then we cannot refute the correctness of this mapping.

Definition 6 (Hit). A hit is an item found by a search engine to match specified search conditions. More formally, we can define a hit as the function hit : ϑ → N, which associates a natural number to a set of words to ascertain its popularity on the WWW. A value of 0 stands for no popularity, and the bigger the value, the bigger its associated popularity. Moreover, we want to remark that the function hit has many possible implementations; in fact, every web search engine implements it in a different way. For this reason, we cannot take into account only one search engine to perform our work.

Example 2 (Normalized Google Distance). It is a measure of relatedness derived from the number of hits returned by the Google search engine for a given (set of) keyword(s). Keywords with the same or similar meanings in a natural language sense tend to be close in units of Google distance, while words with dissimilar meanings tend to be farther apart. The normalized Google distance (NGD) between two search terms a and b is

NGD(a, b) = (max{log hit(a), log hit(b)} - log hit(a, b)) / (log M - min{log hit(a), log hit(b)})    (1)

where M is the total number of web pages searched by Google; hit(a) and hit(b) are the number of hits for search terms a and b, respectively; and hit(a, b) is the number of web pages on which a and b co-occur.

Finally, we define a correspondence validator as a software artifact that uses a relatedness distance to detect false positives in schema or ontology alignments according to Lemma 1. We have built a correspondence validator called Knowledge Extractor (KnoE).

KnoE

Semantic similarity between text expressions changes over time and across domains. The traditional approach to solving this problem has consisted of using manually compiled taxonomies. The problem is that a lot of terms are not covered by dictionaries; therefore, similarity measures that are based on dictionaries cannot be used directly in these tasks. However, we think that the great advances in web research have provided new opportunities for developing new solutions. In fact, with the growth of larger and larger collections of data resources on the WWW, the study of web measures has become one of the most active areas for researchers. We consider that techniques of this kind are very useful for solving problems related to semantic similarity, because new expressions are constantly being created and new senses are assigned to existing expressions.

The philosophy behind KnoE (Knowledge Extractor) is to use a web measure based on the Google Similarity Distance [14]. This similarity measure gives us an idea of the number of times that two concepts appear together, in comparison with the number of times that the two concepts appear separately, in the subset of the Web indexed by a given search engine. For the implementation of the function hit, we have chosen the following search engines from among the most popular in the Alexa ranking [32]: Google, Yahoo!, Lycos, Altavista, MSN and Ask. The comparison is made between previously discovered correspondences. In this way we can decide whether the compared correspondences are considered reliable or not.
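As an illustration, the following Python sketch implements Equation (1). The hit counts are hard-coded toy values standing in for real search-engine responses, and the index size M is an invented number; in KnoE the hit function is backed by live queries to the engines listed above.

import math

# Invented hit counts for the example; in practice they come from an engine.
HITS = {
    ("football",): 2_500_000,
    ("goal",): 1_800_000,
    ("football", "goal"): 900_000,
}
M = 50_000_000_000  # assumed total number of indexed web pages

def hit(*terms):
    """Toy stand-in for the hit function of Definition 6."""
    return HITS.get(tuple(sorted(terms)), 0)

def ngd(a, b):
    """Normalized Google Distance between terms a and b (Equation 1)."""
    fa, fb, fab = math.log(hit(a)), math.log(hit(b)), math.log(hit(a, b))
    return (max(fa, fb) - fab) / (math.log(M) - min(fa, fb))

# A smaller NGD means the terms co-occur often, i.e. they are more related.
print(round(ngd("football", "goal"), 3))

A distance of this kind can be mapped to a bounded relatedness score, for instance via exp(-2 * NGD); that mapping is one common choice, and the exact conversion used inside a validator is a design decision rather than part of Equation (1).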
We could launch a task to make a comparison between all the entities of the source and target knowledge models, respectively. Then, only pairs of entities likely to be true (those whose parameter n exceeds a certain threshold) would be included in the final output alignment. There are several reasons why we do not propose this. Attempting to match models directly using a web knowledge function such as the Google Distance would involve considerable cost in terms of time and bandwidth consumption, because each comparison needs 3 queries to the search engine, and this has to be repeated m · n times, where m and n are the numbers of entities belonging to the source and target knowledge models, respectively. But the most important reason is that the number of generated false positives makes this process unworkable. We have tried to solve the benchmark from the OAEI [26] using only web knowledge and obtained an average f-measure of about 19 percent. This is a very low figure if we consider that the most outstanding tools obtain an f-measure above 90 percent for the same benchmark [27].

Finally, KnoE has been coded in Java, so it can be used in console mode on several operating systems; but to make the tool friendlier to the user, we have programmed a graphical user interface, as Figure 2 shows. The operation mode is simple: once users select the correspondences to compare, they should choose one or more search engines to perform the validation. In Figure 3, we have launched a task to validate the correspondence (football, goal) using Google, Yahoo! and MSN. As can be seen, Google considers that it is not possible to refute the correctness of the correspondence, while Yahoo! and MSN consider that the equivalence is wrong.

Empirical Evaluation

Now we evaluate KnoE using three widely accepted benchmark datasets. These benchmarks are Miller-Charles [33], Gracia-Mena [13], and Rubenstein-Goodenough [34], which are pairs of terms that vary from low to high semantic relatedness. Several notes are important for understanding how these experiments were performed. Some of the companies which own the web search engines do not allow many queries to be launched daily, because this is considered a mining service; so the service is limited, and several days were necessary to perform the experiments. Results from the Lycos search engine have not been included because, after several executions, they did not seem to be appropriate. In addition, it is important to note that this experiment was performed in February 2010, because the information indexed by the web search engines is not static.

Table 1 shows the results that we have obtained for the Miller-Charles benchmark dataset. Table 2 shows the results we have obtained for the Gracia-Mena benchmark dataset. Finally, Table 3 shows the results we have obtained for the Rubenstein-Goodenough benchmark dataset. On the other hand, Figures 4, 5, and 6 show the behavior of the average means of the web search engines in relation to the benchmark datasets. We have chosen to represent the average mean because it gives us the best result among the statistical functions studied. We additionally studied the mode and the median, but they do not outperform the average mean.
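To make the combination by average mean concrete, the sketch below validates correspondences against a threshold using per-engine relatedness scores. The score values are invented for illustration; only the engine names and the 0.51 threshold come from the experiments reported in this paper.

# Sketch of multi-engine validation by average mean. The per-engine
# relatedness scores below are invented; in KnoE they are derived from
# the hit-based distance sketched earlier.
SCORES = {
    ("football", "goal"): {"google": 0.72, "yahoo": 0.44, "msn": 0.38},
    ("bucks", "bank"):    {"google": 0.21, "yahoo": 0.18, "msn": 0.25},
}

def validate(pair, threshold=0.51):
    """Validate a correspondence if the average engine score passes threshold."""
    scores = list(SCORES[pair].values())
    mean = sum(scores) / len(scores)
    return mean >= threshold, mean

for pair in SCORES:
    ok, mean = validate(pair)
    print(pair, "validated" if ok else "rejected", round(mean, 2))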
The comparison between the benchmark datasets and our results is made using Pearson's correlation coefficient, which is a statistical measure that allows two vectors of numeric values to be compared. The results lie in the interval [-1, 1], where -1 represents the worst case (totally different values) and 1 represents the best case (totally equivalent values).

• Experimental results on the Miller-Charles benchmark dataset show that the proposed measure outperforms all the existing web-based semantic similarity measures by a wide margin, achieving a correlation coefficient of 0.61.
• Experimental results on the Gracia-Mena benchmark dataset show that the proposed measure outperforms all the existing web-based semantic similarity measures (except Ask), achieving a correlation coefficient of 0.70.
• Experimental results on the Rubenstein-Goodenough benchmark dataset show that the proposed measure outperforms all the existing web-based semantic similarity measures (except Yahoo!), achieving a correlation coefficient of 0.51.

The average mean presents a better behavior than the rest of the studied mining processes: it is the best for the first benchmark dataset and the second best for the second and third benchmark datasets. We interpret this in the following way: although a correct pair of concepts may not be validated by a specific search engine, it is very difficult for all search engines to be wrong at the same time. Therefore, for the rest of this work, we are going to use the average mean in our semantic correspondence validation processes.

[Table 3. Experimental results obtained on the Rubenstein-Goodenough benchmark dataset.]

Correspondence Validation

There are two kinds of correspondences in an alignment: correct mappings and false positives. Correct mappings are correspondences between entities belonging to two different models which are true. False positives are correspondences between entities belonging to two different models which are false, but which the technique that generated the alignment considered to be true. Reducing the number of false positives in a given alignment increases the precision and, therefore, improves the quality of the alignment.

On the other hand, our strategy can face four different situations: validating or not validating correct mappings, and validating or not validating false positives. Obviously, we want correct mappings to be validated and false positives not to be validated. Under no circumstances do we want correct mappings not to be validated; that means not only that we are not improving the results, but that we are making them worse (by decreasing the recall). Validated false positives neither improve nor diminish the precision or recall; for this reason such a case counts as a failure, although it does not alter the overall quality of the results.

In Table 4 we can see a sample of real results obtained when validating an alignment between two ontologies related to bibliography. We have chosen a threshold of 0.51; thus, all correspondences with a relatedness score higher than this value will be validated. There is a total count of 18 discovered semantic correspondences. 6 false positives have not been validated, so we have improved the precision by 33 percent.
2 correct mappings have not been validated, so the recall has decreased by 11 percent. Finally, one false positive has been validated, so the quality has not been altered. With these results (precision increased by 33 percent and recall decreased by 11 percent), the overall quality of the alignment (f-measure) has been improved by 11 percent using KnoE.

Discussion

The results we have obtained can give us an idea of the behavior of different web search engines and their possible application to validate our strategy for schema and ontology matching. In fact, we can highlight two features which draw attention in the set of results that we have obtained:

1. There is great disparity between the results obtained by the web search engines that have been taken into account. We think it would be especially interesting to know why.
2. The average of the values from the different search engines outperforms, in general, the values returned by the individual web search engines.

Regarding the first fact, we must look at how search engines treat identical words, synonyms and word variations. We can see many cases with totally opposite results. This shows that there are web knowledge sources that are more appropriate than others, at least for the domain in which the study has been performed. Why search engines like Yahoo! offer better results than others, we are not sure. At first, we could attribute it to either the quantity or the quality of the content indexed by these search engines. On the other hand, Ask currently indexes much less web content than either Google or Yahoo!, but its treatment of queries and/or indexing of content that is relevant to the datasets used means that it can also provide good results according to some of these benchmarks. In this way, we think that the results that we have obtained do not depend largely on the indexed content.

Secondly, the average mean of the single results is, in general, better than the individual web search engines. We have obtained good results for the average mean in the three cases, 0.61, 0.70 and 0.51, respectively. These results are on average better than the rest of the single web measures, which means that this configuration could be useful to validate semantic correspondences.

Related Works

Apart from semantic correspondence validation, web measures can be used in many other applications [13], such as analysis of texts, annotation of resources, information retrieval, automatic indexing, or spelling correction, as well as entity resolution. On the other hand, we have identified three research directions regarding web measures: the Web as a knowledge corpus, measures based on web hits, and measures based on text snippets.

• Regarding the Web as a knowledge corpus, this has become an active research topic recently. For instance, it was shown in [35] that unsupervised models demonstrably perform better when n-gram counts are obtained from the Web rather than from another corpus. Resnik and Smith [36] extracted sentences from the Web to create a parallel corpus for machine translation.
• Regarding web hits, Turney [37] defined a point-wise mutual information measure using the number of hits returned by a Web search engine to recognize synonyms. Matsuo et al. [38] proposed the use of Web hits for extracting communities on the Web. They measured the association between two personal names using the overlap coefficient, which is calculated based on the number of Web hits for each individual name and their conjunction.
• There is another way to measure relatedness: text snippets from search engines. For example, Sahami et al. [39] measured semantic similarity between two queries using snippets returned for those queries by a search engine.
For each query, they collect snippets from a search engine and represent each snippet as a TF-IDF-weighted term vector. Chen et al. [40] proposed a double-checking model using text snippets returned by a Web search engine to compute semantic similarity between words.

To the best of our knowledge, our proposal is the first attempt to use web knowledge in order to improve the schema and ontology matching process by supervising alignments automatically, so unfortunately we do not yet have references to compare quantitatively with our work.

Conclusions and Future Work

In this work we have presented a proposal for validating previously discovered semantic correspondences using web knowledge. It has mainly consisted of developing the concept of relatedness and building a tool implementing the Google Similarity Distance [14] using other popular search engines. With the obtained results we can assign a degree of confidence to the discovered semantic correspondences, and therefore we can discard or include them in the final alignment which will be presented to the users. In this way, we are able to improve the precision, and therefore the overall quality (f-measure), of the results. From our work we can extract the following:

1. Web search engines can be considered valid sources of knowledge that provide support to the task of validating semantic correspondences in a completely unsupervised manner.
2. There is a wide disparity in the results generated by the web search engines that we have studied.
3. Ask, Google, and Yahoo! seem to be the best web knowledge sources for validating previously discovered semantic correspondences. However, an average mean of all search engines is, in general, even better. We think that these results do not depend on a greater quantity of indexed content or on higher quality; rather, the treatment these engines give to user queries makes them the most appropriate web search engines to perform this task among those that we have studied.

As future work, we propose a comparison of the knowledge provided by WordNet [23] and that provided by the web sources. On the other hand, the development of a new version of the tool to automate the entire process, from the selection of discovered correspondences to determining the best knowledge sources to validate them, is already in progress. The idea is to be able to evaluate web sources automatically according to widely accepted benchmarks. Our end goal is, given the specification of an ontology matching problem, to compute the optimum alignment function so that the problem can be solved accurately and without requiring human intervention in any part of the process. In this way, semantic interoperability between people, computers or simply agents might become real.

[Fig. 1. Example of matching between two ontologies representing vehicles and landmarks, respectively.]
[Fig. 2. Screenshot of the main window of KnoE. Users can select individual terms or lists; moreover, they can choose several search engines for mining the web.]
[Fig. 3. Graphical user interface of KnoE, showing the validation of the pair (football, goal) according to several search engines.]
[Table 1. Experimental results obtained on the Miller-Charles benchmark dataset.]
[Table 2. Experimental results obtained on the Gracia-Mena benchmark dataset.]
[Table 4. Sample of the results obtained when validating a real alignment. A threshold of 0.51 has been defined empirically.]
Curcumin: a Wonder Drug as a Preventive Measure for COVID19 Management

A major outbreak of the highly contagious novel coronavirus disease (COVID19), which emerged as an epidemic in China in December 2019, has spread across the globe, becoming a pandemic [1]. The disease is caused by the novel coronavirus SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), belonging to the family Coronaviridae. Coronaviruses are single-stranded, positive-sense RNA viruses, transmitted to humans via respiratory droplets. The majority of severely SARS-CoV-2 infected patients develop acute respiratory distress due to elevated levels of proinflammatory cytokines, and other clinical conditions such as diarrhoea when the infection is transmitted through food [1-3]. Globally, 6,057,853 positive cases with 371,166 deaths have been reported thus far. In India, over 190,000 confirmed COVID19 positive cases have been reported, and the virus has claimed 5577 lives so far, suggesting a low mortality rate in the Indian population compared to other ethnic groups. To date, there is no specific antiviral therapy available to treat COVID-19 patients. Combination therapy has been considered by clinicians, including antiviral agents, antibiotics and anti-inflammatory drugs [2]; hydroxychloroquine is widely used in developed countries. (Yamuna Manoharan and K.C. Vasanthakumar have contributed equally to this work.)
In the context of preventive and supportive therapy, several polyphenolic compounds extracted from natural products have been identified with varied antiviral mechanisms, such as targeting virus-host specific interactions, viral entry, replication, and assembly. In line with these findings, curcumin is one of the natural compounds that has been widely investigated for its antiviral effects [4]. Curcumin, a natural polyphenolic compound extracted from the roots of the rhizome plant Curcuma longa (family Zingiberaceae), exhibits a wide range of therapeutic properties, including antioxidant, anti-microbial, anti-proliferative, anti-inflammatory, neuroprotective and cardioprotective properties. Curcumin, the yellow pigment of turmeric, has been extensively used in Indian traditional herbal medicine for many decades to cure diseases associated with infection and inflammation [5]. It is reported that curcumin exerts antiviral activity against a broad spectrum of viruses, including HIV, HSV-2, HPV, influenza virus, Zika virus, hepatitis virus and adenovirus [3,4].

Recent studies have indicated that, like the original SARS-CoV, SARS-CoV-2 also invades human host cells by targeting the Angiotensin Converting Enzyme 2 (ACE2) membrane receptor, an entry site for the coronavirus. The binding of the viral S protein to the ACE2 receptor present on mucous membranes mediates viral and membrane fusion and subsequent viral replication in the host [1,5]. A recent study showed that expression of ACE2 was detected in nasal epithelial cells, alveolar epithelial type II cells (AECII) of the lungs and the luminal surface of intestinal epithelial cells. Hence, the nasopharynx, lungs and intestine facilitate viral entry and serve as potential sites of viral invasion [6]. Most studies have shown that Angiotensin II exerts its biological activities by binding to two receptors, namely the angiotensin II type 1 receptor (AT1R) and the angiotensin II type 2 receptor (AT2R). Angiotensin-converting enzyme 2 (ACE2), a homologue of ACE sharing 61% sequence similarity with the ACE catalytic domain, hydrolyses Angiotensin II to Angiotensin (1-7) and attenuates the vasoconstriction effects mediated by the Angiotensin II-AT1R axis, thereby reducing blood pressure through vasodilation [7].

In line with the growing evidence of the therapeutic properties of curcumin, we propose here a hypothetical treatment strategy using curcumin as (1) a potential inhibitory agent blocking the host-viral interaction (viral spike protein-ACE2 receptor) at the entry site in humans, and (2) an attenuator modulating the proinflammatory effects of the Angiotensin II-AT1 receptor signalling pathways, reducing respiratory distress in the treatment of COVID19. A study using an in silico approach involving docking and simulation demonstrated the dual binding affinity of polyphenolic compounds, in which both the viral S protein and ACE2 bind to curcumin. Binding of curcumin to the receptor-binding domain (RBD) of the viral S protein and also to the viral attachment sites of the ACE2 receptor demonstrated that curcumin can act as a potential inhibitory agent antagonizing the entry of the SARS-CoV-2 viral protein [3]. Moreover, topical application of curcumin in emulsion form may effectively prevent SARS-CoV-2 infection in humans, as the ACE2 receptor viral entry site is predominantly distributed in the nasal cells, the mucosal surface of the respiratory tract and the eyes [6].
Further, curcumin has been extensively studied for its role in the regulation of RAAS (renin-angiotensin-aldosterone system) components, through which it is known to exert antioxidant, anti-inflammatory and antihypertensive effects. Animal studies have implicated curcumin in the downregulation of ACE and AT1R receptor expression in brain tissue and vascular smooth muscle cells, respectively, resulting in inhibition of the Angiotensin II-AT1R mediated effects of hypertension and oxidative stress in animals [8,10]. Previous studies revealed high levels of AT2R and ACE2 expression in myocardial cells treated with curcumin, thus exhibiting a protective mechanism of curcumin via modulation of the effects mediated by the Angiotensin II receptors AT1R and AT2R. Upregulation of AT2R induces suppression of AT1R expression, leading to Angiotensin II-AT2R mediated anti-inflammatory effects involving inhibition of NF-κB activity and oxidative stress. Hence, treatment with curcumin attenuated the proinflammatory effects induced by the Angiotensin II-AT1R axis, leading to a significant decrease in the levels of the proinflammatory cytokines TNF-α and IL-6 and of reactive oxygen species [5,10].

Nutritional supplementation of curcumin with vitamin C and zinc has shown promising results in boosting natural immunity, and protective effects against CoV infections have been noted in many hospitalized patients in the Indian setting. It has also been noted that pharmacological formulation of curcumin in a nanoemulsion system provides increased solubility and bioavailability, with an enhanced antihypertensive effect [9]. Hence, it is clear that the biological properties of curcumin, together with advanced drug delivery systems, could be considered when formulating pharmaceutical products and applying them as a preventive measure to inhibit the transmission of SARS-CoV-2 infection among humans. However, further large-scale clinical trials are warranted to establish the usefulness of curcumin in pharmacological applications in nanoemulsion systems. In conclusion, we propose that curcumin could be used as a supportive therapy in the treatment of COVID19 in any clinical setting to circumvent the lethal effects of SARS-CoV-2.

Authors' Contributions: YM and VH wrote the article, including the concept; VK, SM, and PKS designed the study and edited the article; FFT reviewed the manuscript for its scientific content.

Funding: This study did not receive external funding from any source.

Compliance with Ethical Standards. Conflict of interest: The authors declare that they have no conflict of interest.
High-Resolution Crustal S-wave Velocity Model and Moho Geometry Beneath the Southeastern Alps: New Insights From the SWATH-D Experiment

We compiled a dataset of continuous recordings from temporary and permanent seismic networks to compute a high-resolution 3D S-wave velocity model of the Southeastern Alps, the western part of the external Dinarides, and the Friuli and Venetian plains through ambient noise tomography. Part of the dataset was recorded by the SWATH-D temporary network and by permanent networks in Italy, Austria, Slovenia and Croatia between October 2017 and July 2018. We computed 4050 vertical-component cross-correlations to obtain empirical Rayleigh wave Green's functions. The dataset is complemented by adopting 1804 high-quality correlograms from other studies. The fast-marching method for 2D surface wave tomography is applied to the phase velocity dispersion curves in the 2-30 s period band. The resulting local dispersion curves are inverted for 1D S-wave velocity profiles using non-perturbational and perturbational inversion methods. We assembled the 1D S-wave velocity profiles into a pseudo-3D S-wave velocity model from the surface down to 60 km depth. A range of iso-velocities, representing the crystalline basement depth and the crustal thickness, is determined. We found the average depths over the 2.8-3.0 and 4.1-4.3 km/s iso-velocity ranges to be reasonable representations of the crystalline basement and Moho depths, respectively. The basement depth map shows that the shallower crystalline basement beneath the Schio-Vicenza fault marks the boundary between the deeper Venetian and Friuli plains to the east and the Po plain to the west. The estimated Moho depth map displays a thickened crust along the boundary between the Friuli plain and the external Dinarides. It also reveals a narrow N-S corridor of crustal thinning to the east of the junction of the Giudicarie and Periadriatic lines, which has not been reported by other seismic imaging studies. This corridor of shallower Moho is located beneath the surface outcrop of the Permian magmatic rocks and seems to be connected to the continuation of Permian magmatism into the deep-seated crust. We compared the shallow crustal velocities with the hypocentral locations of earthquakes in the southern foothills of the Alps; the comparison revealed that the seismicity mainly occurs in the S-wave velocity range between ∼3.1 and ∼3.6 km/s.

INTRODUCTION

The Eastern Alps and external Dinarides across North-Eastern Italy, Austria and Western Slovenia are the result of the collision between the European plate and the Adriatic microplate (e.g., Dewey et al., 1989). Their evolution since the Late Cretaceous is mainly controlled by the protrusion of the Adriatic lower crust (e.g., Handy et al., 2010), a relatively rigid and less deformed continental crustal block pushed into weaker parts of the orogen. The Adria microplate, squeezed between the African and European plates, is rotating counterclockwise relative to Eurasia (e.g., Serpelloni et al., 2005; Le Breton et al., 2017), and its indentation is accommodated by NNW-SSE shortening in the Eastern and Southern Alps. The Eastern Alps and the Southeastern Alps show a complex structure reflecting the interplay between orogen-normal shortening and orogen-parallel motion (e.g., Handy et al., 2015). The seismicity is mainly located in the upper-middle crust and along the Southeastern Alps foothills (Viganò et al., 2015; Bressan et al., 2016).
Fault mechanisms show compression from NW-SE to N-S and NNE-SSW, consistent with the geodynamic setting, and seismicity patterns seem to be controlled by the crustal heterogeneities and the different degrees of interseismic coupling along the main thrust front. Seismic experiments carried out during the last two decades, TRANSALP (TRANSALP Working Group, 2002), ALP 2002 (Brückl et al., 2007; Grad et al., 2009; Šumanovac et al., 2009), and CROP (Finetti, 2005), suggested the subduction of Eurasia below the Adria microplate to the west of the Southeastern Alps, below the Tauern Window. From the Tauern Window to the east, a sudden change in the subduction direction is inferred from teleseismic tomography studies (e.g., Kissling et al., 2006; Handy et al., 2015), and the interaction of the Adriatic microplate and Eurasia is made more complex by its underthrusting below the Dinarides and the Pannonian fragment (Brückl et al., 2010; Šumanovac et al., 2016).

The structural complexity of the Eastern Alps and the existence of different tectonic models triggered in the last decade a number of studies on crustal and upper mantle structure, also made possible by the development of denser seismological networks in the area (AlpArray Seismic Network, 2015; Istituto Nazionale Di Oceanografia E Di Geofisica Sperimentale, 2016; Hetényi et al., 2018a). By using permanent and temporary stations, seismic tomography studies were carried out at continental and regional scales (e.g., Molinari et al., 2015; Behm et al., 2016; Guidarelli et al., 2017; Tondi et al., 2019). A recent joint inversion of surface wave phase velocities from ambient noise and earthquakes confirmed the heterogeneity of the crustal structure between the Central and Eastern Alps (Kästle et al., 2018). New crustal models from ambient noise tomography have also been recently proposed by Lu et al. (2020), Molinari et al. (2020), and Qorbani et al. (2020), improving the resolution of existing reference models like EPcrust (Molinari and Morelli, 2011). Receiver function data and global-phase seismic interferometry, provided by the AlpArray complementary experiment EASI (Hetényi et al., 2018b; Kvapil et al., 2020), with focus on the Eastern Alps, highlighted a complex crustal structure and suggested a possible Adria subduction below Eurasia (Hetényi et al., 2018b; Bianchi et al., 2020). However, the interpretation of the receiver function data is still ambiguous due to the possible presence of slices of lower crust imbricated at the contact between Eurasia and Adria, and therefore not a single interface at depth with an overall acoustic impedance contrast (Bianchi et al., 2015).

For mapping the Moho discontinuity in greater detail, investigating its possible fragmentation, and improving the knowledge about the dynamic processes that originated crustal growth, accretion, delamination, and underplating, an array of 154 broadband seismic stations (Figure 1, AlpArray-SWATH-D project) in the Eastern Alpine region was completed at the end of 2017 and operated for 2 years (Heit et al., 2017). SWATH-D focuses on a key area of the Alps where the hypothesized flip in the subduction polarity is suggested to occur and where the TRANSALP experiment imaged a jump in the Moho geometry (TRANSALP Working Group, 2002). The temporary network complemented the larger-scale AlpArray network and existing permanent stations of the Alps-Adria region (Heit et al., 2017).
The spatial density of the integrated seismic networks provides the opportunity to improve the lateral resolution of ambient noise and receiver function studies, including greater detail in the crustal and uppermost mantle models. We apply ambient noise tomography to a new dataset exploiting the SWATH-D temporary experiment. Rayleigh wave phase velocity measurements obtained for pairs of stations are integrated with the measurements adopted from Nouibat et al. (2021). S-wave velocities are obtained by using non-perturbational and perturbational inversion in an area including the Friuli and Venetian plains, the Alps foothills, the Alpine chain and the external Dinarides. A new high-resolution crustal model is computed down to a depth of 60 km, and its relation to the main geologic and tectonic features is discussed. The estimated crustal thickness in the Southeastern Alps and external Dinarides is compared with those included in the Northern Adria crust (NAC) model (Magrin and Rossi, 2020) and the Moho map of Spada et al. (2013).

FIGURE 1 | (A) Map showing the seismic stations used in this study. The ZS, CR, SL, and MN stations are used for the computation of correlograms in this study. The correlograms computed using up to 4 years of the recordings from the IV, OE, and Z3 stations are adopted from Nouibat et al. (2021). The blue circle shows the location of station D024 of the ZS network, and the location of the June 14, 2019, Mw 3.7 earthquake is shown by the black star. (B) Map showing the major tectonic features in the study region, modified from Schmid et al. (2004, 2008) and Handy et al. (2010). ZS, SWATH-D temporary network; CR, Croatian seismograph network; IV, Italian national seismic network; MN, Mediterranean very broadband seismographic network; SL, Seismic network of the Republic of Slovenia; Z3, AlpArray Z3 network; OE, Austrian seismic network.

Seismic Data and Cross-Correlation

The dataset used in this study is composed of two subsets. The larger subset exploited 10 months of continuous recordings for the computation of correlograms. The other, smaller complementary subset of correlograms was adopted from Nouibat et al. (2021) (hereafter NBT21); these were calculated using up to 4 years of continuous recordings. The locations of the seismic stations used in the two datasets are shown in Figure 1. In the following, we present the description of these datasets.

The SWATH-D temporary experiment deployed 154 broadband seismic stations during 2017-2020. Taking the recording duration and the quality of the stations into consideration, we processed 10 months of continuous seismic data recorded between October 2017 and July 2018 from 133 stations of the SWATH-D temporary network (ZS) (Heit et al., 2017). Continuous recordings between October 2017 and July 2018 from 13 permanent stations of the Slovenian (SL) (Slovenian Environment Agency, 2001), Croatian (CR) (University of Zagreb, 2001), and Mediterranean very broadband (MN) (MedNet Project Partner Institutions, 1990) networks were also used (Figure 1).

The continuous data were baseline-corrected and downsampled to 5 Hz, and the instrumental response was removed. We followed the workflow proposed by Bensen et al. (2007), with slight modifications, for the computation of cross-correlations of the vertical components. The waveforms were filtered between 0.5 and 100 s and cut into 1 h segments. We applied time-domain normalization and spectral whitening. We then cross-correlated the 1 h long segments of all the ZS station pairs.
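A minimal sketch of this preprocessing and correlation chain is given below in Python/NumPy. It assumes two already instrument-corrected, 5 Hz traces of equal length held in NumPy arrays; the one-bit variant of time-domain normalization is used here for brevity, whereas the actual processing may differ in detail (e.g., running-mean normalization, tapering, and the exact whitening band).

import numpy as np

def whiten(trace, eps=1e-10):
    """Spectral whitening: flatten the amplitude spectrum, keep the phase."""
    spec = np.fft.rfft(trace)
    spec /= (np.abs(spec) + eps)
    return np.fft.irfft(spec, n=len(trace))

def preprocess(trace):
    """Demean, one-bit time-domain normalization, then spectral whitening."""
    trace = trace - np.mean(trace)
    trace = np.sign(trace)        # one-bit normalization suppresses earthquakes
    return whiten(trace)

def cross_correlate(tr1, tr2, max_lag):
    """Frequency-domain cross-correlation for lags of +-max_lag samples."""
    n = len(tr1)
    spec = np.fft.rfft(tr1, 2 * n) * np.conj(np.fft.rfft(tr2, 2 * n))
    cc = np.fft.irfft(spec)
    cc = np.roll(cc, n)           # put zero lag at the center of the array
    return cc[n - max_lag : n + max_lag + 1]

# Hypothetical usage: stack hourly correlations of one station pair over a day.
fs = 5.0                          # sampling rate after decimation (Hz)
hour = int(3600 * fs)
max_lag = int(300 * fs)           # +-300 s lag window (assumed)
rng = np.random.default_rng(0)
daily_stack = np.zeros(2 * max_lag + 1)
for _ in range(24):
    z1, z2 = rng.standard_normal(hour), rng.standard_normal(hour)  # stand-ins
    daily_stack += cross_correlate(preprocess(z1), preprocess(z2), max_lag)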
We also performed the cross-correlation between the 1 h segments of the permanent stations (i.e., the SL, CR, and MN networks) and all the ZS stations. Since the number of permanent station pairs is much smaller than that of the ZS-permanent station pairs, and their ray crossings mainly coincide with those of the ZS-permanent and ZS-ZS station pairs, the cross-correlations between the permanent stations were not included. Subsequently, we stacked the up to 24 available correlograms of each day. The resulting daily correlograms were stacked again over 3 months. We checked the quality of the correlograms by estimating the signal-to-noise ratio (SNR). The SNRs of the 3 month stacked correlograms were then compared to investigate seasonal variations in the quality of the correlograms. Comparison of the 3 month correlograms revealed that the correlograms were temporally stable during the 10 month recording period, and their SNR values did not exhibit substantial variations. Supplementary Figure S1 shows the 3 month correlograms with SNR values higher than 5 between station D024 (the blue circle in Figure 1) and the other contemporaneously operating stations.

We also stacked the daily correlograms over periods of 3-10 months to examine the effect of stacking duration on the quality of the correlograms. We fixed the SNR threshold at 10. The results revealed that the SNR values of the majority of the station pairs exceeded the threshold after 7 months of stacking. Therefore, 10 months of stacking was sufficient to obtain stable correlograms (i.e., for the ZS, SL, MN, and CR station pairs). At the end of this procedure, we selected 4050 high-quality empirical Green's functions (EGFs) showing clear and symmetrical signals on both the causal and acausal parts. Figure 2 shows the cross-correlations computed between station D024 (the blue circle in Figure 1) and the other contemporaneously operating stations. Dispersive Rayleigh wave packets are evident on both the causal and acausal parts of the correlograms.

We followed the same procedure to compute the correlograms between some stations of the Italian national seismic network (IV) (INGV Seismological Data Centre, 1997) and the AlpArray Z3 network (Z3) (AlpArray Seismic Network, 2015) and the ZS stations, to improve our ray coverage. However, unlike for the ZS, SL, MN, and CR station pairs, the 10 month recording period was not sufficient to satisfy our quality control criteria for the IV-Z3, IV-ZS, and Z3-ZS station pairs. Therefore, we adopted 3036 correlograms from NBT21 as a complement to the dataset computed in this work. These correlograms were computed following the procedure explained in Soergel et al. (2020), using up to 4 years of continuous recordings of 148 stations of the IV, Austrian (OE) (Zentralanstalt für Meteorologie und Geodynamik [ZAMG], 1987) and Z3 networks (Figure 1). Apart from the time-domain normalization, the preprocessing steps in the Soergel et al. (2020) procedure are essentially similar to the method we applied (i.e., detrending, low-pass filtering, and instrument response correction). This allows for the seamless integration of the correlograms computed in this study and those of NBT21. NBT21 followed a comb-filter preprocessing routine for handling transient high-amplitude earthquake signals. They calculated the cross-correlations between each station pair using 4 h windows and stacked them to obtain the final correlogram for each station pair.
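The SNR-based selection used throughout this section can be sketched as follows. The window definitions (a signal window bracketing the expected Rayleigh wave arrival from an assumed group velocity range, and a trailing noise window) are our assumptions for illustration; the exact windowing parameters of the study are not specified here.

import numpy as np

def snr(side, fs, distance_km, vmin=1.5, vmax=4.5, noise_len=200.0):
    """SNR of one correlogram side (array starting at zero lag): peak in the
    surface wave window divided by the RMS of a trailing noise window.
    vmin/vmax are assumed bounds on Rayleigh group velocity (km/s)."""
    i0, i1 = int(fs * distance_km / vmax), int(fs * distance_km / vmin)
    n0, n1 = i1, i1 + int(fs * noise_len)
    signal = np.max(np.abs(side[i0:i1]))
    noise = np.sqrt(np.mean(side[n0:n1] ** 2))
    return signal / noise

def keep_pair(causal, acausal, fs, distance_km, threshold=10.0):
    """Retain the EGF only if both sides exceed the SNR threshold."""
    return (snr(causal, fs, distance_km) >= threshold and
            snr(acausal, fs, distance_km) >= threshold)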
We controlled the quality of the correlograms and kept 1804 EGFs with SNR values greater than 10 on both the causal and acausal parts to supplement our initial dataset. Figure 3 shows the variation in the number of EGFs with interstation distance and azimuth before and after inclusion of the high-quality correlograms of NBT21.

Rayleigh Wave Phase Velocity Measurements
The calculated EGFs were then used to estimate the phase velocity of Rayleigh waves for each pair of stations. It is worth remembering that the largest period for which the phase velocity can be measured is proportional to the interstation distance. In traditional frequency-time analysis (e.g., Levshin et al., 1992), the wavelength should be smaller than or equal to one-third of the interstation distance (Yao et al., 2006; Lin et al., 2008). However, the method of Ekström et al. (2009) allows for measuring reliable phase velocities up to periods with wavelengths comparable to the interstation distance. The technique proposed by Ekström et al. (2009) uses Aki's spectral formulation to measure the phase velocity directly from the zero crossings of the real part of the correlation spectrum. We used the methodology of Ekström et al. (2009), incorporated into the GSpecDisp package (Sadeghisorkhani et al., 2017), for measuring the phase velocity dispersion curves of the Rayleigh waves. The reliable period range for each pair of stations was conservatively selected so that the interstation distance was 1.5-50 times larger than the considered wavelengths. We visually evaluated the causal and acausal parts of the EGFs and manually picked the dispersion curves exhibiting a clear, smooth and continuous trend of period dependence. The phase velocities were picked at periods where the energy level on the real part of the correlogram spectrum is significant. We also used the multiple-filter approach (Herrmann, 1973, 2013) to measure the phase velocities of some random correlograms and compared them with those obtained from the GSpecDisp package. The comparison revealed no substantial difference between the dispersion curves in the period range where 3-50 wavelengths can propagate within the interstation distance. Furthermore, we measured the phase velocities from 44 records of the June 14, 2019, Mw 3.7 earthquake (the black star in Figure 1) using the multiple-filter approach (Herrmann, 1973, 2013) to further validate the dispersion curves obtained from correlograms. The earthquake epicenter is located close to station D013 of the ZS network. The dispersion curves from the June 14, 2019, Mw 3.7 earthquake were in agreement with those from the correlograms between D013 and other ZS stations (Figure 3). Figure 3D shows the dispersion curves picked from the D013-D024 and D013-D086 correlograms as well as the dispersion curves from the waveforms of the June 14, 2019, Mw 3.7 earthquake recorded by the D024 (46.21°N, 11.23°E) and D086 (46.64°N, 11.35°E) stations. At periods longer than 30 s, the dispersion curves obtained from the correlogram dataset of NBT21 exhibit two diverging trends. The set of dispersion curves with lower velocities at longer periods is related to the rays crossing the Po and Venetian-Friuli plains, while the dispersion curves of the rays crossing the Alps have higher velocities at periods longer than 30 s. Figure 3C shows the number of phase velocities obtained from correlograms in the period range of 2-30 s. The number of measurements varies between 2,390 at 30 s and 5,094 at 7 s.
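A minimal sketch of the zero-crossing idea behind the Ekström et al. (2009)/GSpecDisp measurement: the real part of the correlation spectrum behaves like a Bessel function J0(omega*dist/c), so each observed zero crossing, matched to a zero of J0, yields a candidate phase velocity. Branch selection among the candidates is done interactively in practice; the enumeration below is only a sketch under that assumption.

```python
import numpy as np
from scipy.special import jn_zeros

def candidate_phase_velocities(freqs, re_spectrum, dist_km, n_bessel=40):
    """For ambient-noise correlations the real cross-spectrum behaves like
    J0(2*pi*f*dist/c). Each observed zero crossing at frequency f0, matched
    to the m-th zero z_m of J0, therefore implies c = 2*pi*f0*dist/z_m.
    Returns, per crossing, the array of candidate velocities (km/s); picking
    the physically consistent branch is done interactively in GSpecDisp."""
    z = jn_zeros(0, n_bessel)                        # first zeros of J0
    k = np.where(np.diff(np.sign(re_spectrum)) != 0)[0]
    # linear interpolation of the crossing frequency between samples
    f0 = freqs[k] - re_spectrum[k] * (freqs[k + 1] - freqs[k]) / (
        re_spectrum[k + 1] - re_spectrum[k])
    return [(f, 2.0 * np.pi * f * dist_km / z) for f in f0]
```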
Since the SNRs of most of the 3 month stacked correlograms were smaller than the threshold of 10, we were not able to estimate the uncertainty of the phase velocity measurements from seasonal variability as proposed by Bensen et al. (2007).

2D Phase Velocity Tomography
The fast-marching surface wave tomography package (FMST) (Rawlinson, 2005) is used for the inversion of the reliable Rayleigh wave phase velocity measurements. FMST uses the fast-marching method (Sethian and Popovici, 1999) for the forward prediction of the traveltimes. It applies an iterative subspace inversion to map lateral variations in phase velocity, accounting for the nonlinear relationship between velocity and traveltime. The study area was parameterized with 0.1° × 0.1° cell grids. The cell grid size was selected so that each cell in the target region contains a minimum of 20 ray crossings. The average of the measured phase velocities at each period was taken as the homogeneous starting model of the inversion. FMST allows the damping and smoothing regularization parameters to be adjusted in order to cope with the problem of non-uniqueness. The damping factor prevents the solution model from departing too much from the starting model, while the smoothing factor avoids unrealistic sudden changes and constrains the smoothness of the solution model. Although the dispersion curves were picked carefully, we first ran the inversions with a high value of the damping factor to detect and discard highly incoherent paths with traveltime residuals greater than three times the average of all the traveltime residuals (Kaviani et al., 2020). We performed the tomographic inversion again with a set of regularization parameter pairs in the range between 0 and 5 to select the optimal damping and smoothing parameters at each period. The optimal damping and smoothing parameters were selected by constructing trade-off curves. After careful inspection of the trade-off between data misfit and model variance at each period, the damping factor was chosen. The trade-off curves between misfit and model roughness were used to estimate the optimal smoothing parameters at each period. Figure 4 shows the smoothing and damping trade-off curves and the selected optimal parameters for periods of 5 and 20 s. We performed checkerboard tests to elucidate the dimensions of the features that can be resolved through the inversion process. The actual ray coverage and the selected optimal regularization parameters were used in the checkerboard tests. Thanks to the dense ray coverage, the 0.3° × 0.3° blocks at periods shorter than 5 s were recovered in the Southeastern Alps region covered by the ZS network. However, smearing effects are obvious at periods longer than 4 s. Performing the checkerboard tests with anomaly sizes of 0.5° × 0.5° revealed that the anomalies are well recovered in most of the target region (Figure 5). Following Zelt (1998), we quantitatively assessed the semblance between the true and recovered checkerboard anomalies through the calculation of the resolvability factor. The areas in the tomographic results with a resolvability factor higher than 0.7, and the tomographic grids with a minimum of 20 ray crossings, are shown in Figure 5. The final tomographic results are confined within the overlap between the area with a resolvability factor higher than 0.7 and that with a minimum of 20 ray crossings (Figure 5). We initially performed the tomography for the period range between 2 and 40 s.
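The resolvability assessment just mentioned can be sketched as a windowed semblance between the input and recovered checkerboard patterns, following the Zelt (1998) definition; the window size below is an illustrative choice.

```python
import numpy as np

def resolvability(true_anom, rec_anom, win=5):
    """Windowed semblance between true (t) and recovered (r) checkerboard
    perturbations, after Zelt (1998): 1.0 for perfect recovery, 0.5 for no
    recovery, and values above 0.7 treated as resolved, as in the paper.
    The window size is an illustrative choice."""
    ny, nx = true_anom.shape
    h = win // 2
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            sl = (slice(max(j - h, 0), j + h + 1),
                  slice(max(i - h, 0), i + h + 1))
            t, r = true_anom[sl], rec_anom[sl]
            denom = 2.0 * np.sum(t ** 2 + r ** 2)
            out[j, i] = np.sum((t + r) ** 2) / denom if denom > 0 else 0.0
    return out
```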
However, considering the checkerboard test results, we decided to confine the final tomographic inversions to the 2-30 s period band in the region covered by the ZS network and to the 5-30 s period range in the remaining parts.

1D S-wave Velocity Inversion
We extracted local dispersion curves for each 0.1° × 0.1° grid node of the tomographic model and inverted them for 1D depth-dependent S-wave velocity profiles. We performed the two-step procedure of Haney and Tsai (2017) for non-perturbational and perturbational inversion of Rayleigh-wave velocities. First, we applied the non-perturbational inversion, based on the Dix-type relation for surface waves, and an optimal non-uniform finite-element grid of layers generated based on the depth sensitivity of the Rayleigh waves (Haney and Tsai, 2015; Figure 6). The 1D velocity profiles obtained were then used as starting models for the subsequent perturbational inversion, which relies on the finite-element method, resulting in a matrix formulation of the forward problem. This allows for linking the forward and inverse problems using matrix perturbation theory. The perturbational inversion iteratively refines the starting model provided by the non-perturbational Dix-type inversion to find the final model that fits the data. We generated a set of synthetic phase velocity dispersion curves with 2.5% noise from arbitrary velocity models. A comparison of the inverted and true models revealed that the inversion process is capable of producing a smoothed version of the true model (Supplementary Figure S2). Figure 6 shows an example of the measured and predicted dispersion curves as well as the starting and final S-wave velocity models at a grid node. Although the perturbational updating of the non-perturbational starting model did not substantially affect the shallower S-wave velocity structures, it leads to significant fitting improvements for the longer period dispersions representing the deeper S-wave velocity structures. We inverted the local dispersion curves for the 1D velocity profiles down to a depth of more than 200 km. However, considering the sensitivity kernels calculated using the final S-wave velocity model, we consider as reliable the results obtained down to 60 km (Figure 6). Four examples of the inverted 1D S-wave velocity profiles and their sensitivity kernels are shown in Supplementary Figure S3.

3D S-wave Velocity Model
After inverting all the local dispersion curves, we assembled the resulting 1D S-wave velocity profiles into a pseudo-3D crustal S-wave velocity model of the Southeastern Alps (hereafter SEA-Crust). Figure 7A shows seven depth slices from the surface down to 60 km. The average velocity increases from about 2.0 km/s at the surface to about 4.5 km/s at a depth of 60 km. The average 1D S-wave velocity profiles of the Alps and the Friuli and Venetian plains are shown in Figure 7B. The P-wave velocities in Figure 7B were calculated considering the average crustal Poisson ratio of 0.256 (Christensen, 1996). At depths shallower than about 15 km and to the north of the deformation front, the S-wave velocity is higher than in the southern parts (i.e., in the Venetian and Friuli plains). In contrast, deeper depth slices reveal higher velocities in the Venetian and Friuli plains compared to the Eastern Alps. Considering the S-wave velocities of 4.0 and 4.2 km/s as proxies for the Moho depth, the crust in the Eastern Alps is, on average, about 15 km thicker than in the Venetian and Friuli plains (Figure 7).
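For reference, the P-wave velocities in Figure 7B follow directly from the S-wave velocities through the standard elastic relation between the Vp/Vs ratio and the Poisson ratio $\nu$:

$$\frac{V_P}{V_S} = \sqrt{\frac{2(1-\nu)}{1-2\nu}}, \qquad \nu = 0.256 \;\Rightarrow\; \frac{V_P}{V_S} \approx 1.75.$$

With this ratio, the 7.6 km/s P-wave Moho threshold discussed below corresponds to an S-wave velocity of about 4.35 km/s, consistent with the 4.3 km/s S-wave threshold adopted in the Moho analysis.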
At 60 km depth, the depth slice is dominated by velocities greater than 4.0 km/s, implying that the crust in the whole study region should not be thicker than 60 km. Figures 8A-H show the variation of S-wave velocity with respect to the average velocity at different depths from 5 to 40 km. Figure 8I shows the major faults in the region. The absolute velocities at the same depths are shown in Supplementary Figure S4. The prominent southern low-velocity zone at 5 and 10 km depths is related to the Venetian and Friuli plains. The upper crustal S-wave velocities change considerably from the Friuli and Venetian plains to the Alps foothills. As expected, in the shallow crust, the S-wave velocities are lower for the Adria plate beneath the Friuli and Venetian plains, where soft sediments and sedimentary rocks are thicker. The S-wave velocity at 10 km depth in the Southeastern Alps reaches a value of about 3.8 km/s, which is in agreement with the high P-wave velocity value of 6.8 km/s reported by other works (Behm, 2009; Bressan et al., 2012). Considering the increasing trend of crustal thickness from the south to the north of the study region, the 15-30 and 20-40 km depth ranges are approximately associated with the middle-lower crust in the Southeastern Alps and Adria, respectively. The maps of absolute velocity in Supplementary Figure S4 reveal that the middle and lower crust are faster in the southern parts than in the northern parts of the study region. Nonetheless, a more rigid lower and middle crust for the Adria plate is not a new result and is consistent with the efficient wave propagation due to the low attenuation of recorded earthquakes and SmS wave propagation (e.g., Bragato et al., 2011; Sugan and Vuan, 2014). The faster middle and lower crust of Adria compared to the Southeastern Alps is also observed in other recent tomographies (Kästle et al., 2018; Lu et al., 2018; Qorbani et al., 2020). The boundary between the higher and lower velocity anomalies to the east of longitude 12°E and in the 15-40 km depth range closely mimics the leading edge of the Alpine front responsible for the 1976 Friuli earthquake (Aoudia et al., 2000) to the east and the Bassano-Cansiglio active folding system to the west (Figure 8). The presence of a high-velocity body (HV1) deeper than 10 km to the east of the Giudicarie fault is evident. The NNW-oriented part of HV1 at 40 km depth is also clearly observed by Molinari et al. (2020) and roughly observed by Kästle et al. (2018) and Lu et al. (2018). On the other hand, such a high-velocity body seems to be absent from both the Love and Rayleigh-wave shear velocity models of Qorbani et al. (2020). The location of HV1 in the 10-20 km depth range coincides with the Permian magmatic rocks at the surface (Schuster and Stüwe, 2008). The eastward extension of HV1 with depth can be considered as the continuation of the Permian magmatic complex within the lower crust. The low-velocity anomaly in the 10-20 km depth range to the northwest of the study region is also observed by Qorbani et al. (2020).

Crystalline Basement and Moho Depth
Different criteria have been used in receiver function and tomography studies to capture the sedimentary layer-crystalline basement boundary and the Moho discontinuity. Using the iso-velocities at the bottom of the layers is among the most well-established approaches for the estimation of discontinuity depths in seismic imaging studies.
However, the S-wave iso-velocities used in the literature range from 1.5 to 3.0 km/s and from 3.9 to 4.3 km/s for depicting the basement and Moho depths, respectively (Christensen and Mooney, 1995; Brocher, 2005; Moschetti et al., 2010; Molinari and Morelli, 2011; Macquet et al., 2014; An et al., 2015; Guidarelli et al., 2017; Magrin and Rossi, 2020; Planès et al., 2020). Part of this discrepancy originates from the lack of a single definition of the discontinuities and the trade-off between the S-wave velocity and the depth of the discontinuities. If the S-wave velocities of both the top and bottom layers are estimated, a strong depth gradient of the 1D S-wave velocity profile can be considered as an indicator of the discontinuity. We used the gradient of the 1D velocity profiles in the top 10 km as a proxy to select a reasonable iso-velocity representing the crystalline basement depth. Figure 9A presents the number of maximum depth gradients at various velocities. Considering the histogram, we selected the average depths corresponding to a range of velocities between 2.8 and 3.0 km/s as the indicator of the discontinuity. A map illustrating the crystalline basement depth is presented in Figure 9B. It shows that the crystalline basement depth in the Po, Venetian and Friuli plains ranges between ∼4 and ∼10 km. In the same region, Qorbani et al. (2020) have also traced a low-velocity anomaly with values of less than 3.0 km/s down to 10 km. According to Steinhart (1967) and Thybo et al. (2013), the seismic Moho is defined as a rapid increase of the crustal P-wave velocity to a value in the range of 7.6-8.6 km/s. In the absence of a sharp increase in velocity, the Moho is the level at which the P-wave velocity exceeds the 7.6 km/s threshold. Taking the average crustal Poisson ratio of 0.256 (Christensen, 1996) into account, the corresponding S-wave threshold velocity is 4.3 km/s. In some parts of the study region (i.e., beneath the Southeastern Alps) the Moho is deeper than 55 km, which is close to the maximum depth of SEA-Crust (Molinari and Morelli, 2011; Spada et al., 2013; Bianchi et al., 2015; Hetényi et al., 2018b; Kästle et al., 2018; Lu et al., 2018, 2020; Stipčević et al., 2020). Therefore, in these regions, SEA-Crust is not able to sample the shallow uppermost mantle beneath such a deep Moho, and the 1D velocity models vary smoothly with depth, especially toward the deepest parts. Thus, for estimating the crustal thickness we preferred to use the iso-velocity depths rather than the depth gradient of the 1D velocity profiles. To this end, we collected a set of available Moho depths from receiver function studies (Bianchi et al., 2015; Hetényi et al., 2018b; Stipčević et al., 2020) and compared them with the depths of different iso-velocities between 3.9 and 4.3 km/s. Figure 9C illustrates the receiver function Moho depths vs. the iso-velocity depths. The comparison revealed that the 4.2 km/s iso-velocity depths are in general agreement with the receiver function results. The 4.1 and 4.3 km/s iso-velocities mainly give rise to underestimation and overestimation of the Moho depth, respectively. However, considering the available uncertainty of the receiver function results, some of the 4.1 and 4.3 km/s iso-velocity depths are acceptable. Thus, the average of the iso-velocity depths between 4.1 and 4.3 km/s is taken to be the Moho depth. We calculated the standard deviation of the depths in the same iso-velocity range as a measure of the uncertainty of the Moho depth (Figure 9D).
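The iso-velocity Moho picking and its uncertainty can be sketched as follows; the linear interpolation between depth samples is an implementation assumption.

```python
import numpy as np

def moho_depth(depths_km, vs_profile, v_lo=4.1, v_hi=4.3, dv=0.1):
    """Moho estimate at one grid node: mean of the iso-velocity depths for
    iso-velocities between v_lo and v_hi km/s, with their standard deviation
    as the uncertainty (Figure 9D). Depths are taken at the first upward
    crossing of each iso-velocity; the linear interpolation is an assumption."""
    picks = []
    for v in np.arange(v_lo, v_hi + 1e-9, dv):
        above = np.where(vs_profile >= v)[0]
        if above.size == 0:
            continue                      # iso-velocity never reached
        k = above[0]
        if k == 0:
            picks.append(depths_km[0])
        else:                             # interpolate between samples
            z0, z1 = depths_km[k - 1], depths_km[k]
            v0, v1 = vs_profile[k - 1], vs_profile[k]
            picks.append(z0 + (v - v0) * (z1 - z0) / (v1 - v0))
    if not picks:
        return np.nan, np.nan
    return float(np.mean(picks)), float(np.std(picks))
```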
The general pattern of higher uncertainties for the deeper Moho depth estimates is partly due to the decreasing vertical resolution with depth.

Comparison of the Pseudo-3D S-wave Model With Other Studies
We compared SEA-Crust with NAC (Magrin and Rossi, 2020) and the model of Kästle et al. (2018) (hereafter KST18) through the calculation of their relative changes (Supplementary Figures S5-S7) for fixed-depth slices. Mapping the local differences in the upper crust (5 and 10 km depth, Supplementary Figure S5), NAC is faster than SEA-Crust almost everywhere in the Po and Venetian-Friuli plains (by more than 20% at 5 km depth). Beneath the southernmost SWATH-D deployment, at 5 and 10 km depths, the relative change between SEA-Crust and NAC is smaller (∼10%) (Supplementary Figure S5). This consistency between SEA-Crust and NAC coincides with the region where NAC is constrained by local earthquake tomographies (Anselmi et al., 2011; Bressan et al., 2012; Viganò et al., 2013). Within the 20-30 km depth range, NAC is, on average, faster (by less than 10%) than SEA-Crust beneath the Alps and slower beneath the plains (Supplementary Figure S5). NAC also tends to be slower than SEA-Crust (by less than 15%) in the lower crust (depth >30 km), almost everywhere in the study region (Supplementary Figure S5). In the upper crust (depth <10 km), SEA-Crust appears to be, on average, ∼10% faster and slower than KST18 in the Southeastern Alps and in the plains, respectively (Supplementary Figure S6). Within the 20-30 km depth range, KST18 is ∼10% slower than SEA-Crust (Supplementary Figure S6). The S-wave velocity differences between KST18 and SEA-Crust increase gradually with depth and reach ∼15% at 35 km. Toward the deeper structures, between 50 and 60 km depths, while KST18 is slower than SEA-Crust (by less than 10%) beneath the plains, it turns out to be faster (by less than 10%) beneath the Alps (Supplementary Figure S6).

[Figure 7 caption fragment: (A) ... modified from Schmid et al. (2004, 2008) and Handy et al. (2010). (B) The average 1D S-wave velocity profiles in the Southeastern Alps and the Venetian and Friuli plains. S-wave velocities of 4.0 and 4.2 km/s are shown by vertical green lines. The P-wave velocity profiles are calculated considering the average Poisson ratio of 0.256 for the continental crust (Christensen, 1996).]

Extremely variable S-wave velocities characterize the plains, which is also highlighted by the differences between NAC and KST18 (Supplementary Figure S7). Comparing the three models shows that, in the upper crust, SEA-Crust is more compatible with KST18 than with NAC throughout the study region. This is probably a result of the similarities between the approaches used by this study and KST18. A peculiarity of SEA-Crust is that it exploited the new dataset of phase velocities extracted from the SWATH-D station pairs with short interstation distances. Therefore, we expect SEA-Crust to be more selective and accurate for the paths crossing the plains and the Alpine region at short periods and shallower depths. This can justify the higher relative changes of SEA-Crust with respect to NAC and KST18 in the shallow crust (Supplementary Figures S5, S6).

Border of the Po and Venetian-Friuli Plains
The southern low-velocity anomaly at 5 km in Figure 8A coincides with the location of the well-known Po, Venetian and Friuli plains. The S-wave velocity in the region covered by the basins is not homogeneous, and a relatively higher velocity trend beneath the Schio-Vicenza fault divides it into eastern and western lower velocity parts.
The S-wave velocity range between 2.8 and 3.0 km/s used for the determination of the basement depth is in the same range as those reported for the Mesozoic carbonates on top of the crystalline basement (Pola et al., 2014; Turrini et al., 2014; Molinari et al., 2020). The estimated basement depth in Figure 9B is in turn shallower (∼5 km) beneath the Schio-Vicenza fault compared to its western and eastern deepest parts (∼10 km), to the southwest of Lake Garda and in the northwestern corner of the Adriatic Sea. The topography of the crystalline basement depth correlates well with the depth of the Pliocene base related to the softer sediments (Pola et al., 2014). The N-S cross sections in Figure 10 also highlight the soft and consolidated sedimentary cover of the Venetian and Friuli plains, which is negatively correlated with the topography.

Upper and Middle Crustal Structure vs. Seismicity
The N-S cross sections in Figure 10 reveal that the middle and lower crust, particularly toward the northern parts, is highly heterogeneous. Part of this heterogeneity comes from a vertical sequence of higher and lower velocity layers beneath the Southeastern Alps, which is also detectable in section A. Beneath the Adria plate, there are sharp velocity transitions between the upper, middle and lower crust; the middle crust seems to be well developed, with velocity gradients. However, toward the north, the velocity gradients are laterally distorted by high-velocity bodies in the upper and middle crust, starting from the foothills of the Alps. The seismicity plotted along the cross sections spans from 2008 to 2019 and is extracted from the OGS catalog (Friuli-Venezia Giulia Seismometric Network Bulletin, 2019). The seismicity is mainly located along the southern front of the Alps (i.e., where the elevation increases along the topographic sections) and is confined to a narrow velocity band between ∼3.1 and ∼3.6 km/s (Figure 10). The 15 September 1976 (Ms = 6.1) earthquake (Aoudia et al., 2000) is shown in section E, and its adjacent seismicity clearly reveals the concentration of the seismicity at the interface separating the higher velocity from the lower velocity zones. The concentration of seismic activity in this narrow velocity band is also in agreement with the occurrence of large earthquakes at upper-crustal density domain boundaries and in the regions of highest geodetic strain rate (Spooner et al., 2019). This supports the hypothesis that the faster middle and lower crust of Adria in this area is more rigid than that of the Southeastern Alps, while the faster upper crust of the Southeastern Alps is more rigid than that of Adria (e.g., Brückl et al., 2007, 2010; Marotta and Splendore, 2014; Magrin and Rossi, 2020).

Moho Topography
During the last two decades, important efforts were made to determine the crustal thickness of the Alps through different approaches (e.g., Kummerow et al., 2004; Spada et al., 2013; Hetényi et al., 2018b; Kästle et al., 2018; Lu et al., 2018, 2020; Spooner et al., 2019; Magrin and Rossi, 2020). The Moho model of Spada et al. (2013) is derived from a combination of receiver function and controlled-source seismological studies. The more recently published study of Magrin and Rossi (2020) is focused on the northern tip of the Adria plate (i.e., about the same study area as ours). They integrated the available information about the depth of the main interfaces and the physical properties of the crust to build the NAC model.
The continuous Moho map of NAC and the crustal thickness model of Spada et al. (2013) are plotted along all the sections in Figure 10. Figure 9D shows the crustal thickness map from the average depths between the 4.1 and 4.3 km/s iso-velocities. The red circles in Figure 9D represent the locations of the estimation nodes, and their sizes are inversely proportional to the uncertainty of the estimates. Crustal thickening is, as expected, found in our results from south to north (Figures 9D, 10). Taking the uncertainties into account, while the thinner crust of the Adria plate shows gentle undulations that seem to be consistent with NAC, the Moho model of Spada et al. (2013) is about 10 km deeper (see sections B, C, and D in Figure 10). Moving toward the north along sections B, C, and D, the change in our Moho depth at the Alps foothills is sharper than in the other models, and the crustal thickness mirrors the topography, particularly in section D (Figure 10). The Moho depth of Kummerow et al. (2004) beneath the TRANSALP profile is depicted in sections B, C, and D. Surprisingly, considering the three adjacent sections, it appears that the maximum Moho depth beneath the Dolomite Mountains varies significantly, from more than 50 km in sections B and D to ∼40 km in section C, which coincides with the location of TRANSALP. In section B, a sharp Moho step with a magnitude of more than 15 km is positioned between the Moho gradients in the NAC and Spada et al. (2013) models. In section C, by contrast, the Moho depth remains unchanged at ∼40 km toward the north of the Alps foothills. While the magnitude of the crustal thickness in section D is similar to that in section B, the Moho depth increases over a longer wavelength along the profile in section D. Section A, perpendicular to the three adjacent sections, clearly portrays the N-S narrow corridor of crustal thinning. The shallower Moho in section C is located beneath the surface exposure of the Permian magmatic rocks and HV1, which can be considered as the continuation of the Permian magmatism into the deep-seated crust. The presence of a much smoother lateral variation in the crustal thickness to the east of the Giudicarie line is reported by Kästle et al. (2018) and Spooner et al. (2019). Except for the inconsistencies in their middle parts, the Moho depths in sections E, F, and G generally agree with the NAC Moho. The deeper Moho to the north of the Palmanova fault is consistent with the results of the recent receiver function study of Stipčević et al. (2020). The NW-SE thickened crust along the boundary between the Friuli plain and the external Dinarides is mainly formed as a result of the past and ongoing Adria-Europe convergence, which is accommodated by thrusting and strike-slip faulting (Vičič et al., 2019). The crustal thickness from the receiver functions of Hetényi et al. (2018b) along the EASI profile is shown in section F. The EASI Moho beneath the Periadriatic Line reaches depths of more than 60 km, which is far deeper than the values of the other models. The southern part of the EASI profile crosses the thickened crust to the east of the Friuli plain (∼50 km). However, the thick crust in this region is not captured by the EASI receiver functions (Hetényi et al., 2018b), which show a ∼10 km thinner crust, consistent with Spada et al. (2013) and slightly deeper than the NAC Moho.
CONCLUSION
We compiled a collection of 5854 correlograms calculated using the continuous seismic recordings from the permanent and temporary networks in Italy, Austria, Slovenia and Croatia. Most of the correlograms were computed using the seismic recordings of the AlpArray SWATH-D complementary experiment, and an additional 1804 EGFs were provided by Nouibat et al. (2021). We used the GSpecDisp package (Sadeghisorkhani et al., 2017) for measuring the phase velocity dispersion curves of the Rayleigh waves between 2 and 30 s. The FMST package was applied for the Rayleigh phase velocity tomography on a 0.1° × 0.1° grid covering the Southeastern Alps, the western part of the external Dinarides, and the Friuli and Venetian plains. We inverted the resulting local dispersion curves for 1D S-wave velocity profiles using the non-perturbational and perturbational inversion methods (Haney and Tsai, 2015, 2017). Finally, the 1D S-wave velocity profiles were assembled into a pseudo-3D S-wave velocity model. By using the depth gradient of the 1D S-wave velocity profiles, the depth of the crystalline basement beneath each node was determined. We also compared the iso-velocity depths with the crustal thickness inferred from receiver functions and found the 4.2 km/s iso-velocity to be a reasonable representation of the Moho depth. Taking the resulting S-wave velocity model and the crystalline basement and Moho depth maps into account, the following principal conclusions can be drawn:
- Thanks to the close station spacing of the SWATH-D network, our S-wave velocity model contains more details compared to the other available models (e.g., Kästle et al., 2018; Magrin and Rossi, 2020), particularly at shallower depths.
- The crystalline basement depth in the Po, Venetian and Friuli plains ranges between ∼4 and ∼10 km.
- The crystalline basement beneath the Schio-Vicenza fault is shallower (∼5 km) than in its eastern and western regions, implying that the Schio-Vicenza fault can be considered as a prominent structural feature between the Venetian and Friuli plains to the east and the Po plain to the west.
- Comparison of the shallow crustal velocities and the locations of the earthquakes in the southern foothills of the Alps reveals that the seismicity mainly occurs in a narrow velocity band between ∼3.1 and ∼3.6 km/s.
- The map of the iso-velocity-based Moho depth illustrates a N-S trending narrow corridor of thinner crust (∼40 km) beneath the Dolomite Mountains and along the TRANSALP profile, which separates the eastern and western thicker (∼55 km) crustal cores.
- The Moho depth map displays a thickened crust along the boundary between the Friuli plain and the external Dinarides.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article, including the pseudo-3D crustal S-wave velocity model and the Moho and basement depths of the Southeastern Alps, the western part of the external Dinarides, and the Friuli and Venetian plains (SEA-Crust), are publicly accessible at: https://doi.org/10.5281/zenodo.4574022.

AUTHOR CONTRIBUTIONS
AS-B: conceptualization, data curation, methodology, software, writing original draft, writing, review and editing, and visualization. AV: data curation, conceptualization, methodology, software, writing original draft, writing, review and editing, visualization, funding acquisition, and supervision. AA: methodology, writing original draft, writing, review and editing, validation, funding acquisition, and supervision.
SP: validation, writing, review and editing, funding acquisition, and supervision. AlpArray and AlpArray-Swath-D Working Group: design of experiment, data acquisition, data curation, and funding acquisition. All authors contributed to the article and approved the submitted version.
NSSC: Novel Segment Based Safety Message Broadcasting in Cluster-Based Vehicular Sensor Network
Extensive research attention has been devoted to the Vehicular Sensor Network (VSN) owing to its great potential in environment monitoring. Still, it is difficult to diminish the broadcast storm and data collisions in the vehicular sensor environment. Improper broadcasting of safety messages and the simultaneous transmission of packets from multiple vehicles lead to collisions. Our key intention is to overcome these shortcomings in VSN. Hence, we propose Novel Segment based Safety message broadcasting in Cluster (NSSC) based VSN. Our NSSC is mainly concentrated on three successive processes: Cluster Formation, Collision Avoidance and Safety Message Broadcasting. Cluster formation is performed first to sustain a stable vehicular environment. Here, the Variant based Clustering (VbC) scheme is proposed to elect the Cluster Head (CH) and to form clusters. The CH is selected using the Chaotic Crow Search (CCS) algorithm. To mitigate data collisions during transmission between the CH and Cluster Members, we propose the Adaptive Carrier Sense Multiple Access/Collision Avoidance (Ada-CSMA/CA) protocol. Safety message broadcasting adopts the Segment based Forwarder Selection (SFS) scheme, which selects the optimal forwarder using the Fuzzy-Vikor method. Here, an optimal forwarder is selected to broadcast the safety message, which reduces the broadcast storm. To validate the proposed NSSC, we have conducted simulations on the Omnet++ and SUMO simulator based Veins framework. The acquired results are auspicious in terms of the following metrics: reachability, average number of collisions, duplicate data packets, latency, packet delivery ratio and throughput.

I. INTRODUCTION
In a Vehicular Ad-hoc NETwork (VANET), there are certain demands on sensing and transmitting data among vehicles in order to satisfy services like emergency broadcasting and data transmission [1], [2]. To satisfy this demand, VSN has arisen as an effective and reasonable way to sense the vehicular surroundings [3]. It is also notable that VSN is progressing as a new infrastructure for sensing the physical world, specifically in metropolitan regions where large numbers of vehicles are furnished with on-board sensors. The number of vehicles, their velocity and frequent topology changes induce frequent disconnection of communications and transmission losses [4]. To enhance communication in the vehicular sensor environment, clustering is an effective technique which divides the vehicles into sets of clusters [5]. Each cluster is managed by a specific vehicle named the Cluster Head (CH), which collects data from its Cluster Members (CMs) [6]. The CH vehicle is selected according to certain criteria such as mobility, link quality, etc. In several works, the CH is elected using optimization algorithms such as the Moth Flame Optimization (MFO) algorithm [7] and the Hybrid Genetic algorithm. Data collision avoidance is a significant process while collecting data from CMs [8]. Standardized Medium Access Control mechanisms are used to avoid data collisions among vehicles [9]. Clustering plays a vital role in safety message broadcasting in terms of reducing the number of rebroadcasts in the vehicular sensor environment [10].
Safety message broadcasting is significant in vehicular networks in order to preserve safety among vehicles in road areas [11]. Several unforeseen incidents occur frequently on the roads, threatening people's lives [12]. If an accident happens in the environment, a message must be transmitted to all vehicles present in the surroundings. Different broadcasting algorithms are utilized to transmit a safety message to the vehicles whenever an accident occurs in the vehicular sensor environment [13]. The algorithms that are used to broadcast the safety message are listed as follows: Selective Epidemic Broadcast (SEB) [14], Adaptive Emergency Broadcast [15] and Hybrid Relay Node Selection [16]. However, blind broadcasting of safety messages results in a broadcast storm and also induces the loss of safety messages [17]. From the above discussed studies, it is noticed that the vehicular environment still faces many issues, which are summarized as follows:
• Frequent topology changes induce instability in clustering, which tends to cause concurrent cluster formation.
• Data collisions occur during transmission due to multiple vehicles sending their data in the same period of time.
• The broadcast storm is high in safety message broadcasting due to the pursuit of a blind procedure.

A. RESEARCH PURPOSE
VANET has arisen as an auspicious technology offering effective vehicular protection, traffic organization, infotainment and position based services [18], [19]. Nowadays, vehicles require a lot of information about the roads surrounding them for their safety and convenience [20], [21]. Safe driving in the vehicular environment is ensured by leveraging the incorporation of infrastructure nodes, i.e., RSUs, and other nodes including IoT nodes and smartphones [22], [23]. Under this situation, VSN is introduced for supporting communications between roadside sensor nodes and vehicles present on the roads. The safe driving support of VSN incorporates media access control (MAC) protocols, data forwarding schemes, routing protocols for unicast, multicast and broadcast, transport layer protocols, and security services [24]-[26]. The MAC protocol plays a crucial part in increasing the efficiency of any device by managing power consumption [27]. Moreover, VSN comprises two different kinds of sensor nodes: some are deployed on road sides and others are embedded on the vehicles. It creates an end-to-end reliable network for transmitting sensor data congregated from a vehicular environment. Plenty of research applications have emerged in VSN, such as emergency warning and road monitoring. VSN comprises two specific types: Intra Vehicle VSN and Inter Vehicle VSN.
• Intra Vehicle VSN - It is introduced for the purposes of monitoring, control and communication between components and subsystems existing inside a vehicle.
• Inter Vehicle VSN - It is introduced for a vehicle to communicate with neighboring vehicles in order to transmit safety related information.
Intra vehicle VSN is limited in space, whereas inter vehicle VSN is able to communicate with multiple vehicles through Dedicated Short Range Communication (DSRC). Figure 1 shows the VSN architecture with vehicle (active sensor) and Road Side Sensor (RSS) (passive sensor) nodes [28].
However, most research studies have concentrated only on data gathering and fusion based routing in VSN [29], [30]. Yet, there is a need for clustering and collision avoidance, since vehicles have highly mobile behavior which leads to frequent communication losses and collisions due to frequent data transmissions. In addition, broadcasting safety messages and detecting events in the vehicular sensor environment are also significant processes to preserve the safety of vehicular users. Therefore, in this work we have concentrated on clustering, collision avoidance and safety message broadcasting in VSN. To the best of our knowledge, we are the first to integrate the aforesaid processes in VSN, which enhances the performance of the network.

B. RESEARCH CONTRIBUTION
This section deals with the contributions of the proposed NSSC system. The contributions are summarized as follows:
• In order to construct a stable network, we rely on establishing stable clusters in the VSN environment. Our proposed clustering adopts the Variant based Clustering (VbC) scheme, where the Chaotic Crow Search (CCS) algorithm is utilized for CH election.
• In order to achieve high throughput in data transmission, our NSSC carries out collision avoidance with the aid of the Adaptive CSMA/CA algorithm, which adaptively changes the back-off time.
• Our novelty is presented in safety message broadcasting, which mitigates the broadcast storm effectively. For this purpose, NSSC performs the Segment based Forwarder Selection (SFS) method, where the best forwarder is selected through the Fuzzy-Vikor (FV) algorithm.
• To prove the performance of NSSC, we validate it using the following metrics: reachability, average number of collisions, duplicate data packets, latency, packet delivery ratio and throughput.

C. RESEARCH OUTLINE
The outline of our research work is summarized as follows: Section 2 deliberates the prior works in VSN with their limitations. Section 3 demonstrates the problems that occur in previous works related to VSN. Section 4 presents a brief study of our NSSC with our proposed algorithms. Section 5 presents the numerical results obtained from our simulation environment and also compares them with existing methods. Finally, Section 6 concludes our contribution and also offers some comments on our future work.

II. STATE OF THE ART
In this section, prior works related to the vehicular environment are exemplified. Furthermore, it designates the various approaches that have been developed in vehicular networks. This section is further sub-divided to discuss the benefits and demerits of the existing works.

A. CLUSTERING
Ghada et al. [31] have introduced the Double Head Clustering (DHC) method for vehicular networks, which is referred to as a movement based clustering algorithm. Here, CH selection considers multiple metrics: speed of the vehicle, direction, and position. Besides, it also considers other metrics associated with the link quality, including the signal-to-noise ratio (SNR) and the link expiration time (LET). Hence, the DHC method increases cluster stability and efficiency. Regin and Menakadevi [32] have pointed out a dynamic clustering mechanism in vehicular networks based on node density. The Density based Dynamic Clustering (DBDC) mechanism is used to determine the node density. The dynamic clustering process is enabled whenever the node density exceeds a threshold value. Hence, it attains better scalability even in high-density scenarios. Fahad et al.
[33] have offered Grey Wolf Optimization (GWO) to cluster the vehicles in the network. Here, the GWO algorithm is used to select the optimal CH in the vehicular environment, and the CH is elected based on four factors: speed, position, location and direction. Therefore, it increases the clustering efficiency in vehicular networks. Ren et al. [34] have suggested a Unified Framework for Clustering (UFC) in the vehicular environment. The UFC mechanism comprises two major parts: CH selection and cluster formation. CH selection is performed by estimating the probability of being CH, considering the following metrics: link lifetime and link expiration time. Therefore, UFC is adaptable to any network scale. Kittusamy et al. [35] have offered an Enhanced Whale Optimization (EWO) method for vehicular networks. Initially, it constructs the clusters based on the Adaptive Weighted Clustering (AWC) algorithm. Here, EWO is used to elect the CH in the formed clusters. EWO estimates a fitness function for each candidate CH, and the vehicular node which achieves the highest fitness is selected as CH, thus reducing the frequent formation of clusters. Joshua et al. [36] have pointed out a reputation oriented weighted clustering protocol (RWCP) in the vehicular environment. RWCP considers the following metrics in order to stabilize the cluster: direction of vehicles, position, velocity and reputation. Here, the Multi Objective Firefly Algorithm (MOFA) is utilized to enhance the parameters of RWCP. Ishtiaq et al. [37] have presented the Moth Flame Optimizer (MFO) for intelligent clustering in vehicular networks. This algorithm considers the mobility of the vehicle to select the optimal cluster head for the clustering process. For this purpose, it estimates a fitness function for each vehicle present in the network. The vehicle which has the highest fitness is selected as the cluster head, and the elected cluster head forms the clusters in the network. Wang and Chen [38] have presented Efficient Data Gathering and Estimation (EDGE) in VSN. The EDGE method pursues dynamic partition procedures in the VSN using a region quad tree algorithm. It comprises three phases: an adjustment phase, a gathering phase and an estimation phase. In the adjustment phase, grids are merged or split according to the data sensed in each grid. In the gathering phase, vehicular nodes receive a beacon signal to sense their respective grid. In the estimation phase, the air quality index is estimated. Hu et al. [39] have presented the Integrity Content Offloading (ICO) technique in VSN. This work proposes two offloading procedures: direct link and relay assisted path, respectively. In direct link offloading, vehicular nodes directly offload data to the RSU. In relay assisted path based offloading, vehicular nodes send data to the RSU via two-hop relay nodes. Relay vehicles are selected by computing the distance to the sink node within the communication range; finally, the minimum distance node is selected as the relay vehicle to offload data to the RSU. Sadou and Bouallouche-Medjkoune [40] have pointed out message delivery in a Hybrid Sensor and Vehicular Network (HSVN). Herein, sensor nodes are deployed between two RSU nodes. Sensed data from the sensors are provided to the RSU node if there is no vehicle passing in its sensing range. The RSU sends the sensed data to the sink node via vehicular nodes, which are selected using mathematical linear programming. The proposed mathematical linear programming considers the distance metric to select the vehicular node.

B.
COLLISION AVOIDANCE
Al-Absi et al. [41] have introduced an efficient mechanism for vehicular communication. In order to reduce data collisions among different vehicles, a distributed MAC mechanism is introduced. It provides individual time slots to each vehicle and thus reduces collisions effectually. Therefore, it enhances throughput by reducing collisions. Sahoo and Sheu [42] have suggested a collision free MAC for data transmission. Here, CSMA slots are allocated to each node in order to reduce the collisions caused by transmission of packets at the same time. In addition, it also develops 3D Markov chain methodologies to investigate the throughput of the network and to allocate slots to the vehicles. Rajeshwar Reddy and Ramanathan [43] have introduced an efficient MAC layer based data communication in vehicular networks. Here, the CSMA/CA algorithm is utilized to provide time slots to the vehicles. During the allocated time slot period, vehicles transmit their data to their head node, thus reducing data collisions. Nie et al. [44] have offered quality based data gathering in VSN. Here, two algorithms are utilized: Deviation Detection (DD) and Mixed Integer Linear Programming (MILP). The MILP algorithm achieves an optimal solution by collecting all data from the vehicles, whereas the DD algorithm achieves an efficient solution by updating the vehicle frequency and the proportion of vehicles. Yuan et al. [45] have suggested cost effective sensing in VSN. The cost effective sensing model utilizes a probabilistic matrix model to reveal the status of the environment. The matrix factorization technique is used to decrease the amount of uncertainty existing in unsensed data. With the spatio-temporal correlation of the sensed data, the sensing task is allocated only to a small subset of the sensing area. Lyu et al. [46] have offered efficient congestion control in vehicular networks. It utilizes two machine learning algorithms for link perception learning: Naïve Bayes and Support Vector Machine (SVM). Apart from these algorithms, it also utilizes the TDMA method to allocate time slots to the vehicular nodes so that they can transmit their data packets without congestion. Zakaria et al. [47] have proposed a new model for enhancing the throughput of the network through re-configuration processes. Here, the new algorithm contributes to re-routing and channel assignment in wireless mesh networks in order to reduce re-configuration and increase throughput.

C. SAFETY MESSAGE BROADCASTING
Sarmad Shah et al. [48] have pointed out Time Barrier based Emergency Dissemination (TBED) in vehicular networks. In this, emergency data is disseminated through a time barrier mechanism which reduces the dissemination overhead. This method works based on a super node in order to timely disseminate the safety messages. Using this approach, the farthest node rebroadcasts the message, which can cover more distance. Tian et al. [49] have introduced emergency message broadcasting using the Distributed Position based Protocol (DPP) in vehicular networks. Here, an enhanced location based protocol is utilized to broadcast the emergency message in a huge scale vehicular environment. Using this protocol, emergency messages are broadcast only to the interested region, and rebroadcasting of the messages is based on the information incorporated in the communication. More et al. [50] have suggested an Efficient Message Broadcasting (EMB) scheme in vehicular networks.
In this, the emergency message is broadcast via a selected optimal disseminator through the computation of a weight value. Herein, the weight factor is computed using the average speed metric. The vehicle with the best score is selected as the disseminator for emergency messages. Bi et al. [51] have introduced Safety Message Broadcast (SMB) systems in vehicular networks. Multiple MAC layer protocols are investigated in this work, such as cluster based broadcast, neighbor knowledge based broadcast, probability based broadcast, cross layer based broadcast and location based broadcast. Feng et al. [52] have introduced a Safety Message Broadcast Strategy (SMBS) in vehicular networks. The safety message is transmitted by electing an optimal forwarder, which is elected based on the priority of each vehicle. The highest priority vehicle broadcasts the safety message to its neighbor vehicles in order to transmit safety information to all of them. Zhang et al. [53] have suggested an Adaptive Link Quality based Safety Message (ALQSM) dissemination method in vehicular networks. In this, a score oriented priority distribution method is utilized to select an optimal forwarder. This reduces the contention among different vehicles during emergency broadcasting. Chaqfeh et al. [54] have pointed out efficient safety data dissemination in vehicular networks, for which they propose the Multi directional Data Dissemination Protocol (MDDP). MDDP disseminates the safety message based on consideration of the positions of the vehicles. MDDP broadcasts the safety message effectually in order to avoid frequent vehicle accidents. Balador et al. [55] have offered supporting beacon based event driven message transmission in vehicular networks. They propose three distinct methods for event driven message transmission: a dedicated phase for event-driven messages, event-driven message transmission without token, and event-driven message transmission upon token reception. Hoang et al. [56] have presented efficient message dissemination techniques under emergency conditions. Here, the emergency messages are disseminated with the aid of relays present in the network. It also sets time slots to reduce the delay incurred during the emergency dissemination. Nguyen et al. [57] have concentrated on the Store Carry Forward (SCF) scheme for emergency message dissemination. Herein, the source vehicle sends a warning message to the SCF vehicles that exist in its communication range. An SCF vehicle broadcasts the warning message to its neighbors and carries this information until a new neighbor arrives.

D. RESEARCH GAPS IN EXISTING WORKS
This section describes the research gaps of the aforesaid studies related to the vehicular network. Table 1 illustrates the research gaps of the existing works in terms of clustering, collision avoidance and emergency dissemination. From the above discussed studies, it is noticed that none of the works have concentrated on clustering with respect to collision avoidance. These limitations are resolved in our proposed NSSC method with effective algorithms.

III. PROBLEM DEFINITION
In this part, we deliberate in detail the problems that are present in the VSN environment. In this work, we define two major problems: broadcast storm (occurring during safety message broadcasting) and data collisions (occurring during data transmission among vehicles). These problems are discussed in the following.
We initially concentrate on problems caused by partitioning the VSN network [38]. This way of partitioning the network is only applicable to unobstructed scenarios and thus reduces network performance. Distance based relay selection leads to losses in sensed data transmission due to fragile communication between relay vehicles and the RSU [39], [40]. Sensed events are initially classified in vehicular nodes and classified again in the RSU. This way of classifying events leads to high complexity in decision making and also consumes more bandwidth [58]. During safety message broadcasting, vehicular nodes receive many duplicate warning messages, which leads to a broadcast storm [57]. It also induces data collisions during the transmission of packets. In order to resolve the problems that exist in previous VSN works, our proposed NSSC method expects to reach the following objectives:
• Formation of stable clusters in VSN to tackle bottlenecks related to frequent link losses and CH rotations.
• Minimizing collisions during data transmission between CMs and the CH.
• Reducing the broadcast storm while broadcasting safety messages to neighbor vehicles.

IV. NSSC MODEL
This section describes the proposed method in detail with our proposed algorithms.

A. SYSTEM OVERVIEW
The major hurdles in safety message dissemination and data transmission are resolved in the proposed NSSC method. Our network comprises four types of nodes: vehicles with active sensor nodes, RSS (passive sensor) nodes, the Road Side Unit (RSU) and the sink node, as shown in Figure 2. Our network is divided into multiple stable clusters in order to avoid frequent link losses. To attain this, the proposed model implements the VbC scheme, which elects the CH using the CCS algorithm with two different groups of metrics: mobility and connectivity metrics. The mobility metric is updated using the circle variant and the connectivity metric is updated using the gauss variant. To avoid data collisions occurring during the transmission of data between the CH and CMs, our method proposes the Ada-CSMA/CA algorithm, which adaptively changes the back-off time by considering the buffer size. To broadcast the safety message without a broadcast storm, the segment based SFS scheme is carried out, which selects the optimal forwarder using the FV algorithm with the following metrics: node degree, position, forwarding probability and delay.

B. CLUSTERING
Primarily, a cluster is formed using the VbC scheme, which comprises two sequential processes: CH selection and cluster formation.

1) CH ELECTION
The CH is elected using the CCS algorithm (CCSA), a nature-inspired optimization algorithm modeled on the mechanism crows use to hide their food. Our CCSA incorporates chaotic theory in order to avoid local optimal solutions in CH election. Chaotic theory comprises different chaotic maps, from which we select the two best chaotic variants: circle and gauss. To the best of our knowledge, we are the first to use CCSA in VSN to elect the optimal CH. Here, the CH is elected based on two groups of metrics: mobility and connectivity metrics. The mobility (M_m) metrics are relative position, speed and maximum acceleration. The connectivity (C_m) metrics are node connectedness and PDR. These metrics are described as follows:
a) Relative position (R_p) - It is used to measure the position traversed by a vehicle over a certain period, since high mobility of a vehicle affects stable cluster formation.
b) Speed - It is a significant metric to evaluate the speed of the vehicle. Since a high speed vehicle induces frequent selection of CHs, by considering this parameter our VbC scheme avoids frequent CH selection.
c) Maximum Acceleration (Max_a) - It defines the rate of change of speed of the vehicle with respect to time. It is used to support stable cluster formation in the highly mobile environment of our VSN.
d) Node Connectedness (N_c) - The node connectedness metric is used to evaluate the count of the neighbor nodes. This metric is used to select an optimal CH with high communication capabilities.
Since, high speed vehicle induces frequent selection of CH. By considering this parameter, our VbC scheme avoids frequent selection of CH. c) Maximum Acceleration (Max a ) -It defines the rate of change of speed of the vehicle with respect to the time. It is used to support stable cluster formation in high mobile environment of our VSN. d) Node Connectedness (N c ) -Node connectedness metric is used to evaluate the count of the neighbor nodes. This metric is used to select an optimal CH with high communication capabilities. VOLUME 8, 2020 e) PDR -PDR metric is used to know the packet delivery capability of the vehicular node. With the aid of this metric, we select optimal CH with high PDR rate. With the consideration above parameters, CCSA selects an optimal CH. CCSA begins with setting adjustable parameters and crow positions (y) are initializes randomly. Fitness function is evaluated using below expression: The new position of the crow is updated using the two chaotic variants such as circle and gauss. Here, circle variant is used to update the mobility metric and gauss variant is used to update the connectivity metric. Circle variant is expressed as follows: where, d = 0.2, e = 0.5. Gauss variant is expressed as follows: With the usage of these two variants, CCSA updates the crow positions. These variants are performed well in updating of crow new position compared to other variants in the chaotic theory and also provide high accuracy in CH election. Algorithm 1 illustrates the CH election procedure of proposed CCSA briefly. From this procedure, VbC scheme elects optimal CH. 2) CLUSTER FORMATION After completion of CH election, stable cluster is formed in order to avoid frequent link losses. Elected CH broadcasts the election message to its neighbor nodes. Neighbor node transmit join request to its CH. CH receives join request and estimates link stability of the neighbor node. Highest link stability node only selected as CM. Link stability is estimated using the below expression, where, di represent the distance between the CH and neighbor node. f (t) represents the duration of the link being connected over time t. This way of forming clusters results in stable communication between CH and CM. As a result, we avoid data losses and also enhance the throughput and PDR in data transmission. C. COLLISION AVOIDANCE In order to reduce the data collisions occurred during the transmission between CH and its CMs, our NSSC method proposes Ada-CSMA/CA algorithm. Proposed Ada-CSMA/CA algorithm executed in the CH which allocates back-off time for each member in adaptive manner. Here, back-off time for each member is allocated based on the buffer size of each member. Buffer size parameter is used to evaluate the number of the packets to be transmitted in upcoming transmission. It can be evaluated using below expression, where, P n represents the total number of packets and Q l represents the total queue length. In addition to it, CH also changes the contention window size adaptively based on the presence of vehicles in each cluster. Hence, our NSSC method performed well for both high and low density scenario. Figure 3 demonstrates the procedure of Ada-CSMA/CA algorithm where three CM are considered. From the above figure, it is noticed that CM-1 has high buffer size. Hence, it has low back-off time to transmit its packet to the CH. And then, CM-3 has very low buffer size, so that it has high backoff time compared to the other members CM-1 and CM-2. 
DIFS (DCF Inter-Frame Spacing) is defined as the time delay before packets are transmitted to the destination, and SIFS (Short Inter-Frame Spacing) as the time delay after a packet has been transmitted to the CH. Transmitting sensed data to the CH in this way avoids data collisions during transmission and thus enhances the performance of NSSC in terms of successful data transmission. D. SAFETY MESSAGE BROADCASTING Safety message broadcasting is a significant process in a VSN for providing safe driving to vehicular users. To address the broadcast-storm problem in safety message broadcasting, our proposed method executes the SFS scheme, which selects an optimal forwarder to broadcast the safety message. If a vehicle meets with an accident on the road, it applies the SFS scheme to broadcast a safety message to its neighbors. The SFS scheme first segments the transmission range into a square region, as depicted in Figure 4; it then divides the square into front and back regions, which are further split into left and right diagonal regions. The prime benefit of segmenting the transmission range into a square is to avoid transmissions of the same data packet by neighbor vehicles (close to each other) located in different segments. We focus on the back region of the segmented square, since the safety message is needed by the vehicles that are moving towards the accident area. The back region comprises three portions: a center portion and two side portions. Figure 5 depicts the safety message broadcast mechanism of our proposed NSSC. Vehicles present in the center portion are the most appropriate for carrying out the safety message broadcast with a reduced number of transmissions. From the center portion, we select the optimal forwarder using the FV algorithm. If the center portion does not contain any vehicle, we select forwarders from the side portions in a parallel manner, which reduces the latency of safety message broadcasting. The FV algorithm is a fuzzy hybrid Multi-Criteria Decision Making (MCDM) model, in which fuzzy logic is used to compute a weight for each criterion (C). We consider four metrics for selecting the optimal forwarder: node degree, position, forwarding probability, and delivery delay. These metrics are described as follows: 1) Node degree (C_1): It is a significant metric in forwarder selection, since it quantifies the ability of a vehicle to broadcast the safety message to a wide range of vehicles. 2) Position (C_2): It is used to track the current position and speed of the vehicle, since selecting a high-speed vehicle leads to reduced reachability. 3) Forwarding probability (C_3): It is used to select a forwarder with high forwarding ability; this metric is considered because most vehicles do not have the capability to forward the message. 4) Delivery delay (C_4): This metric estimates the delay incurred by a vehicle in broadcasting the safety message and plays a vital role in reducing the latency of safety message broadcasting. The decision matrix of the FV algorithm is X = (x_{ij}), where the rows A_1, ..., A_m indicate the alternatives to be ranked and the columns C_1, C_2, C_3, C_4 indicate the evaluation criteria. The fuzzy weight value for each criterion is calculated based on the importance of that criterion; the weight of criterion j is estimated as w_j = S_j / Σ_j S_j, where S_j indicates the standard deviation of criterion j.
It can be estimated as S_j = sqrt((1/Z) Σ_{i=1}^{Z} (x_{ij} - x̄_j)²), where x̄_j = (1/Z) Σ_{i=1}^{Z} x_{ij} and Z represents the total number of alternatives. The fuzzy best and worst values are estimated as x_j^+ = max_i x_{ij} (9) and x_j^- = min_i x_{ij} (10). Linear normalization of the fuzzy matrix is represented by the score values H_i and G_i, given by H_i = Σ_j w_j (x_j^+ - x_{ij})/(x_j^+ - x_j^-) and G_i = max_j [w_j (x_j^+ - x_{ij})/(x_j^+ - x_j^-)]. Based on these expressions, the VIKOR index is computed as Q_i = u (H_i - min_i H_i)/(max_i H_i - min_i H_i) + (1 - u)(G_i - min_i G_i)/(max_i G_i - min_i G_i), where u represents the weight of the strategy of maximum group utility and is assigned the value 0.5. These values are sorted to rank each alternative, and finally the top-ranked vehicle is elected as the forwarder that transmits the safety message to its neighbor vehicles. Broadcasting the safety message in this way avoids the broadcast storm, reduces latency, and also enhances reachability. V. EXPERIMENTAL RESULTS In this section, a detailed analysis and investigation of the proposed NSSC method is presented. For this purpose, the section is divided into four parts: simulation scenario, validation metrics, comparative analysis, and results discussion. A. SIMULATION SCENARIO Our NSSC method is implemented in OMNeT++ (network simulator) and SUMO (traffic simulator) with the Veins model. Veins is able to execute SUMO and OMNeT++ in parallel and incorporates comprehensive models that make vehicular network simulations as realistic as possible without sacrificing speed. The parameters used in the simulation environment are listed in Table 2. Figure 6 shows the simulation environment of the proposed NSSC method, comprising passive sensor nodes, vehicles with active sensor nodes, an RSU, and a sink node. The simulation environment comprises 100 vehicular nodes, 200 sensor nodes, 1 RSU, and 1 sink node over a simulation area spanning 2.5 km. The vehicular nodes in the VSN transmit their sensed data to the RSU or sink node via one-hop communication through the cluster-head node. In the simulation network, vehicles follow TraCI mobility to move along the roads and communicate through the IEEE 802.11p standard, which offers high potential, including a large data rate and low latency. B. VALIDATION METRICS The performance of NSSC is evaluated using six validation metrics: reachability, average number of collisions, duplicate data packets, latency, packet delivery ratio, and throughput. 1) REACHABILITY This metric demonstrates the capability of the method in terms of the number of vehicles that become aware of an event. In general, reachability is defined as the proportion of vehicles that successfully receive the broadcasted data packets (S_{r,v}) with respect to the total number of reachable vehicles (T_v): Reachability = S_{r,v} / T_v (14). 2) AVERAGE NUMBER OF COLLISIONS This metric signifies the average number of collided data packets and measures the successful-transmission potential of the proposed method: Average Number of Collisions = C_p / t_p (15), where C_p indicates the number of collided packets and t_p the total number of transmitted packets. 3) DUPLICATE DATA PACKETS This metric is significant for investigating the robustness of the proposed safety message broadcasting scheme. It designates the average number of duplicated data packets received by each vehicle within the safety-zone region of the VSN. 4) LATENCY This is a noteworthy metric for estimating the efficacy of the NSSC safety message broadcasting model in terms of delay.
This metric represents the time a broadcasted data packet spends traveling from its origin vehicle to the designated vehicle through the network: Latency = P_s / R_t (16), where P_s indicates the packet size and R_t the transmission rate. 5) PDR This metric is used to compute the performance of NSSC in terms of successful data delivery. It represents the ratio of the packets successfully received at the destination vehicle (S_{d,v}) to the number of packets generated at the source vehicle (t_{s,v}): PDR = S_{d,v} / t_{s,v} (17). 6) THROUGHPUT This metric designates the amount of data effectively transferred between the source and destination vehicles in a given time period and illustrates the efficacy of the proposed method in terms of packet transmission. It is generally measured in bits per second or packets per second: Throughput = N_b / T (18), where N_b represents the number of packets transmitted in a specific time period T. C. COMPARATIVE ANALYSIS This section compares the simulation results of the existing methods and our proposed method using the six validation metrics discussed above. We compare our proposed method with the existing methods EDGE [38], ICO [39], HSVN [40], TBED [48], and SCF [57]. 1) ANALYSIS ON REACHABILITY The reachability metric is used to evaluate the performance of the safety message broadcasting scheme of the proposed NSSC; it is simulated for a varying number of vehicles. Figure 7 compares the reachability of NSSC with that of the existing methods EDGE, TBED, and SCF. The NSSC method performs better, attaining maximum reachability especially as the number of vehicles increases. This is because it pursues the novel SFS scheme, which selects the best forwarder to broadcast the safety message to neighboring vehicles and thereby avoids the broadcast-storm problem; for this purpose it employs the FV algorithm, which performs well in selecting the optimal forwarder. By contrast, existing methods such as EDGE and SCF achieve lower reachability than NSSC because of their poor performance in safety message broadcasting. SCF achieves low reachability since it simply broadcasts the safety message to all vehicles present in the emergency zone, which leads to a broadcast storm; EDGE and TBED likewise achieve lower reachability owing to their lack of emergency detection in the surroundings, which reduces reachability compared with the proposed NSSC. 2) ANALYSIS ON DUPLICATE DATA PACKETS The duplicate-data-packet metric reflects the robustness of NSSC in broadcasting the safety message. To validate this metric, we simulated a varying number of vehicles. Figure 8 demonstrates that the performance of NSSC is better than that of the existing methods, owing to our novel SFS scheme: if an accident or emergency situation becomes known to a vehicle, it segments its communication range into a square and selects the optimal forwarder from the back region of the square segment, since the safety message must be dispatched to the vehicles moving towards the emergency region. Only the selected forwarder then transmits the safety message to the neighboring vehicles, which reduces the reception of duplicate data packets. By contrast, the EDGE, TBED, and SCF methods exhibit a higher duplicated-packet reception ratio than the proposed NSSC, because the safety message is broadcast by all vehicles present in the emergency area; as a result, the number of duplicate data packets received increases drastically.
From this analysis, we conclude that our method achieves lower duplicate-data-packet reception than the existing methods. 3) ANALYSIS ON LATENCY Latency is a noteworthy metric in safety message broadcasting and evaluates the performance of NSSC in terms of time. For this purpose, we validated the latency for a varying number of vehicles in the network. Figure 9 compares the simulation results of NSSC and the existing methods in terms of latency. The performance of NSSC is better than that of the existing methods, including EDGE, SCF, and TBED, because of the optimal forwarder selection in the SFS scheme. For this purpose, the FV algorithm is executed, which considers the following metrics for selecting the optimal forwarder: node degree, position, forwarding probability, and delay. The delay metric, in particular, reduces the broadcasting latency, so this way of selecting the forwarder results in low latency in the safety message broadcasting of NSSC. Meanwhile, existing methods such as SCF and EDGE exhibit high latency in safety message broadcasting owing to the lack of optimal forwarder selection. By contrast, the TBED method achieves lower latency than SCF and EDGE, since it selects the best forwarder node to broadcast the safety message; even so, the latency of TBED remains higher than that of NSSC owing to its more limited choice of parameters. Overall, the latency performance of NSSC is better than that of SCF, EDGE, and TBED. Table 3 compares the efficiency of the safety message broadcasting of the proposed NSSC and the existing methods SCF, EDGE, and TBED. 4) ANALYSIS ON AVERAGE NUMBER OF COLLISIONS The average-number-of-collisions metric plays a substantial role in sensed-data transmission and signifies the performance of NSSC with respect to collisions. This metric is validated by varying the number of vehicles. Figure 10 compares NSSC with the existing methods EDGE, ICO, SCF, TBED, and HSVN. The proposed NSSC method performs better than the existing methods because it proposes the Ada-CSMA/CA-based collision-avoidance mechanism, which is executed by each CH in the network. Each CH allocates time slots to its members with the aid of Ada-CSMA/CA, which adaptively changes the back-off time for each member based on its buffer size. This effectively reduces data collisions and provides a proper number of slots to each member without waste or contention. Meanwhile, existing methods such as ICO, SCF, TBED, and HSVN do not employ proper data-transmission and collision-avoidance techniques; they transmit sensed data without considering the possibility of collisions, which induces more losses in the transmitted data. Likewise, EDGE also suffers high data collisions owing to the lack of proper data transmission between vehicles. Therefore, our NSSC performs better at reducing the average number of collisions than the existing methods. 5) ANALYSIS ON PDR The PDR metric is important for validating the performance of NSSC with regard to successful packet delivery. For this validation, the PDR is evaluated under a varying number of vehicles in the network. Figure 11 shows the performance comparison of NSSC in terms of PDR; it can be seen that the PDR increases with the number of vehicles.
The reason is that the growth in the number of vehicles improves the probability of vehicles being connected to the network, which in turn avoids link losses. The PDR of NSSC is high because our method establishes stable clustering using the VbC scheme: the VbC scheme selects the best CH using the CCSA algorithm, and the cluster is then formed in a stable manner. Performing clustering in this way reduces frequent link losses between the CH and CMs, which increases the PDR. In contrast, existing methods such as EDGE, ICO, SCF, TBED, and HSVN obtain a lower PDR than NSSC because of recurrent link losses due to the highly mobile nature of the vehicles; frequent link losses reduce packet delivery by inducing packet losses. As a result, the existing methods achieve a lower PDR than NSSC. 6) ANALYSIS ON THROUGHPUT The throughput metric measures the performance of NSSC in terms of data transmission over the network. To measure the throughput, we ran simulations with varying vehicle densities. Figure 12 compares the simulation results of the existing methods and NSSC. The throughput increases with vehicular density, owing to the enhanced data-transmission capability as more vehicles join. Our NSSC method forms clusters based on the link-stability metric, which increases the number of successful data transmissions in the designed network, and the CH is elected based on the mobility and connectivity metrics, which avoids frequent CH rotations. Hence, NSSC reduces the frequent re-formation of clusters by forming stable clusters and therefore achieves higher throughput than the existing methods. Meanwhile, the existing methods EDGE, ICO, SCF, TBED, and HSVN do not rely on link-oriented parameters when transmitting packets to the designated node; consequently, frequent link losses occur during packet transmission, and these methods achieve lower throughput than the proposed NSSC. Table 4 compares the network-performance efficiency of the proposed NSSC and the existing methods EDGE, ICO, SCF, TBED, and HSVN. Figure 13 shows the throughput as a function of vehicle speed. The performance of the proposed NSSC remains better as the vehicle speed increases, because of our proposed CCSA-based clustering process: it selects the optimal cluster head and forms the clusters effectively, which avoids frequent link breakages among the vehicles and thus improves performance during data transmission. In addition, we transmit the sensed data packets to the sink node or RSU only through the cluster-head node, and our proposed safety message dissemination also avoids collisions during dissemination; transmitting the data packets in this way provides better throughput in the network. On the contrary, the existing methods ICO, SCF, HSVN, and TBED achieve lower throughput as the vehicle speed increases, because of their poor clustering process, or the lack of a clustering process, in data packet transmission; moreover, their emergency dissemination processes also produce high data collisions. Hence, these methods achieve lower throughput than our NSSC method.
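As a concrete reference for how the validation metrics used in the above comparisons can be computed from simulation traces, the following minimal sketch is our own illustration; the counter values and function names are assumptions, not part of the paper's toolchain.

def reachability(received_broadcast, reachable_total):
    # Eq. (14): fraction of reachable vehicles that got the broadcast.
    return received_broadcast / reachable_total

def avg_collisions(collided, transmitted):
    # Eq. (15): collided packets over total transmitted packets.
    return collided / transmitted

def pdr(delivered, generated):
    # Eq. (17): delivered packets over generated packets.
    return delivered / generated

def throughput(bits_transferred, duration_s):
    # Eq. (18): bits successfully transferred per second.
    return bits_transferred / duration_s

# Hypothetical counters, as might be collected from an OMNeT++ run.
print(reachability(received_broadcast=87, reachable_total=100))   # 0.87
print(avg_collisions(collided=40, transmitted=1000))              # 0.04
print(pdr(delivered=930, generated=1000))                         # 0.93
print(throughput(bits_transferred=4.2e6, duration_s=60))          # 70000.0 bit/s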
From Table 4, we notice that our NSSC method outperforms the other existing methods EDGE, ICO, SCF, TBED, and HSVN, which demonstrates the efficiency of our NSSC method with respect to network performance. D. RESULTS DISCUSSION This part discusses the results obtained from the simulations. Based on the validation results, the proposed NSSC performs better than the other techniques EDGE, ICO, SCF, HSVN, and TBED. Our NSSC method increases reachability by up to 30% compared with the existing EDGE and SCF methods, owing to the fact that NSSC is based on the SFS scheme, which competes well in reachability. The NSSC method reduces duplicate data packets by up to 50% compared with the existing EDGE and SCF methods, due to the use of the FV algorithm, which selects the optimal forwarder to broadcast the safety message. The main hurdle in data transmission is collisions, which NSSC reduces by up to 30% compared with the existing methods EDGE, ICO, SCF, TBED, and HSVN; this is achieved by the proposed Ada-CSMA/CA-based collision-avoidance technique, which allocates slots to each CM based on its buffer size. Latency is a major issue in safety message broadcasting and is decreased by the NSSC method by up to 55% compared with the TBED, EDGE, and SCF methods, since we select the optimal forwarder by considering multiple parameters, including the delay parameter; this results in faster broadcasting of the safety message to the neighboring vehicles. Our NSSC enhances the PDR and throughput by up to 30% compared with the other techniques, including the EDGE, ICO, SCF, TBED, and HSVN methods, owing to the formation of stable clusters over the network, which avoids frequent link losses between vehicles. VI. CONCLUSION As a step toward the design of an efficient VSN with reduced data collisions and broadcast storms, this paper proposes NSSC, which comprises three major processes: clustering, collision avoidance, and safety message broadcasting. To establish stable clustering in the VSN, the NSSC method introduces the VbC scheme, which selects the optimal CH using the CCSA algorithm and forms clusters under consideration of the link-stability metric. Avoiding data collisions is a major hurdle in a VSN, since data transmission is frequent; to this end, NSSC executes the Ada-CSMA/CA algorithm, which allocates back-off times based on buffer size and also adjusts the contention-window size based on the number of vehicles in the cluster, improving the performance of NSSC in high-density scenarios. Safety message broadcasting is executed through the SFS scheme, which selects the optimal forwarder using the FV algorithm within the squared region; as a result, the broadcast storm is avoided during safety message transmission. Finally, we validate the performance of NSSC using six validation metrics: reachability, PDR, throughput, latency, duplicate data packets, and average data packet collisions. The performance of NSSC is compared with that of five existing methods, EDGE, SCF, ICO, TBED, and HSVN; from the comparison results, we conclude that our NSSC outperforms the existing methods.
Status of background-independent coarse-graining in tensor models for quantum gravity
A background-independent route towards a universal continuum limit in discrete models of quantum gravity proceeds through a background-independent form of coarse graining. This review provides a pedagogical introduction to the conceptual ideas underlying the use of the number of degrees of freedom as a scale for a Renormalization Group flow. We focus on tensor models, for which we explain how the tensor size serves as the scale for a background-independent coarse-graining flow. This flow provides a new probe of a universal continuum limit in tensor models. We review the development and setup of this tool and summarize results in the 2- and 3-dimensional case. Moreover, we provide a step-by-step guide to the practical implementation of these ideas and tools by deriving the flow of couplings in a rank-4-tensor model. We discuss the phenomenon of dimensional reduction in these models and find tentative first hints for an interacting fixed point with potential relevance for the continuum limit in four-dimensional quantum gravity. I. INVITATION TO BACKGROUND-INDEPENDENT COARSE-GRAINING IN TENSOR MODELS FOR QUANTUM GRAVITY The path integral for quantum gravity takes center stage in a diverse range of approaches to quantum spacetime. It is tackled either as a quantum field theory for the metric [1][2][3][4][5][6], or in a discretized fashion with a built-in regularization [7][8][9][10][11][12][13][14][15][16][17][18][19]. The latter approach, relying on unphysical building blocks of space(time), provides access to a physical space(time) only when a universal continuum limit can be taken. Universality [20][21][22] is key in this setting, as it guarantees independence of the physics from unphysical choices, e.g., in the discretization procedure, i.e., the shape of the building blocks. To discover universality, background-independent coarse-graining techniques are a well-suited tool, as universality arises at fixed points of the coarse-graining procedure. The notion of "background-independent coarse graining" at first glance appears to be an oxymoron and suggests this review should be extremely short. After all, in order to coarse grain, one first needs to define what one means by "coarse" and by "fine". Intuitively one would expect these notions to rely on a background. In particular, a definition of ultraviolet and infrared, key to the setup of Renormalization Group (RG) techniques, seems to require a metric, i.e., a geometric background. Yet, with RG techniques now playing an important role in different quantum-gravity approaches, coarse-graining techniques suitable for a setting without a distinguished background have successfully been developed [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] and applied to various quantum-gravity models. In this review, we will focus on the developments kicked off in [13,[39][40][41][42]], and introduce the key concepts behind a background-independent RG flow and the associated notion of coarse graining. In particular, we will focus on the development and application of these tools to tensor models. Tensor models are of interest for quantum gravity both as a way of exploring the partition function [13,[43][44][45]] directly and through a conjectured correspondence of specific tensor models to aspects of a geometric description in the context of the SYK model [46][47][48].
In both settings, the large N limit, where N is the tensor size, is of key interest, and physical results are extracted in the limit N → ∞. In their simplest version, which is of particular interest to quantum gravity, tensor models are 0-dimensional theories, i.e., there is no notion of spacetime in the definition of the models. Instead, the dual interpretation of the interactions in tensor models is that of discrete building blocks of space(time), cf. Fig. 1. In this interpretation of tensor models through the graphs dual to the Feynman diagrams, the building blocks are interpreted as pieces of flat space(time); curvature is accordingly localized at the hinges. The dual representation of tensors is in terms of building blocks of geometry. The d indices of a rank-d tensor are associated to the (d-2)-subsimplices of a (d-1)-simplex. For instance, for rank 3, the indices are associated to the edges ((d-2)-subsimplices) of a triangle ((d-1)-simplex), cf. Fig. 1. Correspondingly, in the rank-4 case, each index is associated to one of the four faces of a tetrahedron, cf. Fig. 2. When two tensors are contracted along one index, the corresponding (d-1)-simplices share a (d-2)-simplex; e.g., in the rank-3 case, two triangles are glued along an edge, cf. Fig. 3, and in the rank-4 case, two tetrahedra are glued along a face. Allowed interaction terms are positive powers of the tensors that contain no free indices. This means that each (d-2)-subsimplex is glued to another (d-2)-subsimplex. Therefore the interactions correspond to d-dimensional building blocks of space(time), e.g., tetrahedra for a fourth-order interaction in the rank-3 case, cf. Fig. 1. The propagator of the theory identifies all d indices of two tensors, corresponding to a gluing of one (d-1)-simplex to another, e.g., a gluing of two triangles along their faces. Accordingly, the terms in the Feynman diagram expansion of tensor models have a dual interpretation as simplicial pseudomanifolds. In other words, the combinatorics of tensor models encode dynamical triangulations.
FIG. 3. The invariant T_{ijk}T_{ijl}T_{mnl}T_{mnk}, depicted on the left, is associated to the gluing of four triangles (center) into a building block of 3-space (right). The contraction of common indices is associated to the gluing of triangles along common edges.
In the simplest case, when no additional rules are imposed on the gluing, Riemannian pseudomanifolds are generated. The inscription of local lightcones inside the building blocks, such that a consistent notion of causality can emerge and the pseudomanifold is Lorentzian, requires additional rules for the gluing and more than one type of building block [9][10][11][57]. There are no experimental hints indicating that spacetime is a simplicial pseudomanifold; accordingly, it is assumed to be a continuum manifold. In particular, while the presence of physical discreteness close to the Planck scale could be compatible with all observations to date, one would not expect a naive discretization, as it arises from tensor models, to actually be physical. Instead, this form of discreteness should be regarded merely as a regularization of the path integral. In order to take the continuum limit in tensor models, the number of degrees of freedom, encoded in the tensor size N, must be taken to infinity. In [42,43,[58][59][60][61][62]] it was shown that models of real (complex) tensors with an O(N) ⊗ O(N) ⊗ ... ⊗ O(N) (U(N) ⊗ U(N) ⊗ ... ⊗ U(N))
symmetry admit a 1/N expansion, where N is the size of the tensors. Here, each symmetry group in the above product acts on exactly one of the indices of the tensor. Due to the existence of a 1/N expansion, these are viable candidates in the search for a physical continuum limit by taking N → ∞. Yet, simply taking N → ∞ is not sufficient in order to obtain a physical continuum limit: the microscopic properties and structure of the building blocks of the model are not taken to be physical, but only a discretization/regularization. Different microscopic choices can be made that should not leave an imprint on the continuum physics, such as, e.g., the shape of the building blocks. Accordingly, the continuum limit should be universal. Universality is achieved at fixed points of the Renormalization Group flow. Therefore, an RG flow must be set up for these models. Unlike in quantum field theories defined on a background, no local, i.e., geometric notion of scale is available. In fact, the only notion of scale is the size of the tensors, N. Using the tensor size N as a scale agrees with the intuitive notion of coarse graining, also underlying formal developments such as the a-theorem [63]: coarse graining leads from many degrees of freedom (large N) to fewer, effective degrees of freedom (small N). Therefore, a pregeometric RG flow is set up in the tensor size N, where a universal continuum limit can then be discovered as an RG fixed point. This point of view was advocated in [13,39] and formally developed and benchmarked in [40,41,64,65]. In the dual picture, the lattice spacing needs to be taken to zero in such a way that the correlation length on the lattice diverges. Then, microscopic details of the setup become irrelevant. This is possible at a higher-order phase transition, linked to a fixed point in the space of couplings. The intuition behind these ideas can be tested in the two-dimensional case, where the double-scaling limit of matrix models [66][67][68][69], which is a universal continuum limit, is obtained by taking N → ∞ while tuning the coupling to a critical value as a power of N. This is completely analogous to the case of continuum RG flows, where universal critical behavior with diverging correlation length is tied to RG fixed points, near which couplings scale with particular powers of the scale. Specifically, the double-scaling limit in matrix models with coupling g is achieved by taking N → ∞ and g → g_crit, while holding
(g - g_crit)^{5/4} N = const,   (1)
which can be rewritten in the form g(N) = g_crit + const^{4/5} N^{-4/5}. This immediately brings to mind the linearized scaling of couplings close to RG fixed points, which is given by the scale raised to the power -θ, with the critical exponent θ. Note that there are arguments suggesting that quantum gravity should be discrete. One might interpret this as implying that there is no need to take the continuum limit in tensor models, and one can instead even work at finite N. Yet, discreteness is actually a subtle issue in quantum gravity. As discussed in more detail, e.g., in [70], kinematical and dynamical discreteness are not the same thing in quantum gravity, and discreteness can be an emergent property of the physical continuum limit. On the other hand, a simple implementation of discreteness in the sense of a cutoff potentially features the same breakdown of predictivity at scales near the cutoff that effective field theories do.
Specifically, for tensor models there are infinitely many interaction terms compatible with the symmetries of a model. The continuum limit is a way of imposing predictivity in a model by reducing the number of free parameters characterizing its dynamics to finitely many. In the RG language, this is linked to the fact that fixed points feature only finitely many relevant directions. In the language of critical phenomena, one has to tune only finitely many parameters to approach criticality in the sense of a higher-order phase transition. In this spirit, we aim at discovering a universal continuum limit in tensor models for quantum gravity such that both independence of unphysical microscopic details and predictivity are guaranteed. We leave open the question whether these models feature emergent discreteness once the continuum limit is taken, but merely point out that taking the continuum limit does in fact not preclude the possibility of emergent, physical discreteness. In summary, to discover a universal continuum limit, at which a physical spacetime could emerge from discrete building blocks of spacetime, we must discover universal critical points. These are linked to RG fixed points. In the absence of a background, the only scale available for coarse graining is the tensor size N. As we will explain in the next sections, setting up an RG flow in N is both conceptually meaningful and feasible in practice. This review is structured as follows. In Sec. II we introduce the conceptual basics of background-independent coarse graining. We provide an overview of how to implement these ideas in practice and how to set up a flow equation in Sec. III. In Sec. IV we discuss in detail how scaling dimensions can be derived in a setting without a background, which translates to the absence of physical length scales and corresponding units that would define canonical dimensions. We provide an overview of the benchmark case of two dimensions in Sec. V, where quantitatively robust results on the well-known continuum limit can be achieved using our flow equation. In Sec. VI we summarize results in the rank-3 case, where several RG fixed points give access to a dimensionally reduced continuum limit. We also highlight a recently discovered candidate for a fixed point which might potentially turn out to be relevant for three-dimensional quantum gravity. To provide step-by-step instructions on how to set up and evaluate RG flows in tensor models, we present the first study of a rank-4 model with these tools in Sec. VII. We discover several universality classes featuring dimensional reduction. As a hint of the promise our method could have, we unveil tentative indications for a universality class that might potentially be linked to four-dimensional quantum gravity. In the outlook and conclusions, Sec. VIII, we advocate that progress towards a comprehensive understanding of quantum gravity could be accelerated by strengthening the effort to bridge the gap between different approaches to quantum gravity. We discuss in particular how continuum studies of asymptotic safety, Monte Carlo simulations of (causal) dynamical triangulations, and FRG studies of tensor models could provide a link to phenomenology and particle physics, while allowing us to probe features of emergent geometries and enabling us to link the discrete and continuum side via a universal transition.
II. CONCEPTUAL BASICS: BACKGROUND-INDEPENDENT RENORMALIZATION GROUP FLOW IN GRAVITY Renormalization Group techniques are playing a role in several different approaches to quantum gravity. This includes the asymptotic-safety program [4,5], the continuum limit in spin foams [23][24][25][26][27][28][29][32][33][34][35][36] and Hamiltonian RG flows in canonical loop quantum gravity [37], tensorial (group) field theories [52,71], as well as holographic RG flows in the context of the AdS/CFT conjecture [72]. Yet, at first glance, quantum gravity would appear to be the one fundamental interaction to which RG techniques are not easily applicable. The reason lies in the dichotomy of background independence and local coarse graining. While the results obtained with a local coarse-graining formulation can be made background independent, see, e.g., [30], the RG flow itself necessarily relies on an (auxiliary) background if the flow has the interpretation of a local coarse graining. A more direct reconciliation of RG techniques with background independence is provided by a nonlocal form of coarse graining: RG flows, in agreement with the a-theorem [63], connect descriptions with many degrees of freedom to effective descriptions of the same system based on fewer degrees of freedom. This idea can be realized both in a local and a nonlocal form. The latter is directly applicable to tensor models for quantum gravity. These are defined without any notion of spacetime, metric or locality, yet they come with a measure of the number of degrees of freedom, namely the tensor size N. Coarse graining therefore corresponds to integrating out subsequent "layers" of the tensors (rows and columns in the matrix-model case), thereby connecting a description at large N with an effective description at small N. In particular, such coarse-graining techniques allow us to search for a well-defined large-N limit, where the dynamics stays invariant under the step from N to N+1, such that the limit N → ∞ can be taken. In this limit, one can hope for quantum space(time) to emerge from tensor models. Note also that while local coarse-graining techniques typically rely on Riemannian signature, raising the difficulty of connecting back to the Lorentzian case of interest for physics, a nonlocal coarse graining does not rely on a momentum cutoff. Accordingly, a more direct search for a universal continuum limit for Lorentzian models could become possible in this setup. This includes applications of the FRG to tensor models dual to causal dynamical triangulations [73] as in [74], as well as the application of coarse-graining techniques to the link matrix in causal sets [75]. We will now explain how to implement these ideas in practice in the form of a flow equation. One can view the flow equation as a reformulation of the path integral in terms of a functional differential equation. The search for a continuum limit in the path integral then becomes the search for a well-defined ultraviolet (in an appropriate sense) solution of the flow equation. At a completely general and formal level, the derivation of the flow equation from the path integral works as follows: one introduces a new term into the exponential in the generating functional that is quadratic in the field and depends on some external parameter, which we will call K here. For now, we leave this parameter completely general and do not provide any physical interpretation associated with it.
It is simply to be thought of as a "sieve" on the space of field configurations, letting through only a subset of configurations. The generating functional depends on K and is denoted by Z_K; schematically,
Z_K[J] = ∫ Dϕ exp( -S[ϕ] - (1/2) Tr[ϕ R_K ϕ] + Tr[J ϕ] ),   (3)
where S[ϕ] is a given microscopic action, J is an external source, ϕ denotes the random fields, and R_K is the K-dependent kernel of the suppression term. The trace is to be interpreted in a suitable way for the model at hand, i.e., it signifies a momentum integral and a trace over internal indices in standard QFTs on a background, and an appropriate summation over indices in the discrete case, e.g., for tensor models. We do not write indices for simplicity, but the fields are not necessarily scalars. As a function of the parameter K, a subset of configurations in the generating functional is suppressed, such that in the limit K → ∞ all configurations are suppressed. Conversely, in the limit K → 0 the unmodified generating functional is recovered. Since the suppression term is quadratic in the field, ∂_K Z_K can be expressed in terms of the two-point function,
∂_K Z_K[J] = -(1/2) Tr[ ∂_K R_K · δ²Z_K / (δJ δJ) ].
For the modified Legendre transform
Γ_K[φ] = sup_J ( Tr[J φ] - ln Z_K[J] ) - (1/2) Tr[φ R_K φ],
with φ = ⟨ϕ⟩, this implies
∂_K Γ_K = (1/2) Tr[ (Γ_K^{(2)} + R_K)^{-1} ∂_K R_K ],   (6)
which is known as the functional renormalization group (FRG) equation. For the case of a continuum QFT on an (auxiliary) background it was derived in [76], see also [77,78]; it was pioneered for gauge theories in [79] and for gravity in [3]. Up to here, the derivation of the flow equation from the path integral is just a formal "trick" that can be performed with any (functional) integral: instead of performing the integral "all at once", one introduces the exponential of a quadratic term that depends on an external parameter. This allows one to derive a differential equation that encodes how the result of the integral reacts to changes in the parameter. As long as the suppression term is quadratic in the field, an equation which is structurally of the form Eq. (6) follows directly from the definition Eq. (3). The question to address in a physics setting is whether any physical meaning can be given to the external parameter and consequently to the ensuing differential equation. For instance, in local field theories, introducing an external parameter that does not lead to a notion of local coarse graining is not expected to be fruitful. In such cases, the modes that remain after integrating out some "shells" of modes do not contain physically relevant degrees of freedom. Deriving effective field theories for those degrees of freedom might thus be an interesting computational exercise, but is presumably not useful for answering physical questions. The notion of UV/IR is therefore key to making the effective field theories obtained by renormalization useful for practical computations. Thus, although different choices for K are possible in QFTs both with and without a background, "non-local" choices have not yet been tested for their usefulness in the setting with a background. Accordingly, in the case with a background it turns out to be most powerful to relate K to a momentum scale k. This choice implements a notion of local coarse graining: decomposing configurations into eigenfunctions of an appropriate Laplacian, R_K suppresses configurations with eigenvalues of the Laplacian smaller than k². In this case, the flow equation has the interpretation of providing the response of the effective dynamics to a local coarse-graining step. The quest for a well-defined path integral, which exists once all configurations are taken into account, becomes the question of a well-defined solution of Eq. (6) for K → ∞. Specifically, in tensor models it is useful to choose the suppression term as a function of the number of components of the tensor, N, e.g., in the form
ΔS_N[T] = (1/2) Σ_{{a_i}} T_{a_1...a_d} R_N({a_i}) T_{a_1...a_d},   (7)
such that
N ∂_N Γ_N = (1/2) Tr[ (Γ_N^{(2)} + R_N)^{-1} N ∂_N R_N ].   (8)
In a slight abuse of notation, we use T both for the tensors that are integrated over in the generating functional and for their expectation value, on which the effective average action Γ_N depends. As we search for a phase transition in these models, we will employ the FRG to search for infrared (IR) fixed points. The relevant directions correspond to the number of parameters that require tuning to reach criticality. As we aim at approaching such IR fixed points in the limit of large tensors, we will set up beta functions in the large N limit. In the following, we will review how to use Eq. (8) for practical calculations and to search for candidates for a universal continuum limit in quantum gravity.
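Before turning to the practical steps, the following minimal sketch illustrates how a fixed-point search and the extraction of critical exponents proceed once beta functions have been derived from Eq. (8); the beta functions used here are purely illustrative placeholders, not those of any model discussed in this review.

# Minimal sketch of a fixed-point search and critical-exponent extraction.
# The beta functions below are illustrative placeholders (not derived from
# Eq. (8) for any specific model): beta_i = d g_i / d ln N.

import numpy as np
from scipy.optimize import fsolve

def beta(g):
    g1, g2 = g
    # Hypothetical autonomous beta functions in the large-N limit.
    return np.array([
        -2.0 * g1 + 3.0 * g1**2 + g2,
        -1.0 * g2 + 4.0 * g1 * g2,
    ])

def stability_matrix(g, eps=1e-7):
    # Numerical Jacobian d beta_i / d g_j at the point g.
    n = len(g)
    M = np.zeros((n, n))
    for j in range(n):
        dg = np.zeros(n); dg[j] = eps
        M[:, j] = (beta(g + dg) - beta(g - dg)) / (2 * eps)
    return M

g_star = fsolve(beta, x0=np.array([0.3, 0.1]))
# Critical exponents: theta_I = - eigenvalues of the stability matrix;
# positive theta_I signal relevant directions that require tuning.
thetas = -np.linalg.eigvals(stability_matrix(g_star))
print("fixed point:", g_star, "critical exponents:", thetas)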
Specifically, in tensor models, it is useful to choose the suppression term as a function of the number of components of the tensor, N , e.g., in the form such that In a slight abuse of notation, we use T both for the tensors that are integrated over in the generating functional, as well as for their expectation value on which the effective average action Γ N depends. As we search for a phase transition in these models, we will employ the FRG to search for infrared (IR) fixed points. The relevant directions correspond to the number of parameters that require tuning to reach criticality. As we aim at approaching such IR fixed points in the limit of large tensors, we will set up beta functions in the large N limit. In the following, we will review how to use Eq. (8) for practical calculations and to search for candidates for a universal continuum limit in quantum gravity. which admits a Taylor expansion with nonvanishing coefficients not only of the quartic but generically also of all T 2n , n ∈ N. The is the analogue of the well-known observation that Wilsonian coarse-graining flow generates all quasi-local interactions that are compatible with the symmetries 3 . Even though in the case at hand we are not dealing with a local coarse-graining flow, the analogous observation holds and all interactions with positive powers of tensors that obey the symmetries, are generated. Accordingly, to implement the flow equation in practice requires the following steps Steps c) and d) are then iterated and only fixed-point solutions which reach stability under the steps in the iteration procedure are kept. The tensor models 4 typically of interest for quantum gravity feature an independent symmetry group for each index. position, e.g., a product of d copies of an O(N ) symmetry for the real rank-d model. Accordingly, interactions cannot have an explicit index dependence, and no tensors with open indices can occur. All allowed interactions O (n) of n tensors can therefore be cast in the form where d is the rank. The contraction pattern C a1...an,b1...bn,...,d1...dn is a product of Kronecker deltas, in which a's can only be contracted with a's, b's with b's and so forth. All possible permutations of the labels 1 to n have to be taken into account independently for each index set a, b, etc. Some of the resulting O (n) will be combinatorially equivalent, in which case only one representative is taken into account. In Table I we list combinatorially distinct structures up to sixth order in the tensors for the real and complex rank-3 models. Note that the theory space includes multi-trace interactions. This name derives from the rank-2, i.e., matrix-model case, where interactions take the form Tr T a1b1 ...T anbn · ... · Tr T a1b1 ...T ambm . In the case of higher rank, similarly combinatorially disconnected interactions are part of the theory space. These are generated by the flow, even if they are not included in a truncation. There is no symmetry principle (that we are aware of) that allows to set the corresponding couplings to zero. B. Regulator & symmetry breaking The key ingredient to set up the flow equation is the regulator, or "infrared" suppression term. In this context, infrared means low values of indices. Accordingly, the regulator should satisfy the two limits T abc T abc The first condition ensures that "UV" modes are unsuppressed. It also ensures that no modes are suppressed once the IR cutoff scale N is lowered to zero. The second condition enforces that "IR" modes are suppressed. 
The third condition, lim_{N → ∞} R_N({a_i}) = ∞ at fixed {a_i}, ensures that in the limit of infinite cutoff the effective action essentially reproduces the classical action. The three conditions can be satisfied by different so-called shape functions, i.e., different choices of R_N({a_i}). The arguably simplest choice takes the form
R_N({a_i}) = Z_N ( N^r / (a_1^p + ... + a_d^p) - 1 ) θ( N^r / (a_1^p + ... + a_d^p) - 1 ),   (11)
where r, p > 0. While optimization criteria exist for a similar shape function at lowest order in the derivative expansion in the continuum [94], it has not yet been investigated what form an optimized cutoff takes for tensor models. As a generalization, one might consider the argument of the regulator to be N^r/(a^{r_1} + b^{r_2} + c^{r_3}), which should result in three combinations of the four parameters r, r_1, r_2, r_3 appearing in the beta functions. Demanding a discrete symmetry among the indices fixes r_1 = r_2 = r_3 = p. The introduction of the regulator term necessarily breaks the symmetry of the model, as the O(N) ⊗ O(N) ⊗ O(N) symmetry requires all index positions to be treated on an equal footing. Setting up the RG flow is therefore incompatible with the unbroken symmetry, leading to an enlargement of the theory space. Specifically, the invariants in Eq. (10) are generalized to include contraction patterns weighted by functions f(a_1, ..., d_n) encoding an explicit index dependence. Yet there is an important difference to a setting where the symmetry is broken from the outset and which features the same theory space: it lies in a modified Ward identity that accounts for the symmetry breaking introduced by the regulator. It selects a hypersurface in the larger theory space on which the full symmetry is recovered at the IR endpoint of the flow. Although the regulator vanishes in this limit, this is not sufficient to restore the symmetry, since the regulator has introduced symmetry violations in the flow at all finite scales. To compensate for these, the initial condition for the flow, set in the UV, needs to break the symmetry in a specific way that is dictated by the Ward identity. Therefore a fixed point of the RG flow simultaneously needs to solve the modified Ward identity in order to lead to a symmetric IR limit. This requirement cannot necessarily be imposed on truncations: while the exact flow equation and Ward identity are compatible, the Ward identity in general requires other terms to be present in the truncation than the flow equation provides. In matrix models, a simple solution of the Ward identity was discovered [41]: as symmetry breaking is not introduced through tadpole diagrams (i.e., the leading-order contributions to the beta functions in an expansion in couplings) in matrix models, the theory space is not enlarged in the tadpole approximation. Beyond rank 2, such a simple solution is no longer possible, as even the tadpole approximation generates symmetry-breaking terms. C. Bootstrap strategy for consistent truncations To characterize a universality class, at least all non-irrelevant critical exponents must be calculated. Accordingly, the set of all couplings which have a significant overlap with a relevant or marginal direction must be included in a minimal truncation. A priori, this set is not determined at an interacting fixed point. In practice, the following strategy is available: one starts with an assumption about a systematic division of theory space into relevant and irrelevant directions. A reliable truncation should at least include all couplings which are expected to be relevant, as well as the leading irrelevant ones.
If the beta functions in this truncation feature a fixed point, the critical exponents at the fixed point indicate whether the initial assumption about relevant couplings holds. If this is the case, terms beyond the truncation are expected to most likely only provide subleading corrections to the relevant critical exponents. A particularly useful assumption is that of near-canonical scaling, which allows one to use the canonical dimension as a guiding principle. This assumption works very well for a large class of fixed points and implies essentially that low orders in a vertex expansion are sufficient to obtain quantitative estimates of the critical exponents. The underlying reason is that for these cases, the mechanism that induces the fixed point is a balance between canonical scaling and leading-order quantum corrections. This mechanism is at work as soon as one departs from the critical dimension of a particular interaction, and it generates a UV (IR) attractive fixed point if the coupling is asymptotically free (trivial) in its critical dimension. Examples include Yang-Mills theory in d = 4 + ε and the Gross-Neveu model in d = 2 + ε for the former, and the Wilson-Fisher fixed point for the latter, see, e.g., [95][96][97][98][99][100]. For the search for a quantum-gravity fixed point in the tensor-model theory space, the canonical dimension could be a useful guiding principle. The motivation for this comes from the hope that the universality class discovered for quantum gravity in the continuum, where metric fluctuations are summed over, appears to be near-canonical. To match the corresponding spectrum of scaling exponents, one would expect near-canonical scaling also on the tensor-model side. The continuum asymptotic-safety regime has been studied intensively, and there is mounting evidence that the non-Gaussian fixed point explored in that approach features near-canonical scaling: the largest anomalous scaling is about 2, while the difference between quantum and canonical scaling goes to zero for the couplings of √g R^n with n > 3, see, e.g., [101][102][103][104]. It is therefore a well-motivated starting point to assume that no operator with, e.g., canonical dimension -4 (or slightly more negative) can have significant overlap with a relevant direction at the quantum-gravity fixed point in tensor models. This leads to a truncation ansatz in which one includes all operators up to this scaling. One can then search for a fixed point with quantum-gravity characteristics and check explicitly whether the near-canonical scaling assumption is justified. Having found such a semi-perturbative fixed point in a truncation, one needs to check whether this fixed point is a truncation artifact, i.e., whether the RG flow at that point is simply parallel to the projection onto the truncation. One can obtain hints about this by (1) varying the regulator and the way in which one projects onto the truncation ansatz and (2) enlarging the truncation. If a fixed point is stable under variations of the regulator and projection rule, and if it appears with the same features in larger truncations, then it is unlikely that the fixed point is a truncation artifact. A larger truncation also allows one to re-check the assumption of near-canonical scaling. Ideally one finds that the deviation from canonical scaling decreases for the new operators, which suggests that canonical scaling becomes a better and better assumption for the operators not included in the truncation.
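For reference, the notions of relevant and irrelevant directions used in this strategy follow from the standard linearization of the flow about a fixed point, in the conventions used throughout this review:

\beta_{g_i} \simeq \sum_j M_{ij}\,(g_j - g_j^{*}), \qquad M_{ij} = \left.\frac{\partial \beta_{g_i}}{\partial g_j}\right|_{g=g^{*}}, \qquad \theta_I = -\,\mathrm{eig}\,M,

so that the linearized solution reads g_i(N) = g_i^{*} + \sum_I c_I\, V_i^{I}\, N^{-\theta_I}, where the c_I are constants of integration and the V^{I} are the eigenvectors of the stability matrix M. Directions with \theta_I > 0 are relevant and correspond to the parameters that must be tuned to reach criticality.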
A similar strategy was successfully applied to the semi-perturbative UV attractor of the Grosse-Wulkenhaar model [105], where it was indeed possible to bound the deviation from canonical scaling. Deriving such a bound for tensor models would complete the bootstrap approach. D. In practice: The PF expansion The FRG equation (8) is an equation for the effective action functional; evaluating it involves inverting a field-dependent operator and taking the regulated trace over the eigenvalues of the field- and index-dependent two-point function. A very useful strategy for performing these two operations is the PF expansion, which is a Taylor expansion of the RHS of the flow equation in the tensor T_{abc} around the vanishing field configuration T_{abc} ≡ 0. To obtain the expansion, we rewrite the regularized inverse two-point function that enters the flow equation (8) as
(Γ_N^{(2)} + R_N)^{-1} = (P + F)^{-1},
where P collects the field-independent part (including the regulator) and F the field-dependent part, and where we use the shorthand notation Γ_{N,abcdef} = δ²Γ_N / (δT_{abc} δT_{def}). Thence, the flow equation (8) is expressed as
N ∂_N Γ_N = (1/2) Tr[ N ∂_N R_N · P^{-1} Σ_{k=0}^{∞} (-F P^{-1})^k ],
where we suppressed the tensor indices for simplicity and expanded the inverse two-point function as a geometric series. This way of writing the RHS of the flow equation is very useful when one considers finite polynomial truncations in T_{abc}, because in this case one can truncate the sum at finite order: all further terms of the sum would contain more tensors than the monomials in the truncation. IV. LARGE N SCALING DIMENSIONS In settings with a background, where the RG flow corresponds to a local coarse graining, one RG step is literally a scale transformation. Accordingly, the canonical scaling dimensions of couplings are their mass dimensions, which can be determined prior to studying the actual RG flow. In the background-independent setting, there is no notion of locality or spacetime; accordingly, all couplings are dimensionless in terms of units of length or mass, and no notion of mass dimension exists. Yet, mass dimension is not the notion of dimensionality that is relevant to a pregeometric RG flow anyway. Instead, a consistent scaling with N is central here. This scaling is not determined a priori. Nevertheless, one can determine it in two steps: 1. Since the purpose of the FRG setup is the investigation of the large-N behavior of the tensor model, we need to scale the coupling constants in such a way that the beta functions admit a 1/N expansion. This gives a stack of coupled inequalities, which exclude most scaling prescriptions. Imposing the additional requirement that no interactions should be artificially decoupled from the system uniquely fixes all but one of the scaling dimensions. 2. A further condition comes from the geometric interpretation of tensor models. Specifically, the interpretation in terms of the Regge action of the triangulation that is associated to each tensor-model Feynman graph is only possible for a particular scaling of the associated coupling constant with N. We will now present these two steps in more detail and determine the scaling dimensions for the tensor models of quantum gravity. For the first step, let us briefly return to the background-dependent continuum setting. There, the flow equation automatically provides a scaling dimension. It arises by demanding that the beta functions form an autonomous system, such that, after an appropriate rescaling of the couplings, the explicit dependence on the scale drops out. As a specific example, consider the beta function for the Newton coupling Ḡ, which reads, to leading order in Ḡ,
k ∂_k Ḡ = # Ḡ² k², with # < 0,
[1,[106][107][108][109][110]].
Demanding independence from k provides the scaling dimension and is in agreement with the mass dimensionality: the dimensionless coupling takes the form G = Ḡ k². Without knowing anything about mass dimensionality, one can thus alternatively fix the canonical scaling dimensions of couplings by demanding that the beta functions form an autonomous system. This strategy is applicable in the pregeometric setting. For instance, the coupling ḡ_{4,1} of the interaction T_{abc}T_{ade}T_{fde}T_{fbc} in a real rank-3 model has a beta function whose large-N behavior fixes the scaling, resulting in the dimensionless coupling given in Eq. (18). Note that fixing the scaling dimensions in this way is possible in the large-N limit, but not at finite N. This is a consequence of the fact that at any given order in the couplings, different orders in N appear. As we explicitly use the large-N limit, this only results in an upper bound on the scaling dimensions. Choosing scaling dimensions below this upper bound also results in autonomous beta functions in the large-N limit; yet, for such a choice the corresponding interactions decouple from the beta functions. The "most interacting" system, where no interactions are suppressed artificially, is achieved when the scaling dimensions are chosen as the upper bounds. As is evident from Eq. (18), the scaling dimensions determined in this way depend on the parameters r and p in Eq. (11). Insight into the physics allows one to fix the ratio r/p. For instance, for the geometric interpretation of tensor models, the interpretation of the dual picture in terms of dynamical triangulations results in a relation between the couplings of the tensor model and the scale N on the one hand and the couplings of the Regge action on the other. This relation only works for a specific choice of canonical scaling of the leading coupling (i.e., one of the quartic couplings). In turn, this scaling dimension fixes r/p. It turns out that r/p = 1 for a rank-d tensor model, if d is the dimension entering the corresponding Regge action in the continuum picture. Yet, for those fixed points in the tensor model that show dimensional reduction to a matrix model, r/p < 1 is the correct choice. As a specific example of how the geometric interpretation fixes r/p, consider possibly the simplest quantum-gravity tensor model, the so-called rank-3 colored complex model [43], defined through an action with a quadratic term for each colored tensor and an interaction proportional to λ (plus its complex conjugate with coupling λ̄) that glues the colored tensors into a tetrahedron. The Feynman-diagram expansion of this model yields the amplitude
A(γ) = (λ λ̄)^{N_3(γ)/2} N^{N_1(γ) - (3/2) N_3(γ)},
where N_3 denotes the number of 3-cells in the triangulation Δ(γ) associated with the colored Feynman graph γ and N_1 denotes the number of 1-cells. Comparing this with the Regge action S_R[Δ] = κ_3 N_3 - κ_1 N_1 of a triangulation Δ allows us to identify the coupling constants as κ_1 = ln(N) and κ_3 = (3/2) ln(N) - (1/2) ln(λλ̄). The uncolored models that we investigate with the FRG are obtained by integrating out all but the last color. The scaling N^{-3/2} of the coupling constant λ then implies the scaling N^{-2} for the cyclic melonic interactions. In other words, the geometric compatibility condition fixes r/p = 1. V. BENCHMARKING THE FRG IN MATRIX MODELS In two-dimensional quantum gravity, the relevant critical exponent of the double-scaling limit is known. In this limit, the continuum limit in dynamical triangulations can be taken in such a way that all topologies contribute. For reviews and introductions, see, e.g., [21,[111][112][113][114]].
The matrix model that is dual to dynamical triangulations can be chosen to be a model of Hermitian N × N matrices φ, with the generating functional given by the partition function of a one-matrix model with quartic interaction g_4 Tr φ⁴. The double-scaling limit requires taking N → ∞ while holding

N (g_4^{crit} - g_4)^{5/4} = const,

where g_4^{crit} is the critical value of the coupling. This can be rewritten in the form

g_4(N) = g_4^{crit} - const · N^{-4/5}.

This is structurally similar to the leading-order scaling of couplings in the vicinity of a fixed point of the RG flow. Accordingly, one is led to identify θ = 4/5 as a relevant critical exponent. This similarity prompted the authors of [39] to set up a pregeometric RG flow in matrix size N. In that paper as well as the follow-up works [115-120], the coarse-graining was implemented explicitly by integrating out the outermost rows and columns of the matrices in a Gaussian approximation. In [40], the flow equation (8) in the pregeometric setting was first derived. Applying it to truncations of a single-trace form, Γ_N = Σ_{i=2}^{n} g_{2i} Tr φ^{2i}, yielded a critical exponent that approaches θ = 1 from above. Extending the truncation to multitrace operators does not improve the estimate, but instead makes it worse. The critical exponent θ = 0.8 for gravity is first reproduced at the first multicritical point [41], which corresponds to gravity coupled to conformal matter [121].

Instead of reviewing these results in greater detail, here we explore an alternative prescription to calculate the critical exponents that leads to a significant improvement in the estimate. This prescription was already explored in [64] for tensor models, and has been put forward for continuum QFTs in [122]. It consists in keeping the anomalous dimension η = -N ∂_N ln Z_N constant while calculating the stability matrix, i.e.,

θ̃_I = -eig[ (∂β_{g_i}/∂g_j)_η ], (25)

where the notation (·)_η indicates that the derivative is taken at fixed η. The alternative, more standard prescription differs by including derivatives of η and will be denoted by θ_I to clearly differentiate between the two. As a specific example, consider the case where a coupling g_i already corresponds to an eigendirection at a fixed point. No off-diagonal elements of the stability matrix contribute to its critical exponent, such that θ̃_i = -(∂β_{g_i}/∂g_i)_η. In [122] it was observed that a scaling relation for critical exponents in the O(N) ⊕ O(M) model, which is known to hold for the epsilon-expansion [98,123,124], is only satisfied for the FRG in truncations of the full flow to the local potential approximation plus anomalous dimension for the prescription in Eq. (25). The more standard prescription leads to small violations of the scaling relation in those truncations.

Here, we show that the θ̃-prescription gives improved results for the critical exponent of the double-scaling limit, resulting in only 14% deviation already in a calculationally very straightforward truncation, cf. Fig. 4. The results for the critical exponent as a function of the truncation order n in Fig. 4 appear to be fit well by a three-parameter function θ̃(n) that approaches a constant a as n → ∞, with fit parameters a = 0.91, b = 1.54 and c = 0.29. An extrapolation to n → ∞, which is the complete single-trace subsector of theory space, yields θ̃(n → ∞) = 0.91, which is only a 14% deviation from the exact result θ = 0.8. Whether this is accidental, or whether there is a deeper reason why the θ̃-prescription works better for matrix and potentially also tensor models, remains to be explored in the future.
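The extrapolation can be reproduced numerically. The functional form below is an assumption on our part (an exponential approach to a constant, consistent with the quoted fit parameters a = 0.91, b = 1.54, c = 0.29); the sketch simply evaluates this ansatz at increasing truncation orders:

```python
import numpy as np

# Assumed fit ansatz: theta_tilde(n) = a + b*exp(-c*n). The form is our
# assumption; the parameter values are the ones quoted in the text.
def theta_tilde(n, a=0.91, b=1.54, c=0.29):
    return a + b * np.exp(-c * n)

for n in (4, 8, 16, 32):
    print(f"truncation order n = {n:2d}: theta_tilde = {theta_tilde(n):.3f}")

# As n -> infinity the exponential dies off and theta_tilde -> a = 0.91,
# a 14% deviation from the exact double-scaling result theta = 0.8.
print(f"extrapolation n -> oo: {theta_tilde(np.inf):.2f}")
```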
One source of systematic errors for the critical exponent is the breaking of the U(N) symmetry of the matrix model through the regulator [41]. This can be seen from the fact that the U(N)-Ward-identity obtains a non-vanishing RHS through the introduction of the regulator, since the regulator term R_N is not invariant under unitary transformations. Here, A denotes the generator of an infinitesimal unitary transformation, i.e., the unitary transformation U = exp(iA) transforms the matrix φ as φ → U φ U†. This implies that the RG flow generates symmetry-breaking operators even if the initial condition is a U(N)-symmetric action. In particular, the relevant directions will acquire contamination by these symmetry-breaking operators. Hence, when investigating the large N-limit with the FRG, one has to include these symmetry-breaking operators to find accurate critical exponents.

Including the symmetry-breaking operators into a truncation and distinguishing them from the symmetric operators by a projection on the truncation is a technically rather challenging task. Fortunately, there is a self-consistent workaround in the case of matrix models that gives surprisingly stable results [41]: It is based on the observation that tadpole diagrams of U(N)-symmetric operators do not generate symmetry-breaking operators in the rank-2 case. In other words, the tadpole approximation to the broken U(N)-Ward-identity is solved by a symmetric effective average action. Using the tadpole approximation in a single-trace truncation allows one to find the infinite series of so-called multicritical points. The m-th multicritical point is a fixed point with m non-vanishing couplings whose fixed-point values occur with alternating signs and whose critical exponents can be compared to the exact results for the multicritical fixed points. For the leading critical exponent, one therefore obtains an estimate that deviates from the exact value by only 3%, which is a rather high precision. The subleading relevant critical exponents at the multicritical points are not reproduced with comparable precision in this truncation. Nevertheless, we interpret the precision of the leading relevant exponent as a signature that the FRG can successfully pass the benchmark test posed by rank-2 models. Only at the double-scaling limit, i.e., the m = 1 fixed point, does one obtain a larger deviation. This relatively large discrepancy of the critical exponent from 0.8 can be explained by the fact that the tadpole approximation only captures effects from the tree-level truncation, which contains only one coupling. Given such a small truncation, it is actually remarkable to obtain the value of the critical exponent with 25% accuracy.

As a consequence of universality, specific fixed points in tensor models can also reproduce the matrix-model results. This is due to the fact that the shape of the building blocks is not relevant for the continuum limit. Therefore even higher-dimensional building blocks can reproduce a lower-dimensional continuum limit, at a point in theory space where the effective dynamics "flattens" these building blocks in an appropriate way. To recover the lower-dimensional scaling, the canonical scaling dimensions of the model have to be adjusted by choosing r/p < 1 in Eq. (11). For that choice, and for the prescription Eq. (25), the matrix-model exponent is approximately recovered from the fixed points in tensor models, see Sects. VI and VII.

VI. CHARTING THREE DIMENSIONS FROM A TENSOR-MODEL POINT OF VIEW

An important motivation for RG studies of the large-N-behavior of tensor models is the search for a continuum limit that can be associated with quantum gravity.
The first step in the systematic program that can lead to the confirmation or refutation of the conjecture that there might exist a continuum limit in tensor models which corresponds to quantum gravity is a systematic investigation of theory spaces. Varying the number of tensor fields, the rank of the tensors and the symmetry structures provides a number of different theory spaces which one can then investigate with the FRG. The first step in the FRG investigation of a theory space consists of finding tentative candidates for universal fixed points. This provides insight into which interaction structures could be of particular importance for a continuum limit.

Below, we discuss the status of this systematic program in more detail for rank-3 tensor models. In summary, by investigating a complex uncolored model, i.e., a model with U(N) ⊗ U(N) ⊗ U(N) symmetry, and a real uncolored model, i.e., a symmetry group of the form O(N) ⊗ O(N) ⊗ O(N), we discover that certain classes of fixed points are shared. In particular, we find fixed points that exhibit a form of dimensional reduction and evidence that these fixed points are not truncation artifacts. Crucially, the real model features a new, tetrahedral interaction, cf. the third entry in Tab. 1, introduced by Carrozza and Tanasa [62], and later taken up in [47] for an SYK-type model. This interaction appears to be key for the generation of a fixed point which does not appear to feature dimensional reduction and therefore constitutes a tentative candidate for a continuum limit for three-dimensional quantum gravity. We stress that of course it requires much more than just the discovery of the fixed point to establish its relevance for three-dimensional quantum gravity; finding a fixed point without dimensional reduction is a necessary but not sufficient step in linking tensor models to a well-behaved phase of quantum gravity.

A. Dimensional reduction in tensor models

The absence of a background geometry permits tensor models to exhibit phenomena that do not appear in local quantum field theories. The first of these is the dynamical generation of multi-trace operators, which correspond to tensor-model vertices with a geometric interpretation as boundaries formed by disconnected pieces of geometry (such as, e.g., the two circles in the boundary of a cylinder). These multitrace operators are, however, generated by connected Feynman diagrams. For instance, a connected matrix-model Feynman diagram may be dual to the triangulation of a cylinder connecting the two circles in the boundary. The corresponding interactions are thus generically generated by the flow, and are part of the quantum effective action. In particular, one finds disconnected tensor invariants with 2n tensors of the form (T_{abc} T_{abc})^n, which possess an enhanced O(N³) symmetry, reducing the tensor model to a vector model and producing non-Gaussian fixed points which do not represent extended three-dimensional geometries.

In [65] we identified a mechanism that can be realized at suitable fixed points and prevents the production of multitrace operators. It is based on the observation that the generation of (T_{abc} T_{abc})² from connected vertices requires two cyclic 4-melons with distinct preferred colors to be nonzero. Thus, the fixed points in the theory space with the enhanced O(N) ⊗ O(N²) symmetry exhibited by the cyclic melons with one preferred color do not possess nonvanishing multi-trace operators. However, this theory space exhibits dimensional reduction.
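The O(N³) enhancement can be made explicit by fusing the three indices into a single superindex; the following short calculation is a direct transcription of the statement above:

```latex
% Fuse the three indices into one superindex, I = (a,b,c), I = 1,...,N^3,
% and define the vector v_I := T_{abc}. Then
(T_{abc} T_{abc})^n \;=\; (v_I v_I)^n ,
% i.e., the disconnected invariants are invariant under all O(N^3)
% rotations of v_I: they define a vector model, not a model of extended
% three-dimensional random geometries.
```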
Dynamical dimensional reduction at high energies is an intriguing phenomenon in several models of quantum gravity, see, e.g., [125], that are four-dimensional at large scales. In tensor models, dimensional reduction differs in that it appears to be realized at certain classes of fixed points in rank-3 and rank-4 models, such that the continuum limit is not a candidate for three- or four-dimensional quantum gravity. (Although not yet explored explicitly, the same result should hold in any rank d > 2.) Specifically, an enhancement of the U(N) symmetry goes hand in hand with an effective "fusion" of two indices into one "superindex", such that the model effectively reduces to a matrix model. This occurs at fixed points at which only cyclic melons (single-trace, multi-trace or both) of one preferred color are present. Because of the enhanced symmetry, it is always consistent to set all other interactions to zero, as one can also check by inspecting the beta functions.

To fully establish the dimensional reduction, the critical exponents of the matrix model should also be reproduced. Here, the freedom in choosing r and p in Eq. (11) becomes crucial: A matrix model features different canonical dimensions than a tensor model, essentially due to the reduced rank. The canonical dimensions are functions of r/p. To probe the matrix-model limit of rank-3 tensor models, one should choose r/p = 1/2 to obtain the canonical dimensions appropriate for a matrix model. With this choice for the scaling of the regulator, and for the prescription Eq. (25), the matrix-model exponent is approximately recovered from the fixed points in tensor models. In particular, this holds for the fixed points reported in Tables II, III and IV: the fixed point in Table IV features a second, small positive exponent θ₂, whereas the purely cyclic melonic non-Gaussian fixed point only features one positive critical exponent in this truncation, see Table II. It should be stressed that the deviation of θ₂ from zero at the fixed point in Table IV is smaller than the presumed systematic error of the truncation.

As we have introduced the colors, it is consistent to switch off the multitrace interactions for the matrix-model limit. In matrix-model RG flows this has not been possible, as multitrace interactions in matrix models are automatically generated from single-trace ones. Physically, this might suggest that configurations with disconnected boundaries do not have a significant impact on the path integral in two dimensions, as it appears to be possible to reach the same continuum limit both with and without the presence of multitrace interactions.

Note that due to the symmetry breaking induced by the regulator, even at the single-trace cyclic melonic fixed point, interactions with nontrivial index-dependence outside this theory space are generated and could take finite values at the fixed point. The fixed-point values of these operators are constrained by the modified O(N)^⊗3 Ward-identity, where the only symmetry-breaking term is due to the regulator. This regulator term vanishes in the IR-limit, which implies that the O(N)^⊗3 Ward-identity turns into the constraint that all of these index-dependent interactions vanish. The analogous argument applies to all other fixed points: These fixed points will exhibit non-vanishing couplings for index-dependent vertices, but their values are constrained by the modified Ward-identity. In the IR, it turns into the constraint that all couplings associated with index-dependent operators vanish.
To reach this point, the initial condition for the RG flow has to be chosen with an appropriate "amount" of symmetry-breaking operators, such that during the flow, the symmetry-breaking effect of the regulator compensates that coming from the initial condition.

B. Candidates with potential relevance for three-dimensional quantum gravity

The fixed points with enhanced O(N²) ⊗ O(N) and O(N³) symmetries appear to exhibit dimensional reduction. This might possibly be compatible with dynamical dimensional reduction in the physical UV limit, i.e., after the continuum limit has already been taken, if these fixed points possessed a relevant direction that "inflates" additional dimensions in the IR. However, we deem this possibility unlikely; it appears more plausible that the continuum limit leads to the same topological dimension as the IR-limit of the corresponding spacetime exhibits. Note that the dimensional reduction in the spectral dimension observed in many quantum-gravity approaches is different, and does not imply that there is a reduction in the topological dimension.

A different possibility to search for quantum-gravity candidate fixed points is to search for fixed points that do not possess such an enhanced symmetry. In [65] we found two possible candidates. These fixed points are isocolored, i.e., they exhibit a global symmetry under color permutation. Note that such a symmetry is not linked to dimensional reduction. In fact, the presence of cyclic melonic interactions with all three different preferred colors is exactly what prevents the merging of two indices into one "superindex" linked to O(N²) ⊗ O(N) symmetry. Another hint about the "geometricity" of a fixed point might be the presence of the tetrahedral interaction T_{abc}T_{ade}T_{fdc}T_{fbe}. An isocolored fixed point at which this tetrahedral interaction takes a non-vanishing fixed-point value may describe the continuum limit of a geometric model. We stress that this is not sufficient for such a fixed point to be associated with quantum gravity. The identification as a quantum-gravity candidate can only be made when order parameters indicate a geometric interpretation. The fixed point possesses the positive critical exponents

θ_± = 1.35 ± 1.56i ... 1.95 ± 0.69i, θ₃ = 0.38 ... 0.13, (30)

in the full hexic truncation, where the range comes from several different schemes regarding the treatment of the anomalous dimension. We stress that it should not be taken as a complete estimate of the systematic truncation error. The leading critical exponents are roughly compatible with the critical exponents found for the Einstein-Hilbert truncation in three dimensions [126], θ₁ ≈ 2.5 and θ₂ ≈ 0.8, which are also expected to come with significant systematic errors. We caution that this comparison is subject to systematic errors on both sides. In fact, in three-dimensional continuum gravity, fixed-point searches have only been conducted in the Einstein-Hilbert truncation, not including higher-derivative operators. Therefore it is not yet established whether there are indeed only two relevant directions, although the fact that four-derivative curvature invariants are canonically irrelevant could support such a conjecture. Accordingly, the comparison of critical exponents we perform here is to be understood as a proposal for a comparison that will become more meaningful in the future, when systematic errors are significantly reduced on both sides.
Here, we only note that within the significant systematic errors that we expect these results to have, the critical exponents of the continuum and the tensor-model setting do not appear to be incompatible. A second isocolored melonic fixed point with vanishing fixed-point value for the tetrahedral interaction was also found and discussed in the appendix of [65], but with slightly complex values for the coupling constants. The imaginary parts of the fixed-point values of the couplings exhibit a scheme dependence that is consistent with vanishing imaginary parts of the couplings, which would make the fixed-point action real and thus physically admissible.

VII. FIRST STEPS TOWARDS BACKGROUND INDEPENDENT FOUR-DIMENSIONAL QUANTUM GRAVITY

In this section, we discuss the first results obtained for rank-4 tensor models using the FRG. The purpose of our presentation is illustrative, and for this reason we restrict the analysis to a simple truncation for the effective average action. An extensive analysis employing more sophisticated truncations will be presented elsewhere. Studying rank-4 tensor models, whose Feynman diagrams can be identified with four-dimensional triangulations, is certainly of great importance from a quantum-gravity perspective. If a suitable continuum limit can be found, they could be candidates for a description of the microscopic structure of four-dimensional quantum spacetime. While results in tensor models point towards the existence of a branched-polymer phase [127], Monte Carlo simulations indicate that causal dynamical triangulations could also give rise to extended four-dimensional geometries [11,128]. The case of Euclidean dynamical triangulations is under renewed investigation [12]. The FRG is a suitable tool to complement such simulations and discover candidates for a universal continuum limit beyond branched polymers.

We consider a complex rank-4 tensor model, i.e., we work with a random tensor T_{abcd} and its complex conjugate T̄_{abcd} of size N. We focus on a model respecting the following symmetry:

T_{a₁a₂a₃a₄} → U^{(1)}_{a₁b₁} U^{(2)}_{a₂b₂} U^{(3)}_{a₃b₃} U^{(4)}_{a₄b₄} T_{b₁b₂b₃b₄}, (31)

where repeated indices are summed over. The matrices U^{(i)}_{ab} are unitary and therefore the model has a U(N)^⊗4 symmetry. Eq. (31) shows that each index of the tensor transforms independently. Hence, U(N)^⊗4 invariance requires that the only allowed index contraction is a first index of T with a first index of T̄, a second index of T with a second index of T̄, and so on. Consequently, an interaction term which contains 2p tensors in total necessarily has p tensors T and p tensors T̄. Invariance under Eq. (31) also ensures that the indices of the tensors do not have any permutation symmetry.

A continuum limit in tensor models might fall into the universality class corresponding to the Reuter fixed point [3] (see [5,129] for recent reviews). Accordingly, we bootstrap our truncation assuming a near-canonical scaling spectrum, and choose

Γ_N = Z_N T̄_{a₁a₂a₃a₄} T_{a₁a₂a₃a₄} + Γ_{N,4}, (32)

with Γ_{N,4} containing the connected quartic invariants

ḡ^{2,1}_{4,1} T̄_{a₁a₂a₃a₄} T_{b₁a₂a₃a₄} T̄_{b₁b₂b₃b₄} T_{a₁b₂b₃b₄}
+ ḡ^{2,2}_{4,1} T̄_{a₁a₂a₃a₄} T_{a₁b₂a₃a₄} T̄_{b₁b₂b₃b₄} T_{b₁a₂b₃b₄}
+ ḡ^{2,3}_{4,1} T̄_{a₁a₂a₃a₄} T_{a₁a₂b₃a₄} T̄_{b₁b₂b₃b₄} T_{b₁b₂a₃b₄}
+ ḡ^{2,4}_{4,1} T̄_{a₁a₂a₃a₄} T_{a₁a₂a₃b₄} T̄_{b₁b₂b₃b₄} T_{b₁b₂b₃a₄}
+ ḡ^{(1,2)}_{4,1} T̄_{a₁a₂a₃a₄} T_{b₁b₂a₃a₄} T̄_{b₁b₂b₃b₄} T_{a₁a₂b₃b₄}
+ ḡ^{(1,3)}_{4,1} T̄_{a₁a₂a₃a₄} T_{b₁a₂b₃a₄} T̄_{b₁b₂b₃b₄} T_{a₁b₂a₃b₄}
+ ḡ^{(1,4)}_{4,1} T̄_{a₁a₂a₃a₄} T_{b₁a₂a₃b₄} T̄_{b₁b₂b₃b₄} T_{a₁b₂b₃a₄}, (33)

and the disconnected one,

ḡ^{2}_{4,2} T̄_{a₁a₂a₃a₄} T_{a₁a₂a₃a₄} T̄_{b₁b₂b₃b₄} T_{b₁b₂b₃b₄}. (34)

For cyclic melons, which consist of contractions of neighboring tensors by either three or one line in alternating fashion, the superindices are not bracketed.
The two superindices stand for the number of "submelons" and the preferred color i, which is the color of the single line connecting neighboring tensors. Since this is a rank-4 model, there are four different melonic invariants, each one selecting one distinct preferred color. A symmetry-reduced theory space, the isocolored theory space, is defined by a single coupling being assigned to all cyclic melons, since those interactions have the same combinatorial structure and just differ by the preferred color. A distinct combinatorial structure is indicated by bracketed superindices: The couplings ḡ^{(1,i)}_{4,1} are associated with the necklace diagrams. These interactions are such that a given white vertex is connected to a black vertex by exactly two edges. There are three such interactions due to the three possible pairings of four indices into two groups of two. Each white vertex is connected to one of its neighbors by the colors (1, i) in the superindex, and by the remaining two colors to its other neighbor. Finally, the "double-trace" interaction is parameterized by the coupling ḡ²_{4,2}: in the subindices, the number of connected components is two. The superindex represents the number of melons. At higher orders in the truncation, where the first subindex is larger than four, additional superindices must be introduced to distinguish all different combinatorial structures at fixed order in tensors and connected components.

We aim at deriving the beta functions for the dimensionless couplings g_I, where

g_I = Z_N^{-2} N^{-[ḡ_I]} ḡ_I, (35)

with [ḡ_I] being the canonical dimension of the coupling ḡ_I. We will now determine the canonical dimensions, cf. Sec. IV. For the application of the flow equation one has to choose a regulator function which acts as an "infrared" suppression term, cutting off modes with indices satisfying a^p + b^p + c^p + d^p < N^r, where r, p > 0. Our regulator choice R^{(r,p)} generalizes that in [65], cf. Eqs. (36) and (37). The term in the second line of Eq. (37) does not yield a contribution to the flow of couplings of index-independent interactions, since the delta-distribution appears multiplied by its argument. With these definitions, the right-hand side of the flow equation can be evaluated. To extract beta functions from it, suitable projections onto the monomials spanning the theory space have to be used. Specifically, the distinct combinatorial structures at a given order in tensors can easily be distinguished, as the flow equation generates combinatorially different contractions on the right-hand side. To deal with the additional index-dependence of interactions that occurs due to the symmetry breaking induced by the regulator, we apply the prescription from [65]. Specifically, the regulator can either sit on an index forming a closed loop, or on an index occurring on a tensor and an antitensor. To project onto symmetry-invariant monomials only, we set indices in the regulator to zero if they also occur on a tensor. This splits the index-trace into two parts: The contraction of tensors and their complex conjugates decouples from the regulator trace and is directly recognizable as one of the different combinatorial structures in Eq. (33) or (34). The regulator trace consists of a trace over indices running through the regulator and its derivative, which can be rewritten as an integral in the large-N limit. The resulting beta functions for the dimensionless couplings as well as the anomalous dimension η ≡ -∂_t Z_N/Z_N are given in Eqs. (38)-(42), where I^i_j(p) are threshold integrals provided in App. A for p = 1, 2.
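Before turning to the canonical dimensions, here is a cross-check of the combinatorial bookkeeping above. Each quartic U(N)^⊗4 invariant of Eqs. (33)-(34) is labeled by recording, for each color, whether the index is contracted within the two T-T̄ pairs or across them; a subset S of "swapped" colors and its complement label the same invariant. The following sketch is our own bookkeeping, not code from the underlying papers:

```python
from itertools import combinations

COLORS = (1, 2, 3, 4)

def classify(swapped):
    """Classify a quartic U(N)^x4 invariant with two T's and two Tbar's.

    Each color index is contracted either within the pairs (Tbar1,T1),
    (Tbar2,T2) ('straight') or across them ('swapped'); the subset of
    swapped colors determines the invariant.
    """
    k = len(swapped)
    if k in (0, 4):
        return "double trace (2 connected components)"
    if k in (1, 3):
        color = swapped[0] if k == 1 else next(c for c in COLORS if c not in swapped)
        return f"cyclic melon, preferred color {color}"
    return f"necklace, pairing {swapped}"

seen = set()
for k in range(5):
    for S in combinations(COLORS, k):
        comp = tuple(c for c in COLORS if c not in S)
        if comp in seen:
            continue  # complementary subsets label the same invariant
        seen.add(S)
        print(f"S = {S!s:14} -> {classify(S)}")

print(f"\ntotal distinct quartic invariants: {len(seen)}")  # expect 8
```

Running this reproduces the counting of the truncation: one double-trace, four cyclic melons (one per preferred color), and three necklaces from the three pairings of four indices into two groups of two, eight couplings in total.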
The canonical dimensions for the couplings are not fixed in Eqs. (38)-(42). They are fixed by demanding that Eqs. (38)-(42) admit a 1/N expansion starting with a non-trivial contribution at order (1/N)⁰. In the expression for the anomalous dimension, Eq. (38), the large-N limit can be taken if the canonical dimensions satisfy the bounds given in Eq. (43). Eq. (39) imposes a new constraint on the canonical dimensions, Eq. (44), while the beta function for the necklaces, Eq. (41), does not introduce any new conditions. Finally, the beta function for the double-trace coupling constrains the canonical dimensions by Eq. (45). From Eqs. (43), (44) and (45), we obtain upper bounds for the couplings' canonical dimensions (or relations between them). Couplings decouple from the set of beta functions if their canonical dimension is chosen below the corresponding upper bound. In this sense, choosing the upper bounds as the canonical dimensions leads to the most non-trivial set of beta functions at large N. We tentatively consider a decoupling of interactions through such choices artificial; hence, we choose the upper bounds as the canonical dimensions for the couplings. We start by demanding that

[ḡ^{2,i}_{4,1}] = -3r/p. (46)

This is exactly what one would expect based on [ḡ^{2,i}_{4,1}]_{rank-3} = -2r (for p = 1) in the rank-3 case: The contraction of one additional index in the rank-4 case requires an additional suppression by 1/N (for r/p = 1). The first inequality of (45) implies [ḡ²_{4,2}] ≥ -4r/p. Yet, the third inequality of (43) enforces [ḡ²_{4,2}] ≤ -4r/p. Therefore, given the choice in Eq. (46), the canonical dimension for the double-trace coupling is completely fixed, i.e.,

[ḡ²_{4,2}] = -4r/p. (47)

This is in accordance with the expectation from the rank-3 case, as well as the reasoning that the additional "trace" of this interaction in comparison to the quartic cyclic melonic interaction should lead to an additional suppression by 1/N (for r = p). Finally, using (46) and (47) in (43) fixes the canonical dimension of the necklaces,

[ḡ^{(1,i)}_{4,1}] = -2r/p. (48)

The scaling dimensions are functions of the ratio r/p. Hence, if one chooses the "standard" scaling, i.e., setting the power of the infrared cutoff N equal to that of the "momentum" scale, r = p, the dimensions are always -3, -4 and -2 for the cyclic melon, multitrace and necklace interactions, respectively. For those fixed points that do not feature dimensional reduction, the choice r = p is preferred based on a geometrical argument, see Sec. IV. Nevertheless, the threshold integrals I^i_j depend on these parameters in a non-trivial way, which implies that different choices of (r, p) lead to different numerical coefficients in the beta functions. Thus, choosing different values of r and p while keeping all canonical dimensions fixed tests the scheme/regulator dependence of our calculation.

In the large-N limit and using Eqs. (46)-(48) with r = p = 1, the system of beta functions reduces to the form given in Eqs. (49)-(53), which can be solved for the anomalous dimension η, cf. Eq. (50). We highlight several key features of the above system: Firstly, unlike in the quartic truncation for the rank-3 complex tensor model [64], there is a class of interactions, the necklaces, which are not melonic. On the other hand, in the real rank-3 tensor model, see [65], a non-melonic interaction is present already at quartic order. It does not contribute to the anomalous dimension at large N. In contrast to these two examples, Eqs. (49) and (50) show that all couplings contribute to the anomalous dimension in the complex rank-4 model, including the non-melonic (necklace) couplings.
This is the first evident structural difference between the present beta functions and the rank-3 ones [64,65]. Secondly, after choosing the canonical dimension for the melonic coupling ḡ^{2,i}_{4,1}, the canonical dimension for the double-trace coupling is fixed uniquely. Its value differs from the canonical dimension of the melonic coupling. Consequently, interactions which contain the same number of fields (tensors) as well as sums over indices (which are the analogue of an integral over momenta in ordinary quantum field theories on a background) scale with different powers at large N. This is an intrinsic property of the combinatorially non-trivial structure of the interactions in tensor models, see also [64,65,88]. We caution that if the double-trace interaction were not introduced in the present truncation, one could choose the canonical dimension for the necklaces to be the same as the canonical dimension of the melonic coupling. This would lead to the misleading conclusion that it is possible to choose the same canonical dimension for all interactions with a given number of tensors.

We look for fixed points of the system of beta functions Eqs. (50)-(53). The strategy is the same as the one employed in [64,65]: Firstly, zeros of the beta functions are obtained in a perturbative approximation, i.e., the anomalous dimension is taken as a polynomial function η_p of the couplings, cf. Eq. (54). With Eq. (54), the beta functions are polynomials in the couplings; hence finding their zeros is easily achieved with computer software. Once the zeros are obtained, several criteria are applied to filter out candidates for physical fixed points. These include the regulator bound η_p < 1. Further, the critical exponents should stay bounded, such that the bootstrap strategy for the choice of truncation is justified. Finally, we demand stability under extensions of the truncation. Given the limited nature of our investigation for the purposes of this review, the only extension is that from the perturbative form of the anomalous dimension in Eq. (54) to the full expression in Eq. (50). (A minimal numerical sketch of this search-and-filter workflow is given at the end of this section.)

The resulting candidates for physical universality classes can be separated into two main classes: those with enhancement of the U(N)^⊗4 symmetry to U(N²) ⊗ U(N)^⊗2, U(N³) ⊗ U(N) or U(N⁴) display dimensional reduction, i.e., the associated continuum limit would not correspond to four-dimensional geometries. In contrast, those with U(N)^⊗4 symmetry might be possible candidates for a suitable continuum limit which could correspond to 4d quantum gravity. The following results are quoted for the case r = p = 1 unless stated otherwise.

A. Symmetry-enhanced fixed points: Dimensional reduction in tensor models

The U(N)^⊗4 symmetric theory space contains symmetry-enhanced subspaces, such as, e.g., U(N²) ⊗ U(N)^⊗2. To achieve the corresponding enhancement of symmetry, interactions which violate it have to be switched off. This happens at several fixed points in our truncation. The enhanced symmetry is broken if there is at least one non-vanishing interaction for each of the four colors that treats this color differently from the remaining colors. Therefore, although it appears slightly paradoxical at first glance, fixed points which are not invariant under color permutations typically feature a larger symmetry than U(N)^⊗4. The breaking of the color permutation symmetry at the fixed point allows for some interactions to vanish, such that a pair, or even a triple, of indices can be summarized into one "superindex".
This superindex features a U(N²) (or even U(N³)) symmetry. Consequently, the interactions which are turned on at the fixed point can be described by lower-rank tensors. This is a form of dimensional reduction, i.e., the lower-rank tensors "tessellate" lower-dimensional discrete geometries. For instance, at a fixed point at which two pairs of indices are summarized into two superindices, the rank-4 model reduces to a matrix model, which encodes random geometries in two dimensions. The enhancement in symmetry and dimensional reduction entails that universality classes of lower-rank models can be reproduced.

Two comments are in order here: Firstly, the recovery of "lower-dimensional" universality classes requires exploiting the freedom in the choice of regulator in Eq. (36) such that the canonical dimensions of the interactions agree with those of the lower-rank model. For instance, the quartic cyclic melonic couplings have canonical dimension -3r/p in the rank-4 case and -2 in the rank-3 case for r = p = 1. Choosing r/p = 2/3 for the rank-4 case leads to an agreement in the canonical dimension of the cyclic melons. Analogous choices for different fixed points will be spelled out below. We emphasize that the choice of canonical dimension of quartic interactions is grounded in geometric arguments. Therefore, for each dimensionality d, there is a unique choice of r/p for each rank n, such that the canonical scaling exponents agree with those required for an identification of the dual of the tensor model with random geometries in d dimensions. This choice appears to be r/p = 1 for n = d, but differs if n ≠ d. Secondly, the symmetry-enhanced fixed points are embedded in a larger theory space with symmetry-breaking directions. Therefore, additional relevant directions might exist which entail additional tuning required to reach the fixed point. Similar enlargements of universality classes are well known in statistical physics. For instance, the scaling exponents for the O(N + M) Wilson-Fisher fixed point can be recovered within an O(N) ⊕ O(M) symmetric theory space. Yet, an additional relevant direction is associated with the additional tuning required to reach this critical point, see, e.g., [131]. We will check on a case-by-case basis whether dimensional reduction requires additional tuning, or whether it is a preferred IR-endpoint of tensorial RG flows.

The set of beta functions given by Eqs. (50)-(53) admits the following symmetry-enhanced fixed points:

• Cyclic-Melonic Single-trace Fixed Point: Only one representative of the cyclic melonic interactions g^{2,i}_{4,1} is non-vanishing at this fixed point. For a given cyclic melon, e.g., g^{2,1}_{4,1}, the interaction can be expressed as T̄_{aI} T_{bI} T̄_{bJ} T_{aJ}, where the super-index I (respectively J) condenses three of the initial indices and thereby enhances the symmetry of the model to U(N) ⊗ U(N³). (By contrast, for the necklace g^{(1,2)}_{4,1}, the index pair (1,2) as well as the pair (3,4) can be summarized into two superindices, entailing a reduction to a matrix model.) Consequently, the fixed-point dynamics is described by a single-matrix model. The corresponding continuum limit is not associated with 4d quantum gravity but rather expected to yield the well-known pure-gravity scaling exponent in 2d. For r = p = 1 this fixed point features two relevant directions in our simple truncation, θ₁ = 3.47 and θ₂ = 0.31. Due to the systematic error associated with the truncation, the present results are insufficient to establish whether the second relevant direction turns into an irrelevant one.
We provide a rough estimate for a lower bound on the systematic error by exploiting the freedom in the shape function: Considering, for instance, a "spherical" cutoff function, i.e., r = p = 2, this fixed point also displays two relevant directions, with critical exponents θ₁ = 3.71 and θ₂ = 0.22. Although associated with a matrix model, the critical exponents reported are far from the exact result obtained for the pure-gravity scaling exponent in 2d (θ = 0.8). This is similar to the result obtained in the rank-3 real model in [65] and a consequence of the canonical dimension of the cyclic melonic coupling for r/p = 1. Instead setting r = 1/3 for p = 1 implies [ḡ^{2,i}_{4,1}] = -1, in agreement with the canonical dimension of the quartic interaction in matrix models. In this case, the fixed point has two relevant directions with critical exponents θ₁ = 1.05 and θ₂ = 0.11. For the prescription for critical exponents reported in Sec. V, we obtain θ̃₁ = 0.44 and θ̃₂ = 0.11. More sophisticated truncations are necessary to establish whether the second critical exponent is indeed positive.

• Multitrace-Bubble Fixed Point: All interactions but the double-trace one, g²_{4,2}, vanish at this fixed point. The remaining interaction can be expressed as (T̄_I T_I)², where all indices are collected in one single super-index I. Such a term is characterized by an enhanced symmetry U(N⁴) and it describes a vector model. This fixed point displays four relevant directions: θ₁ = 4.69 and θ_{2,3,4} = 0.20. The three-fold degeneracy is a consequence of an exchange symmetry between the three directions that break the enhanced symmetry. The small absolute value of θ_{2,3,4} does not permit us to determine whether there are indeed four relevant directions in total. In fact, the same fixed-point structure is seen in the rank-3 model [65]: there, it features two relevant directions in the quartic truncation, while in the hexic truncation only one relevant direction remains, see [65].

• Single-necklace Fixed Point: Only one necklace interaction is non-vanishing at this fixed point. All other interactions in our truncation vanish. Beyond the truncation, only interactions respecting the corresponding enhanced symmetry are present. Due to the color-permutation symmetry in the theory space, there are three such fixed points, each characterized by a different non-vanishing necklace. If one takes, e.g., g^{(1,2)}_{4,1} to be the non-vanishing necklace at the fixed point, the interaction term can be expressed as T̄_{IJ} T_{KJ} T̄_{KL} T_{IL}, where the two index pairs are collected in the super-indices I and J (respectively K and L). Therefore, the interaction features an enhanced U(N²) ⊗ U(N²) symmetry and the fixed-point dynamics is again that of a matrix model. In our truncation, the associated relevant critical exponent deviates strongly from the exact result θ = 0.8. We attribute the difference to the fact that the canonical dimension for the necklace couplings is -2 and not -1, as it would be in the assignment of the canonical dimensions in matrix models, see [40]. By choosing r = 1/2 for p = 1, the fixed point exhibits one relevant direction with scaling exponent θ = 1.07, which gets closer to the exact result. The second prescription for the universal scaling exponents yields θ̃ = 0.42. As a simple check of the robustness of these results, we explore the choice r = p = 2. We obtain one relevant direction with critical exponent θ = 2.37. Assigning dimension -1 for p = 2 requires r = 1. For this choice, the fixed point has one relevant direction with critical exponent θ = 1.09. The results are qualitatively and even numerically compatible with those discussed for p = 1, giving a first hint towards stability under different choices of scheme.
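Returning to the necklace interaction above, spelling out the superindex fusion makes the matrix-model form manifest (a direct transcription of the contraction pattern):

```latex
% Fuse I = (a_1 a_2), J = (a_3 a_4), K = (b_1 b_2), L = (b_3 b_4) and set
% M_{IJ} := T_{a_1 a_2 a_3 a_4}, an N^2 x N^2 matrix. Then
\bar T_{a_1 a_2 a_3 a_4} T_{b_1 b_2 a_3 a_4} \bar T_{b_1 b_2 b_3 b_4} T_{a_1 a_2 b_3 b_4}
  = \bar M_{IJ} M_{KJ} \bar M_{KL} M_{IL}
  = \mathrm{Tr}\big[(M^\dagger M)^2\big],
% the standard quartic single-trace interaction of a complex matrix model.
```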
The above fixed points are characterized by a single interaction type. Symmetry-enhanced fixed points with more than one non-vanishing interaction are also possible. These include, e.g.,

• One Cyclic-Melonic Multitrace Fixed Point: At this fixed point, just one cyclic melonic interaction of a given preferred color and the double-trace interaction are non-vanishing. If one selects, e.g., the coupling g^{2,1}_{4,1} to be the non-vanishing cyclic melon, the interactions at the fixed point can be written as T̄_{aI} T_{bI} T̄_{bJ} T_{aJ} and (T̄_{aI} T_{aI})², where three indices are condensed in one super-index I, enhancing the symmetry to U(N) ⊗ U(N³). Accordingly, the fixed point is associated with a matrix model. It features one relevant direction and the associated critical exponent is θ = 3.06. In fact, for this case one cannot reproduce both canonical scaling dimensions of matrix models. To obtain agreement for the single-trace quartic coupling, one should again choose r/p = 1/3. This yields a canonical dimension of -1 for the single-trace coupling, but -4/3 for the double-trace coupling, whereas the corresponding dimensions in the matrix model are -1 and -2. Therefore it is not clear whether this fixed point admits an interpretation in terms of a matrix model for random geometries.

Beyond the fixed-point candidates reported here, further zeros of the beta functions characterized by symmetry enhancement are also obtained. As particular examples, one finds zeros where one cyclic melonic interaction together with one necklace and the multitrace interaction are turned on. Due to the different combinatorial structures, the corresponding dynamics can be mapped to that of a rank-3 model, as illustrated in Fig. 6. As the cyclic melons and necklaces feature different canonical dimensions, the corresponding model would presumably be a two-tensor model. A further zero of the system of beta functions features a single necklace and the double-trace interaction. However, in the present truncation, such zeros of the beta functions violate the regulator bound. Therefore we tentatively discard them and do not consider them as candidates for fixed points, i.e., universal scaling regimes. More refined studies are necessary to robustly confirm this characterization.

B. Candidates for four-dimensional emergent space

In this subsection, we discuss a fixed point which does not feature symmetry enhancement of the form discussed previously and therefore cannot be mapped to a lower-rank single-tensor model. Thus it might be a potential candidate for the description of 4d quantum gravity. Of course, establishing a universality class for 4d quantum gravity requires much more than just finding a fixed point without dimensional reduction of the form discussed above. After all, the Hausdorff and spectral dimensions as well as further properties of the emergent geometry have not been studied yet. Nevertheless, the existence of a fixed point that does not admit dimensional reduction to a model of lower rank is most likely a necessary requirement for a universality class for 4d quantum gravity. If corroborated by further studies, our discovery might therefore constitute the very first step on a path towards 4d quantum gravity from tensor models. Here, we focus on isocolored fixed points, i.e., those that display the same values for all couplings associated with different colors. In other words, we restrict the fixed points to a symmetry-enhanced subspace which explicitly realizes a discrete color permutation symmetry in all interactions.
It is still characterized by the U(N)^⊗4 symmetry and does not feature dimensional reduction to a model of lower rank. We conjecture that in the continuum, color-distinguishing structures should not play any role. This is based on the expectation that color is not associated with a physical property of continuum geometries and that therefore only isocolored fixed points should matter. We caution that this might be a naive viewpoint, since the unequal treatment of colors could introduce more sophisticated structures. As stressed before, the identification of universality classes with actual relevance for 4d quantum gravity requires further insights into the emergent geometries. Here, we restrict ourselves to a very first mapping of different fixed-point structures in the theory space.

In the quartic truncation, one completely isocolored fixed point is found. At this fixed point, all couplings are non-vanishing. As a consequence, there is no symmetry enhancement of the U(N)^⊗4 symmetry (apart from the discrete color permutation symmetry) which would allow for an immediate identification of dimensional reduction. The fixed point as well as the corresponding critical exponents for r = p = 1 are displayed in Tab. V: the isocolored fixed point features three relevant directions. A simple test of the scheme-dependence of this result can be performed by changing the regulator to r = p = 2. The isocolored fixed point persists and features positive critical exponents θ₁ = 3.41 and θ_{2,3} = 0.18, very close to the values for r = p = 1. A subset of these relevant directions could be associated with the tuning towards the isocolored symmetry. To isolate such directions, we re-investigate the fixed point in an isocolored truncation, where the couplings associated with the different colors are identified, e.g., g^{2,i}_{4,1} ≡ g_{4,1} for all i, and analogously for the necklaces. The theory space in this truncation is spanned by three couplings. For r = p = 1, the fixed point displays one relevant direction, associated with θ = 3.44. Consequently, the two extra relevant directions that appear in the non-isocolored truncation are associated with an additional tuning of couplings to achieve the color symmetry at the fixed point. As discussed above, it remains to be investigated whether color-symmetry breaking can be given any physical meaning in random geometries. Therefore it is currently open whether one should only compare the leading relevant critical exponent θ₁ = 3.44 to the critical exponents characterizing gravity in the continuum limit, i.e., the critical exponents of the Reuter fixed point, or whether one should also include θ_{2,3} in the comparison of universality classes.

The isocolored fixed point serves as a prototypical example of a fixed-point structure which does not manifest dimensional reduction at the level of the basic building blocks used to generate random geometries. This is only a necessary condition for a universality class associated with 4d quantum gravity, and the physical nature of the continuum limit associated with such a fixed point still needs to be investigated. Going beyond the isocolored theory space, fixed-point structures other than the completely isocolored one, which do not feature symmetry enhancement, can be found. In particular, there are fixed points where all couplings are turned on but, for instance, not all couplings of a given combinatorial structure attain the same value. These fixed points are not color-permutation invariant.
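The fixed-point candidates above were all obtained with the search-and-filter workflow described at the beginning of this section: find zeros of the polynomial beta functions, compute the stability matrix, and filter by the regulator bound and boundedness of the exponents. The sketch below illustrates that workflow; the beta-function coefficients are placeholder values for a hypothetical two-coupling system, not the actual Eqs. (50)-(53):

```python
import numpy as np
from scipy.optimize import fsolve

def eta_p(g):
    # perturbative (polynomial) approximation of the anomalous dimension,
    # cf. Eq. (54); coefficients here are placeholders
    return -0.5 * g[0] - 0.2 * g[1]

def beta(g):
    # toy polynomial beta functions of the schematic form
    # beta_i = (d_i + n_i*eta) g_i + quadratic terms; NOT Eqs. (50)-(53)
    g1, g2 = g
    eta = eta_p(g)
    return np.array([(3 + 2 * eta) * g1 + 4 * g1**2 + g2**2,
                     (4 + 2 * eta) * g2 + 6 * g1 * g2])

# 1) find a zero of the beta functions
g_star = fsolve(beta, x0=np.array([-0.8, 0.0]))

# 2) stability matrix M_ij = d beta_i / d g_j and exponents theta = -eig(M)
eps = 1e-7
M = np.array([(beta(g_star + eps * e) - beta(g_star - eps * e)) / (2 * eps)
              for e in np.eye(2)]).T
theta = -np.linalg.eigvals(M)

# 3) filter criteria: regulator bound eta_p < 1 and bounded exponents
print("fixed point:", g_star)
print("eta_p:", eta_p(g_star), "(must be < 1)")
print("critical exponents:", np.sort(theta.real)[::-1])
```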
A detailed discussion of these new universality classes is beyond the scope of the present review and will be reported in a separate work.

VIII. OUTLOOK: CONVERGING TO QUANTUM GRAVITY FROM DIFFERENT DIRECTIONS

We advocate the point of view that an understanding of (key aspects of) quantum gravity can be achieved by making sense of the path integral for quantum gravity. In tensor models, the path integral is interpreted as a sum over random geometries. This sum can be tackled in a dual formulation, where rank-d tensors form building blocks of d-dimensional space(time). The functional Renormalization Group equation is equivalent to the path integral, as it is simply a way of rewriting an integral into a differential equation that tracks the change of the integral under a change of a parameter in the integrand. This abstract setup translates into the well-known local coarse graining in quantum field theories defined on a background. We highlight that the notion of coarse graining also makes sense in a background-independent setting. In this setting, the number of degrees of freedom provides a background-independent notion of scale. In accordance with the intuition behind the a-theorem, the RG flow goes from many to few degrees of freedom. For tensor models, this corresponds to an RG flow in the tensor size N. RG fixed points play a crucial role, as they provide universality in the large N limit. Physically, this provides a phase transition in the space of couplings that leads to a continuum phase that is independent of unphysical microscopic details.

The path integral for quantum gravity is a point of convergence for a diverse set of viewpoints, e.g., [2-19, 132, 133]. The configuration space that is summed over in these settings typically includes a sum over (discretized) geometries. The inclusion of non-geometric configurations, e.g., [133], or the summation over topologies is one distinguishing feature of the different approaches that could be of physical relevance. Restricting to the sum over geometries, different approaches to the path integral implement this summation in mathematically distinct ways. These have diverse advantages, such as direct access to large-scale properties of emergent geometries from lattice simulations, see, e.g., [134], a straightforward way of discovering universality classes and characterizing them by their scaling exponents in tensor models [64,65], and a direct link to phenomenological questions and the interplay between quantum gravity and matter in the continuum asymptotic safety approach [5,129], to name just a few.

We advocate the point of view that such different approaches need not necessarily be considered as competitors in the race towards the goal of discovering quantum gravity. Rather, these different approaches can be viewed as different windows that allow us to view and explore distinct aspects of quantum gravity. In the best case, these are complementary, and a comprehensive understanding of quantum spacetime can emerge if key results and strengths from these diverse directions are brought together to form one coherent big picture. As in many other settings, a diversity of viewpoints can accelerate the discovery of a solution to a tough challenge: in this case, quantum gravity. Yet, a diversity of viewpoints brings a new challenge, namely the potential lack of a common language. In quantum gravity, different approaches are often formulated in mathematically very dissimilar ways, making it challenging to extract common physics.
Here, we advocate that the functional RG setup could provide one option for a common language shared by different approaches. In particular, it allows one to evaluate scaling exponents linked to a universal continuum limit. These universal exponents can be compared, e.g., between continuum asymptotic safety and tensor models. Once the required precision has been reached in advanced approximations of the full RG flow, such a quantitative comparison will unveil whether these approaches to the gravitational path integral encode the same physics. In the most straightforward setup for tensor models, full agreement with scaling exponents from CDTs or continuum asymptotic safety is probably not to be expected. This is due to the difference in configuration spaces in the respective approaches to the path integral. For instance, the gluing rules encoding causality in CDTs are expected to lead to a restriction on the allowed (multi)tensor interactions. Following [73], the corresponding tensor model, once set up, can be explored by means of the FRG, and a characterization of the universality class is possible.

Understanding the quantum structure of spacetime is a challenging goal. We advocate that the complementarity that different approaches to the path integral for quantum gravity exhibit is a highly promising starting point. Tensor models could be helpful in this quest as they could contribute to bridging the gap between discrete numerical and analytical continuum approaches by allowing for a discrete analytical approach. We propose that background-independent functional RG techniques could potentially act as a catalyst for breakthroughs. Specifically, they allow us to discover and characterize universality classes for the continuum limit in tensor models. This could provide one of the missing links towards a background-independent understanding of quantum gravity. Even if this hope is not realized, tensor models could constitute a stand-alone approach to the path integral for gravity. To that end, going beyond the simplest form of the large N limit, which leads to a branched-polymer phase, appears to be necessary. We highlight the potential use of the FRG in this context, as it is a highly flexible tool allowing one to search for universal scaling regimes in diverse tensor-model theory spaces. In particular, setting up truncations adapted to different assumptions regarding the nature of the universality class (e.g., near-canonical vs. fully non-perturbative) could give access to different continuum limits.
Enhancing Physical Layer Security in AF Relay Assisted Multi-Carrier Wireless Transmission

In this paper, we study the physical layer security (PLS) problem in the dual-hop orthogonal frequency division multiplexing (OFDM) based wireless communication system. First, we consider a single-user single-relay system and study a joint power optimization problem at the source and relay subject to an individual power constraint at each of the two nodes. The aim is to maximize the end-to-end secrecy rate with optimal power allocation over different sub-carriers. Later, we consider a more general multi-user multi-relay scenario. Under a high-SNR approximation for the end-to-end secrecy rate, an optimization problem is formulated to jointly optimize the power allocation at the BS, the relay selection, the sub-carrier assignment to users, and the power loading at each of the relaying nodes. The target is to maximize the overall security of the system subject to independent power budget limits at each transmitting node and the OFDMA-based exclusive sub-carrier allocation constraints. A joint optimization solution is obtained through duality theory. Dual decomposition allows us to exploit convex optimization techniques to find the power loading at the source and relay nodes. Further, an optimization of the power loading at the relaying nodes along with relay selection and sub-carrier assignment for fixed power allocation at the BS is also studied. Lastly, a sub-optimal scheme that explores joint power allocation at all transmitting nodes for fixed sub-carrier allocation and relay assignment is investigated. Finally, simulation results are presented to validate the performance of the proposed schemes.

I. INTRODUCTION

Dual-hop communication has recently gained significant attention in the field of wireless communication due to its better performance over single-hop communication [1]. In dual-hop communication a relay is used as an intermediate node between sender and receiver. It is generally used to enhance throughput, reduce power consumption, and increase the coverage area at the cell edges. There are two types of relaying protocols that are widely used: Amplify-and-Forward (AF) and Decode-and-Forward (DF). The AF relaying protocol first receives the signal from the source and then forwards it to the destination with amplification, while the DF relaying protocol first receives the signal from the source, decodes it, re-encodes it and then forwards the resultant signal to the destination [2]. The broadcast nature of wireless communication provides many exciting opportunities; however, it makes the security of the link a challenging issue. Wireless communications can potentially be attacked by malicious nodes, and therefore security issues have taken on an important role in today's communications [3]. A promising technique for achieving secure communications is Physical Layer Security (PLS) [4]. A wireless link is considered to be secure if it provides a positive non-zero secrecy rate, and a link with a higher secrecy rate is considered a more secure link [5]. To provide PLS in dual-hop single-carrier networks, resource allocation has been widely studied under the DF relaying protocol [6]-[12]. The authors in [6] and [7] studied the problem of optimal relay placement to enhance PLS. The work [8] considered joint relay selection and power optimization to maximize the system's secrecy rate. Further, [9] proposed a joint relay and jammer selection with power optimization. Recently, [12] discussed relay selection in the presence of an adaptive eavesdropper.
Dual-hop transmission under amplify-and-forward (AF) protocols has become very attractive due to its simple implementation [10]. However, resource allocation in AF relay-enhanced networks has always been a challenging task [11]. Different aspects of PLS in AF-based single-carrier systems have been studied in [13]-[17]. The authors in [13] investigated the impact of using an untrusted AF relay on secure communication and derived the exact Secrecy Outage Probability (SOP) under different transmission scenarios. With multiple trusted relays, [14] proposed different relay selection strategies to enhance the PLS in multi-user cooperative relay networks. The work in [15] focused on the achievability of secrecy rates under different channel conditions. A sub-optimal relay selection with fairness was studied in [16]. Relay transmit power optimization protocols for secrecy maximization under both AF and DF have been studied in [17].

Multi-carrier transmission has become a fundamental choice for next-generation wireless communication networks because of its ability to combat multi-path fading effects, its high spectral efficiency, and the flexibility it provides in resource allocation [18]. To provide PLS in multi-carrier systems, resource optimization is one of the popular techniques and has been studied in [19]-[25]. In [19] a dynamic sub-carrier allocation for secure transmission was studied in the presence of a passive eavesdropper. The proposed scheme utilizes the Channel State Information (CSI) between legitimate users, drops highly faded sub-carriers, and modifies the modulation scheme for the remaining good sub-carriers to achieve a better secrecy rate. Further, [20] provided an optimal sub-carrier allocation for outage probability minimization with a secrecy constraint. In [21], the authors proposed optimal power allocation with sub-carrier allocation to maximize the sum rate under outage probability and fairness constraints. In [22], two categories of users were considered: secure users and non-secure users. The task was to maximize the throughput of the non-secure users via optimal power allocation subject to a guaranteed average secrecy rate for the secure users. The work in [23] extended this to maximizing the secrecy rate in the presence of an active eavesdropper which has the capability to jam the secret user transmission. In the presence of both active and passive eavesdroppers, the power allocation over sub-carriers to maximize the average secrecy rate has been investigated in [24]. Further, the authors in [25] considered multiple eavesdroppers and optimized sub-carrier assignment, power allocation, and secrecy data rate to maximize the energy efficiency.

A. Related Work and Contributions

Under the umbrella of OFDMA, resource allocation for PLS in dual-hop systems with the DF protocol has been studied in [26]-[28]. The power allocation problem to maximize the secrecy rate was investigated in [26]. The extension of this work with joint sub-carrier allocation and power loading was made in [27]. Recently, [28] proposed two different power optimization schemes at the source and relay under individual power constraints: one achieves sum secrecy rate maximization in the presence of untrusted users, while the other achieves fairness for a minimum per-user secrecy requirement. Dual-hop communication under DF relaying is possible with trusted relaying nodes only. However, if the relay node is untrusted, the AF protocol becomes the better choice as it does not require any decoding at the relay.
However, the resource allocation schemes designed to enhance PLS under DF transmission cannot be directly applied to the AF scenario. Recently, different works on PLS in dual-hop systems under the AF relaying protocol have been reported [29]-[32]. Resource optimization in Orthogonal Frequency Division Multiplexing (OFDM) based single-user single-relay systems was considered in [29]. The authors studied sub-carrier assignment and power allocation strategies under a total system power constraint. The optimization under a sum power constraint provides a good analysis of power allocation; however, it may not be an attractive solution for practical systems. Further, in [30] the authors investigated the power allocation at the source node and the sub-carrier allocation among users in a single-relay multi-user system. Recently, [31] extended the work to a multi-relay scenario and considered the relay assignment and power allocation problem. However, both the works in [30] and [31] considered power allocation at the source node only, while the power optimization at the relaying node(s) was missing. Power optimization only at the source node simplifies the solution at the cost of a degradation in performance. More recently, [32] investigated the power allocation at the source and the relay nodes under a single-user single-relay scenario. The authors proposed a sub-optimal solution through an alternate optimization approach. A joint optimization of the power allocation at the source and the relay nodes, along with sub-carrier assignment and relay selection, can provide much greater benefits. This joint optimization is a challenging task and, to the best of the authors' knowledge, has not been investigated yet.

In this work, our aim is to maximize the sum secrecy rate under the AF relaying protocol in single-cell downlink transmission. We first consider the joint power allocation at the source and the relay nodes subject to a separate power constraint at each node. The end-to-end secrecy rate under the AF protocol depends on both hops, i.e., the power allocations at the two nodes are coupled with each other. Thus, instead of separate power optimization [32] at the source and relay, we propose a joint optimization solution. Then, we consider a joint sub-carrier allocation, relay selection, and power allocation problem in a multi-user multi-relay system. Various solution schemes are proposed to efficiently solve the problem. Our contributions are summarized as:

• We solve a joint power allocation problem in an OFDM-based dual-hop network to optimize the power distribution among different sub-carriers at the source and the relay node. An efficient solution is obtained through the Karush-Kuhn-Tucker (KKT) optimality conditions.

• Later, a novel joint optimization problem is formulated which considers the power allocation at the source node, the optimal relay assignment to users, the sub-carrier allocation to each assigned relaying node, the power allocation at each relay node, and the sub-carrier allocation to each user, subject to a separate power constraint at the source and at each of the relaying nodes.

• A joint solution of the mixed integer programming problem is obtained through efficient dual decomposition techniques to maximize the overall system's secrecy rate.

• To look into the effect of power optimization at the relaying nodes only, we redefine the joint problem for uniform power allocation at the source node, and similar techniques are used to solve this problem.
• Finally, a low-complexity sub-optimal algorithm is proposed which optimizes the power at the source and the multiple relaying nodes for a predefined sub-carrier allocation and relay assignment.

• Extensive simulation results are presented to evaluate the performance of the proposed schemes.

The remainder of this paper is organized as follows. The joint power allocation at the source and the relay node in the single-user single-relay case is presented in Section-II. The proposed framework for the multi-user multi-relay system is elaborated in Section-III. Section-IV includes the proposed solution for power allocation at the relaying nodes along with sub-carrier allocation and relay assignment, while the problem of power allocation at the source and the multiple relaying nodes without optimizing the other parameters is considered in Section-V. Finally, the simulation results and the conclusion are presented in Section-VI and Section-VII, respectively.

A. System Model and Problem Formulation

In this section, we consider a dual-hop multi-carrier system which consists of a source node (S), an AF relay node (AR), a destination node (D), and an eavesdropper (Eve), as shown in Fig. 1. We assume that all devices are equipped with a single antenna and that D and Eve are located such that the direct paths from S to D and from S to Eve are missing due to the large distance [27], [29], [30]. The channel gains of the i-th sub-carrier over the S-to-AR, AR-to-D, and AR-to-Eve links are denoted by $h_i$, $g_i$, and $f_i$, respectively. In the first transmission slot, the AR receives a message signal over the i-th sub-carrier and re-transmits it with amplification factor $Q_i$, given by

$Q_i = \sqrt{\dfrac{q_i}{p_i |h_i|^2 + \sigma^2}}, \qquad (1)$

where $p_i$ and $q_i$ are the power loadings over the i-th carrier at S and AR, respectively, and $\sigma^2$ denotes the variance of the Additive White Gaussian Noise (AWGN). The received signal-to-noise ratio (SNR) at D over the i-th sub-carrier can be expressed as

$\gamma_i^D = \dfrac{p_i q_i |h_i|^2 |g_i|^2}{\sigma^2 \left( p_i |h_i|^2 + q_i |g_i|^2 + \sigma^2 \right)}. \qquad (2)$

Similarly, the received SNR at Eve is given as

$\gamma_i^E = \dfrac{p_i q_i |h_i|^2 |f_i|^2}{\sigma^2 \left( p_i |h_i|^2 + q_i |f_i|^2 + \sigma^2 \right)}. \qquad (3)$

The secrecy rate over the i-th sub-carrier can be expressed as

$SR_i = \dfrac{1}{2} \left[ \log_2 (1 + \gamma_i^D) - \log_2 (1 + \gamma_i^E) \right]^+. \qquad (4)$

Let N be the total number of sub-carriers; the sum secrecy rate under the high-SNR approximation can be written as [33]:

$SR_{sum} \approx \dfrac{1}{2} \sum_{i=1}^{N} \left[ \log_2 \dfrac{b_i (a_i p_i + c_i q_i)}{c_i (a_i p_i + b_i q_i)} \right]^+, \qquad (5)$

where $a_i = |h_i|^2/\sigma^2$, $b_i = |g_i|^2/\sigma^2$, $c_i = |f_i|^2/\sigma^2$, and the term $\tfrac{1}{2}$ appears due to the half-duplex relay transmission. Our target is to maximize the sum secrecy rate of the system by optimizing the power over the sub-carriers at S as well as at the AR under individual power constraints. Thus, the optimization problem becomes

$\max_{p_i \ge 0,\, q_i \ge 0} \ SR_{sum} \quad \text{s.t.} \quad \sum_{i=1}^{N} p_i \le P_t, \quad \sum_{i=1}^{N} q_i \le Q_t. \qquad (6)$

The first constraint ensures that the total power allocated to all the sub-carriers at S must be within the total available power $P_t$. Similarly, the second constraint ensures that the allocated power over all sub-carriers at the AR node should not exceed the maximum limit $Q_t$.

B. Proposed Optimization Scheme

The problem (6) is a convex optimization problem, and we use duality theory to obtain the solution. The optimal power loading can be obtained from the following dual problem

$\min_{\lambda \ge 0,\, V \ge 0} \ \max_{p_i \ge 0,\, q_i \ge 0} \ SR_{sum} + \lambda \Big( P_t - \sum_{i=1}^{N} p_i \Big) + V \Big( Q_t - \sum_{i=1}^{N} q_i \Big), \qquad (7)$

where $\lambda$ and $V$ are the associated dual variables. Removing the constant terms, the problem can be re-written as (8). Applying the KKT conditions to the internal maximization, we obtain the optimal power loadings $p_i^*$ and $q_i^*$ in (9) and (10), where the values of $A_i$, $B_i$, and $C_i$ are given in Table 1. To find the dual variables, we use the following iterative sub-gradient updates [34]-[36]:

$\lambda^{(m+1)} = \Big[ \lambda^{(m)} - \delta \Big( P_t - \sum_{i=1}^{N} p_i^* \Big) \Big]^+, \qquad (11)$

$V^{(m+1)} = \Big[ V^{(m)} - \delta \Big( Q_t - \sum_{i=1}^{N} q_i^* \Big) \Big]^+, \qquad (12)$

where m represents the m-th iteration and $\delta$ is the step size. In each update of the dual variables, the optimum power allocation at the BS and the relay is obtained from (9) and (10). At convergence, the optimum values of the dual variables as well as of the power variables are obtained.
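To make the dual update loop of (7)-(12) concrete, the following minimal Python sketch runs the sub-gradient iteration. Since the closed forms (9)-(10) depend on the Table 1 constants, which are not reproduced here, the inner maximization is performed numerically; the high-SNR objective of (5) and all channel and power parameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sub-gradient dual loop for the single-user single-relay
# problem of Section II-B. The closed-form p_i*, q_i* are replaced by a
# numerical inner maximization of the Lagrangian (nonsmooth because of
# the [x]^+ clipping, but adequate for a sketch).

rng = np.random.default_rng(0)
N, Pt, Qt, sigma2 = 8, 10.0, 10.0, 1.0
a = np.abs(rng.normal(size=N) + 1j * rng.normal(size=N))**2 / sigma2        # |h_i|^2 / sigma^2
b = np.abs(rng.normal(size=N) + 1j * rng.normal(size=N))**2 / sigma2        # |g_i|^2 / sigma^2
c = 0.3 * np.abs(rng.normal(size=N) + 1j * rng.normal(size=N))**2 / sigma2  # eavesdropper link

def secrecy(p, q):
    """Per-sub-carrier high-SNR secrecy rate, clipped at zero."""
    gd = a * p * b * q / (a * p + b * q + 1e-12)   # approx SNR at D
    ge = a * p * c * q / (a * p + c * q + 1e-12)   # approx SNR at Eve
    return 0.5 * np.maximum(np.log2(1 + gd) - np.log2(1 + ge), 0.0)

lam, V, delta = 1.0, 1.0, 0.05
for m in range(200):
    def neg_lagrangian(x):
        p, q = x[:N], x[N:]
        return -(secrecy(p, q).sum() - lam * p.sum() - V * q.sum())
    res = minimize(neg_lagrangian, np.full(2 * N, 0.5),
                   bounds=[(0, None)] * 2 * N)
    p, q = res.x[:N], res.x[N:]
    # Projected sub-gradient updates on the dual variables, as in (11)-(12).
    lam = max(lam - delta * (Pt - p.sum()), 0.0)
    V = max(V - delta * (Qt - q.sum()), 0.0)

print("sum secrecy rate:", secrecy(p, q).sum())
```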
III. JOINT SUB-CARRIER ALLOCATION, RELAY SELECTION AND POWER ALLOCATION

In this section, we consider multi-user, multi-relay, and multi-carrier dual-hop communication with a single base station (BS), K secret users, J AF relays (ARs), N sub-carriers, and a single Eve, as shown in Fig. 2. The channel gain from the BS to the j-th AR node on the i-th sub-carrier is denoted by $h_{i,j}$. The channel gain from the j-th AR node to the k-th user on the i-th sub-carrier is denoted by $g_{i,j,k}$, the corresponding channel gain from the j-th AR node to Eve is denoted by $f_{i,j}$, and $u_{i,j}$ is the power allocated over the i-th sub-carrier at the j-th relay. With this, the secrecy rate over the i-th sub-carrier at the k-th user communicated through the j-th relay can be expressed as in (13), where $a_{i,j} = |h_{i,j}|^2/\sigma^2$, $b_{i,j,k} = |g_{i,j,k}|^2/\sigma^2$, and $c_{i,j} = |f_{i,j}|^2/\sigma^2$. We adopt a fully flexible AR allocation strategy where a relaying node can be allocated to more than one user, and each user can be served by multiple AR nodes over different sub-carriers. Furthermore, a sub-carrier is allocated to the same user over the two hops of transmission. (The information received over the i-th sub-carrier at the first hop could be forwarded over a different carrier in the second hop; however, this is beyond the scope of this work.) On account of sub-carrier allocation and AR selection, we define two binary variables: $\alpha_{i,k} \in \{0, 1\}$ such that $\alpha_{i,k} = 1$ when the i-th sub-carrier is allocated to the k-th user and zero otherwise, and $\beta_{j,k} \in \{0, 1\}$ such that $\beta_{j,k} = 1$ when the j-th AR is allocated to the k-th user and zero otherwise. With this, the sum secrecy rate of the system can be expressed as

$SR_{sum} = \sum_{k=1}^{K} \sum_{j=1}^{J} \sum_{i=1}^{N} \alpha_{i,k}\, \beta_{j,k}\, SR_{i,j,k}. \qquad (14)$

A. Problem Formulation

The aim is to maximize $SR_{sum}$ by jointly optimizing the AR assignment, the sub-carrier allocation, the BS's transmit power loading, and the power allocation at the relaying nodes over different sub-carriers. Let $P_t$ and $Q_{t,j}$ be the total powers available at the BS and the j-th AR, respectively. Then, the joint sub-carrier allocation, AR assignment, and power loading optimization can be formulated as

$\max_{\alpha,\, \beta,\, p,\, u} \ SR_{sum} \quad \text{s.t.} \quad \sum_{k=1}^{K} \alpha_{i,k} \le 1 \ \forall i, \quad \sum_{i=1}^{N} p_i \le P_t, \quad \sum_{i=1}^{N} \sum_{k=1}^{K} \alpha_{i,k}\, \beta_{j,k}\, u_{i,j} \le Q_{t,j} \ \forall j. \qquad (15)$

The first constraint ensures that a particular sub-carrier cannot be assigned to more than one user. The second constraint represents that the sum transmit power on all sub-carriers at the BS should be less than or equal to a maximum power limit $P_t$, and the last constraint guarantees that the total transmit power over different sub-carriers at the j-th AR should be less than or equal to a maximum power budget $Q_{t,j}$.

B. Proposed Solution

The problem (15) is a mixed binary integer programming problem, and an exhaustive search over all variables would be needed to find an optimal solution. Thanks to [35], the difference between the solution of the dual problem and the solution of the primal problem becomes zero when we have a sufficiently large number of sub-carriers in OFDM-based transmission, regardless of convexity. The dual problem is

$\min_{\lambda \ge 0,\, V_j \ge 0} \ D(\lambda, V_j), \qquad (16)$

where $\lambda$ and $V_j$ are the dual variables, and the dual function $D(\lambda, V_j)$ can be expressed as in (17). To solve the dual problem, we first solve the dual function $D(\lambda, V_j)$, and similar to [36] we adopt a dual decomposition approach. The problem in (17) can be rewritten as in (18)-(19). Now, for any given sub-carrier allocation and relay assignment, the optimal power allocation at the BS and the j-th AR can be obtained from the per-sub-carrier maximization over $p_i \ge 0$ and $u_{i,j} \ge 0$ in (20). The problem (20) is convex, and a closed-form solution can be obtained by exploiting standard techniques similar to Section-II-B. Applying the KKT conditions, we obtain $p_i^*$ and $u_{i,j}^*$ in (21), where the values of $X_i$, $Y_i$, and $Z_i$ are given in Table II.
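As an aside, the per-sub-carrier secrecy value $SR_{i,j,k}$ that these optimized powers feed into can be sketched in a few lines. The half-duplex AF form below mirrors Section II and is an assumption carried over to the multi-user link gains; all numeric values are illustrative.

```python
import numpy as np

# Sketch of SR_{i,j,k} for one sub-carrier of the multi-user multi-relay
# model, reusing the AF end-to-end SNR structure of Section II with the
# link gains a_{i,j}, b_{i,j,k}, c_{i,j}. Clipped at zero, since secrecy
# rates are non-negative.

def secrecy_rate(a_ij, b_ijk, c_ij, p_i, u_ij):
    """Half-duplex AF secrecy rate on one sub-carrier."""
    snr_d = a_ij * p_i * b_ijk * u_ij / (a_ij * p_i + b_ijk * u_ij + 1.0)
    snr_e = a_ij * p_i * c_ij * u_ij / (a_ij * p_i + c_ij * u_ij + 1.0)
    return 0.5 * max(np.log2(1.0 + snr_d) - np.log2(1.0 + snr_e), 0.0)

print(secrecy_rate(a_ij=5.0, b_ijk=4.0, c_ij=0.5, p_i=1.0, u_ij=1.0))
```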
Putting $p_i^*$ and $u_{i,j}^*$ into (19), the dual function can be rewritten in terms of $SR^*_{i,j,k}$, the secrecy value of the i-th sub-carrier for the (j, k) relay-user pair evaluated at the optimized powers. Now, we need to find the optimal sub-carrier allocation and relay assignment. For immediate recovery of the binary variables $\alpha_{i,k}$ and $\beta_{j,k}$, we define a new variable $\pi_{i,j,k} \in \{0, 1\}$ such that $\pi_{i,j,k} = 1$ if $\alpha_{i,k} \beta_{j,k} = 1$ and zero otherwise. The above problem can then be rewritten with a constraint ensuring that each sub-carrier is assigned to one relay and one user. The optimum solution of the above problem is to assign a sub-carrier-AR pair (i, j) to the user k which maximizes $SR^*_{i,j,k}$, i.e.,

$\pi^*_{i,j,k} = 1 \ \text{for} \ (j, k) = \arg\max_{j,k}\, SR^*_{i,j,k}, \quad \pi^*_{i,j,k} = 0 \ \text{otherwise}. \qquad (25)$

Now the optimum sub-carrier allocation and relay assignment are obtained. Let $\alpha^*_{i,k}$ and $\beta^*_{j,k}$ denote the optimal assignment variables. Thus, substituting $p_i^*$, $u^*_{i,j}$, $\alpha^*_{i,k}$, and $\beta^*_{j,k}$ into (18), we obtain the dual function. Next, similar to (11) and (12), we solve the dual problem (16) with the sub-gradient method [34]-[36]. The sub-gradient updates at the (m+1)-th iteration are

$\lambda^{(m+1)} = \Big[ \lambda^{(m)} - \delta \Big( P_t - \sum_{i=1}^{N} p_i^* \Big) \Big]^+, \qquad V_j^{(m+1)} = \Big[ V_j^{(m)} - \delta \Big( Q_{t,j} - \sum_{i=1}^{N} \sum_{k=1}^{K} \pi^*_{i,j,k}\, u^*_{i,j} \Big) \Big]^+ \ \forall j.$

In each sub-gradient update, the values of the power variables as well as the relay selection and sub-carrier assignment are obtained from (20), (21), and (25). The program is terminated at convergence, and the proposed joint optimization algorithm is complete.

IV. OPTIMIZATION AT THE RELAY NODES FOR FIXED POWER ALLOCATION AT BS

The previous works in [30] and [31] considered the power optimization at the BS for a uniform distribution at the relay. The dynamic relay selection and sub-carrier allocation strategy adopted in this work may assign a relay to multiple users, and each relay may have a different number of sub-carriers. Thus, the power optimization at each relaying node with an independent power constraint becomes more important. In this section, we consider the joint optimization over the power allocation at the relaying nodes, the relay selection, and the sub-carrier assignment for uniform power allocation at the BS, i.e., $p_i = P_t/N,\ \forall i$. The corresponding optimization problem is a binary integer programming problem. Similar to Section-III-B, we adopt a dual decomposition approach. For any given relay assignment and sub-carrier allocation, the power optimization at the different relays can be obtained by solving J sub-problems, $\forall j \in \{1, 2, \ldots, J\}$, where $\zeta_j$ is the Lagrange multiplier corresponding to the j-th relay power constraint. The resultant value of $u^*_{i,j}$ is given in closed form, involving the terms $a_{i,j} p_i + b^2_{i,j,k} c_{i,j} + a_{i,j} b^2_{i,j,k} c_{i,j} p_i$. Now, similar to the previous section, we substitute the value of the power variable into the corresponding dual function, and the optimal relay assignment and sub-carrier allocation $(\alpha^*_{i,k}, \beta^*_{j,k})$ can be obtained in a similar fashion. Finally, the dual problem is solved with the sub-gradient method. The detailed steps of the solution are omitted for brevity and are similar to the solution proposed in Section III-B.
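The assignment rule (25), which both this section and Section III-B rely on, reduces to a per-sub-carrier argmax once the optimized secrecy values are available. A minimal sketch, with $SR^*_{i,j,k}$ replaced by random stand-in values rather than the output of the power optimization:

```python
import numpy as np

# Sketch of the assignment step: for each sub-carrier i, the (j, k) pair
# maximizing SR*_{i,j,k} wins, which recovers alpha and beta directly.

rng = np.random.default_rng(1)
N, J, K = 16, 4, 12
SR_star = np.maximum(rng.normal(size=(N, J, K)), 0.0)  # stand-in SR*_{i,j,k}

alpha = np.zeros((N, K), dtype=int)    # alpha_{i,k}: sub-carrier -> user
beta = np.zeros((J, K), dtype=int)     # beta_{j,k}: relay -> user
for i in range(N):
    j, k = np.unravel_index(np.argmax(SR_star[i]), (J, K))
    alpha[i, k] = 1                    # exclusive sub-carrier allocation
    beta[j, k] = 1                     # flexible relay allocation

assert (alpha.sum(axis=1) == 1).all()  # each sub-carrier serves one user
```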
V. POWER OPTIMIZATION FOR GIVEN SUBCARRIER ALLOCATION AND RELAY ASSIGNMENT

The solutions proposed in Section III and Section IV first find the power allocation for all possible relay assignments and sub-carrier allocations and then, based on the obtained optimal powers, select the best relay selection and sub-carrier assignment. This requires solving NJK sub-problems in each iteration of the sub-gradient update. In this section, we present a sub-optimal scheme where the joint power allocation at the source and relays is obtained for predefined $\alpha_{i,k}$ and $\beta_{j,k}$. The steps involved in the algorithm are listed as follows:

1. Randomly allocate all the sub-carriers such that the i-th sub-carrier is exclusively allocated to a unique user-relay pair (j, k). Thus, both $\alpha_{i,k}$ and $\beta_{j,k}$ are obtained.

2. With the obtained sub-carrier and AR allocation, the optimization is similar to the single-user single-AR power allocation problem, however with J+1 independent power constraints instead of two, i.e., one at the BS and one at each of the J relays. This problem can be solved using a dual technique similar to that in Section III-B. However, note that now we need to find only N power variables instead of NKJ variables for the power allocation at each step of the dual update.

VI. SIMULATION RESULTS

In this section, we present simulation results to show the performance of the proposed schemes. We choose six-tap channels drawn from i.i.d. Gaussian random variables for all links and assume the same noise variance at all nodes. For the analysis of the results, we compare the performance of the following schemes:

OPT: This scheme includes the optimization of the power loading over all sub-carriers at both the BS and the AR node for the single-relay case, as presented in Section-II-B.

Sub-OPT: In this algorithm, we consider the power optimization at the relay node only, while uniform power distribution is considered at the BS. Thus, it is a problem similar to that presented in Section IV, however for the single-user single-relay case. The step-wise detail of the scheme is omitted for brevity.

Non-OPT: This refers to the case with fixed sub-carrier allocation, predefined relay selection, and equal power distribution among the sub-carriers at each transmitting node. Hence, for the single-user single-relay case, this corresponds to uniform power allocation among all sub-carriers at the two nodes.

Figure 3 presents the results for the single-relay case, where the y-axis represents the sum secrecy rate and the x-axis represents the total power budget. The same power budget is considered at the BS and the AR, while we have set N=64 and N=32 for the upper and lower subplots, respectively. It can be clearly noted that the OPT scheme outperforms the remaining two schemes and that Sub-OPT performs better than Non-OPT, as presented in Fig. 3. The performance gap between OPT and the other candidates increases with the number of sub-carriers and the power budget. The better performance with an increasing number of sub-carriers is due to the higher degree of freedom in power allocation. The increase in the power budget not only increases the sum secrecy rate for both OPT and Sub-OPT but also widens the gap between them. This is because the OPT scheme efficiently distributes the available power budget among the different sub-carriers at the two transmitting nodes, while Sub-OPT allocates power uniformly among the sub-carriers at the BS. Non-OPT does not provide secure communication, as this scheme has zero sum secrecy rate, i.e., no feasible solution exists with uniform power allocation. Hence, resource optimization is mandatory for providing secure communication at the physical layer.

Next, in Fig. 4 we show the convergence behavior of the dual variables for the two optimization schemes. Please note that OPT involves two dual variables while Sub-OPT has a single dual variable. It can be observed that both schemes converge within an acceptable number of iterations. Further, it is noted that OPT provides higher performance at the cost of a few more iterations for convergence. On the other hand, Sub-OPT provides much better performance than Non-OPT without requiring a high burden of time consumption in terms of the number of iterations.
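For reference, the six-tap i.i.d. Gaussian channel setup described above can be generated as follows; the unit-power normalization and the FFT size are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the simulation channel setup: six-tap i.i.d. complex
# Gaussian impulse responses for every link, converted to per-sub-carrier
# gains with an N-point FFT.

rng = np.random.default_rng(2)
N, taps = 64, 6

def subcarrier_gains():
    h = (rng.normal(size=taps) + 1j * rng.normal(size=taps)) / np.sqrt(2 * taps)
    return np.fft.fft(h, n=N)          # frequency response on N carriers

h_sr = subcarrier_gains()              # S  -> AR link (h_i)
g_rd = subcarrier_gains()              # AR -> D  link (g_i)
f_re = subcarrier_gains()              # AR -> Eve link (f_i)
print(np.abs(h_sr[:4])**2)             # channel power gains per carrier
```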
In Fig. 5, we consider a multi-user multi-relay scenario with J=4 and K=12. Similar to Fig. 3, the same available power budget is assumed at all nodes, and results are obtained for N=32 as well as N=64. It can be seen from Fig. 5 that there is a clear gap between J-OPT and the other schemes, while Sub-OPT-I outperforms Sub-OPT-II. Increasing the number of sub-carriers increases the flexibility of power allocation, which increases the secrecy rate of all schemes. It is also interesting to note that the enhancement in the performance of the J-OPT scheme with increasing N is higher than that of the other schemes.

For Fig. 6 and Fig. 7 we have taken a single realization of Gaussian random channels to show the possible effects of adding a new relay or user to the system. Figure 6 shows the impact of varying the number of relays from 1 to 4, with $P_t = Q_t = 7$, K=12, and N=64. We can see trends similar to those in Fig. 5. Increasing the number of relays provides enhanced performance for all schemes. However, the percentage increase of the sum secrecy rate of J-OPT and Sub-OPT-I is much larger than that of the other two schemes. This is because both of these schemes involve optimizing the relay selection and sub-carrier assignment, while the other two use fixed assignments. Last but not least, the rate of increase in the secrecy rate is higher from J=1 to J=3 and becomes somewhat lower from J=3 to J=4. This is because the initial addition of relays provides a higher degree of freedom in resource allocation (see Table III). Finally, to complete the analysis, we check the sum secrecy rate for different numbers of users with a fixed number of relays. The results are plotted in Fig. 7 with N=64, $P_t = Q_t = 7$, and J=4. Again, the superiority of J-OPT over all competing candidates is clear. In J-OPT, increasing the number of users enhances the performance if the channel gains of the new users are better than those of the existing users. This is due to the fact that a higher number of users provides better channel conditions and higher flexibility in power allocation. J-OPT considers all parameters jointly; hence, we observe a significant increase in the performance gap with an increasing number of users.

VII. CONCLUSION

This paper considered a resource allocation problem to enhance PLS in AF relay assisted wireless networks. A joint optimization problem of the power allocation at the different transmitting nodes, the relay assignment, and the sub-carrier allocation was studied. For practical reasons, separate power constraints were considered at the BS and at each relaying node. A dual decomposition framework was adopted to find an efficient solution for the sub-carrier allocation, relay assignment, and power loading over all sub-carriers. The target was to maximize the sum secrecy rate of the system. Further, sub-optimal schemes were also presented. Simulation results validated the performance of all proposed schemes, and the joint optimization and sub-optimal schemes outperformed the trivial solutions.
2018-01-11T11:58:07.000Z
2018-01-11T00:00:00.000
{ "year": 2018, "sha1": "c59279b438603d8cb18e2c693494c28332ce21ad", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1801.03728", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c59279b438603d8cb18e2c693494c28332ce21ad", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
119485321
pes2o/s2orc
v3-fos-license
The motion of color-charged particles as a means of testing the non-Abelian dark matter model

A possibility is discussed for experimentally testing the dark matter model supported by a classical non-Abelian SU(3) gauge (Yang-Mills) field. Our approach is based on the analysis of the motion of color-charged particles on the background of color electric and magnetic fields using the Wong equations. Estimating the magnitudes of the color fields near the edge of a galaxy, we employ them in obtaining the general analytic solutions to the Wong equations. Using the latter, we calculate the magnitude of the extra acceleration of color-charged particles related to the possible presence of the color fields in the neighbourhood of the Earth.

I. INTRODUCTION

At the present time, it is believed that, beyond ordinary visible matter, the Universe contains some other, invisible gravitationally attractive substance. There are several lines of observational evidence in favor of its existence [1]. In particular, this applies to the intriguing behavior of the galactic rotation curves. According to the Newtonian theory of gravitation, the circular velocity u of an object on a stable Keplerian orbit with radius r is $u(r) \propto \sqrt{M(r)/r}$, where M(r) is the mass enclosed within the sphere of radius r. Then, in performing observations in the region beyond the limits of the visible boundary of a galaxy, where M = const., one could expect that the velocity $u(r) \propto 1/\sqrt{r}$. However, current astronomical observations indicate that in the outer regions of most galaxies u becomes approximately constant. This implies that around the galaxies there exist halos within which the mass density behaves like $\rho(r) \propto 1/r^2$ and the mass $M(r) \propto r$. In addition, the measurements of peculiar velocities of galaxies in clusters and the effects associated with gravitational lensing certainly indicate that these observational consequences also cannot be explained only by the presence of visible matter.

Theoretical modelling of the aforementioned observational effects is usually carried out within the framework of two main directions. Firstly, it is assumed that the dominating form of matter in galaxies and their clusters is some invisible substance called dark matter (DM) (for a general review on the subject, see, e.g., Ref. [2]). It is commonly believed that in the present Universe DM makes up about 25% of the total mass of all forms of matter. The true nature of DM is still unknown. It is assumed that it may consist of as yet undiscovered particles. These cannot be baryons, since in that case the cosmic microwave background and the large-scale structure of the Universe would look radically different. Therefore, various exotic particles which either weakly or not at all interact with ordinary matter and electromagnetic radiation are being suggested as candidates for dark matter particles (axions, sterile neutrinos, gravitinos, weakly interacting massive particles, etc.). Moreover, it is assumed that such particles can be confined on the scales of galaxies and their clusters. The second direction of modelling of dark matter is based on the assumption that on galactic scales both the Newtonian and Einsteinian theories of gravity require a modification [3,4]. This allows the possibility of explaining the aforementioned observational effects without invoking the hypothesis of the presence of DM in galaxies.
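The rotation-curve argument above can be made concrete with a short numerical illustration (arbitrary units; a sketch, not a fit to any data):

```python
import numpy as np

# Illustration of the rotation-curve argument: for M(r) = const the
# Newtonian circular velocity u = sqrt(gamma*M/r) falls as 1/sqrt(r),
# whereas a halo with rho ~ 1/r^2 gives M(r) ~ r and a flat u(r).

r = np.linspace(1.0, 10.0, 5)     # radius, arbitrary units
u_kepler = 1.0 / np.sqrt(r)       # visible mass only, M = const
u_halo = np.ones_like(r)          # M(r) ~ r  =>  u ~ const
print(np.round(u_kepler, 3), u_halo)
```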
In summary, it is seen that the explanation of the observational data requires either introducing new, not yet experimentally discovered forms of matter or modifying the theories of gravity themselves. In the present paper, we work within the first approach, assuming that galaxies can contain a special type of dark-matter particles modeled by color fields within the framework of the classical non-Abelian gauge Yang-Mills theory [5][6][7]. As discussed in those references, it is possible to construct a gauge field distribution which adequately describes the universal rotation curve of spiral galaxies. The color DM is described by a special ansatz which enables us to obtain static solutions of the SU(3) Yang-Mills equations. The invisibility of such DM is provided by the fact that color particles interact with ordinary matter and Maxwell's electromagnetic radiation only gravitationally. Working within the framework of this model, the aim of the paper is to study the influence which the presence of such color DM has on the motion of test color-charged particles (monopoles or quarks). For this purpose, we employ Wong's equations, which are solved on the background of color electric and magnetic fields obtained for the external regions of our galaxy (in the neighbourhood of the Earth).

The paper is organized as follows. In Sec. II, a general description of the non-Abelian dark matter model used in the paper is presented. In Sec. III, we consider an approach for testing such a DM model, within which the magnitudes of the color field strengths are estimated (Sec. III A) and Wong's equations are solved analytically (Sec. III B). Finally, in Sec. IV, we summarize the obtained results and give some comments concerning the DM model under consideration.

II. NON-ABELIAN DARK MATTER MODEL

In this section we will closely follow Refs. [5][6][7], where the DM model described by a classical non-Abelian field has been considered.

A. General equations

It is assumed that a galaxy is embedded in a sphere consisting of SU(3) gauge fields. The modelling of DM is carried out using the classical SU(3) Yang-Mills equations

$D_\nu F^{a\mu\nu} = \partial_\nu F^{a\mu\nu} + g f^{abc} A^b_\nu F^{c\mu\nu} = 0, \qquad (1)$

where $F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu$ is the Yang-Mills field tensor, $A^a_\nu$ is the SU(3) gauge potential, $\mu, \nu = 0, 1, 2, 3$ are spacetime indices, g is the coupling constant, $f^{abc}$ are the SU(3) structure constants, and $a, b, c = 1, 2, \ldots, 8$ are color indices. In order to test this DM model experimentally, one can consider the motion of color-charged particles (monopoles or single quarks) placed in such gauge fields. The color-charged particles are a non-Abelian generalization of a classical electric charge in Yang-Mills gauge theories. They are characterized by the color charge $T^a$. The motion of a test particle with the mass m under the action of external color electric and magnetic fields is described by Wong's equations [8]

$mc\,\frac{du^\mu}{ds} = g\, T^a F^{a\mu}{}_{\nu}\, u^\nu, \qquad (2)$

$\frac{dT^a}{ds} = -g f^{abc}\, u^\mu A^b_\mu\, T^c. \qquad (3)$

The right-hand side of Eq. (2) is a generalization of the Lorentz force from Maxwell's electrodynamics to the color fields, and the right-hand side of Eq. (3) describes the rotation of the vector $T^a$ in the space of color charges.

B. Distribution of color dark matter

The Yang-Mills equations (1) are solved using a static ansatz (4)-(6) for the classical SU(3) gauge field $A^a_\mu$ [9]. Here the components of the gauge field $A^{2,5,7}_\mu \in SU(2) \subset SU(3)$; $i, j, k = 1, 2, 3$ are space indices; $\epsilon_{ijk}$ is the completely antisymmetric Levi-Civita symbol; $\lambda_{ajk}$ are the Gell-Mann matrices; and $\chi(r)$, $h(r)$, $v(r)$, and $w(r)$ are some unknown functions.
This ansatz is written in cartesian coordinates x, y, z with $r^2 = x^2 + y^2 + z^2$. Substituting the components (4)-(6) into (1) and setting for simplicity $\chi(r) = h(r) = 0$, one can obtain a set of equations (7) and (8) for the auxiliary functions v and w, in which the prime denotes differentiation with respect to the dimensionless radius $\xi = r/r_0$, where $r_0$ is a constant. The asymptotic behavior of the functions $v(\xi)$ and $w(\xi)$ at $\xi \gg 1$ is given by the oscillating solutions (9) and (10), where $\phi_0$ and $\alpha$ are constants, and $A^2 = \alpha(\alpha - 1)/3$. The corresponding DM energy density for the system described by Eqs. (7) and (8) can be obtained in the form (11), where the expression in the square brackets is the dimensionless energy density. Taking into account the asymptotic solutions (9) and (10), one can show that the gauge field distribution under consideration has an infinite energy, as a consequence of the asymptotic behavior of the energy density (11) (for more details, see Ref. [7]). Consequently, one has to have some cut-off mechanism for such a spatial distribution of the classical gauge fields. In our opinion, this can be done as follows. As one can see from Eqs. (9) and (10), the gauge potentials are oscillating functions whose frequency increases with increasing distance. Then at large distances from the center the frequency of such oscillations becomes so large that one already has to take into account quantum fluctuations. Thus at some distance from the origin the gauge field should undergo a transition from the classical state to the quantum one. In turn, the quantum field relaxes very rapidly to its vacuum expectation value. Then the distance at which the transition from the classical state to the quantum one takes place can be regarded as a cut-off radius up to which the solutions of Eqs. (7) and (8) remain valid. Also, it is very important here to note that the gauge field in the vacuum state must be described in a nonperturbative manner (see below in Sec. II D).

C. Invisibility of color fields

Let us now say a few words about the main feature of any dark matter: its invisibility. Within the framework of the DM model under consideration, this is achieved in a very simple way: the SU(3) color matter (dark matter in the context of the present paper) is invisible because color gauge fields interact only with color-charged particles; but at the present time particles possessing an SU(3) color charge have still not been registered experimentally. In principle, as a candidate for such particles, one may consider SU(3) monopoles. For such a case, one can write down the SU(3) Lagrangian describing monopoles interacting with matter in the form of quarks, whose quark sector involves the covariant derivative $D_\mu q = (\partial_\mu + i g A_\mu) q$ appearing in Eq. (13) (the gauge potential $A_\mu$ is given here in matrix form). From the term $i g A_\mu q$ in Eq. (13), one can see that the SU(3) color field interacts with quarks only. But free quarks are not observable in nature. All other forms of matter are colorless, including baryonic matter (as a consequence of the confinement of quarks in hadrons) and photons. Therefore, the color DM being considered here does not interact with them directly and can be observable only through its interaction with a gravitational field. It is interesting that in this respect the problem of dark matter in astrophysics is related to the problem of confinement in high energy physics.
D. Transition from the classical phase to the quantum one

As emphasised earlier, in describing DM we use a classical SU(3) gauge field which undergoes a transition from the classical state to the quantum one at some distance from the center of a galaxy. Without such a transition, the energy of the sphere filled with this field would be infinite. We assume that, by taking into account nonperturbative quantum effects, the energy of such a field configuration can be made finite. Here we consider a possibility, in principle, of introducing a cut-off mechanism for the spatial distribution of the classical gauge fields at some distance from the center of a galaxy. To do this, let us employ the Heisenberg uncertainty principle in the form of the relation (14), where $\Delta F^a_{ti}$ is a quantum fluctuation of the color electric field $F^a_{ti}$, $\Delta A^{ai}$ is a quantum fluctuation of the color potential $A^{ai}$, and $\Delta V$ is the volume where the quantum fluctuations $\Delta F^a_{ti}$ and $\Delta A^{ai}$ occur. Using the ansatz (4)-(6), the components of the gauge potential written in spherical coordinates, Eqs. (15)-(18), involve, up to common factors of $1/(gr)$, the angular structures $w(r)\sin^2\theta\sin(2\varphi)$, $-2\chi(r)\cos\theta$, $w(r)\sin^2\theta\cos(2\varphi)$, $w(r)\sin(2\theta)\cos\varphi$, $2\chi(r)\sin\theta\sin\varphi$, $w(r)\sin(2\theta)\sin\varphi$, $-2\chi(r)\sin\theta\cos\varphi$, and $-w(r)$. Using these components and taking into account that in our case $h(r) = \chi(r) = 0$, one can find that there are three non-zero components of the color electromagnetic field tensor, $F^2_{t\theta}$, $F^5_{t\varphi}$, and $F^7_{t\varphi}$, that can appear on the left-hand side of the relation (14), and all of them are $\propto vw/(gr)$. For our purpose, we can either employ all three components or take any one of them. In the latter case, the calculations are simpler and can give the required rough estimate. Let us therefore use the component $F^2_{t\theta}$ in (14) and introduce the corresponding physical component $\tilde F^2_{t\theta}$. Then, apart from a numerical factor, the fluctuations of the SU(3) electric field follow from (14); in turn, from Eqs. (15)-(18), we introduce the physical components of the potential. Next, the period of spatial oscillations at $r \gg r_0$ can be defined from the asymptotic solutions. Suppose that the distance at which the classical SU(3) color field becomes a quantum one is defined as the radius where the magnitudes of the quantum fluctuations of the field enclosed in the volume $\Delta V = 4\pi r^2 \Delta r$, with $\Delta r$ set by the period of the spatial oscillations, become comparable with the magnitudes of this classical field. That is, at the transition distance, we assume that the conditions (26) and (27) hold. Substituting Eqs. (9), (10), (21), (24), (26), and (27) into (14), we obtain the condition (28), where $g'$ is the dimensionless coupling constant constructed from g, $\hbar$, and c, similar to the fine structure constant in quantum electrodynamics, $\alpha = e^2/\hbar c$. In quantum chromodynamics, $\beta = 1/g'^2 \sim 1$. If one chooses $g' \approx 1$ and $A \approx 0.4$ (this value follows from the numerical computations of Refs. [5][6][7]), then the left-hand side of (28) is comparable with $2\pi \approx 6.28$. Thus we have shown that if the condition (28) holds at some distance from the center, then there occurs the transition from the classical phase to the quantum one. Unfortunately, the obtained rough estimate does not enable us to calculate the radius at which such a transition takes place. For finding this radius, it is necessary to have nonperturbative quantization methods, which are absent at the moment.

A. Estimating the field strength and gauge field potentials

We propose here an approach which permits us to test the DM model described above by studying the motion of a color-charged particle (monopole or single quark) under the action of color electromagnetic fields. For this purpose, we will use the Wong equations (2) and (3).
To simplify them, we restrict ourselves to considering the trajectory of a particle moving in the equatorial plane (i.e., when $\theta = \pi/2$) at a fixed distance from the center, r = const. Also, since the dimensions of the experimental set-up are assumed to be much smaller than the radius of a galaxy, one can set the angle variable $\varphi \approx 0$. In this case the potential and the field strength are especially simple. Taking all this into account, the non-zero components of the color electromagnetic field strength and of the gauge potential are given by Eqs. (30)-(32), where the functions v and w appearing there are the asymptotic solutions given by Eqs. (9) and (10).

For a rough estimate of the magnitudes of the color fields, let us employ Newton's law of gravitation, which gives the following relation for a test particle located near the edge of a galaxy and rotating around its center with the circular velocity u:

$u^2 = \frac{\gamma M}{r_g}. \qquad (33)$

Here $\gamma$ is the Newtonian gravitational constant; $M = M_v + M_{DM}$ is the total mass of the galaxy, including the masses of the visible, $M_v$, and dark, $M_{DM}$, components; and $r_g$ is the radius of the galaxy. From this relation, one can find the mass of DM as

$M_{DM} = \frac{u^2 r_g}{\gamma} - M_v. \qquad (34)$

The radial distribution of the energy density of the color electromagnetic field describing DM is given by (35), where $E^a_i = F^a_{ti}$ is the chromoelectric field and $H^a_i = \tfrac{1}{2}\sqrt{-g}\,\epsilon_{ijk}F^{ajk}$ is the chromomagnetic field. The expression (35) corresponds to the energy density (11), where the asymptotic solutions (9) and (10) have been used and only the terms giving the leading contributions have been kept. On the other hand, the DM energy density occurs in the expression (36) (for simplicity, we assume here that the electric and magnetic fields are distributed homogeneously). Comparing the expressions (34) and (36) and neglecting the mass of the visible component $M_v$ compared with $M_{DM}$, we can get rough estimates (37) for the field strengths, all of the order of some characteristic magnitude B, where the physical components of the fields are given by (38). In turn, using Eq. (32), one can introduce the physical components for the potential in (39).

The obtained estimates will be used below when considering the motion of test particles under the action of the given color fields.

B. Solving Wong's equations

As mentioned earlier, the dimensions of the experimental set-up for studying the motion of color-charged particles are assumed to be much smaller than the radius of a galaxy. Assuming also that the velocities of the test particles are small compared to c, it is sufficient to consider the nonrelativistic limit of the Wong equations. In this case, setting $ds \approx c\,dt$, we have from Eqs. (2) and (3)

$m \frac{dv^i}{dt} = g\, T^a F^{a\,it}, \qquad (40)$

$\frac{dT^a}{dt} = -g f^{abc} A^b_t\, T^c. \qquad (41)$

As before, here i = 1, 2, 3 is a space index. As pointed out in Section III A, we consider the case where the chromoelectric and chromomagnetic field strengths are of the same order of magnitude. This allowed us to neglect the spatial components of the four-velocity in the above equations. Correspondingly, Eq. (40) now contains only the components $F^{a\,it}$ describing the chromoelectric field, and not the terms with the chromomagnetic field. For the sake of simplicity, it is convenient to solve Eqs. (40) and (41) in cartesian coordinates x, y, z, where the coordinate x is directed along the radius r, and the coordinates y and z along the angle variables $\theta$ and $\varphi$, respectively. The resulting set of equations (42)-(45) describes the motion of a test particle with the mass m and the dynamics of the color charge vector $T^a$. The numerical values of the components $E^3_r$, $E^2_\theta$, $E^5_\varphi$, and $A^3_t$ appearing there are taken from the estimates (37) and (39).
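Although the explicit expressions (35)-(38) are not reproduced here, the logic of the estimate can be followed numerically. The sketch below (cgs units) takes the DM mass from the flat rotation curve, spreads its energy homogeneously over the galactic sphere, and sets $\tilde E = \tilde H = B$ in $\rho = (\tilde E^2 + \tilde H^2)/8\pi$; the equipartition and the homogeneity are the stated assumptions, and the galaxy parameters are those quoted in the next subsection.

```python
import math

# Back-of-the-envelope estimate of the characteristic color field B near
# the galaxy edge: DM mass from the flat rotation curve (Eq. (34), M_v
# neglected), energy density rho = M_DM*c^2 / ((4/3)*pi*r_g^3), and
# B from rho = B^2/(4*pi), i.e. E = H = B in rho = (E^2 + H^2)/(8*pi).

gamma = 6.674e-8          # Newtonian gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10              # speed of light, cm/s
u = 2.5e7                 # circular velocity at the galaxy edge, cm/s
r_g = 1.0e23              # galaxy radius, cm

M_dm = u**2 * r_g / gamma
rho = M_dm * c**2 / (4.0 / 3.0 * math.pi * r_g**3)
B = math.sqrt(4.0 * math.pi * rho)

print(f"M_DM ~ {M_dm:.2e} g, rho ~ {rho:.2e} erg/cm^3, B ~ {B:.3f} G")
```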
Let us now estimate the acceleration of a color charge. This can be done using Eq. (44), from which we have:

(1) The x-component of the acceleration is given by (46). Substituting into this expression the numerical estimate $E^3_r = -\tilde E^3_r \approx -B$ [see Eqs. (30), (37), and (38)], and taking into account the relation between g and the dimensionless coupling constant $g'$, we arrive at the estimate (47). If for a test particle we choose, say, the 't Hooft-Polyakov monopole with the mass $m \approx 10^{-8}$ g, then, taking into account that the radius of our galaxy is $r_g \approx 10^{23}$ cm and the velocity of DM particles at the edge of the galaxy is $u \approx 2.5 \times 10^{7}$ cm sec$^{-1}$, we obtain the numerical estimate (48). By choosing $g' \sim 1$ and assuming that the expression in the brackets is also $\sim 1$, one sees that the magnitude of the extra acceleration related to the presence of the color fields in the neighbourhood of the Earth is of the order of $3 \times 10^{-3}$ % of the free-fall acceleration on the Earth. Evidently, for monopoles with smaller masses there will be even larger extra accelerations.

(2) To estimate the accelerations along the coordinates y and z, it is necessary to calculate the frequency of oscillations $\omega$ from (45). To do this, let us assume that the numerical values of the components $E^2_\theta$, $E^5_\varphi$ appearing in Eq. (44) are approximately equal to the numerical values of the corresponding physical components $\tilde E^2_\theta$, $\tilde E^5_\varphi$ from (37) and (38). That is, we assume that $E^2_\theta = E^5_\varphi \approx B$. Also, taking into account Eqs. (32) and (39), one can assume as a rough estimate that the component of the potential is $A^3_t \approx r_g B$. The resulting frequency (49) shows that the color charge exhibits extremely high-frequency oscillations. Correspondingly, there will be a large number of oscillations during the time of the experiment, and one can therefore average the functions $T^2$ and $T^5$ from (45) over these oscillations. The result is $\langle T^2 \rangle = \langle T^5 \rangle = 0$, and correspondingly Eq. (44) yields a uniform motion of the test particle along the coordinates y, z with $a_y = a_z = 0$.
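The averaging step can be illustrated with a toy computation: if $T^2$ and $T^5$ precess harmonically at a high frequency $\omega$, their time averages over any laboratory time scale vanish, and with them the transverse force. The harmonic form and the numerical value of $\omega$ below are assumptions for illustration only.

```python
import numpy as np

# Toy check of the averaging argument: color-charge components that
# oscillate at a very high frequency omega average to ~0 over the
# duration of an experiment, so a_y = a_z = 0 after averaging.

omega = 1.0e10                           # precession frequency, rad/s (illustrative)
t = np.linspace(0.0, 1.0e-5, 2_000_001)  # dense sampling over many periods
T2 = np.cos(omega * t)                   # assumed harmonic precession
T5 = np.sin(omega * t)

print("time-averaged <T2>:", T2.mean())  # ~ 0
print("time-averaged <T5>:", T5.mean())  # ~ 0
```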
IV. CONCLUSION AND FINAL REMARKS

The nature of dark matter is one of the key questions of modern cosmology and astrophysics. The most popular hypothesis is the assumption that DM consists of some exotic particles which either weakly or not at all interact with ordinary matter and electromagnetic radiation. This has the result that their experimental detection runs into great difficulty. Various experimental searches have been made to find such particles (see, e.g., Ref. [10], where direct and indirect methods are reviewed), but unfortunately they have so far yielded a negative result.

In the present paper we suggest a way of testing the DM model supported by color electric and magnetic fields described within the framework of the classical SU(3) Yang-Mills theory. Apart from the gravitational interaction with other forms of matter, such fields can interact directly only with color-charged particles like monopoles and single quarks. As a result, when studying the motion of color-charged particles in a laboratory on the Earth, there will appear an extra acceleration provided by the color fields. To calculate the magnitude of such an extra acceleration, we have used the known Wong equations describing the classical motion of non-Abelian particles acted upon by background color fields. When solving this problem, one needs to find the magnitudes of the color field strengths in the neighbourhood of the Earth. In order to make a rough estimate, we have considered the motion of test particles under the action of the DM gravitational field near the edge of a galaxy. Using the obtained estimates for the field strengths, we have found general analytic solutions to the nonrelativistic Wong equations which enabled us to calculate the magnitude of the extra acceleration. It was shown that it is inversely proportional to the mass of a test particle [see Eq. (47)]. For a grand-unified-theory monopole the extra acceleration is of the order of a few thousandths of a percent of the free-fall acceleration on the Earth. In principle, such magnitudes can be registered experimentally.

Let us now say a few words about the strong and weak sides of the non-Abelian dark matter model used here. Notice, first of all, that practically all known dark matter models suggest the existence of as yet undiscovered forms of matter. This is obviously a weak side of such models. That is why the fact that, within the model considered here, DM is described by the well-known non-Abelian SU(3) gauge field is a strong side of this model. As a weak side of the DM model discussed here, one can regard the very assumption of the possible presence of a classical non-Abelian field on galactic scales. It can be said that there are no strong objections to the existence of such fields. Thus, for example, there exist classical Abelian U(1) gauge electric and magnetic fields. Hence the assumption of the existence of classical non-Abelian fields is, in principle, similar to the assumption of the existence of classical Abelian fields. In actuality, the principal difference is in the asymptotic behavior of the fields: the Abelian fields have Coulomb behavior, while the non-Abelian fields have non-Coulomb behavior (they decrease more slowly than $1/r^2$). In this connection, at a qualitative level, we have discussed a possible solution of this problem (see Sec. II D).

Finally, notice that the motion of color-charged particles studied here can be regarded as a problem of primarily academic interest. Indeed, from the experimental point of view, the most evident weak side of the non-Abelian DM model is the actual absence of test particles: monopoles have not yet been discovered, and free quarks do not exist (at low energies/temperatures). Nevertheless, experimental searches for magnetic monopoles continue [11], and perhaps they will be discovered in the future.
2018-05-27T02:28:06.000Z
2018-05-27T00:00:00.000
{ "year": 2019, "sha1": "0a54708163be1fbbf5529f8b50875a2fb5a53238", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.11440", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0a54708163be1fbbf5529f8b50875a2fb5a53238", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
269647962
pes2o/s2orc
v3-fos-license
Prevalence and contributing factors of anemia in patients with gynecological cancer: a retrospective cohort study

This retrospective cohort study aimed to determine the prevalence of anemia among patients with gynecological cancer prior to any treatment and to identify contributing factors associated with anemia in this group. We retrospectively analyzed data from female patients aged 18 and above, diagnosed with various forms of gynecological cancer at The Affiliated Hospital of Southwest Medical University between February 2016 and March 2021. Anemia was assessed based on the most recent complete blood count (CBC) results before any cancer treatment. Eligibility was based on a definitive histopathological diagnosis. Key variables included demographic details, clinical characteristics, and blood counts, focusing on hemoglobin levels. Statistical analysis was conducted using logistic regression models, and anemia was defined as hemoglobin levels below 12 g/dL for women, according to WHO criteria. Of the 320 participants, a significant prevalence of anemia was found. Correlations between anemia and factors like age, educational level, and biological markers (iron, folic acid, and vitamin B12 levels) were identified. In our study, we found that the prevalence of anemia among patients with gynecological cancer prior to any treatment was 59.06%, indicating a significant health concern within this population. The study highlights a significant prevalence of anemia in patients with gynecological cancer, emphasizing the need for regular hemoglobin screening and individualized management. These findings suggest the importance of considering various characteristics and clinical variables in anemia management among this patient group. Further studies are needed to explore the long-term effects of these factors on patient outcomes and to develop targeted interventions.

Several studies have underscored the negative implications of anemia for prognosis in cancer patients. Anemic cancer patients often exhibit diminished physical function, lower overall well-being, and reduced tolerance to cancer therapies, which can compromise treatment efficacy [9]. Furthermore, anemia has been associated with poorer prognosis and decreased survival rates in various cancer types [10]. In gynecological cancers specifically, reported anemia prevalence varies, influencing treatment decisions and outcomes [11].

Managing anemia in patients with gynecological cancer is paramount, as correction of hemoglobin levels has been shown to improve treatment response, quality of life, and survival rates [12]. However, the heterogeneity in anemia's onset, severity, and etiological factors across different gynecological cancers complicates the formulation of uniform management strategies. This complexity underscores the need for a deeper understanding of anemia's prevalence, risk factors, and impact in the context of gynecological malignancies [13].

Moreover, while the global burden of anemia has been extensively studied, there are geographical and demographic disparities in the available data [10]. Most existing research focuses on populations in high-income countries, with less known about anemia's characteristics in low- and middle-income regions [14]. These gaps highlight the necessity for localized studies that consider regional medical practices, demographic factors, and access to healthcare services (Fig. 1).
This study aims to fill these gaps by exploring the prevalence and risk factors of anemia among patients with gynecological cancers in a retrospective cohort. By analyzing demographic, clinical, and laboratory data, this research seeks to identify significant predictors of anemia in this population, contributing to more personalized and effective management strategies for affected patients. The findings are expected to provide healthcare professionals with insights to enhance anemia screening, prevention, and treatment measures in patients facing gynecological cancers, ultimately aiming to improve patient quality of life and survival outcomes.

Study design and participants

This retrospective cohort study involved a carefully selected sample of 320 patients diagnosed with various forms of gynecological cancer, out of a larger pool of cases at The Affiliated Hospital of Southwest Medical University. The study spanned February 2016 to March 2021. Eligibility required female patients aged 18 or older with a confirmed histopathological diagnosis of gynecological cancer, including ovarian, cervical, and endometrial cancers. Comprehensive medical histories and records were essential for inclusion. Patients with severe concurrent diseases affecting hemoglobin levels, prior cancer treatments, or incomplete records were excluded. This selection ensured a focused analysis of the relationship between gynecological cancer and anemia.

Data collection

We reviewed detailed medical records to collect demographic, clinical, and laboratory data. This included age, marital status, economic status, education level, tumor type and stage, treatment history, and more. Laboratory data focused on hemoglobin levels, red blood cell count, and other relevant parameters. All data were anonymized to uphold ethical standards.

Table 1 presents a comprehensive comparative analysis of the demographic and clinical characteristics of the overall cohort and, distinctly, of non-anemic and anemic patients diagnosed with gynecological cancer. The study encompassed a total of 320 patients, subdivided into 131 non-anemic and 189 anemic individuals based on predefined hemoglobin criteria. The median age for the entire cohort was 60 years, with a discernible age difference between non-anemic (median age 52 years) and anemic patients (median age 62 years), indicating a statistically significant association of older age with anemia (P < 0.001). Body weight and height measurements across the groups showed median values of 71 kg and 1.66 m, respectively, with no significant differences observed (weight P = 0.492, height P = 0.805). Similarly, Body Mass Index (BMI) comparisons revealed a median of 26.139 for the overall cohort, with no significant difference between the non-anemic and anemic groups (P = 0.634). The distribution of marital status, economic status, and education level across the study population demonstrated a varied demographic profile, with no significant differences in these socio-economic factors between non-anemic and anemic patients. This is highlighted by the comparable percentages across marital statuses and the slight variances in economic status and education levels that did not reach statistical significance. Clinically, hemoglobin levels displayed a marked difference, serving as the basis for distinguishing between anemic and non-anemic participants. The mean hemoglobin concentration was significantly lower in anemic patients (11.2 g/dL) compared to non-anemic patients (13.3 g/dL, P < 0.001).
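The group comparisons reported above follow a standard pattern: a Mann-Whitney U test for continuous variables and a chi-square test for categorical ones. A small sketch of how such tests are run, with synthetic stand-in data rather than the study records:

```python
import numpy as np
from scipy import stats

# Illustrative reproduction of the Table 1 statistics: Mann-Whitney U
# for a continuous variable and a chi-square test for a categorical one,
# comparing non-anemic vs anemic groups. All values below are synthetic.

rng = np.random.default_rng(3)
age_non_anemic = rng.normal(52, 10, 131)   # medians echo the cohort sizes
age_anemic = rng.normal(62, 10, 189)

u_stat, p_age = stats.mannwhitneyu(age_non_anemic, age_anemic)
print(f"age: U = {u_stat:.0f}, P = {p_age:.4f}")

# 2x2 contingency table, e.g., tumor type (cervical vs other) by group.
table = np.array([[40, 91], [85, 104]])    # hypothetical counts
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print(f"tumor type: chi2 = {chi2:.2f}, P = {p_cat:.4f}")
```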
The analysis further explored red blood cell count, hematocrit, and mean cell volume, with no significant differences found between the two groups, emphasizing the specific impact of hemoglobin levels on the anemia classification in this context. A closer look at the biochemical markers revealed statistically significantly lower levels of iron, folic acid, and vitamin B12 in anemic patients compared to non-anemic ones, underscoring the nutritional and metabolic factors contributing to anemia in this patient population. The types of tumors also showed a significant association with anemia prevalence, particularly a higher occurrence of cervical cancer among anemic patients, while the distribution of tumor stages and treatment history across both groups showed no statistical significance, indicating the inherent nature of anemia as a condition influencing this patient group regardless of cancer stage or treatment modality. In terms of reproductive health history, menstrual regularity and childbirth counts were considered, revealing no significant differences between the anemic and non-anemic groups, thus indicating that the causes of anemia are multifactorial and extend beyond reproductive factors. The assessment of complications, medication history, nutrition intake, quality of life, and prognosis did not exhibit significant differences between the two groups, further emphasizing the complex interplay of factors contributing to anemia in patients with gynecological cancer.

Univariate and multivariate analyses of factors associated with anemia in patients with gynecological cancer

Table 2 presents a comprehensive analysis of the various factors potentially influencing the prevalence of anemia among the studied population. The multivariate analysis, which adjusts for potentially confounding variables identified in the univariate analysis, highlights several parameters with statistically significant associations with anemia. Age demonstrated a notable influence, with an odds ratio of 1.034, indicating that as the participants' age increased, so did the likelihood of anemia, a relationship that was statistically significant (P < 0.001). This finding underscores the importance of age as a factor in anemia prevalence. Interestingly, educational level emerged as another significant factor. Individuals with primary education levels were significantly more likely to experience anemia, with an odds ratio of 2.479 (P = 0.026), compared to those with secondary education levels. Furthermore, postgraduates showed an increased likelihood of anemia, with an odds ratio of 2.235, which was also statistically significant (P = 0.039). Several biological markers were prominently associated with anemia. Lower iron levels, lower folic acid levels, and lower vitamin B12 levels were all significantly associated with a higher likelihood of anemia, with P values of < 0.001, indicating strong statistical significance. These findings reinforce the known biological pathways of anemia, where deficiencies in these critical components often manifest in anemic symptoms. In terms of gynecological health, the type of tumor also influenced anemia prevalence. Specifically, individuals with cervical tumors were more likely to be anemic, with an odds ratio of 1.933, though this result bordered on statistical significance (P = 0.056). In contrast, several factors, including marital status, economic status, family history, and certain health markers (red blood cell count, hematocrit, mean cell volume), did not exhibit a significant association with anemia, underscoring the complexity of anemia's etiology.
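The multivariate analysis behind these odds ratios is an ordinary logistic regression: exponentiating the fitted coefficients yields the ORs, and exponentiating the coefficient confidence bounds yields the 95% CIs. A minimal sketch with synthetic data (variable names and effect sizes are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of a multivariate logistic regression of anemia status on age
# and nutritional markers, reporting ORs with 95% CIs. Synthetic data.

rng = np.random.default_rng(4)
n = 320
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "iron": rng.normal(80, 20, n),
    "folic_acid": rng.normal(10, 3, n),
    "vit_b12": rng.normal(400, 100, n),
})
logit = 0.034 * (df["age"] - 60) - 0.03 * (df["iron"] - 80)
df["anemia"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["age", "iron", "folic_acid", "vit_b12"]])
fit = sm.Logit(df["anemia"], X).fit(disp=0)
ors = pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(fit.conf_int()[0]),
                    "CI_high": np.exp(fit.conf_int()[1])})
print(ors.round(3))
```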
Overall, Table 2 elucidates the multifaceted nature of anemia's contributing factors, emphasizing the need for a holistic approach to patient assessment and treatment. By understanding these associations, healthcare professionals can better identify at-risk individuals and implement appropriate preventive and therapeutic measures.

Discussion

This study illuminated several critical factors associated with anemia among patients with gynecological cancer, drawing attention to the intricate interplay between demographic, clinical, and socioeconomic variables. The findings underscore the necessity of a multifaceted approach to patient care, considering not only clinical symptoms but also the broader social determinants of health. Age emerged as a significant predictor of anemia, with older patients exhibiting a higher likelihood of this condition. This trend aligns with existing research that has documented physiological changes related to aging, such as a decreased bone marrow response and nutritional deficiencies, contributing to anemia's development [15,16]. Furthermore, older individuals often have comorbid conditions, complicating their clinical presentations [17]. Our study reinforces the importance of comprehensive geriatric assessments and tailored care strategies, acknowledging the unique physiological and social challenges this demographic faces.

The association between anemia and specific gynecological cancers, particularly cervical cancer, was a notable discovery. This outcome suggests that the biological characteristics of tumors, possibly related to their metabolic demands or cytokine-mediated systemic effects, play a role in modulating anemia risk [18-20]. These findings underscore the necessity of tumor-specific screening protocols and possibly differential management strategies, catering to the individualized needs of patients based on their cancer type.

Our study's revelation of the strong association between anemia and deficiencies in iron, folic acid, and vitamin B12 amplifies the conversation around holistic patient care. It is a reminder that clinical management should extend beyond treating the cancer itself, encompassing aspects like nutritional counseling [21]. These deficiencies could be reflective of broader issues, including dietary habits, socioeconomic status, and even the impact of cancer therapies [22]. Incorporating nutritional assessments and interventions into patient care protocols could mitigate these risk factors, potentially improving treatment outcomes and quality of life.

The socioeconomic and educational disparities highlighted in our findings present a more systemic challenge. Lower educational levels correlated with higher anemia prevalence, potentially indicating gaps in health literacy, accessibility of healthcare resources, and overall health awareness [23]. This observation aligns with existing literature documenting health outcome disparities based on socioeconomic status [24], calling on healthcare systems to adopt more inclusive strategies, ensuring that education, economic background, or social circumstances do not disadvantage patients in their healthcare journeys.

The insights gleaned from this study have several implications for clinical practice. They advocate for a more integrated approach to care, encompassing routine anemia screening, nutritional counseling, and targeted interventions for at-risk demographics. Furthermore, healthcare providers should be cognizant of the broader socioeconomic factors at play, advocating where possible for policy changes or support mechanisms to bridge these gaps.
It is crucial to acknowledge the role of our study's exclusion criteria, particularly the decision to omit patients with severe concurrent diseases known to affect hemoglobin levels. This choice was aimed at minimizing confounding factors and isolating the impact of gynecological cancers on anemia. However, this approach also means that the broader influence of comorbidities, such as chronic kidney disease, inflammatory diseases, and nutritional deficiencies, was not directly addressed within our analysis.

In reflecting upon the scope of our study, it is essential to acknowledge certain limitations that bear on the interpretation of our findings. The retrospective nature of our analysis, while offering a comprehensive overview, inherently limits our ability to infer causality between the incidence of anemia and gynecological cancers. Moreover, our examination did not explore in depth lifestyle factors and other possible contributors to anemia, potentially overlooking significant determinants of its prevalence. A particularly noteworthy consideration is the variance in anemia prevalence across different stages of cervical cancer, which may be attributed to factors such as bleeding in advanced stages. This aspect was not delineated in our study, suggesting a pivotal area for subsequent research to explore the impact of cancer progression on anemia. Additionally, the homogeneity of our study population restricts the extrapolation of our findings to more diverse demographic groups. This limitation underscores the need for future research endeavors to embrace a broader demographic spectrum, thereby enhancing the generalizability and applicability of the findings. These considerations highlight the necessity for future studies to adopt a prospective design for establishing causality, delve deeper into the multifaceted contributors to anemia, and assess the influence of cancer staging on its prevalence, especially in the context of cervical cancer. By addressing these gaps, the research community can further enrich our understanding and management strategies for anemia in patients with gynecological cancer.

In addressing the crucial aspect of anemia prevalence among patients with gynecological cancers before the commencement of any treatment, our findings reveal a significant rate of 59.06%. This rate is particularly noteworthy in the context of the broader literature on the subject. For example, research conducted by Alghamdi et al. (2021) at King Abdulaziz Medical City, Jeddah, identified a prevalence rate of 90.7% among patients receiving active treatment, highlighting the impact of chemotherapy and radiotherapy on hemoglobin levels 11. The difference between these rates underscores the importance of recognizing anemia as a pre-existing condition in a considerable proportion of gynecological cancer patients, one which may be further exacerbated by the treatment process.

The distinction between pre-treatment and treatment-induced anemia emphasizes the necessity for early detection and management strategies tailored to address this condition from the point of cancer diagnosis. Integrating anemia management into the overall treatment plan for gynecological cancers is crucial, not only to improve patient quality of life but also potentially to enhance the efficacy of cancer treatment protocols.
Our study contributes to the growing body of evidence suggesting that anemia is a multifactorial issue in the context of gynecological cancers, with implications for both pre-treatment condition management and the course of treatment.

Table 1. Comparative Analysis of Demographic and Clinical Characteristics Across Overall Cohort, Non-Anemic, and Anemic Patients with Gynecological Cancer. This table presents a comprehensive comparison of demographic, clinical, and laboratory characteristics across the entire study cohort (n = 320), further delineated between non-anemic (n = 131) and anemic (n = 189) patients. Values for continuous variables are expressed as median (interquartile range, IQR) for non-normally distributed data, and mean ± standard deviation (SD) for normally distributed data. Categorical variables are represented as counts (n) and percentages (%). Statistical significance between non-anemic and anemic groups was assessed using the Mann-Whitney U test for continuous variables and the Chi-square or Fisher's exact test for categorical variables, with a P value of < 0.05 indicating statistical significance. This combined analysis aims to provide an in-depth understanding of the demographic and clinical landscape of our study population, emphasizing the distinct characteristics associated with anemia status.

Table 2. Univariate and Multivariate Analyses of Factors Associated with Anemia in Patients with Gynecological Cancer. This table presents the results of univariate and multivariate logistic regression analyses assessing various factors associated with the risk of anemia. The odds ratios (ORs) and 95% confidence intervals (CIs) provide estimates of the effect size of each factor on the risk of anemia. A P value of < 0.05 was considered statistically significant. In the multivariate analysis, adjustments were made for all variables that showed potential relevance in the univariate analysis. Significant values are in bold.
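To make the analytic pipeline behind these two tables concrete, the following is a minimal sketch of the group comparisons (Mann-Whitney U, chi-square) and the multivariate logistic regression reported as odds ratios with 95% CIs. The column names and synthetic data are hypothetical stand-ins for the non-public patient-level dataset, so the printed numbers are purely illustrative.

```python
# Hedged sketch of the Table 1/Table 2 workflow; all data below are synthetic.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 320
df = pd.DataFrame({
    "anemic": rng.integers(0, 2, n),          # 1 = anemic, 0 = non-anemic
    "age": rng.normal(55, 12, n),
    "iron_umol_l": rng.normal(14, 5, n),
    "tumor_cervical": rng.integers(0, 2, n),  # 1 = cervical tumor
})

# Table 1-style comparison: Mann-Whitney U for a continuous variable,
# chi-square for a categorical one.
anemic, non_anemic = df[df.anemic == 1], df[df.anemic == 0]
u, p_u = stats.mannwhitneyu(anemic["iron_umol_l"], non_anemic["iron_umol_l"])
chi2, p_chi, dof, _ = stats.chi2_contingency(pd.crosstab(df["tumor_cervical"], df["anemic"]))
print(f"Mann-Whitney U = {u:.1f} (p = {p_u:.3f}); chi2 = {chi2:.2f} (p = {p_chi:.3f})")

# Table 2-style multivariate logistic regression: ORs with 95% CIs.
X = sm.add_constant(df[["age", "iron_umol_l", "tumor_cervical"]])
fit = sm.Logit(df["anemic"], X).fit(disp=False)
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(fit.conf_int()[0]),
                         "CI_high": np.exp(fit.conf_int()[1]),
                         "p": fit.pvalues})
print(or_table.round(3))
```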
A novel TiO2 nanostructure as photoanode for highly efficient CdSe quantum dot-sensitized solar cells

Cite this: RSC Adv., 2017, 7, 9795. † Electronic supplementary information (ESI) available: 10.1039/c6ra26029b

Introduction

The rapid development of the global economy, people's increasing demand for energy, global environmental problems, and the depletion of fossil fuels have forced the exploration of clean, regenerative energy. The development of photovoltaic devices has opened up a new opportunity to use solar energy. 1,2 Furthermore, with the possibility of multiple exciton generation, the theoretical photovoltaic conversion efficiency in quantum dot-sensitized solar cells (QDSSCs) is 44%, which is much higher than that of semiconductor solar cells (31%) according to the Shockley-Queisser limit. 9,10 However, there is still a large gap between the conversion efficiencies of practical QDSSC devices and the theoretical limit (44%), indicating that the photoanode, QDs, interface recombination, electrolyte and counter electrode in QDSSCs have still not been optimized. As a significant part of a QDSSC, the metal-oxide photoanode acts as a scaffold to anchor the QD sensitizers and provides a pathway to transport the photo-generated electrons. Generally, TiO2 nanoparticles (TNPs) with sizes of 10-30 nm, which offer sufficient specific surface areas for QD loading, are the most widely used material to fabricate photoanodes. However, TNP-based photoanodes provide random pathways for electron transport, in which large numbers of defects and grain boundaries exist, 15-17 seriously restricting the transport of electrons and enhancing electron recombination, thereby increasing the electron transport time and decreasing the electron lifetime. As a result, TNP-based photoanodes suffer from poor charge transport and collection efficiency. TiO2 nanoparticles also show inefficient light-scattering ability, resulting in poor light-harvesting efficiency. One-dimensional (1D) nanostructures such as nanowires and nanotubes are considered promising candidates to overcome the shortcomings of TNPs because they can provide direct pathways for electron transport, thus facilitating electron transport and suppressing charge recombination. 18,19 However, 1D nanostructures usually have low specific surface areas, leading to low QD loading and low QDSSC conversion efficiency compared to TNP-based cells. 20 Notably, some photoanodes with 1D nanostructures have been grown directly on conductive substrates, which is not convenient for the fabrication of large surfaces; fabricating photoanodes via screen-printing methods can overcome this problem.
In this paper, a unique TiO2 nanostructure, formed by the self-assembly of TiO2 nanoparticles into a one-dimensional nanostructure, was synthesized by a simple one-pot solvothermal reaction. These nanostructures are termed 1D connected TiO2 nanoparticles (1D CTNPs). To test the performance of a photoanode based on 1D CTNPs in QDSSCs, a 1D CTNP-based photoanode fabricated via screen printing was used to assemble CdSe QDSSCs. The performance of the new CdSe QDSSCs was compared to that of CdSe QDSSCs based on a conventional TNP photoanode synthesized in the same way but with a different reaction time. The 1D CTNP-based CdSe QDSSC shows an impressive light-to-electricity conversion efficiency of 5.45%, accompanied by an open-circuit voltage (Voc) of 596 mV, a fill factor (FF) of 0.52, and a short-circuit current density (Jsc) of 17.48 mA cm−2. This conversion efficiency is much higher than that of the TNP-based cell (4.00%). The significant enhancements in the open-circuit voltage, short-circuit current, and power conversion efficiency (PCE) of the 1D CTNP-based CdSe QDSSC compared to the TNP-based cell are explained as follows: the 1D CTNP photoanode has large pores and a relatively high specific surface area, facilitating loading and filling with CdSe QDs and, most importantly, providing an efficient electron transport pathway, which effectively facilitates electron transport and prolongs electron lifetime.

Synthesis of oil-soluble CdSe QDs

Briefly, a Cd stock solution (0.4 M) was prepared by dissolving CdO in oleic acid and ODE (v/v, 1:1) at 250 °C under N2 atmosphere. A Se stock solution (1 M) was obtained by dissolving Se powder in TOP under ultrasonic sonication. Se stock solution (0.2 mL), 4.5 mL OAM and 0.3 mL TOP were mixed in a 50 mL three-necked flask, and the mixture was heated to 300 °C under N2 atmosphere with stirring. Subsequently, 0.5 mL of Cd stock solution was injected into the reaction flask, and the temperature was set to 280 °C for the growth and annealing of the nanocrystals. The obtained CdSe QDs were precipitated by adding methanol and acetone into the hexane solution and further isolated and purified by centrifugation.

Preparation of water-soluble MPA-capped CdSe QDs

Typically, 1.0 mL of MPA-methanol solution (160 μL MPA added to 1 mL methanol and adjusted to pH 11 with 30% NaOH aqueous solution) was added into 20 mL of a solution of CdSe QDs in chloroform and stirred for 2 h to precipitate the QDs. Then, 20 mL water was injected into the mixture and stirred for another 10 min. The aqueous phase containing the QDs was collected and purified with acetone. Finally, the water-soluble MPA-capped CdSe QD aqueous solution was obtained by adding 1 mL water.

Synthesis of two TiO2 nanostructures

To compare the effects of the TNP-based and 1D CTNP-based photoanodes on the performance of the QDSSCs, the two TiO2 nanostructures were synthesized under the same conditions but with different reaction times. In detail, 1 mL of TBT was added dropwise to 30 mL HAc with rigorous magnetic stirring and kept for 30 min at room temperature. The obtained white solution was transferred into a 50 mL Teflon-lined stainless-steel autoclave, which was placed in an electronic oven and maintained at 160 °C for 10 min (TNPs) or 30 min (1D CTNPs). After the autoclaves were cooled to room temperature naturally, the two nanostructured products were obtained by freeze-drying and then annealed at 500 °C for 3 h to remove the residual organics.
Preparation of screen-printing pastes

To prepare the mesoporous TiO2 pastes, 0.5 g of TNPs or 1D CTNPs was added to 3.8 mL anhydrous ethanol and sonicated for 30 min. Then, 2.0 g of terpineol and 2.6 g of an ethanol solution containing 10 wt% EC were added to the above anhydrous ethanol containing the TNPs or 1D CTNPs and sonicated for another 1 h. To obtain the final TiO2 pastes, the ethanol was completely removed from the above solution using a rotary evaporator.

Fabrication of the TiO2 photoanodes and CdSe QDSSCs

To prepare the mesoporous TiO2 photoanode film, fluorine-doped tin oxide (FTO) conducting glass (14 Ω per square resistance) was first cleaned in an ultrasonic bath in acetone for 15 min, ethanol for 15 min, and deionized water for 15 min. Subsequently, the clean FTO glass was immersed in a TiCl4 solution (40 mM) and stored in a closed vessel for 30 min at 70 °C to form a TiO2 barrier layer. The glass was then washed with deionized water and ethanol. The prepared TiO2 paste was sequentially screen-printed onto the pre-treated FTO glass and dried in an electronic oven at 120 °C for 7 min each time. When all the printing steps were completed, the photoanode films were gradually heated at 325 °C for 5 min, 375 °C for 5 min, 450 °C for 15 min, and finally at 500 °C for 15 min. The thicknesses of the photoanode films could be controlled by varying the number of screen-printing passes; the average thickness of the two nanostructured photoanode films was about 21 μm.

The resultant TiO2 photoanode films were sensitized with the pre-prepared, water-soluble, MPA-capped CdSe QDs via a pipetting method. 21 Briefly, 40 μL of an aqueous solution of CdSe QDs was dropped onto the TiO2 photoanode film, kept at 35 °C for 5 h, and then washed with deionized water and ethanol. After finishing the CdSe QD deposition process, the CdSe-sensitized TiO2 film was coated with a ZnS passivation layer through four cycles of immersing the film into aqueous solutions of 0.1 M Zn(OAc)2 and 0.1 M Na2S for 1 min per dip, rinsing with deionized water between dips. 7,22 To efficiently suppress the recombination of photogenerated electrons in the QDs with holes residing in the electrolyte, after coating with the ZnS layer, a further SiO2 coating was applied by dipping the ZnS-coated photoanode films in a 0.01 M tetraethylorthosilicate ethanol solution containing 0.1 M NH4OH for 2 h and then rinsing with water and drying in air. 23,24 Finally, the resulting TiO2 films were subjected to a sintering process at 305 °C for 2 min.

Cu2S counter electrodes were fabricated according to the literature. 25 Typically, Cu(CH3COO)2·H2O (0.64 g) and thiourea (0.37 g) were each added to 30 mL of ethylene glycol. The two obtained solutions were mixed and transferred into a 100 mL Teflon-lined stainless-steel autoclave and maintained at 180 °C for 5 h. The products were washed with deionized water and ethanol and finally dried in a vacuum oven at 60 °C for 4 h. Similar to the TiO2 pastes, the Cu2S pastes were obtained by adding the Cu2S nanoparticles into anhydrous ethanol under continuous sonication together with 1.0 g of terpineol and 2.3 g of an ethanol solution containing 10 wt% EC. After further sonication for 5 min to obtain a uniform mixture, the ethanol was removed via rotary evaporation. The Cu2S paste was deposited onto the cleaned FTO glass by screen-printing.
The QDSSCs were constructed in a sandwich structure by assembling the TiO2 photoanodes, Cu2S counter electrodes, and the polysulfide electrolyte (2.0 M Na2S, 2.0 M S, and 0.2 M KCl) in the interspaces.

Characterization

The morphologies and sizes of the TNPs and 1D CTNPs were characterized by scanning electron microscopy (SEM; Hitachi SU-70) and transmission electron microscopy (TEM; FEI Tecnai G2 F20) at a 200 kV accelerating voltage. An energy-dispersive X-ray (EDX) spectroscope coupled to the FE-SEM was used to analyze the composition of the samples. The crystal structures of the TiO2 materials were analyzed by X-ray diffraction (XRD; Rigaku D/max-2600/PC) using Cu-Kα (λ = 1.54178 Å) radiation over the 2θ range of 10-80°. The specific surface areas and pore size distributions of the two TiO2 powders were determined from nitrogen adsorption-desorption isotherms using a Micromeritics surface area analyzer (ASAP 2010) and calculated using the Brunauer-Emmett-Teller (BET) method. The optical properties of the TiO2 materials were determined by ultraviolet-visible spectrophotometry (PerkinElmer, Lambda 850) over the wavelength range of 350 to 700 nm. The current density-voltage (J-V) curves of the two devices were characterized using a Keithley 2400 sourcemeter under an AM 1.5 G solar simulator with an intensity of 100 mW cm−2. The incident light intensity was calibrated with a National Renewable Energy Laboratory-certified silicon reference cell (Zolix QE-B1). An active area of 0.25 cm2 was accurately defined using a mask placed in front of the cell. The incident photon-to-current efficiency (IPCE) spectra were measured as a function of wavelength from 400 to 750 nm using a Spectral Products LSP-X150 monochromator. Electrochemical impedance spectroscopy (EIS) was conducted on an electrochemical workstation (VMP3, France) in dark conditions at a negative bias equal to the Voc of each sample; the frequency range was varied from 200 kHz to 1 MHz. Intensity-modulated photovoltage spectroscopy (IMVS) and intensity-modulated photocurrent spectroscopy (IMPS) measurements were carried out on an electrochemical workstation (Zahner Elektrik, Germany) with a frequency response analyzer under a white light-emitting diode (wlr-01); the frequency range was 1 kHz to 0.1 Hz.

Results and discussion

Fig. 1 shows SEM and TEM images of the TiO2 powders prepared under the same synthetic conditions but with different reaction times (10 and 30 min). As shown in Fig. 1(a), TNPs with sizes in the range of 15 to 30 nm were easily prepared after 10 min of reaction time. Fig. 1(b) shows a typical TEM image, which reveals circular and ellipse-shaped TiO2 nanoparticles. The high-resolution TEM (HRTEM) image in Fig. 1(c) depicts the clear lattice fringes of the TNPs, indicating well-defined interplanar distances of 0.355 nm. These interplanar distances indicate the good crystallinity of the TNPs and correspond to the (101) crystal plane of anatase TiO2. 26 Fig. 1(d) and (e) show that the 1D CTNPs are 1D chains of nanospheres with an average diameter of ∼25 nm and lengths of ∼2 μm. These chains of nanospheres were formed by the ordered self-assembly of TiO2 nanoparticles when the reaction time was extended to 30 min.
The HRTEM image in Fig. 1(f) shows clear lattice fringes of the nanoparticles in the 1D CTNPs; the joints between adjacent nanoparticles also show clear lattice fringes, indicating the good crystallinity of the 1D CTNPs and guaranteeing an efficient electron transport pathway. The selected-area electron diffraction (SAED) patterns of the two typical samples (insets in Fig. 1(c) and (f)) show that the TNPs and 1D CTNPs were polycrystalline.

The phase structures and crystallinities of the two TiO2 samples were characterized using XRD. As shown in Fig. 2, the diffraction peaks of the two samples were indexed to the anatase phase of TiO2 (JCPDS no. 73-1764). In detail, the major peaks at 25.37°, 37.90°, 48.16°, 54.05°, 55.20°, 62.87°, 68.97°, 70.48° and 75.28° correspond to the reflections of the (101), (004), (200), (105), (211), (204), (116), (220), and (215) crystal planes of anatase TiO2, respectively. The intensities of the diffraction peaks of the two samples at 25.37° were stronger than those of the other diffraction peaks, indicating that the main growth direction of the as-prepared TiO2 samples was along the (101) direction. Growth along the (101) crystal plane is advantageous for electronic applications as it results in higher electron transport speed, lower carrier recombination rate, better optical transparency, and higher specific surface area compared to growth along other directions. 27

The specific surface areas and pore size distributions of the TNPs and 1D CTNPs were characterized based on nitrogen adsorption-desorption isotherms (Fig. 3). Both samples exhibited type-IV isotherms according to the Brunauer-Deming-Deming-Teller classification. 28 For the TNPs, the pore size distribution (inset in Fig. 3) calculated from the desorption branch of the nitrogen isotherm by the BET method indicates a wide range of pore sizes from 5 to 60 nm with a maximum pore diameter of about 7.55 nm. For the 1D CTNPs, the pore size distribution (inset in Fig. 3) indicates a maximum pore diameter of about 12.45 nm. The ordered self-assembly of TNPs in the 1D CTNPs (see the SEM and TEM results) clearly resulted in greater pore sizes and a smaller BET specific surface area (84.11 versus 111.94 m2 g−1) compared to the TNPs. Although the BET specific surface area of the 1D CTNPs was slightly lower than that of the TNPs, the large pore size distribution would be extremely useful in QDSSCs by providing pathways for colloidal QDs and electrolyte molecules. For comparison, two kinds of TiO2 photoanode films (film 1, TNPs; film 2, 1D CTNPs) with similar average thicknesses were deposited on FTO glasses coated with TiO2 barrier layers using the same number of screen-printing passes. Fig. S1 in the ESI† shows the cross-sectional SEM images of seven TNP films and seven 1D CTNP films formed using the same screen-printing procedure. The film thicknesses were slightly different, and the average thickness was about 21 μm. Fig. 4 shows the cross-sectional SEM images of the photoanode films prepared using the TNP and 1D CTNP pastes. The thicknesses of the two photoanode films were similar (average thickness = 21 μm), and the TiO2 nanostructures in the photoanode films were largely retained after screen printing compared with the as-prepared TiO2 powders (as shown in Fig. 4(b) and (d)). CdSe QDSSCs based on the two films were fabricated by assembling sandwich structures with polysulfide electrolytes and Cu2S counter electrodes.
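As a quick consistency check on the indexing above, Bragg's law ties the strongest XRD reflection to the HRTEM lattice spacing; the worked computation below uses only values reported in the text (Cu-Kα, λ = 1.54178 Å; 2θ = 25.37°):

```latex
% Bragg's law check for the anatase (101) reflection
\[
  d_{(101)} \;=\; \frac{\lambda}{2\sin\theta}
            \;=\; \frac{1.54178\ \text{\AA}}{2\sin(25.37^{\circ}/2)}
            \;\approx\; \frac{1.54178\ \text{\AA}}{2 \times 0.2196}
            \;\approx\; 3.51\ \text{\AA} \;=\; 0.351\ \text{nm}
\]
```

This value is in good agreement with the 0.355 nm (101) interplanar distance measured by HRTEM, supporting the anatase assignment.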
Fig. 5(a) shows typical J-V curves of the two CdSe QDSSCs irradiated by AM 1.5 G simulated sunlight (100 mW cm−2). The detailed photovoltaic parameters, including Jsc, Voc, fill factor (FF), and overall PCE, are summarized in Table 1. The two CdSe QDSSCs have similar FFs of 51-52%. However, the Voc of the 1D CTNP-based cell was higher than that of the TNP-based cell (596 versus 550 mV), and the Jsc values were significantly different (14.29 mA cm−2 for the TNP-based cell and 17.48 mA cm−2 for the 1D CTNP-based cell). As a result, the PCEs differed significantly (4.00% for the TNP-based cell compared to 5.45% for the 1D CTNP-based cell). To further illustrate the effects of the TNP- and 1D CTNP-based photoanodes on the performance of the CdSe QDSSCs, the monochromatic IPCE spectra were collected as functions of wavelength from 350 to 700 nm (Fig. 5(b)). Compared with the TNP-based cell, the 1D CTNP-based cell showed an obvious enhancement in quantum efficiency over the entire tested wavelength range. Generally, IPCE can be defined as IPCE(λ) = LHE(λ)φinj(λ)φreg(λ)ηcc(λ), where LHE(λ) is the light-harvesting efficiency, φinj(λ) and φreg(λ) are the quantum yields for electron injection and quantum dot regeneration, respectively, and ηcc(λ) is the charge collection efficiency. 29 The electron transport time (τd) and electron lifetime (τr) were calculated as τd = 1/(2πfd) 31,32 and τr = 1/(2πfr), where fd and fr are the characteristic minimum frequencies of the imaginary components of the IMPS and IMVS responses, respectively. 23,33 Fig. 7(a) shows the τd and τr values of the QDSSCs based on the above two photoelectrodes as functions of light intensity. Both τd and τr decreased with increasing light intensity because at higher light intensities more photo-generated electrons are available to fill the deep traps; since electron trapping/detrapping then occurs at shallower levels, the transfer of other free electrons becomes faster the closer to full the traps are. 29 Compared with the QDSSC based on the TNP photoanode, the QDSSC based on the 1D CTNP photoanode exhibited a faster electron transport rate (smaller τd) and a slower electron recombination rate (larger τr). This was attributed to two factors: compared with the TNP photoanode, the self-assembly of TiO2 nanoparticles into 1D nanochains in the 1D CTNPs provides a shorter, more efficient electron transport pathway, leading to the faster electron transport rate; moreover, the smaller specific surface area reduces the number of additional recombination centers, suppressing charge recombination and prolonging the electron lifetime. The charge collection efficiencies of the QDSSCs based on the two types of TiO2 photoanodes were estimated using the equation ηcc = 1 − (τd/τr); the results are shown in Fig. 7(b). 34 The ηcc values of the QDSSC based on the 1D CTNP photoanode were higher than those of the QDSSC based on the TNP photoanode at the different incident light densities. Taking all factors into account, the higher IPCE of the 1D CTNP-based cell compared to the TNP-based cell can be rationally attributed to the enhanced light-harvesting efficiency and good charge collection efficiency of the 1D CTNP-based cell, which led to a higher Jsc.
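To make the IMPS/IMVS arithmetic above concrete, here is a minimal sketch that turns characteristic minimum frequencies into a transport time, a lifetime, and a charge collection efficiency via τ = 1/(2πf) and ηcc = 1 − τd/τr. The frequency values are invented placeholders for illustration, not the measured data.

```python
# Hedged sketch: electron transport time, electron lifetime, and charge
# collection efficiency from IMPS/IMVS characteristic frequencies.
# The frequencies below are made-up placeholders, not measured values.
import math

def tau(f_hz: float) -> float:
    """Characteristic time constant from a characteristic frequency: 1/(2*pi*f)."""
    return 1.0 / (2.0 * math.pi * f_hz)

f_d = 25.0   # IMPS minimum frequency (Hz) -- hypothetical
f_r = 2.0    # IMVS minimum frequency (Hz) -- hypothetical

tau_d, tau_r = tau(f_d), tau(f_r)
eta_cc = 1.0 - tau_d / tau_r
print(f"tau_d = {tau_d*1e3:.2f} ms, tau_r = {tau_r*1e3:.2f} ms, eta_cc = {eta_cc:.2%}")
```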
To further investigate the interfacial electron transfer resistance, the EIS spectra of the two cells were also collected (Fig. 8). The Nyquist curves exhibited two semicircles, corresponding to electron injection at the counter electrode/electrolyte interface and transfer in the electrolyte at high frequencies (R1, the first semicircle), and to the recombination resistance for the electron-transfer process at the TiO2 film/QDs/electrolyte interface and transport in the TiO2 film (R2, the second semicircle). 35 The fitting results show that the value of R2 increases from 31.20 Ω for the TNP-based cell to 124.50 Ω for the 1D CTNP-based cell, which indicates reduced interfacial recombination and thus leads to a higher Voc, in agreement with the photovoltaic data above. The increase in R2 is mainly attributed to the reduction in the contact area of the TiO2 with the electrolyte, because the relatively low specific surface area and high loading of QDs on the 1D CTNP-based photoanode impede the direct exposure of the TiO2 to the electrolyte. Meanwhile, the 1D CTNPs may be efficiently coated by the inorganic ZnS/SiO2 double barrier layer. Furthermore, Rs corresponds to the sheet resistance of the FTO glass substrate and the contact resistance at the FTO/TiO2 interface. 29 The Rs values of the QDSSCs based on the TNP and 1D CTNP photoanodes, calculated according to the equivalent circuit, were similar (14.17 and 14.75 Ω), indicating that the electronic contact between the two TiO2 nanostructures and the FTO glass is almost the same.

Considering all the above factors, both the charge transfer resistance and the light-scattering capacity affect Voc and Jsc, which in turn directly determine the PCE of the QDSSCs. In short, compared with the TNP-based cell, the higher short-circuit current density of the 1D CTNP-based cell can be mainly explained by its suitable specific surface area and large pore diameter, which allow more CdSe QDs to be adsorbed, and by its prominent light-scattering capacity, which enhances the light-harvesting efficiency; the faster electron transport and slower charge recombination compared with the TNP-based cell lead to a higher Voc. Finally, an impressive PCE was obtained for the CdSe QDSSC based on the 1D CTNP photoelectrode.

Conclusions

In summary, novel 1D CTNPs formed by the ordered self-assembly of TNPs have been synthesized by a facile one-pot solvothermal route and used to fabricate efficient CdSe QD-sensitized solar cells. The 1D CTNP-based cell shows an impressive PCE of 5.45%, representing a ∼36% enhancement compared to the TNP-based cell. The enhanced PCE is attributed to the improvement of Jsc and Voc. The considerable improvement in the photovoltaic performance of the 1D CTNP-based cell can be attributed to enhanced light-harvesting efficiency and charge collection efficiency resulting from the fast electron transport rate and long electron lifetime. The new 1D CTNP-based photoanode is an optimal candidate for obtaining highly efficient QDSSCs.

Fig. 2 XRD patterns of the two TiO2 powders obtained after different reaction times (10 and 30 min).
Fig. 6 (a) Diffuse reflectance spectra of the two TiO2 films. (b) UV-visible absorption spectra of the two TiO2/CdSe films.
Fig. 7 (a) Electron lifetimes and electron transport times and (b) charge collection efficiencies of CdSe QDSSCs based on TNP and 1D CTNP photoanode films.
Table 1 Detailed photovoltaic parameters of the CdSe QDSSCs based on the two TiO2 films.
Subtly encouraging more deliberate decisions: using a forcing technique and population stereotype to investigate free will

Magicians' forcing techniques allow them to covertly influence spectators' choices. We used a type of force (Position Force) to investigate whether explicitly informing people that they are making a decision results in more deliberate decisions. The magician placed four face-down cards on the table in a horizontal row, after which the spectator was asked to select a card by pushing it forward. According to magicians and the position effects literature, people should be more likely to choose a card in the third position from their left, because it can be easily reached. We manipulated whether participants were reminded that they were making a decision (explicit choice) or not (implicit choice) when asked to select one of the cards. Two experiments confirmed the efficiency of the Position Force: 52% of participants chose the target card. Explicitly informing participants of the decision impairs the success of the force, leading to a more deliberate choice. A range of awareness measures illustrates that participants were unaware of their stereotypical behaviours. Participants who chose the target card significantly underestimated the number of people who would have chosen the same card, and felt as free as the participants who chose another card. Finally, we tested an embodied-cognition idea, but our data suggest that different ways of holding an object do not affect the level of self-control people have over their actions. Results are discussed in terms of theoretical implications regarding free will, Wegner's apparent mental causation, choice blindness and reachability effects.

Introduction

We like the feeling of being in control of our thoughts and our actions, and yet many of our behaviours are systematically influenced by external and internal factors (Ariely, 2008; Bargh & Chartrand, 1999; Bargh, Chen, & Burrows, 1996; Loewenstein, 1996). Likewise, our thoughts are often less unique than we intuitively believe them to be, and research on population stereotypes illustrates that most people will choose or think about similar things or objects when asked to make a decision (French, 1992; Grimmer & White, 1986; Marks & Kammann, 1980). Understanding the external factors that influence our behaviours may help individuals make more informed and freer choices (Appourchaux, 2014). Baumeister suggested that free will is predominantly associated with cognitive processes involving conscious and controlled activity (i.e. System 2), rather than the nonconscious and automatic processes associated with System 1 (Baer, Kaufman, & Baumeister, 2008; Kahneman et al., 2002). Accordingly, a more useful view of free will is to think in terms of autoregulation and self-control mechanisms, a perspective that allows us to take advantage of the parameters influencing our thoughts and actions during our day-to-day lives. Magicians are masters of deception and of creating the illusion of conscious will, and they use a wide range of forcing techniques to give spectators the illusion that they freely and consciously chose a card that in reality is predetermined by the conjurer (Kuhn, Amlani, & Rensink, 2008). This paper uses a forcing technique to investigate whether explicitly informing people that they are making a decision leads to a more deliberate decision.
Forcing Techniques

Forcing refers to conjuring techniques which allow magicians to covertly influence a spectator's choice or its outcome (Pailhès & Kuhn, 2019; Pailhès & Kuhn, submitted). These techniques are often used to create the illusion of precognition or mind reading, and magicians have extensive real-world experience in manipulating the decisions people make. Back in 1894, Alfred Binet investigated magicians' deceptive craft scientifically, and he observed that conjurers exploit spectators' "laziness" without them becoming aware of it (Binet, 1894). In other words, conjurers intuitively try to manipulate the spectator into using more automatic cognitive processes, which are easily exploited to trick the mind. He further noted that magicians often use circumstantial influences to push a person to act in a predictable way. Nowadays, we refer to these processes as automatic behaviours, which often rely on heuristics, or a System 1 type of thinking (Kahneman, 2011; Kahneman et al., 2002). By observing conjurers performing tricks, Binet noted that if you are presented with three different objects, one alongside the other, most people choose the middle one. He also points out that this is probably due to the ease with which people execute certain grasping actions. Likewise, he noted that when people are presented with a sheet of paper that has been divided into 16 equal-size squares and they are asked to draw a dot in one of them, most people will choose the middle squares. As he writes, "there is therefore a kind of attraction exerted by the centre of the figure. Probably also because they provide more convenience to the hand." (Binet, 1894, p. 150/151). Magicians frequently exploit these types of cognitive heuristics and population stereotypes to force a decision (Annemann, 1940; Banachek, 2002; Jones, 1994). Magicians' real-world experience and expertise in performing these tricks for large audiences have allowed them to identify psychological factors that enhance the possibility of the spectator selecting the forced item. Several other papers have investigated forces that rely on different techniques, and it is likely that spectators simply choose the easiest option. The "Classic Force" relies on the timing with which the magician handles the deck of cards while asking the spectator to pick a card (Shalom et al., 2013). Shalom et al. showed that most people pick the card which is subtly handled by the magician, who physically restricts the choice. Olson, Amlani, Raz, & Rensink (2015) investigated the "Visual Riffle Force", in which spectators are asked to visually select a card while the magician flips through the deck in front of their eyes; most spectators choose the card which is the most visually salient. Both forces have high success rates and showed that participants felt free even when they chose the target card. Magicians have developed a large assortment of forcing techniques that rely on a wide range of cognitive processes (Pailhès & Kuhn, 2019). In this paper, we examine a forcing technique that relies on population stereotypes: the Position Force. This technique is based on the observation that people's choices of random objects are influenced by the objects' physical position. According to the magic literature, people will be inclined to select the card that is the easiest to reach in the row (Banachek, 2002; Binet, 1894).
This force is most commonly used with five playing cards (Banachek, 2002), but we decided to investigate the force with four cards to allow comparison with other forcing studies from our research program: here, the magician places four cards on the table in a horizontal row, after which the spectator is asked to select a card by physically touching it. Results from an online survey of 91 magicians showed that most of them (68%) think that when we present four cards in a row on a table, the majority of spectators will choose the third card from their left. Their mean estimate of the percentage of people who would choose this target card was 57% (SD = 15.9). Indeed, a recently published study from our laboratory using the Position Force found that 60% of participants selected the third card from their left, while feeling free in their choice and underestimating the proportion of people who would select the same card (Kuhn, Pailhès, & Lan, 2020) (Fig. 1). Moreover, research in other domains suggests that people's choices are influenced by the physical positioning of an object.

Position effects

Nisbett and Wilson showed in 1977 that when presented with four identical pairs of stockings, people tend to prefer the far-right one (Nisbett & Wilson, 1977). Nowadays, consumer psychology (Chae & Hoegg, 2013) and nudge techniques (Dayan & Bar-Hillel, 2011) often rely on manipulating an object's physical positioning with the intention of influencing people's behaviour and choices. For example, people are more likely to choose an item, such as food (Kim, Hwang, Park, Lee, & Park, 2019), if it is positioned in a specific location, and this can be used to steer people's choices (Bucher et al., 2016). There are, however, some discrepancies about the exact way in which positioning affects people's choices, with studies showing both edge advantage and edge aversion. Bar-Hillel suggests that these inconsistencies result from different choice characteristics, such as whether the choice is interactive or not (Bar-Hillel, 2015a). Accordingly, a choice is interactive when the payoff for someone's decision is affected by another interested person. For example, in a game such as rock, paper, scissors, each player's payoff is determined by the joint choices of both players. A further factor involves the amount of cognitive processing a choice requires. Situations in which all items are evidently identical (such as the backs of playing cards) fall into the category of choices that require neither processing nor interaction. In this case, we observe an edge aversion rather than an edge advantage (Bar-Hillel, 2015a, b). Indeed, when presented with a selection of similar options, or identical items, individuals tend to choose items located in middle positions rather than those located at the edges (Christenfeld, 1995). This effect has been found with a range of items. For example, participants prefer middle items and avoid items located at the extremes when choosing among a row of arbitrary symbols, a toilet paper roll within a stall, a bathroom stall, and when picking products from shelves in supermarkets. The principle underlying these effects is thought to be minimal mental effort (Christenfeld, 1995; Shaw, Bergen, Brown, & Gallagher, 2000). Indeed, research has shown that when participants are asked to choose between similar highlighters, survey papers, or seats, they reliably prefer the middle items (Shaw et al., 2000).
Bar-Hillel (2015a, b) notes that in such situations it is not only mental effort but also physical effort that is at play. The author further suggests that in these types of tasks, middle items are more reachable than those at the ends because they are closer to the participants. Indeed, her principle of reachability dovetails with this idea: when all things are equal, people prefer objects that can be reached more easily. Accordingly, when people are presented with a horizontal physical display, their choice will be biased by this reachability principle, which might explain why they favour central items. This behaviour, following a principle of least effort, is linked to dual-system theories of cognition (Chaiken, Liberman, & Eagly, 1989; Chen, Duckworth, & Chaiken, 1999; Evans & Stanovich, 2013; Kahneman, 2011; Kahneman, Frederick, Kahneman, & Frederick, 2001; Petty & Cacioppo, 1986), which argue that most of the time we use automatic, rapid, stereotyped responses rather than controlled ones (Tomlin, Rand, Ludvig, & Cohen, 2015). Research on the psychology of the self suggests that one of the most important human characteristics is the ability to modify our responses and therefore remove ourselves from the effects of situational stimuli (Baumeister & Heatherton, 1996). It has been shown that self-control requires attention and effort (Baumeister, 2002; Baumeister & Heatherton, 1996; Hagger, Wood, Stiff, & Chatzisarantis, 2010) and that one of the main functions of our reflective system is to control thoughts and actions suggested by our automatic, impulsive system (Kahneman, 2011). Our System 1 (the automatic type of thinking) is associated with greater use of diverse biases and heuristics than are our deliberative, reflective processes (Kahneman et al., 2002). Therefore, encouraging people to reflect before making a decision is expected to lead to less use of impulsive behaviours. Although there is some research examining the psychological factors that activate our automatic type of thinking (e.g. cognitive load and time pressure; Baumeister & Heatherton, 1996; Hwang, 1994; Vohs & Heatherton, 2000), less is known about how to activate more deliberate decisions. This paper seeks to document the Position Force, investigating its success rate and how free participants feel even when they are influenced by the trick. At the same time, we seek to investigate whether it is possible to encourage participants to make more deliberate choices, impairing the success of the force. In Experiment 1, we examine whether a simple change in phrasing, making the choice explicit, can lead to this effect.

Experiment 1

Experiment 1 aimed to empirically examine how effective the Position Force is in terms of forcing participants to choose a target card, and to investigate whether the nature of the choice affects the extent to which participants choose the predicted item. Participants were either asked to simply push a card towards the experimenter (implicit choice), or they were explicitly asked to choose a card before the physical selection (explicit choice). Previous research shows that deliberative decision-making can be induced by simply framing tasks as decisions rather than intuitive reactions (Small, Loewenstein, & Slovic, 2007; Zhong, 2011). For example, participants were asked "to decide" rather than "to feel" to induce a deliberative decision (Zhong, 2011).
Deliberative decisions are thought to lead to less reliance on heuristics and impulsive, automatic behaviours (Boureau, Sokol-Hessner, & Daw, 2015; Kahneman & Frederick, 2002; Stanovich & West, 2000). We therefore predicted that participants would be less likely to choose the target card (i.e. the card that could be reached more easily) when they were encouraged to deliberately think about the choice (i.e. explicit choice) than when they made the selection implicitly. In line with previous research on the reachability bias (Bar-Hillel, 2015a, b; Bar-Hillel, Peer, & Acquisti, 2014), we predicted that the force would only work for participants who used their right hand to reach for the card, and thus it should be more effective for right-handed participants. Our second objective was to examine the extent to which participants were aware of the force. To our knowledge, none of the previous studies on position effects and reachability has done this (though see Kuhn et al., 2020). Two key elements make a force successful: participants must select the target object, and this selection must feel free. Therefore, we assessed how free people felt about their choice and their awareness of the bias itself. Since the Position Force is commonly used in the context of a magic performance, we predicted that participants would feel free about their selection and that they would be unaware of this behavioural bias.

Participants

One hundred participants (50 females, 50 males) between 18 and 60 years old (M = 29.71, SD = 11.65) recruited on the Goldsmiths University campus took part in the experiment. The Goldsmiths Psychology Department provided ethical approval for the two experiments. Before the experiment, and to maximize the power of our results, we ran an a priori power analysis for a Chi-squared test with w = 0.30 (moderate effect size), α = 0.05, and a power of 0.8. The output required 88 participants, and the chosen effect size was based on prior results using the Position Force (Kuhn et al., 2020). We confirm that for both experiments, we report all measures, conditions and data exclusions.
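The a priori power computation reported above is easy to reproduce with standard tools; the following is a minimal sketch, assuming a goodness-of-fit Chi-squared test with df = 1 (n_bins = 2: target card versus any other card), which is the configuration that reproduces the reported requirement of 88 participants.

```python
# Hedged sketch of the a priori power analysis reported above
# (chi-square goodness of fit, w = 0.30, alpha = 0.05, power = 0.80).
# Assumption: df = 1, i.e. n_bins = 2 (target card vs. any other card).
from math import ceil
from statsmodels.stats.power import GofChisquarePower

n_required = GofChisquarePower().solve_power(
    effect_size=0.30, alpha=0.05, power=0.80, n_bins=2)
print(ceil(n_required))  # -> 88
```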
Procedure

The experimenter/magician sat at one of Goldsmiths' cafeteria tables with the four cards already on the table, all spaced by approximately 5 cm and positioned in a way that made the row as symmetrical as possible. Participants sat facing the experimenter. Participants were randomly allocated to one of the two selection types (implicit choice or explicit choice), and consent forms presenting the experiment as a study about magic tricks and decision-making were signed. In the implicit choice condition, participants were asked to "push a card toward [the experimenter]". The procedure for the explicit choice condition was identical, with the exception that participants were instructed to "choose a card, and then push it toward [her]". The experimenter then noted the chosen card and the hand with which the participant pushed the card. The participants were then asked to complete a questionnaire which asked them (1) how free they felt about their choice (from 0, not free at all, to 100, completely free), (2) the percentage of people they thought would have chosen the same card as them, and (3) the Dutch Handedness Questionnaire (van Strien, 2003).

Efficiency of the force and main manipulation

The first analysis aimed to assess the efficiency of the Position Force and the impact of the nature of the choice on participants' selection. Figure 2 shows the percentages of participants who chose each of the four cards as a function of the nature of the choice and the hand that was used to make the selection. Eighteen percent of participants used their left hand, compared to 82% who used their right hand. Overall, 55% of the participants chose the target card, which was the most frequently chosen card, significantly more than chance (i.e. 25%) (χ²(1, N = 200) = 18.75, p < 0.001, φ = 0.293). This result very closely matches the mean of the magicians' estimates (57%). A visual inspection of the graphs illustrates a systematic difference in selections as a function of the hand used to make the selection. Although the graphs highlight clear differences in the success of the force as a function of hand selection, the difference did not reach statistical significance (χ²(3, N = 100) = 3.95, p = 0.27, φ = 0.195). However, since only 18% of the participants used their left hand, it is likely that the non-significant difference is due to a lack of power. As we expected the force to work only when people used their right hand, we focused the rest of the analyses on the right-handed selections only. Participants in the implicit choice condition were significantly more likely to choose the target card than those in the explicit choice condition (χ²(1, N = 82) = 4.32, p = 0.038, φ = 0.224). This suggests that, as we predicted, people tend to act in a more deliberate way when they are reminded that they are making a decision.

Fig. 2 Percentages of choices as a function of the experimental conditions and the hand used to make the selection. (a) The choices made by participants who used their left hand to push the card, (b) those who used their right hand. Position 1 is the first card from the left of the participants.

Awareness of the force

Our next analysis examines the impact that the nature of the choice (explicit vs. implicit) and the choice itself (forced or not) had on people's feeling about how free the choice was. Kruskal-Wallis tests show that neither the choice of card nor the selection method had an impact on participants' feeling of freedom about their choice (H(1) = 1.77, p = 0.18 and H(1) = 0.17, p = 0.68, see Fig. 3). This shows that participants were unaware of their bias, as well as a dissociation between their behaviour and their conscious introspection. Next, we examined participants' metaknowledge of the bias by examining their estimates of the percentage of people who would choose the same card. A Kruskal-Wallis test shows that whatever card the participants chose, they did not give different estimations of the percentage of people who would choose the same card as they did (H(3) = 2.46, p = 0.48). Interestingly, participants who chose the target card underestimated the extent to which they used a population stereotype, and the other participants overestimated the number of times their card would be chosen (see Fig. 4). This again shows a dissociation between participants' behaviour and their conscious introspection.

These results suggest that the Position Force is effective: a large proportion of our participants chose the target card while not being aware of their bias. This confirms Wegner's theory (Wegner, 2002a, b), showing that people tend not to have access to the real causes of their behaviours, which are often unconsciously rooted. Here, most participants' decisions seem to have been guided by the position of the card, while they underestimated the number of people who would have made the same decision. A simple change in phrasing negatively impacted the success of the Force.
This suggests that participants rely less on automatic/impulsive biases when asked to choose before acting. Handedness also plays a role in this force: the force only worked when people used their right hand. The results confirm the previous literature on reachability and edge aversion when the presented items are identical, as participants favoured items that were easier to reach with the hand they used, while avoiding the cards at the ends of the row.

Experiment 2

The second experiment aimed to replicate the results of Experiment 1 and confirm whether explicitly informing people about the choice before their selection would impair people's stereotypical behaviour. This time, rather than letting people use their preferred hand, we forced them to use their right hand by restricting the use of their left hand. We used this experiment to test a controversial idea in embodied cognition which suggests that the way in which people are asked to hold an object influences the level of self-control they have over their actions. This idea is based on the observation that people may clench their fists, tense their muscles or grit their teeth when firming willpower, and argues that such actions could also help us firm willpower and consequently improve self-control (Hung & Labroo, 2010; Niedenthal & Barsalou, 2005). Past research on embodied cognition shows that participants' self-control is enhanced when they firmly grasp an object while making a choice (Hung & Labroo, 2011; Niedenthal & Barsalou, 2005). The explanation behind these findings is that our memories are thought to be composed of multimodal experiences, which also spread throughout our body. One consequence of this would be that bodily actions accompanying thoughts could generate the associated cognitions and influence our behaviours (Briñol & Petty, 2003; Cacioppo, Priester, & Berntson, 1993). If true, this predicts that participants would experience greater self-control, and would therefore choose the target card less often, when they were asked to firmly grasp a glue stick rather than simply hold it. Finally, we decided to investigate participants' sense of freedom more thoroughly, using Thompson's three components of a free choice (Thompson, Locander, & Pollio, 1990): being deliberate, in control, and free from restriction.

Procedure

The experiment took place at the same venue, with the same setting, at Goldsmiths University, where the participants were recruited. This time, every participant was asked to hold a glue stick in their left hand. The experimenter either asked them (while performing the gesture herself) to simply hold the glue stick in their open palm or to firmly grasp it between their fingers and their palm. As in Experiment 1, participants were randomly allocated to one of the selection conditions and either asked to "choose a card and then push it towards [the experimenter]" (explicit choice), or to "push a card towards [the experimenter]" (implicit choice). Participants were then asked to put the glue stick down and answer the paper questionnaire. The questionnaire was composed of 0-100 scale questions about their feeling of freedom ("How free did you feel for your choice?") and its three components ("How restricted did you feel for your choice?", "How impulsive/deliberate did you feel in making your choice?"
and "How much control did you feel you had over your choice of card?"), as well as two measures of how firm and tight they felt their hand was while making the choice, to ensure they did tense their muscles more in the self-control condition. Finally, their writing hand, gender and age were also recorded.

Efficiency of the force and main manipulations

Our first analysis tested the efficiency of the Position Force and our two main manipulations. Figure 5 shows the percentages of participants who chose each of the four cards as a function of the two experimental manipulations. Overall, 48 of 100 participants chose the target card, which was the most frequently chosen one. Comparing our results to a random distribution (25% choice per card), a Chi-squared test showed that our participants chose the target card significantly more often than the others (χ²(1, N = 200) = 11.41, p < 0.001, φ = 0.232). Regarding the experimental conditions, participants chose the target card significantly less often in the explicit choice condition than in the implicit choice condition (36% vs 60% of choices, χ²(1, N = 100) = 5.77, p = 0.016, φ = 0.234). This confirms that when participants are forced to use their right hand to make their choice, and therefore when the most convenient card to choose is indeed the forced one, the phrasing of the choice does have an impact on whether or not participants give a stereotypical answer. It appears that simply using the sentence "choose a card, and then push it towards me" rather than just "push a card towards me" subtly makes the choice more salient and explicit, thereby activating a more deliberative process in participants' decision. However, no significant difference was found regarding the effect of embodied self-control (χ²(1, N = 100) = 1.44, p = 0.23, φ = 0.119), even though participants did feel their hand muscles were significantly tighter (W = 1954, p < 0.001) and firmer (W = 1980, p < 0.001) when they were asked to firmly grasp the glue stick rather than simply hold it. Several explanations seem possible for these null results. First, studies using this type of procedure have suggested that firmly clasping an object could enhance self-regulation and control (e.g. withstanding pain, overcoming food temptation, consuming unpleasant medicines), but we cannot rule out the possibility that this does not apply to the current specific situation. It is also possible that the present study did not require participants to use their self-control to choose a card other than the forced one, and therefore enhanced self-control would not affect the results. However, embodied cognition theories have also suffered important criticism regarding their theoretical grounding, and several papers have put in doubt the validity of research on the subject (Caramazza, Anzellotti, Strnad, & Lingnau, 2014; Goldinger, Papesh, Barnhart, Hansen, & Hout, 2016; Mahon & Caramazza, 2008) or noted a lack of replication (e.g. Chabris, Heck, Mandart, Benjamin, & Simons, 2019). It has been pointed out that in most experiments on embodied cognition, the expected behaviours tended to be overarching ones (e.g. completing a task), whereas our study was probably looking for a more specific outcome (Goldinger et al., 2016).

Feeling of freedom

First, regarding the overall general sense of freedom, participants felt significantly freer in the explicit choice condition than in the implicit one (W = 1540, p = 0.034, rpb = 0.232, see Fig. 6).
Feeling of freedom

First, regarding the overall general sense of freedom, participants felt significantly freer in the explicit choice condition than in the implicit one (W = 1540, p = 0.034, r_pb = 0.232, see Fig. 6). No significant difference was found for the embodied self-control variable (W = 1330, p = 0.56, r_pb = 0.064). Taking a closer look at the components of the feeling of freedom (Fig. 6), participants felt significantly more free from restrictions when the choice was explicit (M = 80.06) rather than implicit (M = 66.52; H(1) = 6.63, p = 0.01). No other significant result was found for either the self-control variable or the other components of freedom (i.e. the feelings of control and deliberation). The mean of the feelings of control, restriction and deliberation correlated with the general feeling of freedom (r_s = 0.619, p < 0.001, see Table 1). However, the feeling of deliberation did not correlate with those of control, restriction, or general freedom. Cronbach's alpha was only 0.34 for the three items but rose to 0.60 when the deliberation item was removed. It therefore appears that, contrary to Thompson's definition (1990), the feeling of deliberation is not a reliable component of people's general feeling of freedom. Finally, we looked at how the feeling of freedom and its components were linked to participants' choice of card. A logistic regression indicated no significant association between participants' feelings of freedom, restriction and control and their choice of card (χ²(95) = 4.93, p = 0.295). However, the feeling of deliberation was significantly associated with participants' choices (p = 0.045): the more participants felt their decision was deliberate, the more likely they were to choose a card other than the forced one. During debriefings, participants who did not choose the target card typically reported first thinking about taking it and then changing their mind for another card.

In summary, this experiment replicated Experiment 1, showing that most participants tend to choose the target card and that the Position Force is extremely effective. We confirmed that the nature of the choice affects whether participants choose the target card or make a more deliberate choice and go for another one. Moreover, the more participants felt their choice was deliberate, the less likely they were to choose the target card. Participants also felt less restricted and generally freer when they were asked to "choose" a card (explicit choice) rather than simply "push" it (implicit choice). However, the embodied self-control variable had no impact on any measure.

General discussion

This paper sought to document the Position Force, as well as to investigate whether it is possible to lead people to act more deliberately when making a simple decision. For this, we used a subtle change in the phrasing of the choice, making it either explicit or implicit (Experiments 1 and 2), as well as a controversial idea from embodied cognition (Experiment 2).

Position Force's efficiency and choice variable

Both experiments confirmed that the Position Force is effective and replicated previous results (Kuhn, Pailhès & Lan, 2020), with an overall 52% of participants choosing the target card (the third one from participants' left). These results closely match the mean of magicians' estimates (57%) and demonstrate that magicians' intuitions about the effectiveness of the force are accurate and precise.
Our results further show that a position effect influences people's choices, and they clearly illustrate an edge aversion effect, which dovetails with previous findings that used identical items (Bar-Hillel, 2015a, b; Christenfeld, 1995). It is interesting to note that some other related forcing techniques might rely on this principle as well. Dai Vernon's five cards force is thought to rely on reverse psychology: five cards are placed in a horizontal row with the target card located fourth from the left. In this force, the five cards are carefully chosen, namely the king of hearts, seven of clubs, ace of diamonds, four of hearts and nine of diamonds (from left to right). The spectator is primed to be suspicious as the magician insists the selection must be a free choice and points out that the ace is in the middle and the seven is the only black card. These statements are thought to eliminate these two cards precisely because they were mentioned. Of the remaining cards, two are situated at the ends of the row, and the king is the only picture card, which is suggested to make it suspicious. As stated by Banachek (2002), the four of hearts is more likely to be chosen as it is not at the end of the spread and is in the fourth position. It would be interesting to investigate whether this force truly relies on reverse psychology, or simply on the position of the card, which again is seemingly the most reachable one.

[Fig. 6: General feeling of freedom and its components as a function of the experimental conditions. Bars are 95% confidence intervals for each condition.]

[Table 1 (fragment): correlations with general freedom: 0.137, 0.525***, 0.684***, 0.619***.]

The two experiments also showed that asking participants to make an explicit decision impairs the success of the Force. When participants were asked to choose a card and then push it, rather than simply push it, they chose the target card less often. These results suggest that the subtle change in the presentation of the choice resulted in less automatic, more deliberate choices. Making a choice explicit therefore seems to lead to a less automatic, less impulsive decision: a more deliberate one.

Awareness of the bias

Experiment 1 investigated participants' awareness of their bias by asking them to estimate what percentage of people would have chosen the same card as they did in the same situation. The results show that participants' choice of card had no impact on their estimates. Across the four different types of choices (the four cards), participants estimated that between 33 and 43% of other people would choose the same card as theirs. Participants who chose the target card underestimated the fact that their choice was a population stereotype, their mean estimate being 40%, compared with the 55% of participants who actually chose that card. In contrast, participants who did not choose the target card overestimated the frequency of other people's choices: the mean of their estimates across the three other cards was 38%, compared with the 15% of participants who actually chose these cards. This adds to the previous literature on choice blindness and highlights a dissociation between our behaviour and our conscious introspection (Hall, Johansson, Tärning, Sikström, & Deutgen, 2010; Hall et al., 2013; McLaughlin & Somerville, 2013). As Wegner noted, the actual causal paths of an action are not present in the person's consciousness, and the experience of conscious will arises as we infer this path from our thought to our action (Wegner, 2002a, b).
According to his theory, we unconsciously decide upon an outcome, and if this decision coincides with our conscious intention, we experience having made the choice independently of the unconscious processing. This appears to be what happened in the implicit choice condition: most participants produced an automatic behaviour influenced by external factors (position and reachability effects) but were not consciously aware of these influences, underestimating the number of people who would have chosen the same card as they did.

Feeling of freedom

We also measured participants' general sense of freedom over their choice (Experiments 1 and 2), as well as its three components (Experiment 2) according to Thompson's definition (Thompson et al., 1990). Participants' feelings of deliberation, restriction and control over their choice were measured alongside their general feeling of freedom. Across both experiments, participants' choice of card had no significant impact on their general sense of freedom. This shows that whether or not people were influenced by the force, they felt the same degree of freedom over their choice. As Binet already noted (Binet, 1894), "each individual placed in certain conditions, and thinking to be acting freely, is, in reality, behaving in the same way as other individuals, and what they have in common is automatic activity" (p. 151). This adds to the previous results regarding people's awareness of their bias and supports the choice blindness literature, showing that people tend to be blind to the reasons for their choices (Hall et al., 2010; Johansson, Hall, Sikström, Tärning, & Lind, 2006; Rieznik et al., 2017). However, participants felt their choice was more deliberate when they did not choose the target card. The feeling of deliberation was the only component that did not correlate with the general sense of freedom or its other two components. During the debriefing, participants who did not choose the target card typically reported first thinking about taking it and then changing their mind for another one. This suggests that people can be aware of their metacognition about their choice while still being blind to why they act the way they do.

Towards freer choices?

Our results highlight important new pathways for exploring the nature of free will. If people can become aware of their metacognitions about their decisions, they can inhibit their initial impulsive and automatic behaviours and decide not to act upon them. Baumeister notes that one needs to go through an inner process of choosing for free will to be relevant. He describes how the role of free will would be to alter the flow of our behaviour, and how "the capacity for rational thought and decision-making lies atop an irrational, impulsive beast, and so it only sometimes can alter the course of action that the impulsive beast will take" (p. 71). Dovetailing this idea, our results suggest that we should refocus the debate on determinism vs. free will and frame the latter in terms of degrees. Baumeister linked free will to dual-process theories of human mental functioning by pointing out that free will could be associated mainly with what is called System 2, the cognitive processes involving conscious and controlled activity, rather than with the nonconscious and automatic processes associated with System 1 (2002). Investigating freedom in terms of self-regulation and self-control might help us find ways to attain these degrees of freedom of choice.
Such empirical findings may help us understand the mechanisms that underpin our reasoning and help us make more deliberate choices, rather than simply acting on habits and automatic behaviours. This research may also help us find concrete, practical ways to enhance our deliberate and rational cognitive processes. Our paper suggests that simply making people more aware that they are making a decision could be one effective solution.
Preparation of porous bio-char and activated carbon from rice husk by leaching ash and chemical activation

A study on the preparation of porous bio-char and activated carbon from rice husk char has been conducted. Rice husk char contains a high amount of silica, which retards the porousness of the bio-char. The porousness of rice husk char can be enhanced by removing the silica from the char and applying heat at high temperature. The char is then activated by chemical activation at high temperature. No inert medium is used in this study: the work is conducted in a low-oxygen environment by using biomass to consume the oxygen inside the reactor, and a double crucible method (one crucible inside another) is applied to prevent intrusion of oxygen into the char. The results show that porous carbon is prepared successfully without using any inert medium. The adsorption capacity of the material increased upon removal of silica and upon activation with zinc chloride, compared with raw rice husk char. The surface areas are found to be 28, 331 and 645 m² g⁻¹ for raw rice husk char, silica-removed rice husk char and zinc chloride-activated rice husk char, respectively. It is concluded from this study that porous bio-char and activated carbon can be prepared under normal environmental conditions instead of in an inert medium. This study demonstrates a method for producing activated carbon from agro-waste that could be scaled up for commercial production.

Synthetic dyes have a considerable environmental impact because of their widespread use and their potential to release toxic aromatic amines, with azo dyes accounting for approximately 50% of worldwide production (Guettai and Amar 2005; Rys and Zollinger 1992). About 20% of synthetic dyes are lost in the wastewater stream, which is also an important source of pollution (Zollinger 1991). The release of dyes into aquatic ecosystems has dramatic consequences through toxicity, aesthetic pollution and perturbation of aquatic life. Since modern dyes are stable in aquatic systems, biological treatments for the eradication of textile and dyeing effluents are ineffective (Dai et al. 1995, 1996; Chung et al. 1981). Activated carbon derived from alkali-treated rice husk has shown good adsorption capacity for methylene blue in aqueous solutions (Lin et al. 2013) and for nitrogen adsorption/desorption (Liu et al. 2016). Activated carbon has several important uses, including solution purification, removal of tastes and odours from domestic and industrial water supplies, treatment of vegetable and animal fats and oils, alcoholic beverages, chemicals and pharmaceuticals, and wastewater treatment. It is a versatile product with good market demand. The non-availability of high-quality activated carbon in the Bangladesh market, to cater to the needs of the pharmaceutical and fine chemical sectors, has necessitated imports into Bangladesh. Despite the prolific use of activated carbons in the water and wastewater industries, commercial activated carbons remain expensive. This has led to a search for low-cost, easily available materials as alternative adsorbents. Proper utilization of agro-industrial by-products is very important for a nation's economy. A wide variety of carbons have been prepared from agricultural waste such as coconut shells (Laine et al. 1989), cotton stalk (Grigis and Ishak 1999), sugarcane bagasse (Ahmedna et al. 2000), coir pith (Kadirvelu et al. 2003), straw (Namila and Mungoor 1993) and rice husk (Lin et al. 2013; Liu et al. 2016).
Each adsorbent has its drawbacks and advantages. A previous study reported significant changes in activated carbon as the activation temperature was increased from 600 to 800 °C (Lin et al. 2013). An inert medium (nitrogen) is commonly used for carbon activation at high temperatures in the range of 600 to 900 °C (Ahmedna et al. 2000; Shimda et al. 1999; Guo et al. 2008; Lin et al. 2013; Liu et al. 2016). Preparing activated carbon under a nitrogen medium adds to the production cost and requires a complex experimental setup. The present work demonstrates the feasibility of preparing activated carbon from locally available rice husk, a low-cost adsorbent material, under normal environmental conditions.

Char production

Rice husk is first washed thoroughly with tap water to remove mud and other water-soluble impurities, and then with distilled water to remove remaining impurities. After washing, the material is dried in an oven at 105 °C for 24 h. The samples are preserved in desiccators to avoid further absorption of moisture. A dried rice husk sample is placed in a porcelain crucible, covered with a lid and heated in a proportional-integral-derivative (PID) controlled muffle furnace at 650 °C for an hour. The carbonized husk samples are cooled and preserved for the next step of the procedure.

Experimental treatments for preparation of activated carbon

The aim of the different treatments is to identify the best way of producing quality activated carbon. In this context, a single variety of rice husk is used to investigate activated carbon preparation at temperature levels of 600, 700, 800 and 900 °C. Heating durations range from 30 to 120 min at 30 min intervals. As the ash in rice husk char retards its porousness, the char is treated with an alkali solution (sodium hydroxide) to leach the ash from it. After removal of the ash, the char is washed, dried in an oven and kept in desiccators for further use. The treated rice husk char is then activated with zinc chloride, using different char-to-chemical ratios at different temperature levels.

Oxygen-reduced environment by the double crucible method

In general, activated carbon is prepared in an inert medium (nitrogen or argon). However, preparing such an inert environment is somewhat inaccessible under local conditions, so an alternative approach, called the double crucible method, was applied in this study. In this method, the char is placed in a smaller silicon crucible covered with a ground silicon lid (Tatlok brand). The small crucible containing the rice husk char is put inside a bigger porcelain crucible, the gap inside the bigger crucible is filled with raw husk to create a reduced-oxygen environment, and the bigger crucible is finally covered with a lid. In this arrangement, the bulk volume of air inside the bigger crucible is displaced by the raw husk; when the crucible is heated, the air inside first expands and a portion escapes from both the small and the bigger crucible, and then the volatile part of the rice husk reacts with the oxygen, so it is assumed that essentially all the oxygen is exhausted from the crucible during the heating process. Since no inert medium is used, this is a cost-effective process.

Chemical activation of rice husk char

Two grams (2 g) of rice husk char is soaked in zinc chloride solution for 24 h.
Then the crucible is placed inside a muffle furnace for the required heat treatment. After activation with zinc chloride, the samples are washed first with 0.3 N hydrochloric acid solution and then with distilled water until the pH reaches 7.0. After washing, the samples are dried in an oven at 105 °C for 24 h and preserved in desiccators to avoid further absorption of moisture.

Characterization of the activated carbon

The activated carbon is first evaluated with a methylene blue (MB) dye adsorption test. One of the main purposes of this study is to find the best combination of activation factors (precursor, temperature, activation agent, degree of heat treatment) to obtain the best product from rice husk; therefore, several steps and procedures are followed successively to reach this goal. The effect of temperature on the activation of rice husk char is studied at three temperature levels, viz. 600, 700 and 800 °C, with 2 h of heating at each level. The heat-treated rice husk chars are then evaluated by their adsorption of methylene blue. Rice husk char (0.05 g per sample) is weighed on a 4-decimal digital balance (Model: OHAUS) and mixed into 100 mL of 10⁻⁴ M MB solution. The mixture of dye and char is stirred and kept for about 7 h. After adsorption, the dye solution is transferred to a centrifuge tube and centrifuged for 20 min at 1500 rpm to settle the char particles at the bottom of the tube. Clear solution is taken from the upper part of the tube and its maximum absorbance is measured at a wavelength of 664 nm. Adsorbed MB is measured with a UV-Vis spectrophotometer (model 6715 UV/Vis, JENWAY) in the Environmental Engineering Lab of the Department of Civil and Environmental Engineering, Islamic University of Technology. A calibration chart of MB concentration against UV absorbance is determined, and the coefficient of extinction is found to be 65,280 L mol⁻¹ cm⁻¹. The concentration of MB after adsorption is determined using the following equation:

C_e = A / E

where C_e = concentration of the MB solution after adsorption, M; A = absorbance from the UV-Vis instrument, cm⁻¹; E = coefficient of extinction, 65,280 L M⁻¹ cm⁻¹. The amount of MB adsorbed is then calculated using the following equation:

q_e = (C_0 − C_e) × V × W / m

where q_e = uptake of dye by the adsorbent, mg g⁻¹; C_0 = initial concentration of dye, M; C_e = final concentration of dye, M; V = volume of dye solution, L; m = weight of activated carbon, g; W = mole weight of MB (319.86 × 1000), mg/mole.

Specific surface area determination by MB adsorption

The specific surface area is calculated from the amount of MB adsorbed, taking the surface area occupied by one molecule of MB to be 130 Å². The specific surface area is then calculated using the following equation:

S_s = (q_e / W) × A_V × A_MB

where S_s = specific surface area, m² g⁻¹; q_e = amount of MB adsorbed, mg g⁻¹; W = molecular weight of MB, mg/mole; A_V = Avogadro's number (6.02 × 10²³ per mole); A_MB = area covered by one molecule of MB (130 Å², converted to m²).
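As a quick numerical check of the three formulas above, the following Python sketch (illustrative only; the variable names are ours) reproduces the scale of the surface areas reported later in the paper:

E = 65280.0          # coefficient of extinction, L mol^-1 cm^-1
W = 319.86 * 1000    # mole weight of MB, mg per mole
A_V = 6.02e23        # Avogadro's number, per mole
A_MB = 130e-20       # area per MB molecule: 130 square angstroms in m^2

def ce_from_absorbance(a):
    """Equilibrium MB concentration (M) from UV-Vis absorbance (Beer-Lambert)."""
    return a / E

def uptake_mg_per_g(c0, ce, v_litres, m_grams):
    """Equilibrium dye uptake q_e in mg per g of carbon."""
    return (c0 - ce) * v_litres * W / m_grams

def specific_surface_area(qe):
    """Specific surface area (m^2/g) from the MB number q_e (mg/g)."""
    return qe / W * A_V * A_MB

print(round(specific_surface_area(264)))  # ~646 m^2/g, matching the ~645 reported at 900 C

Plugging in the MB numbers quoted in the Results (66, 135, 264 mg g⁻¹, etc.) recovers the corresponding reported surface areas to within rounding, suggesting the formulas above are the ones actually used.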
Contact time study

To determine the equilibrium time, a kinetic study is carried out. The time to reach equilibrium is determined by a series of measurements over the range of 30-480 min at room temperature.

Adsorption kinetics study

Three models are used to study the adsorption kinetics of the activated carbon: (1) the pseudo-first-order kinetic model, (2) the pseudo-second-order kinetic model and (3) the intra-particle diffusion model.

Adsorbent dosage study

The effect of adsorbent dosage on the equilibrium adsorption of MB in solution is studied. In this experiment the initial dye concentration is 10⁻⁴ M (32 mg L⁻¹). Activated carbon dosages in the range of 0.01-0.035 g are used for each sample. The solution is stirred with a magnetic stirrer and kept until adsorption equilibrium is reached. The amount of dye adsorbed (mg g⁻¹) at equilibrium is calculated using Eq. 2.

Adsorption isotherm model study

Three isotherm models are studied: (1) the Freundlich isotherm model, (2) the Langmuir isotherm model, and (3) the Langmuir-Hinshelwood isotherm model.

Effect of temperature on raw rice husk char

The MB number ranges from 10 to 12 mg g⁻¹ (Fig. 1) and decreases with increasing temperature. This decreasing trend might be due to the increase in the ash content of rice husk char with temperature. The specific surface area ranges from 25 to 28 m² g⁻¹.

[Fig. 1: Effect of preparation temperature of activated carbon on MB adsorption.]

Effect of temperature on sodium hydroxide treated rice husk char

Ash in the rice husk char is removed by dissolving it in sodium hydroxide solution, which lowers the ash content to as low as 4% from the initial 54%. After removal of the ash, 2.0 g of char is heated at 600, 700 and 800 °C for 2 h per sample. After activation, the samples are cooled and the evaluation tests are carried out. The results show that MB adsorption increased significantly compared with the previous results: the MB numbers are 66, 71 and 135 mg g⁻¹ at 600, 700 and 800 °C, respectively (Fig. 1). The corresponding specific surface areas are 162, 174 and 331 m² g⁻¹.

Effect of temperature on zinc chloride impregnated activated carbon

A zinc chloride to rice husk char ratio of 3:1 is used (Ahiduzzaman 2011), with four temperature regimes: 600, 700, 800 and 900 °C. The MB numbers are 149, 203, 262 and 264 mg g⁻¹ at 600, 700, 800 and 900 °C, respectively (Fig. 1). The MB number increases significantly with temperature in the range 600-800 °C but does not increase significantly between the 800 and 900 °C regimes. The specific surface area ranges from 365 to 645 m² g⁻¹ over the 600-900 °C range and does not vary significantly beyond the 800-900 °C regime. The heat treatment duration in this part of the study is 1 h; further work may be needed to examine the effect of heating duration on activated carbon preparation. Rice husk char contains more than 50% ash, which hinders pore development in the activated carbon. After the ash is removed from the husk char, the porousness of the char increases significantly, and activation with a chemical such as zinc chloride develops more pores.

Scanning electron microscopy (SEM) analysis

Scanning electron microscope images of a sample are obtained by scanning it with a high-energy beam of electrons. The electrons interact with the atoms that make up the sample, producing signals that contain information about the sample's surface topography.
In this study, SEM images were taken of raw rice husk, rice husk char, rice husk ash, sodium hydroxide treated rice husk char, and char activated with zinc chloride. The SEM images of the activated carbon were taken with Hitachi N-3400 equipment at the Bangladesh Council of Scientific and Industrial Research laboratory, Dhaka. The SEM images of raw rice husk and rice husk char show surface topography without any pores (Figs. 2, 3). The SEM image of rice husk ash shows a naturally porous structure (Fig. 4), indicating that after combustion the organic matter, including carbon, is driven off while the silica remains as the structural skeleton of the husk. Conversely, if the silica is driven off from the rice husk char by sodium hydroxide treatment, a porous carbon structure is obtained, as shown in Fig. 5. For further development of the porous structure, the sodium hydroxide treated rice husk is activated with zinc chloride. The SEM image of the zinc chloride treated activated carbon shows a well-developed micropore structure (Fig. 6).

Effect of heating duration on zinc chloride impregnated activated carbon

From Fig. 1 it is clear that the highest MB numbers occur in the 800-900 °C temperature regime. Therefore, a temperature of 900 °C is selected and heating durations in the range of 30-120 min, at 30 min intervals, are employed. The MB number of the activated carbon increases with heating duration: the MB numbers are 224, 249, 262 and 269 mg g⁻¹ for heating durations of 0.5, 1.0, 1.5 and 2.0 h, respectively (Fig. 7). The corresponding specific surface areas are 548, 608, 649 and 659 m² g⁻¹.

Contact time study

The contact time study examines the rate of adsorption by the activated carbon prepared in this study. An MB solution of 5 × 10⁻⁵ M (16 mg L⁻¹) concentration is used. 100 mL of MB solution is taken in a reagent bottle, 0.01 g of activated carbon is added, and the mixture is stirred thoroughly. Sample solution is withdrawn at predefined time intervals and centrifuged, and the carbon-free solution is placed in a UV-Vis spectrophotometer to determine the absorbance and, finally, the amount of MB adsorbed after a given time. The kinetic curve of the activated carbon is shown in Fig. 8. The extent of MB dye removal by the activated carbon increases with contact time. The removal of dye is rapid at the initial stage and becomes slower as contact time increases. This is due to the strong attractive forces between the dye molecules and the activated carbon.

Adsorption kinetics

Several kinetic models have been proposed to illuminate the mechanism of solute adsorption by an adsorbent. The analysis of adsorption dynamics illustrates the solute uptake rate, which controls the residence time of adsorbate uptake at the solid-liquid interface. In this investigation, the kinetics of MB adsorption on activated carbon is analyzed using three adsorption kinetic models: the pseudo-first-order kinetic model of Lagergren (1898) (reported in Ho 2004), the pseudo-second-order kinetic model of Ho and McKay (2000), and the intra-particle diffusion model of Weber and Morris (1963) (reported in Srivastava et al. 1989; Maiti 2007). The kinetic models are fitted to adsorption data obtained with 0.01 g of each type of activated carbon in 16 mg L⁻¹ MB solution at predefined time intervals, up to 7 h of uptake time.
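The model equations themselves are given in Table 1 (not reproduced here); in their conventional linearized forms, which match the plotted quantities used in the next section (log(q_e − q_t) versus t, t/q_t versus t, and log R versus log t), they presumably read:

log(q_e − q_t) = log(q_e) − (k_1 / 2.303) · t        (pseudo-first-order, Lagergren)
t / q_t = 1 / (k_2 · q_e²) + t / q_e                 (pseudo-second-order, Ho and McKay)
log(R) = log(K_id) + a · log(t)                      (intra-particle diffusion, Weber and Morris)

These are the standard textbook linearizations, assumed here rather than taken verbatim from the source; the symbols are as defined for Table 1 below.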
The pseudo-first-order kinetic model

The model of Lagergren (Table 1) is known as the pseudo-first-order kinetic expression. Plots of log(q_e − q_t) versus t show a very good correlation, with an r² value of about 0.974 (Fig. 9). k₁ and q_e are calculated from the slope and intercept of the plot, respectively. The rate constants of the first-order kinetic model for the different types of activated carbon are presented in Table 1.

[Table 1: Adsorption rate constants of the different kinetic models at 0.01 g activated carbon dosage and 16 mg L⁻¹ MB. q_e = uptake by the activated carbon at equilibrium, mg g⁻¹; q_t = uptake by the activated carbon at time t, mg g⁻¹; k₁ = pseudo-first-order rate constant, min⁻¹; k₂ = pseudo-second-order rate constant, g mg⁻¹ min⁻¹; R = fraction of solute adsorbed; t = contact time; a = gradient of the linear plots, depicting the adsorption mechanism; K_id = intra-particle diffusion rate constant (h⁻¹).]

Pseudo-second-order kinetic model

To differentiate a second-order rate expression based on sorbent concentration from one based on solute concentration, a pseudo-second-order rate expression is used to evaluate the adsorption kinetics of the activated carbon. Plots of t/q_t versus t show a very good correlation, with an r² value of about 0.998 (Fig. 10). The rate constants of the pseudo-second-order kinetic model for the different types of activated carbon are presented in Table 1.

Intra-particle diffusion

The adsorbate species are most probably transported from the bulk of the solution into the solid phase through an intra-particle diffusion process, which is often the rate-limiting step in many adsorption processes. The possibility of intra-particle diffusion is explored using the intra-particle diffusion model (Table 1). Plots of log(R) versus log(t) show a very good correlation, with an r² value of about 0.965 (Fig. 11). The rate constants of the intra-particle diffusion model for the different types of activated carbon are presented in Table 1.

Adsorbent dosage study

This study investigates the effect of adsorbent dosage on the MB uptake of the activated carbons by varying the dosage. The results are illustrated in Fig. 12 and reveal that dye removal varies with dosage. The equilibrium condition is found at a dosage of 0.149 g L⁻¹ of rice husk activated carbon for a 10⁻⁴ M (32 mg L⁻¹) dye concentration. The removal efficiency increases with increasing adsorbent dosage.

Study of adsorption isotherm models

Adsorption isotherms provide useful information for estimating the performance of a given carbon in a full-scale process stream. They help determine whether a desired purity level can be reached with activated carbon treatment, and they allow calculation of the activated carbon loading at equilibrium, which has a major impact on process economics. Adsorption equilibrium provides fundamental physicochemical data for evaluating the applicability of a sorption process as a unit operation. To compare adsorption strength quantitatively and to design adsorption processes effectively, it is useful to use mathematical models to predict the adsorption.
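Of the isotherm equations referenced below, only the Freundlich linearization survives in the source (it appears in the Table 2 fragment); the Langmuir and Langmuir-Hinshelwood forms shown here are the standard linearizations consistent with the constants defined in Table 2, and are assumed rather than quoted:

ln(q_e) = ln(k_f) + (1/n) · ln(C_e)                    (Freundlich, as given in Table 2)
C_e / q_e = 1 / (K_L · b) + C_e / K_L                  (Langmuir, assumed linearization)
1 / q_e = 1 / Q_max + 1 / (Q_max · K_ads · C_e)        (Langmuir-Hinshelwood, assumed linearization)

The specific surface area then presumably follows from the Langmuir-Hinshelwood maximum as S_s = Q_max · A_V · A_MB, which is dimensionally consistent with the 737 m² g⁻¹ value reported below.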
The adsorption capacity depends on the chemical and physical properties of the adsorbent, of which porosity is one of the most important. The Freundlich isotherm (reported in Ofomaja 2010; Kannan and Veemaraj 2009; Theivarasu et al. 2011), the Langmuir (1918) isotherm (reported in Maiti 2007; Theivarasu et al. 2011; Safa and Bhatti 2011), and the Langmuir-Hinshelwood model (reported in Guettai and Amar 2005) are used to analyze the results and describe the adsorption behaviour of the adsorbent.

[Fig. 11: Plot of the first-order kinetic model for activated carbon from rice husk with zinc chloride activation. Fig. 12: Effect of adsorbent dose on methylene blue removal for activated carbon from rice husk with zinc chloride activation.]

Freundlich isotherm model

The Freundlich isotherm model is defined by the equation in Table 2. The Freundlich constants k_f and n are calculated from the intercept and slope of the equation, and their values are shown in Table 2. The isotherm model curves follow a linear equation with a correlation coefficient of about 0.842.

Langmuir isotherm model

The Langmuir isotherm model is defined by the equation in Table 2. The constants K_L and b are calculated from the intercept and slope of the equation. The isotherm model curves of the activated carbons follow a linear equation with a very good correlation coefficient of about 0.940. The Langmuir isotherm shows a better correlation than the Freundlich plot.

Langmuir-Hinshelwood plot

The Langmuir-Hinshelwood model is expressed by the equation in Table 2, and its constant values are shown there. The isotherm model curves of the activated carbons follow a linear equation with a very high correlation coefficient of about 0.995. The Langmuir-Hinshelwood isotherm shows a better correlation than the Freundlich and Langmuir plots. The surface area of the activated carbon is calculated using the constant values obtained from the Langmuir-Hinshelwood plot (Fig. 13). Table 2 shows the calculation of the specific surface area of the activated carbon using the Q_max value. The specific surface area is estimated to be 737 m² g⁻¹ of carbon, which indicates quite a satisfactory quality of activated carbon. Note that this area is calculated from the area covered by MB adsorption, which is an indicator of the meso-porous area; calculation by liquid nitrogen adsorption would give a higher value.

[Table 2: Adsorption characteristics of activated carbon prepared from rice husk. q_e = uptake by activated carbon at equilibrium, mg g⁻¹; C_e = concentration of solution at equilibrium, mg L⁻¹; n = Freundlich constant; k_f = adsorption coefficient, L g⁻¹; K_L = Langmuir adsorption constant, mg g⁻¹; b = constant related to energy of adsorption, L g⁻¹; Q_max = maximum adsorbed quantity, mole g⁻¹; K_ads = Langmuir adsorption constant related to energy of adsorption, L M⁻¹. Freundlich isotherm: ln(q_e) = ln(k_f) + (1/n) ln(C_e).]

Conclusion

Activated carbon is an essential material for various industrial uses that help keep the environment safe, and agro-waste could be a source material for it. This study reveals that porous bio-char can be prepared by leaching ash from rice husk char at low temperature instead of applying high temperature. Furthermore, the pore structure of the porous bio-char increases with the temperature of the heat treatment.
Rice husk char contains a large percentage of ash, which retards its porousness. The study shows that this ash can be removed by alkali treatment. Alkali-treated rice husk char develops porousness at an intermediate level, and the pore structure of the alkali-leached char can be developed further with a chemical agent. The results also reveal that activated carbon and porous carbon can be prepared with the double crucible method, which ensures an oxygen-free or reduced-oxygen environment in the furnace instead of an inert medium. This study demonstrates a method for producing activated carbon from agro-waste that could be scaled up for commercial production.
AmpliconReconstructor integrates NGS and optical mapping to resolve the complex structures of focal amplifications

Oncogene amplification, a major driver of cancer pathogenicity, is often mediated through focal amplification of genomic segments. Recent results implicate extrachromosomal DNA (ecDNA) as the primary driver of focal copy number amplification (fCNA), enabling gene amplification, rapid tumor evolution, and the rewiring of regulatory circuitry. Resolving an fCNA's structure is a first step in deciphering the mechanisms of its genesis and the fCNA's subsequent biological consequences. We introduce a computational method, AmpliconReconstructor (AR), for integrating optical mapping (OM) of long DNA fragments (>150 kb) with next-generation sequencing (NGS) to resolve fCNAs at single-nucleotide resolution. AR uses an NGS-derived breakpoint graph alongside OM scaffolds to produce high-fidelity reconstructions. After validating its performance through multiple simulation strategies, AR reconstructed fCNAs in seven cancer cell lines to reveal the complex architecture of ecDNA, a breakage-fusion-bridge and other complex rearrangements. By reconstructing the rearrangement signatures associated with an fCNA's generative mechanism, AR enables a more thorough understanding of the origins of fCNAs.

In this manuscript, Luebeck and colleagues describe a new computational approach for the reconstruction of amplicon architecture via the integration of NGS and optical mapping data (AmpliconReconstructor, AR). Their manuscript builds on previous work from this group, which used WGS NGS data to generate accurate breakpoint graphs for copy number alterations (AmpliconArchitect, AA). Unambiguous reconstruction of these alterations has remained a challenge with short-read data alone, and here they integrate OM with NGS to produce such reconstructions and discuss their functional implications. The authors are right to point out that such ambiguities are an important problem in the field, with long-standing issues in understanding copy number alterations and the genetic structures within them. This is especially true with the recent wave of studies exploring extrachromosomal circular DNA and the implications of such discoveries at the level of both the genome and the epigenome, as well as with older questions involving BFBs and translocations. Their approach and algorithm appear to be adept at reconstructing fCNA architecture, even if currently limited to samples with both NGS and OM data. The manuscript is well written and highlights both the advantages and the caveats of their approach, which is appreciated. The manuscript could be further improved by addressing some of the following points.

Main comments:

1) Given the great interest in the field in computationally distinguishing between extrachromosomal and intrachromosomal amplifications, can the data/approaches here offer any insight into whether HSRs might be genetically joined to the linear chromosomes to which they appear to be physically attached as observed in FISH? As the authors point out, the differences in topology between ecDNA and HSR-like fCNAs don't lend themselves to differences in the breakpoint graphs, but these breakpoint graphs are in fact derived from the focal amplifications mapped in AA, correct? If HSRs are at all directly fused to the linear genome (and they may not be), are there any 'edge' cases demonstrating something resembling an 'exit' from the fCNA to the associated linear chromosome in NGS or OM data?
2) As the authors point out, a previous effort integrated NGS (WGS and HiC) and OM data to study structural variants (Dixon et al., Nature Genetics 2018). The Dixon et al. analysis of data from the renal cancer cell line CAKI-2 yielded a chr2-chr3 fusion (Fig 1) as well as a chr6-chr8 fusion (Fig 2), whereas the AR results on CAKI-2 in this paper yielded a segment of chr3 joined to chr12 (Supplemental Fig. S11c,d). It's clear that different approaches will always have certain advantages and disadvantages, but can the authors explain the discrepancies here? Also, given the increasing abundance of HiC maps, might they be used in future iterations to aid the AA, if not the AR, steps in the current approach?

3) In the authors' 2019 paper (Deshpande et al), they use AmpliconArchitect in an innovative way to find human-viral fusion segments in cancer genomes and explore their functions and potential origins. With the OM strategy, how might incorporation of these additional (non-human) reference sequences work with the AR approach?

4) The authors state that future iterations of AR may involve input from other long-read sequencing technologies. Perhaps outside the scope, but given that they've worked with PacBio data in the past (Deshpande et al 2019), can they comment on the read lengths/accuracies that would make such efforts worthwhile for increasingly used platforms like PacBio and Nanopore?

5) Given that all of the data in this manuscript has been generated from cell lines, can the authors comment on the challenges they expect to encounter when attempting to reconstruct amplicons in primary patient data with this approach?

Reviewer #3 (Remarks to the Author): Expert in optical mapping and genome assembly

Luebeck et al., "AmpliconReconstructor: Integrated analysis of NGS and optical mapping resolves the complex structures of focal amplifications in cancer."

This is a well-written paper that describes an exciting new piece of software, AR. AR can combine sequencing (WGS) and optical mapping (OM) data to accurately reconstruct the rearrangement events that commonly occur in various tumor lineages. Specific examples of various rearrangement and copy number amplification mechanisms were inferred and illustrated nicely. I believe that AR would be a useful piece of software for researchers aiming for accuracy and resolution of such events. There are still some issues, though, that arose as I went through this paper.

I think one of the immediate questions a reader might have is: what is the added benefit of OM on top of WGS? The authors need to make this explicit in the Discussion. There have been many separate studies looking at WGS alone, or OM alone; what are they missing? AR makes heavy use of the breakpoint graph from WGS; does this somehow limit AR in the discovery of additional events that are difficult to identify using WGS (e.g. lack of coverage for certain regions, variation in mappability across the genome)? There are some brief hints of this mentioned in the Results, but they are not summarized systematically, which I think would be highly valuable. Similarly, what are the added benefits of WGS+OM on top of OM alone?

Some in-depth analyses may be needed to look at the results from the simulation studies to understand the common error modes. What are the characteristics of the false positive, false negative, or poor LCS predictions from AR? While this reviewer has little doubt that AR would be useful, it is important to inform the readers when the software fails and how it fails.
Would supplying more input data (both WGS and OM) help in those cases?

The mixture experiment on heterogeneity is very nice. Can we learn the limits of detection from the mixture experiment? For example, in terms of the minimum level of abundance or the number of clonal subpopulations in the sample.

In the Introduction, please introduce the key algorithmic challenge and the main algorithm and statistical models employed in tackling this challenge. Please mention any prior statistical modeling work on the properties of the NGS-read breakpoint graph and optical mapping data, for example, the sources of variation in segment size, missing labels, and uncertainties in the direction and connections within the breakpoint graph, to provide some theoretical grounding for the AR method later in the paper.

There were a number of issues in the Methods section. Please check the formulas and pseudocode more carefully. While I could conceptually follow most of it, I had some issues going through. Check the DP recurrence equation, especially the sub for the max (line 578); I don't think I agree with the current upper bounds for i and p. Algorithm 2: explain what c and k are and whether they are fixed parameters or estimated from the data (and how). What is the rationale behind this scoring function? Specifically, what is the role of each of its terms? Explanations would help. Poisson distribution (line 658): should it be a minus or an equals sign? The formula also seems wrong for the exponential part; should it be -E instead of -a?

Some of the parametrization needs a little more explanation. Explain why different modes of alignment warrant different P-value thresholds, and how these thresholds are determined. The parameterization of Irys and Saphyr differs due to their properties (e.g. Fig. S1d): Supplementary Table 5 contains at least 5 parameters that differ, as well as the minimum number of labels (line 671). What is the process for tuning these parameters per platform, at least at a high level? This will be relevant as Bionano continues to upgrade its chemistry and specs, and for people trying to optimize those parameters for their own data.

Finally, as a sincere suggestion to improve software usability, please include visualization or succinct textual outputs on the front page of the README of the repo to make it friendlier to new users, so they know what to expect. Please also consider including a Docker image that has everything installed, along with a copy of the test data and a full end-to-end pipeline script, including the PrepareAA command to build the initial breakpoint graph.

"The area of applying OM to the study of structural rearrangements in cancer is an important one, and in need of integrative algorithms like the authors propose. The method is novel and has interesting elements, including the realignment approach and a reasonable framework for integrating NGS with OM."

We appreciate this comment from the reviewer.

1. "The paper / field would benefit from a better discussion of the ambiguities of amplicon reconstruction. Even with OM, the order of segments in a nested duplication may very likely be ambiguous, especially if the locus is much larger than an OM map. The authors acknowledge this ambiguity in the methods (mentioning that AR will only resolve events with fewer than 1024 possible paths) and a little in the main text (mostly referencing previous work / state of the field).
However it would be useful to have a more textured discussion of this ambiguity and what cases may be particularly difficult for AR / OM. A key ambiguity is distinguishing between reconstructions that place many duplicated segments on the same allele vs different alleles. The most extreme example is distinguishing an eccDNA vs a chromosomally integrated segmental dup (see below); however, any complex nested (tandem or inverted) duplication pattern might suffer from the same issue. For example, distinguishing between a large eccDNA with many tandem copies of a region vs a collection of small eccDNAs each with a single copy of a duplication. Can the authors add a discussion of these issues and relate them to the parameters of SV nesting / molecule size? Simulation could help inform this intuition."

Response: We agree with the reviewer that a discussion of reconstruction ambiguity, and of the ability or inability to distinguish between large duplications inside an amplicon, is important to provide a complete characterization of the field and of this method. There are many complexities to this, and we will address them one by one. Simple segmental tandem duplications inside an amplicon can be disambiguated provided the duplication is spanned by long reads. We do not give specific limits on detection in this paper, as the read length for Bionano changes depending on the instrument, sample preparation and other factors. We made numerous changes to the Discussion section (lines 518-526, 561-570) to more thoroughly discuss the reasons for ambiguity and to describe why the "nested duplication" case is sometimes difficult to resolve. While the reviewer suggests that an additional simulation study could inform our understanding of complex SV nesting behavior, we argue that there are too many variables involved in resolving such nested structural variants to create a meaningful or accurate simulation which builds such structures from the ground up, especially in the OM case. At least the following four interacting variables are involved: 1) the size of the nested structural variant, 2) the number of times (n) it is duplicated inside the fCNA, 3) the fraction of reads or maps which span the n copies of the nested variant, anchoring it on either side, and 4) the properties of variation in molecule length, molecule error and assembly error. A very basic simulation would reasonably anchor three of these variables while measuring the effect of varying the fourth. We argue that the results of this may not be very meaningful, as they would only capture the nature of this phenomenon for a very restricted set of conditions, without interaction. Due to those complex interactions between molecule length, assembly ability, repeat number and amplicon size, we deemed a simulation strategy attempting to account for all these variables and their interactions to be combinatorially infeasible. We instead leveraged existing data and used a data-driven approach to fCNA reconstruction, which used the state of the art for putative fCNA structure in order to inform the accuracy of our method. We discuss our simulation strategy and new results in more depth in our response to point #3 by this reviewer. We note that such nested duplications are in fact captured in our current simulation strategy based on suggested amplicon structures derived from NGS data on cancer cell lines (response to point 3c). We report that in the simulations AR resolved 75% (45/60) of these duplications.
We reiterate that AR also successfully resolved a segmental tandem duplication of 430 kbp in T47D. The ability to do this, however, is constrained by 1) the length of the read/map and 2) the length of the duplication. Effective resolution occurs when the read length is greater than or equal to the length of the duplication (Figure R1).

[Figure R1: Nested duplications can theoretically be resolved with long reads when both errors in the graph and in the CN estimate exist.]

It is also important to describe the modes by which such a "nested duplication" pattern arises in the context of resolving the biological mechanism of its genesis. We believe aggregated/agglomerated ecDNA (ecDNA which has joined in tandem, as reported in Turner et al., Nature 2017) to have the "nested duplication" property (Figure 2b). We cannot reliably distinguish this event from non-agglomerated ecDNA with AR alone. However, it is somewhat immaterial, as the reconstruction of a cyclic ecDNA is enough to inform the biological mechanism, whether it is currently agglomerated or not. We address this in more detail in the response to point #2 by the reviewer.

2. "Related to above, a key (biological) question in resolving amplicon structure is whether these structures are circular / extrachromosomal vs chromosomal / integrated / HSRs. A "cycle" in the NGS breakpoint graph can be equivalently interpreted as a tandem segmental dup or an eccDNA, i.e. these patterns will look identical on NGS WGS. Indeed, in practice an (arbitrary, heuristic) copy number threshold is used to distinguish between (lower copy) tandem dups and (very high copy) DMs. OM can potentially distinguish between these two possibilities since the OM maps are on the scale that they may be able to unambiguously place a (reasonably small) amplicon on a chromosome. Do the authors claim that their method is able to make this eccDNA vs integrated seg dup distinction reliably? How confidently can the authors make this assessment? Ideally this would be shown with simulations where the simulated ground truth is eccDNA vs integrated. Such a simulation, if working properly, would fail with NGS WGS short reads and would (for certain event scales) succeed in resolving amplicons with OM."

Response: The reviewer asks an important question that is central to the field itself. The first issue is whether we can distinguish ecDNA from structures that were extrachromosomal but aggregated and reintegrated chromosomally. The second is whether we can distinguish chains of tandem duplications from ecDNA amplification. The first point overlaps a question raised by Reviewer #2 as to where these structures integrate, which we addressed with additional computational analysis in the first response to Reviewer #2 and present in Figure R6. The results of this analysis were incorporated into the revised manuscript (Results: "Integration points of focal amplifications"). To validate our claim that circular genomic structures are indicative of ecDNA, we provide some data from concurrent work (Kim, H. et al., "Frequent extrachromosomal oncogene amplification drives aggressive tumors", bioRxiv 2019), in which we performed FISH analysis on 81 samples from 54 different cell lines, including 19 glioma-derived models, three medulloblastoma models and 32 non-brain tumor models, and described a classification scheme based solely on NGS reads that predicts ecDNA with 83% sensitivity (29 of 35 FISH-validated samples were classified as ecDNA positive, Figure R2A). Moreover, 41 of 47 non-circular structures did not show ecDNA in FISH.
We have also tested predictions on a primary whole-genome sequencing (WGS) dataset of 15 neuroblastoma tumors, to which the Henssen group also applied Circle-Seq to detect ecDNA (Koche et al., Nature Genetics 2019, PMID 31844324). Circle-Seq is a sequencing library enrichment approach optimized for circular DNA detection (Møller et al., PNAS, 2015, PMID 26038577). We observed very high concordance between the WGS and Circle-Seq approaches in distinguishing circular from non-circular DNA amplicons (Figure R2B). Specifically, 4 of 65 WGS-detected amplicons were classified as circular by both methods, and one of five Circle-Seq derived regions was classified as non-circular. These results translate to a 100% specificity and 80% sensitivity. All 60 amplicons classified as non-circular were not detected by Circle-Seq, implying 100% specificity.

[Figure R2: A. FISH-based counts of extrachromosomal signals per cell for 81 amplicons classified using AmpliconArchitect. Using 0.5 ecDNA per cell as a cutoff, the sensitivity and positive predictive value for detection of circular DNA from whole-genome sequencing data are respectively 83% and 85%. No-fSCNA: no focal somatic copy number amplification detected. B. Summary of whole-genome sequencing versus Circle-Seq derived amplicons from 15 neuroblastoma tumors.]

With respect to the second question, it is important to describe the modes by which a tandem duplication pattern arises in an fCNA (Figure R3). Extreme copy number amplification with conserved breakpoints is an indicator of an ecDNA-derived mechanism (see Figure R3, (i)). While one could also explain very large amplifications using tandem duplications, that would require the same breakpoint to be used again and again (Figure R3, (iii)). However, in other studies (Kim et al., bioRxiv, 2020), we have found that the exact breakpoint varies across different samples amplifying the same gene. This apparent contradiction is resolved if high copy numbers with conserved breakpoints arise due to ecDNA formation. We have added a note explaining this in the Discussion section (lines 561-570), which is reproduced in the following paragraph.

"One traditionally difficult fCNA case to reconstruct involves the nested duplication of genomic segments inside an amplicon. Unless a significant fraction of reads or genomic maps have a length greater than the duplicated element, the duplication status may not always be accurately resolved, leading to ambiguity in the possible set of reconstructions. The use of cyclic structures as a signature for ecDNA is based on the reuse of breakpoints. Multiple tandem duplications can also give rise to a cyclic breakpoint graph structure. However, for that to happen, the same breakpoint would need to be reused repeatedly, and evidence points against that possibility. Instead, ecDNA provide a simpler and arguably correct interpretation of cyclic structures, and that has been validated using cytogenetics and comparison to Circle-seq experiments."
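As a quick sanity check of the classification figures quoted above, the following Python sketch uses the confusion counts exactly as stated in the text (it is not the authors' analysis code):

# FISH validation: 29 of 35 ecDNA-positive samples were classified circular;
# 41 of 47 samples classified non-circular showed no ecDNA in FISH.
fish_sensitivity = 29 / 35   # ~0.83, as reported
fish_npv = 41 / 47           # ~0.87 negative predictive value

# Circle-Seq comparison: 4 of 5 Circle-Seq regions were classified circular;
# all 60 non-circular calls were undetected by Circle-Seq.
circleseq_sensitivity = 4 / 5     # 0.80, as reported
circleseq_specificity = 60 / 60   # 1.00, as reported

print(fish_sensitivity, fish_npv, circleseq_sensitivity, circleseq_specificity)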
We present AmpliconReconstructor as a tool not specifically for distinguishing between extrachromosomal and intra-chromosomal amplifications, but rather to strengthen the findings of circularity, and to provide better-resolved amplicon structure, including improved prediction of mechanisms such as breakage-fusion-bridge and translocations, and sites of possible integration of ecDNA that have re-integrated into non-native locations. This leaves the question of structures that were ecDNA but have reintegrated. Indeed, we classify these as ecDNA, as our cytogenetic evidence suggests that cells often carry ecDNA that are extrachromosomal as well as structures that have reintegrated. As part of the revision for this paper, we have used optical map data to identify sites of integration (Figure R7 below).

3. a) "The simulation seems a bit simple and biased towards the authors' existing tools. Firstly the simulation models molecules rather than whole chromosomes, and not clear whether it is diploid or matches the ploidy of real cancer genomes".

Response: We believe that we may not have properly conveyed the nature of the simulation process and its complexity. Therefore, to address the reviewer's comment, we have created Figure R4 (on the following page, in text as Supplemental Fig. S3) to help address any confusion. Our simulation design captures many critical sources of OM error and is based on real structural variants detected in cancer cell lines. The simulations created individual unassembled OM molecules derived from a diploid reference genome. These individual molecules are sampled randomly from chromosomes and an error profile is applied, using the tool OMSim. The simulation of thousands of individual molecules introduces more realistic sources of error such as OM assembly error, missing labels, etc. We argue this is a superior strategy to end-to-end generation of preassembled contigs, where realistic errors do not arise naturally.

The computational time required to generate these simulations was non-trivial. On our cluster computer with dual Intel Xeon E5-2643 (3.5 GHz) CPUs (24 logical threads) and 128 GB of RAM, the assembly process of the background reference alone took four days. The total amount of simulated data we generated for all samples in this simulation study was over two terabytes: one terabyte for the 85 original simulations plus the 20 de novo simulations discussed in point 3c, at approximately 10 GB per simulation, and another terabyte of data for the 123 heterogeneity simulation examples. In total, the process of generating, assembling, and running AR on these datasets required more than three weeks of runtime on three parallel nodes and used the vast majority of our lab's computational resources during that time.

The reviewer raises a very valid question as to whether this is biased in favor of our previous tools. Furthermore, the effect of changing the background reference genome to more closely mirror a real cancer genome is a point well taken. We respond to that in the answer to point 3c, where we introduce a tumor reference into de novo simulations of circular ecDNA.

b) "Secondly, the simulated chromosomes are currently based on paths outputted from AA breakpoint graph, which are likely short and fragmented since they derived from NGS WGS? These graphs also likely do not have connectivity in repeat regions since the AA graphs would miss these SV's in the short read NGS data."
Response: While it is true that our simulated amplicons are based on AA results, the role of AA in this simulation was to create an underlying set of realistic structures from which OM molecules were sampled and simulated. Importantly, the molecules simulated from these structures were added into a set of molecules generated from whole-genome simulation. Thus, the simulations capture the complexity of the whole genome and amplicons combined. The average length of simulated optical map molecules in our study was 150 kbp. The paths in our simulation have a median length of 1.1 Mbp and contain a mean of 17.5 genomic segments, which we believe are similar to the size and complexity of real focal amplifications in cancer, and importantly contain many repeat elements such as LINEs and SINEs (Deshpande et al., Nature Communications, 2019). AA uses a database to filter low-complexity regions from SV analysis, which is derived from the database used by another state-of-the-art SV caller, Lumpy. The issue of missed SVs in low-complexity regions is a standing problem in the SV field. We note that AR can use the Bionano data to enable the discovery of SVs missed by AA in such regions. We are not aware of a diverse set of possible reconstructions of focal amplifications which are not generated by AA, and if we had been able to identify another tool which output such data, it would possibly be biased in favor of that tool's results. However, to address the reviewer's concerns, we designed a new set of "ground-up" simulations to address two points. This simulation strategy is described in the response to point 3c.

Figure R4. Flow diagram for the AR simulation strategy. Either hg19 or a simulated tumor reference generated from hg19 using SCNVSim was used. Simulated amplicons were either derived from prior AA results or created de novo using ecSimulator.

c) "Finally, it's unclear whether the AA-derived simulated rearrangements would have the sort of nested (tandem or inverted) duplication structures that would make these kinds of reconstructions truly challenging in practice. It would be more convincing to use a simulation that rearranges a fully diploid genome end to end and explicitly models a BFB and complex double minute. This would also help determine when / how well AR can resolve essential features of amplicons (ie chromosomal vs ecc)."

Response: This is a good point raised by the reviewer, and in addition to answering the reviewer's questions we include a description of new simulations performed to address this point. Regarding the point that it is unclear whether these structures contain internal duplications, we analyzed the duplications present in our simulation set. Across the 85 original examples plus the 20 de novo simulated circular ecDNAs discussed in the following paragraphs, 60 duplications of chains of breakpoint graph segments were present in the simulated amplicons. The average size of the duplications present in these examples was 281 kbp (minimum 224 bp, maximum 849 kbp). AR resolved the duplication status in 75% of cases (45/60). We have clarified this point on lines 189-191, reproduced below:

"Large duplications inside a rearranged amplicon represent a challenging case to reconstruct. We identified 60 duplications of one or more graph segments (mean length 281 kbp) in the simulated amplicons, and we report that AR resolved 75% (45) of these duplications."

The initial simulations we performed did contain circular paths (double minute structures).
In total, 44% (37/85) of the simulated paths were circular. These were of course based on putative circular ecDNA structures as detected by AA with NGS data from real cancer cell lines. We have clarified this on lines 152-155. As AR does not perform an automated analysis to assign a biological mechanism (ecc vs chromosomal), as described in the Discussion section, that part of the process requires manual interpretation. Ultimately, the simulation study measures the accuracy of AR's reconstructions in the aggregate, across amplicons generated by many mechanisms. To address the point related to an end-to-end model which explicitly models complex DMs, we designed and implemented the following strategy. 1) In the original simulations, the simulated molecules from the background reference (non-amplicon) were generated with hg19 and assembled. In our new set we instead used SCNVSim, a tumor genome simulator capable of generating SNVs, CNV/ploidy changes and structural variants on a flat reference, to create a simulated cancer reference genome with hg19 as input. 2) The original simulations used putative AA reconstructions from the 2019 paper, which were used as input for simulating OM data. To provide an unbiased alternative to this, we created a ground-up simulation utility, called ecSimulator (https://github.com/jluebeck/ecSimulator), which generates random extrachromosomal DNA structures given some user-specified parameters. ecSimulator is a new utility we are developing that selects random intervals from the reference genome and assigns breakpoints along those intervals. It then conducts SV operations on those breakpoints, including duplications, deletions, inversions and translocations (a minimal sketch of this strategy is shown at the end of this response). Importantly, we note that AR had no statistically significant difference in performance (measured by F1 scores using the "Length (bp)" metric) between the original simulations and the de novo simulations. This suggests that our simulations are likely not biased towards AA. As we state in the updated Results section: "In this final simulation study we created 20 new amplicons which were subject to our simulation pipeline."

BFB has classically been detected through signatures of BFB (foldback reads, FISH, etc.), not through reconstruction of the BFB itself. As a result, the field lacks widespread and detailed knowledge of real BFB structures from which to perform a data-driven simulation, and thus we do not explicitly model a BFB structure in our simulations. In this paper we are not attempting to make an automated prediction of the fCNA's biological mechanism, but rather to resolve the fine structure as accurately as possible. The resolved fine structure can enable a user to interpret the underlying biological mechanism, as we did with BFB, where we were aided by the established theoretical model for BFB formation produced by Zakov et al.
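The sketch referenced above: a toy, self-contained illustration of the ecSimulator strategy (random reference intervals followed by random SV operations). This is not the actual ecSimulator code; translocations and reverse-complementing of inversions are omitted for brevity.

```python
import random

def simulate_ecdna(genome, n_segments=3, seg_len=50, n_sv=5):
    """Pick random reference intervals, concatenate them, then apply random
    SV operations; the result is treated as circular (end joins start)."""
    segs = []
    for _ in range(n_segments):
        chrom = random.choice(list(genome))
        start = random.randrange(len(genome[chrom]) - seg_len)
        segs.append(genome[chrom][start:start + seg_len])
    amp = "".join(segs)
    for _ in range(n_sv):
        i, j = sorted(random.sample(range(len(amp)), 2))
        op = random.choice(["dup", "del", "inv"])
        if op == "dup":
            amp = amp[:j] + amp[i:j] + amp[j:]        # tandem duplication
        elif op == "del" and j - i < len(amp) // 2:
            amp = amp[:i] + amp[j:]                   # deletion (size-capped)
        else:
            amp = amp[:i] + amp[i:j][::-1] + amp[j:]  # inversion (reverse only)
    return amp

toy_genome = {"chr1": "ACGT" * 100, "chr2": "TTAGGC" * 80}
print(len(simulate_ecdna(toy_genome)))
```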
4. a) "Can the authors draw a better conceptual comparison / contrast of AR vs AA, including comparing results? I get it that AR incorporates OM data, but don't the tools otherwise have the same goal? It seems that the authors only utilize AA for breakpoint graph inference, but if I recall AA also generates paths. How do the AR vs AA paths compare? Similarly, AR integrates the NGS and OM data and presumably also should create a modified (improved) breakpoint graph, which better account for copy number changes (see below). How do the AR vs AA breakpoint graphs compare? These comparisons would help provide intuition as to the value added by OM."

Response: This is a great question. AA indeed produces a cycle-decomposition of the breakpoint graph, which represents possible reconstructions based purely on NGS data. We have conducted new analysis to answer this question and describe the results presented below (Figure R5), which compare AA and AR under a subheading in the Results section, "AR provides a reconstruction improvement over AA." We have reproduced the added section below:

"AA can attempt to identify some heaviest-weight paths and cycles in the breakpoint graph only using NGS data. We demonstrate that for complex amplicons, AR provides an improvement to the fraction of the amplified genomic segments which are explained by the output structure compared to the heaviest path or cycle generated by AA (Supplemental Fig. S15a). As OM data may suggest additional amplicon junctions not observed in NGS, we observed that the resulting set of amplified segment junctions observed in the AR output was equal to (GBM39, T47D) or larger than (CAKI-2, H460, HCC827, HK301 and K562) the number of junctions suggested in the AA breakpoint graph alone (Supplemental Fig. S15b)."

One primary difference, however, is that AR does not attempt to construct a breakpoint graph. In fact, AR does not generate a new breakpoint graph as output. It uses the AA graph to identify higher-quality scaffolds, and these scaffolds may infer junctions not observed in the AA breakpoint graph. AR can be thought of as a method to simplify the breakpoint graph and identify chains of amplicon segments. To show how the paths suggested by AA (with NGS alone) and the paths suggested by AR compare, we created a supplemental figure shown in Figure R5 (Supplemental Fig. S15a,b). This figure shows the proportion of the breakpoint graph explained by AA and AR (A) and the contribution of AR to adding junctions to those identified by AA (B). Some samples, such as GBM39, H460 and HK301, show very similar performance. It is important to keep in mind that samples were selected non-randomly. Those were less complex samples included to help make the development process of AR more feasible. The more complex samples (K562 and HCC827) show better performance with AR.

b) "e.g. Figure 2A shows a breakpoint graph with many apparent copy number changes that are not associated with a structural rearrangement. This is presumably the NGS WGS graph inferred by AA, but AR should improve this graph? How well does it improve the graph? Can such an AR-improved graph account for all the copy number changes with a rearrangement or are there still links missing?"

Response: As Figure 2A shows a FISH image, we will assume the reviewer is referring to Figure 3A, which does indeed show copy number changes without structural rearrangement. We created Figure R5 to address the first two questions asked by the reviewer, and this is also included in the subsection "AR provides a reconstruction improvement over AA." In the case of K562, AR did suggest additional junctions between amplified segments not contained in the breakpoint graph. With OM alone, such junctions are necessarily coarse-grained due to the mapping resolution of OM. AR suggested 25 additional junctions beyond the 30 suggested by AA alone (55 total). There may still be links missing from the breakpoint graph which AR did not detect, but that is not possible to quantify in this case. We have added a line to clarify this in the Discussion (lines 152-155).
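To make the junction counts above concrete, a hypothetical sketch of merging AA and AR junction sets under a positional tolerance (the tuple layout, tolerance, and coordinates are illustrative only):

```python
def junction_union(aa_junctions, ar_junctions, tol=10_000):
    """Union of AA (NGS) and AR (OM-informed) junctions; two junctions are
    considered identical if both endpoints agree within tol bp, reflecting
    the coarse mapping resolution of OM."""
    def same(a, b):
        return (a[0] == b[0] and a[2] == b[2]
                and abs(a[1] - b[1]) <= tol and abs(a[3] - b[3]) <= tol)
    merged = list(aa_junctions)
    for j in ar_junctions:
        if not any(same(j, k) for k in merged):
            merged.append(j)
    return merged

aa = [("chr9", 133600000, "chr22", 23290000)]
ar = [("chr9", 133605000, "chr22", 23293000), ("chr13", 81120000, "chr22", 23300000)]
print(len(junction_union(aa, ar)))  # 2: one shared junction plus one AR-only
```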
With regard to the last question, while AR reconstructs an amplicon containing all amplified segments, it does not explain the variable copy numbers between chr9, chr13 and chr22, as the reviewer pointed out. Presumably there is some level of heterogeneity present. AR is designed to reconstruct the dominant focal amplification, and detecting lower-abundance fCNA is outside the scope of this paper. Regardless, we agree the CN variance not explained by AR should be better clarified, and this is addressed in the revised version of the Discussion section on lines 327-330 and 543-552, including "… Copy number variance in this amplicon not explained by AR may be due to structural heterogeneity across the many copies of the amplicon…"

5. "The paths that are inferred by AR presumably imply a copy number, and then each path has a multiplicity. When integrated together, does this copy number match the total copy number of the region? It seems like the method only uses the total copy number as an upper bound? If so, then how do the authors explain the remaining copies and breaks in the region?"

Response: AR examines the ratio of copy numbers (Methods: "Finding reconstructions in the linked scaffold graph"). Thus, if a segment is duplicated inside an amplicon and therefore has 2x the CN of the rest of the focal amplification, it can have 2x as many copies in the resulting reconstruction. The method used by AR is not constrained solely by the maximum copy number, but by the ratio of copy numbers in the breakpoint graph. We allow for some error to occur in the CN reported in the breakpoint graph. The number of copies implied by the ratio may exceed the number of breakpoint graph copies by 1, to account for any underestimation in the NGS-derived copy number. However, as the reviewer points out, there may be excess copy numbers present in the graph which are not explained by the amplicon. Unexplained CN may be due to heterogeneity of the amplicon in the genome, missing breakpoints, missing segments, issues with OM assembly, or some combination of all these factors. We have clarified this in the Discussion section on lines 544-546 and 554-560, including:

"The collection of paths reconstructed by AR represent possible reconstructions of the fCNA, and the collection of paths may contain multiple similar explanations for the fCNA architecture. This may be in part due to genomic heterogeneity, limitations of the optical map assembly process, or errors in linking scaffolds across overlapping graph segments. Furthermore, technological limitations related to the quality of OM assembly may affect the ability to reconstruct high-fidelity amplicons."

6. a) "How did the authors arrive at the reconstruction of the time sequence of events for the BFB amplicon. Was this done manually or is it a part of AR? If the latter can the authors reveal the method? Are there alternative reconstructions? Otherwise, it would be clearer to emphasize that this part of the analysis was manual and not an output of AR."

Response: We agree that these are important questions. The reconstruction of the time sequence of events for the BFB amplicon was performed manually, using the theoretical underpinnings in Zakov et al. We found no other alternative reconstructions of the time sequence which were consistent with the theoretical model for BFB formation.
We have clarified this point in the manuscript: "When the AR scaffolds were combined with the copy number data present in the breakpoint graph, we were able to manually identify a single BFB structure that was consistent with the theoretical model of BFB formation." (lines 395-397).

b) "where does the BFB "end"? Does it acquire a telomere of another chromosome or is a new telomere synthesized at one of it's ends?"

Response: We believe that the BFB repeat unit we reconstructed is repeated 10 times between the BFB ends (from the centromere to the telomere of chr7p), based on the copy number of the segments in the breakpoint graph and the reconstructed repeat unit. As discussed in Maciejowski and de Lange, Nat. Rev. Mol. Cell Biol., 2017, BFBs often reacquire a telomere. However, as noted in Carroll et al., Mol. Cell Biol., 1988, in some cases a BFB will end by making a DM (ecDNA). We did not detect any DM-like reconstructions in this case and did not observe any EGFR-bearing DMs in the HCC827 FISH results. We have added this negative finding for BFB-derived double-minute structures in the Results section: "While some BFBs may result in "double-minute" amplicons, AR suggested and FISH analysis confirmed that the HCC827 BFB does not contain a circular extrachromosomal version of the BFB cycle." (lines 391-393). We therefore conclude that the HCC827 BFB stabilized with the acquisition of a telomere. AR is not designed to do telomere detection, and we feel attempting to detect the telomere capping the BFB is an analysis that is outside the scope of the paper.

Reviewer #2 (Remarks to the Author):

"Their approach and algorithm appear to be adept at reconstructing fCNA architecture, even if currently limited to samples with both NGS and OM data. The manuscript is well written, and highlights both the advantages and caveats to their approach, which is appreciated."

We thank the reviewer for this comment.

1. "Given the great interest in the field in computationally distinguishing between extrachromosomal and intrachromosomal amplifications, can the data/approaches here offer any insight into whether HSRs might be genetically joining the linear chromosomes to which they appear to be physically attached as observed in FISH? As the authors point out, the differences in topology between ecDNA and HSR-like fCNAs don't lend themselves to differences in the breakpoint graphs, but these breakpoint graphs are in fact derived from the focal amplifications mapped in AA, correct? If HSRs are at all directly fused to the linear genome (and they may not be), are there any 'edge' cases demonstrating something resembling an 'exit' from the fCNA to the associated linear chromosome in NGS or OM data?"

Response: This is a very good question and represents an area of the field which is understudied. Low-frequency breakpoint edges, such as the ones indicating integration points, may not appear in the NGS breakpoint graph, and they may not get incorporated into assembled OM contigs. We used Bionano Solve molecule alignment methods for OM molecules (unassembled) to search for evidence of such integration points in NGS and OM molecules. In CAKI-2, H460 and K562 we identified four estimated integration points with 10 or more molecules of support in the OM data. To identify these points, we sorted aligned molecules and clustered alignment endpoint pairs. The values, listed in the manuscript excerpt below, represent the center of each cluster and not a refined breakpoint.
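A minimal sketch of the endpoint-clustering step just described (a hypothetical helper; the actual analysis used Bionano Solve molecule alignments, and the bin size is an assumption):

```python
from collections import defaultdict

def integration_clusters(endpoint_pairs, bin_size=10_000, min_support=10):
    """Bin molecule alignment endpoint pairs (amplicon side, non-amplicon
    side) and keep clusters with at least min_support molecules; returns
    the coarse cluster centers."""
    bins = defaultdict(list)
    for (chrom_a, pos_a), (chrom_b, pos_b) in endpoint_pairs:
        key = (chrom_a, pos_a // bin_size, chrom_b, pos_b // bin_size)
        bins[key].append((pos_a, pos_b))
    centers = []
    for (chrom_a, _, chrom_b, _), pts in bins.items():
        if len(pts) >= min_support:
            ctr_a = sum(p for p, _ in pts) // len(pts)
            ctr_b = sum(q for _, q in pts) // len(pts)
            centers.append(((chrom_a, ctr_a), (chrom_b, ctr_b)))
    return centers
```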
Unfortunately, HK301, the sample where FISH suggested an integration of a circular ecDNA, did not have high enough OM coverage (>100; Supplemental Table 1) to perform such an analysis. We have described this analysis in the manuscript under a subheading in the Results section, "Integration points of focal amplifications", reproduced below:

"Visualized with MapOptics, H460 showed a single integration point between amplicon region chr8:129410000 and non-amplicon region chr12:7660000 (Supplemental Fig. S14a). K562 showed two integration points. The first joined amplicon region chr13:81120000 and non-amplicon region chr1:142890000 (Supplemental Fig. S14b). The second joined amplicon region chr13:93260000 and non-amplicon region chr1:142890000 (Supplemental Fig. S14c). The proximity of these two integration points suggests a left and right boundary for the integration of the K562 BCR-ABL1 amplicon. CAKI-2 showed one integration point joining amplicon region chr12:88300000 and non-amplicon region chr6:168380000 (Supplemental Fig. S14d). HCC827 and T47D did not show any such integration points with 10+ molecules of support, which is consistent with the finding that these were chromosomally derived focal amplifications (BFB and segmental tandem duplication, respectively) residing in their native locations."

The breakpoint cluster centers are reported in Supplemental Table 3.

2. "… (Fig 1) as well as a chr6-chr8 fusion (Fig 2), whereas AR results on CAKI-2 in this paper yielded a segment of chr3 joined to chr12 (Supplemental Fig. S11c,d). It's clear that different approaches will always have certain advantages and disadvantages, but can the authors explain the discrepancies here? Also, given the increasing abundance of HiC maps, might they be used in future iterations to aid the AA if not the AR steps in the current approach?"

Response: We agree that a comparison with the Dixon et al. paper is very prudent. We had initially felt that such a comparison was unfair, as our methods enable reconstruction of chained breakpoints only in focally amplified regions, while they searched for all genomic breakpoints. To address this point in a fair manner, we examined the breakpoints reported by Dixon et al. which overlapped the regions we studied. While they reported breakpoints using a number of different technologies separately, we used their high-confidence integrated breakpoints (Dixon et al., 2018). We identified four cell lines shared between the two studies with breakpoints reported in this table: CAKI-2, H460, K562 and T47D. As they performed their analysis entirely with respect to hg38 and we used hg19, we first lifted over all their breakpoints on the same chromosomes as our amplicons to hg19. We then matched breakpoints from Dixon et al. with breakpoints in the AR reconstructions and counted the amount of overlap (a sketch of this matching procedure appears below). In all four cases, AR identified more breakpoints in these regions than Dixon et al. (Figure R7). The relevant passage added to the manuscript reads: "… (Supplemental Fig. S15c). In those cases, the majority of breakpoints not observed by AR joined amplicon regions to regions outside the original amplicon (CAKI-2: 2 of 3 non-AR breakpoints, H460: 1 of 1 non-AR breakpoints, K562: 11 of 16 non-AR breakpoints). In H460, the one breakpoint not observed by AR was the integration point we later detected, suggesting that these are typically lower-frequency breakpoints perhaps related to integration or heterogeneity." This may be because AR provides a level of validation to structural variants suggested in NGS in addition to the large ones OM detects on its own.
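The matching procedure referenced above, sketched with the pyliftover package (our assumption for the liftover step; the 50 kbp matching tolerance is also an illustrative choice):

```python
from pyliftover import LiftOver  # hg38 -> hg19 chain-based liftover

def count_matched_breakpoints(dixon_bkps_hg38, ar_bkps_hg19, tol=50_000):
    """Lift Dixon et al. hg38 breakpoints to hg19 and count those with a
    matching AR breakpoint on the same chromosome within tol bp."""
    lo = LiftOver("hg38", "hg19")
    matched = 0
    for chrom, pos in dixon_bkps_hg38:
        hits = lo.convert_coordinate(chrom, pos)
        if not hits:
            continue  # breakpoint not liftable to hg19
        new_chrom, new_pos = hits[0][0], hits[0][1]
        if any(c == new_chrom and abs(p - new_pos) <= tol
               for c, p in ar_bkps_hg19):
            matched += 1
    return matched
```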
In this regard, the NGS and OM data have a synergistic ability to detect breakpoints when combined in our study. In comparison, the Dixon study requires multiple technologies to independently identify the same breakpoint, and thus the presence of one breakpoint in one technology does not help to condition the existence of the same breakpoint in another technology. These findings are reported in the manuscript in the Results section, in the last paragraph under the subheading "AR provides a reconstruction improvement over AA." Figure R7 has been added as Supplemental Figure S15c.

Figure R7. Overlap between detected breakpoints by Dixon et al. and AR within the focally amplified regions analyzed by AR.

We do note the existence of breakpoints identified by Dixon et al. that are not present in our AR reconstructions despite overlapping the amplicon region(s). This may be because AA, which is responsible for building the breakpoint graph, assesses the existence of breakpoints in a manner which considers the copy number of the region. Thus, some breakpoints that occur at low frequency may not appear in the AR reconstruction.

Hi-C is a promising way to detect large structural variants perhaps missed by traditional NGS. It would provide a very valuable addition to AA, and we will consider adding that ability in the future. As for the possible contribution of Hi-C data to AR, we feel AR would best be served by incorporating new technologies that can chain multiple breakpoints together with single reads, simplifying a breakpoint graph. Thus, we feel Hi-C would be worth adding to AA, but not necessarily to AR. We added a note mentioning this point in the Discussion section: "Other sequencing modalities involving NGS with modified sample preparation, such as techniques based on Hi-C, have shown the ability to reveal additional genomic breakpoints without the need for an additional sequencing instrument. While constructing a breakpoint graph is not a part of the AR algorithm, we acknowledge that such techniques would be valuable to adapt for breakpoint graph generation." (lines 580-585).

3. "In the authors' 2019 paper (Deshpande et al), they use Amplicon Architect in an innovative way to find human-viral fusion segments in cancer genomes and explore their functions and potential origins. With the OM strategy, how might incorporation of these additional (non-human) reference sequences work with the AR approach?"

Response: This was not something we had initially considered, but we believe AR would certainly be useful in validating the integration of certain oncoviruses into the cancer genome. Viral genomes are generally too short to be reliably labeled with OM fluorescence (typically 10 kb or less). Thus, OM data can serve as a validation of integration, but on its own is very unlikely to be useful, unless the OM protocol is modified to provide a separate fluorescence on a custom-designed label for a viral sequence. We decided to address this question by performing an additional simulation. To demonstrate that AR can validate a viral integration suggested by NGS, we performed a simulation (following the strategy in Figure R4) where we simulated a circular ecDNA with human papillomavirus-16 (HPV16) viral integration (sequence length: 7906 bp), similar to such structures reported by Nguyen et al., Nucleic Acids Research, 2018, and Deshpande et al., on top of a simulated tumor reference genome background.
AR was able to identify the viral integration successfully (Figure R8), despite the viral sequence having no labels, using the adjacencies suggested in the simulated breakpoint graph. We note that experimental strategies which separately label the HPV16 genome during OM sample prep may also provide a high-throughput method for integration location identification. We have added this new result to the Results section, in the last paragraph of the "AR reconstructs ecDNA in multiple forms" subheading, reproduced below:

"We had previously documented a circular amplicon containing an integrated human papillomavirus-16 (HPV16) genome, and we hypothesized that AR could be used to help resolve the location of viral insertion in the host genome. We simulated a 1 Mbp circular amplicon with the 7.9 kbp HPV16 genome randomly inserted. AR was able to reconstruct the structure of the circular ecDNA as well as the integration point of the HPV16 sequence (Supplemental Fig. S7), despite the viral genome having no OM labels, suggesting that AR would serve as a useful method for validating the existence of genomic oncovirus integrations suggested by NGS data."

4. "The authors state that future iterations of AR may involve input from other long-read sequencing technologies. Perhaps outside the scope, but given that they've worked with PacBio data in the past (Deshpande et al 2019), can they comment on read lengths/accuracies that would make such efforts worthwhile for increasingly used platforms like PacBio and Nanopore?"

Figure R8. AR reconstruction of a simulated ecDNA containing HPV16.

Response: We suggest that read lengths in excess of 150 kbp will span two or more breakpoints (with enough overhang for sufficient anchoring) for the majority of fCNAs, given that the median length of genomic segments in our study was 100 kbp. As the resolution of OM data is so coarse, we consider nanopore data to be less noisy than OM data, and thus we felt it goes without saying that nanopore is accurate enough for this problem. Given that modified nanopore protocols can routinely generate a substantial fraction of reads in excess of 150 kbp, we suggested that nanopore would be valuable for fCNA resolution, and we have added that to the Discussion section at lines 575-578. We intend to adapt AR for nanopore in the future. We feel, however, that PacBio reads are still too short for reliable reconstruction of fCNA (very few reads > 100 kbp; https://www.pacb.com/products-and-services/sequel-system/latest-system-release/). The fCNA reconstructed in Deshpande et al. using a combination of NGS and PacBio was only 100 kbp.

"With modified protocols, nanopore reads may routinely surpass 150 kbp in length, which is sufficient to frequently chain multiple breakpoints in fCNA. When paired with NGS data, we hypothesize that one could modify AR and achieve similar results to NGS and OM data. We plan to address this point in future methods development."

5. "Given that all of the data in this manuscript has been generated from cell lines, can the authors comment on challenges they expect to encounter when attempting to reconstruct amplicons in primary patient data with this approach?"

Response: This is an interesting question to consider. We have addressed this in the Discussion section of the manuscript on lines 536-541, reproduced below: "While we analyzed data from cancer cell lines, sequencing data collected from patients may introduce more sources of complex genomic structural heterogeneity.
Using AmpliconArchitect, we have previously analyzed > 3000 whole-genome sequences from primary tumors and achieved similar success in resolving ecDNA status as with cell line data, and similar levels of heterogeneity as measured by breakpoints per amplicon."

Reviewer #3 (Remarks to the Author):

"This is a well-written paper that describes an exciting new piece of software AR. AR can combine sequencing (WGS) and optical mapping (OM) data to accurately reconstruct the rearrangement events that commonly occur in various tumor lineages. Specific examples of various rearrangement and copy number amplification mechanisms were inferred and illustrated nicely. I believe that AR would be a useful software for researchers aiming for accuracy and resolution of such events. There are still some issues though as I went through this paper."

We thank the reviewer for this comment and for the reviewer's exceptionally close reading of the manuscript.

1. "I think one of the immediate questions a reader might have: what is the added benefit of OM on top of WGS? The authors need to make it explicit during the Discussion. There have been many separate studies looking at WGS alone, or OM alone, what are they missing? AR makes heavy use of the breakpoint graph from WGS, does it somehow limit AR from the discovery of additional events that are difficult to identify using WGS (e.g. lack of coverage for certain regions, variation in mappability across the genome). There are some brief hints of this mentioned in the Results but not summarized systematically which I think would be highly valuable. Similarly, what are the added benefits of WGS+OM on top of OM alone?"

Response: This is an important discussion point not clarified thoroughly enough in our original manuscript. We gave a similar response to a related question from Reviewer 1 (point #4 a,b). In comparing AA's reconstructive abilities based on NGS alone with AR (AA output + OM), we note a marked improvement in reconstructive ability (Figure R5). One way to view AR is that AR provides a level of validation to structural variants suggested by NGS. The manuscript describes that the median molecule ("map") length used in assembly across all samples in this study is 244 kbp (molecule N50 340 kbp), while the median segment length in the breakpoint graphs used in this study is 100 kbp, highlighting that OM data can span multiple junctions in the fine-resolved breakpoint graphs derived from focal amplifications (Supplemental Table 1). By virtue of OM not being "sequence-based", the integrated NGS data and OM data provide an orthogonal pairing of short- and long-range information about genomic structural variation. In Chaisson et al., Nature Communications, 2019, the authors demonstrate the increased ability of OM to detect extremely large structural variants over NGS. While NGS provides finely mapped breakpoints, if the exact breakpoint has poor mappability, the breakpoint may go undetected.
To expand on this point, the NGS and OM data have an independent ability to detect breakpoints when combined in our study. At the same time, we use the OM junctions to condition the existence of the suggested NGS breakpoints. In that regard there is a synergistic relationship between NGS and OM when used in AR. In comparison, the Dixon et al., Nature Genetics, 2018 study requires multiple technologies to independently identify the same breakpoint, and thus the presence of one breakpoint in one technology does not help to condition the existence of the same breakpoint in another technology. The added benefit of WGS+OM over OM alone is that, without NGS data, imputation of short segments into OM scaffolds would be impossible. We show that improvement as a simulation result (Fig. 1i and Supplemental Fig. S4). Most tools (either for NGS or OM) look for individual SVs and do not try to capture amplicon structures. The reconstruction of BFB and complex ecDNA would not have been possible without the combination of both modalities. We have edited the Discussion section to address these points (lines 508-511): "OM tends to detect larger SVs than NGS alone and is less affected by mapping issues on low-complexity breakpoints. We have demonstrated that NGS data, when incorporated with OM, can be used to resolve fine-mapped breakpoints suggested by OM."

Figure R5. A. Proportion of amplified segments in the AA-generated breakpoint graph which are explained by the AA and AR heaviest reconstructions. B. The number of breakpoint edge junctions inferred by AA and by AA + AR (union).

2. "Some in-depth analyses may be needed to look at the results from simulation studies to understand the common error modes. What are the characteristics of the false positive or false negative, or poor LCS predictions from AR? While this reviewer has little doubts that AR would be useful, it is important to inform the readers when the software fails and how they fail. Would supplying more input data (both the WGS and OM) help in those cases? The mixture experiment on heterogeneity is very nice. Can we learn what is the limit of capability of detection from the mixture experiment? For example, in terms of the minimum level of abundance or the number of clonal subpopulations in the sample."

Response: We agree that we should present a better understanding of the relatively small number of cases where AR shows poor results on simulated data. To produce a more complete understanding, we manually analyzed the results from the AR + SegAligner simulation. In the AR + SegAligner simulation, both modules were our tools, as opposed to the cases where RefAligner or OMBlast were used for OM alignment. Specifically, we analyzed the simulation that had path imputation enabled (disabled imputation was only simulated for purposes of showing the improvement imputation yields) for amplicons with CN 20. Considering all simulated amplicons where precision was < 0.6 (N = 13 out of 85 simulated structures):

• 9 cases showed signatures of assembly failure (i.e., the Bionano Assembler did not reconstruct any contigs covering the region of interest that differed from the reference genome).
• 3 cases showed signs that at least some amplicon contigs were assembled but the breakpoint graph was too complex/segmented for AR to determine a correct structure. A highly segmented breakpoint graph leads to difficulty identifying "anchor" segments to form the backbone of a reliable scaffold using the OM data.
• 1 case showed that the contigs were correctly assembled and the individual scaffolds generated by AR were correct; however, the linking of the different scaffolds on unbroken graph segments was incorrect, leading to an incorrect reconstruction.

Considering all cases where the recall was < 0.6 (N = 14 out of 85 simulated structures):

• 9 cases showed signatures of assembly failure.
• 5 cases showed signs that the breakpoint graph was too complex.

Red boxes indicate regions where precision < 0.6 or recall < 0.6.

Regarding the question as to whether we can learn the limit of detection from further mixture experiments: we are confident that the selected mixtures capture the levels of heterogeneity which are both detectable and still at reasonable copy numbers to be considered as focal amplifications. We believe that there are too many variables involved to design a meaningful simulation that would accurately identify the limit of detection from a mixture experiment. We have added a note in the Results to explain some of these findings (lines 198-207), reproduced below:

"To understand the reasons for loss of performance on a small number of simulation cases, we examined the results from the CN 20 simulation where individual reconstructions showed either precision or recall < 0.6. We manually examined the results from the 85 total cases and found that of the 13 amplicons with precision below this threshold, nine cases showed signs of assembly failure, while three had incorrect reconstructions likely on account of graph complexity. The remaining case showed an issue with incorrect scaffold linking. Of the 14 amplicons having recall below the threshold, nine cases showed signs of assembly failure, while five had highly segmented breakpoint graphs making it difficult for AR to identify anchoring alignments around the breakpoints, leading to an incomplete reconstruction."

3. "In the Introduction, please introduce the key algorithmic challenge and what is the main algorithm and statistical models employed in tackling this challenge. Please mention any prior statistical modeling work on the properties of NGS-read breakpoint graph and optical mapping data. For example, the source of variation on segment size, missing labels, uncertainties in the direction and connection within the breakpoint graph, etc. to provide some theoretical ground for the AR method later in the paper."

Response: We have modified the Introduction section to draw more attention to the key algorithmic challenge: "ordering and orienting multiple genomic segments joined by breakpoints into high-confidence copy number-aware scaffolds, which are subsequently joined to enable complete reconstructions of complex rearrangements." (lines 71-74). We feel that discussion of the main algorithms we developed to address the computational problem is best done at the beginning of the Results section. We have reworded the opening paragraph of the Results section to frame these new algorithmic developments more clearly (lines 120-140):

"We formulated the problem of fCNA reconstruction in multiple parts. First, alignment of genomic segments with optical map contigs. Second, the reconstruction of a genomic scaffold using OM data as a backbone. Third, the identification of the maximal simple paths in a graph where each node is an OM scaffold, for which the path is not a sub-sequence of another maximal simple path. AR separates these computational tasks into four primary modules (Fig. 1a,b). … [description of each module]"

We note that there is only limited statistical modeling of the breakpoint graphs we use as input. We provided some of the properties in the Results section where we profile the breakpoint graphs we simulated (line XXX): "These included both cyclic (37 paths) and non-cyclic paths (48 paths; Table 2)." There is, however, more prior modelling related to the properties of optical mapping.
We added the predominant sources of OM error to the Introduction (lines 95-98) and cited two recent papers which discuss mathematical models for Bionano data and errors (Li, M. et al., "Toward a more accurate error model for BioNano optical maps," Springer Verlag, 2016, and Chen, P. et al., "Modelling BioNano optical data and simulation study of genome map assembly," Bioinformatics, 2018). The introduction of the Bionano Saphyr system has recently lowered many of the error rates for optical mapping, providing longer, more accurately labeled molecules. Given the continuing improvements to throughput and sample prep made by Bionano, we feel that performing a complete quantification of the error rates for all different modes is outside the scope of this paper.

4. "There were a number of issues in the Methods section. Please check the formulas and pseudocode more carefully. While I could conceptually follow most of it, I had some issues going through."

a) "Check DP recurrence equation, especially the sub for the max (line 578), I don't think I agreed with the current upper bounds for i and p."

Response: There were indeed two typos in the upper bounds. The recurrence has been corrected.

b) "Poisson distribution (line 658), should it be a minus or equal sign? The formula also seems wrong for the exponential of e part. Should it be -E instead of -a?"

Response: The reviewer is correct. We have corrected this typo.

5. "Algorithm 2: explain what c and k are and whether they are fixed parameters or estimated from the data (and how). What is the rationale behind this scoring function? Specifically, what is the role of each of these terms? Explanations would help."

"Some parametrization needs a little more explanation. Explain why different modes of alignment warrant different levels of P-value threshold, and how are these thresholds determined. Parameterization of Irys and Saphyr is different due to their properties (e.g. Fig. S1d). Supplementary Table 5 contains at least 5 parameters that are different, as well as the minimum number of labels (line 671). What is the process to tune these parameters per platform, at least on a high level? This will be relevant as Bionano continues to upgrade the chemistry and specs, and for people trying to optimize those parameters for their own data."

Response: We thank the reviewer for raising these important points. We have added an explanation of the scoring function, its terms, and the constants c and k in the Methods section on lines 724-733, and we reproduce the explanation below this paragraph. We tested many possible values for c and k during development through a grid-search approach on data with known OM alignments, and the chosen values gave the best performance on the optical mapping tests we designed. We have also tried much more complex scoring functions for OM alignment. For example, an early version of SegAligner that we built used a likelihood ratio-based scoring model (as used in Valouev et al., Journal of Computational Biology, 2006) with parameters learned from the data using a maximum-likelihood approach, yet we ultimately found that the simpler heuristic formulation presented here performed best. OM data errors are still not as well understood as errors in other sequencing modalities and are hard to model (compared with NGS or even nanopore), so a heuristic worked better for us than a probabilistic scoring model.

"Score is defined in Algorithm 2 and contains four main terms.
First, fn, which is defined as the number of potentially unmatched contig labels between i and j, scaled by the missing label score, c. Second, eref is the number of potentially unmatched reference labels between p and q, after accounting for labels which are too close together to be measured distinctly. Third, fp is the number of potentially unmatched reference labels, scaled by the missing label score. Lastly, Δ measures the absolute difference in length between j - i and q - p, which is scaled non-linearly (k). Together these penalty terms are combined and subtracted from a base matching score 2c. Parameters c and k in this model were identified through a coarse grid-search using data where correct OM contig-reference alignments were already known."

The different modes of alignment use different p-value thresholds as they consider search spaces of different sizes, and thus we control the false discovery rate more stringently for larger search spaces. The detection of reference segments, which involves alignment against the entire genome, has the largest search space and thus gets the smallest p-value threshold. To set the default values, we used results of SegAligner on data where the OM alignment was already "known", i.e., identified through a combination of Bionano RefAligner, OMBlast, and extensive manual inspection. We have described this on lines 824-829.

"The need for different p-value thresholds between the different modes of alignment is based on the different sizes of the search spaces possible in the different modes. Searching for alignments between a contig and the entire reference is the largest search space to consider, and thus it gets the smallest p-value threshold in order to stringently control false discovery. The default p-values for each mode were assigned based on empirical testing of OM data with known alignments."

Our decision to parameterize the instruments differently was a consequence of the improvement in label resolution and the changes in labelling chemistry and labelling density of the DLE1 (direct labeling) recognition sequence vs BspQI (nickase), as observed in the NA12878 data released by Bionano. Saphyr data tends to be better resolved, having less directional uncertainty and a smaller rate of label collapse (as reflected by parameters t, w, and η in Supplemental Table 5). Again, we tuned these parameters by running SegAligner on data with known alignments and using coarse grid search to identify the best parameter choices for each instrument. We have updated the manuscript on lines 781-785 to reflect this explanation, reproduced below:

"We selected default parameters separately for Bionano Irys and Bionano Saphyr instruments based on the tendency for the newer Saphyr instrument to have less directional uncertainty and a lower rate of label collapse. We selected default values for each instrument through a coarse grid-search strategy and manual examination of data with known alignments."

The default numbers of labels for alignment, which are addressed in the text and in Supplemental Table 5, were based primarily on the difference in labeling density between BspQI and DLE1. DLE1 has a reference label density 1.7x larger than BspQI. As a result, contigs generated from DLE1 labeling have a higher density of labels per base. We based the minimum length for alignment results on this, and again used cases where breakpoint graph segments had known alignment locations to optimize these parameter choices.
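As an illustration, one possible reading of the four-term alignment score explained earlier in this response (the constants here are placeholders, not SegAligner's instrument-specific defaults from Supplemental Table 5, and the correction for closely spaced reference labels is omitted):

```python
def extension_score(contig, ref, i, j, p, q, c=5_000.0, k=1.15):
    """Score for extending an alignment from label pair (i, p) to (j, q),
    where contig and ref are sorted label positions in bp."""
    fn = (j - i - 1) * c                  # unmatched contig labels, scaled by c
    eref = q - p - 1                      # unmatched reference labels (uncorrected)
    fp = eref * c                         # ... scaled by the missing label score
    delta = abs((contig[j] - contig[i]) - (ref[q] - ref[p])) ** k
    return 2 * c - fn - fp - delta        # penalties subtracted from base score 2c

contig_labels = [0, 9_800, 21_000, 30_500]
ref_labels = [100_000, 110_000, 121_300, 130_400]
print(extension_score(contig_labels, ref_labels, 0, 1, 0, 1))  # small length penalty only
```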
We added a brief description: "The need for different length thresholds is motivated by the difference in labeling density between Irys and Saphyr." (lines 834-835). While a comprehensive comparison of the differences between data generated by the Irys and Saphyr instruments would be valuable, it is confounded by properties related to the individual sample prep, and we consider such an exhaustive analysis to be outside the scope of the paper.

6) "Finally, as a sincere suggestion to improve software usability, please include visualization or succinct textual outputs on the front page of README of the repo to make it friendlier to the new users so they can know what they expect. Please also consider including a docker image that has everything installed and a copy of the test data that contain a full pipeline script end-to-end, including the command of PrepareAA to build the initial breakpoint graph."

Response: We have made efforts to improve the software usability and documentation. We have made the following changes to the AmpliconReconstructor README:

1) Revised the README to better explain dependencies and give precise installation instructions. A user can install AR & CycleViz with a small number of pre-provided commands.
2) Added a new README section on inputs and outputs for AR, explaining the different file formats and giving an example of the YAML file used as input.
3) Included example commands for generating the AA-derived breakpoint graph using PrepareAA.
4) Included example output images in the CycleViz README.

Because the BAM file and the data repo used by AA are large, we felt that we could not create one single Docker image that contained everything needed to generate the GBM39 test data from the BAM file all the way to the AR reconstruction. Therefore, we had originally provided an example of running AR using a pre-generated graph. However, we agree that it should be relatively easy to run this test if a user wants to generate the breakpoint graph. As a result, we have dockerized PrepareAA so that it can, in a single container, run both PrepareAA and install and run AmpliconArchitect. In the AR README we have provided an example of a PrepareAA command that one can easily modify to generate the breakpoint graph, provided they download the BAM file themselves. With that graph generated, a user can run the example AR command provided previously, without the pre-generated graph file.
Analysis of the priority of anatomic structures according to the diagnostic task in cone-beam computed tomographic images

Purpose: This study was designed to evaluate differences in the required visibility of anatomic structures according to the diagnostic tasks of implant planning and periapical diagnosis.

Materials and Methods: Images of a real skull phantom were acquired under 24 combinations of different exposure conditions in a cone-beam computed tomography scanner (60, 70, 80, 90, 100, and 110 kV and 4, 6, 8, and 10 mA). Five radiologists evaluated the visibility of anatomic structures and the image quality for diagnostic tasks using a 6-point scale.

Results: The visibility of the periodontal ligament space showed the closest association with the ability to use an image for periapical diagnosis in both jaws. The visibility of the sinus floor and canal wall showed the closest association with the ability to use an image for implant planning. Variations in tube voltage were associated with significant differences in image quality for all diagnostic tasks. However, tube current did not show significant associations with the ability to use an image for implant planning.

Conclusion: The required visibility of anatomic structures varied depending on the diagnostic task. Tube voltage was a more important exposure parameter for image quality than tube current. Different settings should be used for optimization and image quality evaluation depending on the diagnostic task.

Introduction

Cone-beam computed tomography (CBCT) provides 3-dimensional images of the anatomic structures of the head and neck area. CBCT allows higher spatial resolution, lower radiation exposure, and a lower cost than multi-detector computed tomography (MDCT). 1 Due to these advantages, CBCT scanners have been widely used for many indications in dento-maxillofacial imaging, and radiation dose concerns have increased proportionally. Therefore, the optimization of CBCT images is crucial, and it is necessary to minimize the radiation dose while maintaining the clinical image quality. Image quality can be assessed by subjective evaluation and by the quantitative measurement of physical factors. 2 Many studies have investigated the technical image quality parameters of CBCT devices, but they used different phantoms, CBCT scanners, exposure parameters, and diagnostic tasks. 3-8 These differences among studies make it difficult to compare the previous results directly, and no quantitative image quality criteria or standardized evaluation method has yet been developed to assess CBCT image quality. At this point, subjective evaluation is used as the gold standard to assess image quality for certain diagnostic tasks. 3,9-11 Standardization of subjective evaluation is difficult due to its subjectivity and differences in the methodologies of previous studies. 3,9-11 Subjective evaluations usually involve the identification of anatomic structures by a radiologist. 2,3,9,11 Many anatomic structures are present in the maxillofacial region, and the importance of specific landmarks may differ according to the diagnostic task. 9,11,12 However, insufficient research has addressed the relative importance of anatomic structures in relation to diagnostic tasks.
This study was designed to evaluate differences in the required visibility of anatomic structures in the diagnostic tasks of implant planning and periapical diagnosis.

Materials and Methods

CBCT images

The CBCT images were obtained by a Dinnova3 CBCT scanner (HDXwill Inc., Seoul, Korea). The Dinnova3 scanner has an amorphous silicon flat-panel detector. The voxel size was 0.3 mm × 0.3 mm × 0.3 mm. A pulsed X-ray beam was rotated 360° around the phantom, and the exposure time was 12 s (scan time: 24 s). A total filtration of 2.8-mm aluminum was used. The computed tomography dose index value was 3.183 mGy (120 kV, 120 mAs, field of view [FOV]: 200 mm × 190 mm). An FOV of 200 mm × 190 mm was used to obtain the complete image of a real skull phantom with a soft-tissue replica (X-ray phantom, head; product number 7280, Erler Zimmer Co., Lauf, Germany) (Fig. 1). To obtain CBCT images with different image qualities, 24 combinations of 6 different tube voltages and 4 different tube currents were used (60, 70, 80, 90, 100, and 110 kV and 4, 6, 8, and 10 mA). Images were saved in the Digital Imaging and Communications in Medicine format. All 24 sets of images were reconstructed into 3 planes (axial, coronal, and sagittal) with a slice thickness of 0.3 mm.

Subjective evaluation

All reconstructed images were presented to 5 radiologists for a subjective evaluation of the image quality. Three 20.8-inch monochrome monitors (ME315L, Totoku Electric Co., Tokyo, Japan) with a resolution of 2048 × 1536 pixels were used, and the images of each plane were displayed on a different monitor. All observers had a trial session before the evaluation, and the evaluation was performed individually in a random, irreversible order. The observers were not informed of the exposure conditions, and they were allowed to adjust the brightness and the contrast of the images. Each observer evaluated the left maxillary first molar area first and the right mandibular first molar area second. Observers were asked about the visibility of 3 anatomic structures in each jaw and the image quality for the diagnostic tasks of periapical diagnosis and implant planning (Table 1). The following 6-point scale was used to answer the 5 items: strongly agree (6), agree (5), slightly agree (4), slightly disagree (3), disagree (2), and strongly disagree (1). The evaluation was repeated after an interval of 2 weeks to calculate intraobserver reliability. We classified the visibility of anatomic structures as visible or invisible, and also classified the image quality for each diagnostic task as acceptable or unacceptable, by using consensus criteria. In the consensus criteria, only images that obtained a score of more than 4 from all observers were classified as visible or acceptable.

Statistical analysis

The intraobserver and interobserver reliabilities of the subjective evaluations were calculated using the weighted kappa in Microsoft Office Excel 2007 (Microsoft Corp., Redmond, WA, USA). The Fisher exact test was used to evaluate the relationship between the visibility of the 3 anatomic structures and the image quality for the 2 diagnostic tasks in SPSS version 21 (IBM Corp., Armonk, NY, USA). The Mann-Whitney U test was used to evaluate differences in the tube voltages and currents between the visible/invisible groups and the acceptable/unacceptable image quality groups in SPSS version 21. When differences in the exposure parameters were found between the acceptable and unacceptable image quality groups, cut-off values and the areas under receiver operating characteristic (ROC) curves were calculated using SPSS version 21.
Results
In the subjective evaluation, the average weighted kappa value was 0.63 (range, 0.14-1) for intraobserver reliability and 0.51 (range, 0.36-0.66) for interobserver reliability, corresponding to moderate agreement. The agreement results between the visibility of the 3 anatomic structures and image quality for the 2 diagnostic tasks are presented in Table 2. The visibility of the periodontal ligament space showed a closer association with the ability to use an image for periapical diagnosis than the other structures in both jaws. No statistically significant association was found between the visibility of the sinus border and the usability of an image for periapical diagnosis of the maxillary first molar. Additionally, the visibility of the mandibular canal wall did not show any significant relationship with the ability of an image to be used for periapical diagnosis of the mandibular first molar. For implant planning in the maxilla, the visibility of all 3 anatomic structures showed statistically significant associations with image quality, and all kappa values showed moderate agreement. However, in the mandible, the visibility of the canal wall showed the highest agreement, and the visibility of the periodontal ligament space did not show a statistically significant relationship with image quality.

The differences in tube voltage and current between the visible/invisible groups and the acceptable/unacceptable groups are shown in Table 3 and Table 4. For all anatomic structures, the tube voltage of the visible images was significantly higher than that of the invisible images (Table 3). However, no significant differences in tube current were found between visible and invisible images regarding the sinus border and canal wall. For all diagnostic tasks, the tube voltage of the acceptable images was significantly higher than that of the unacceptable images (Table 4). However, the tube currents of acceptable images did not show statistically significant differences from those of the unacceptable images for the diagnostic task of implant planning. This result implies that tube current does not have a major influence on the visibility of the sinus border and canal wall or on image quality, especially for the diagnostic task of implant planning.

The cut-off values of tube voltage for the acceptable-quality images were calculated. In all groups, the areas under the ROC curves were high, suggesting that the cut-off values were reliable (Table 5). To obtain acceptable images for the periapical diagnosis of the mandible, a tube voltage of 85 kV was required, which was 10 kV higher than needed for the other diagnostic tasks.
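The cut-off derivation reported in Table 5 can be illustrated with standard ROC machinery. The sketch below uses the Youden index to pick a tube-voltage threshold; this is one common rule and is an assumption, since the cut-off criterion used in SPSS is not stated, and the labels here are toy data rather than the study's consensus scores.

```python
# A sketch of how a tube-voltage cut-off and ROC area could be derived
# (the study used SPSS; the Youden-index rule below is one common choice,
# not necessarily the one used). Data are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve, auc

kv = np.repeat([60, 70, 80, 90, 100, 110], 4)      # 24 acquisitions
acceptable = (kv >= 85).astype(int)                # toy 'consensus' labels

fpr, tpr, thresholds = roc_curve(acceptable, kv)
print("AUC:", auc(fpr, tpr))

# Youden's J selects the threshold maximizing sensitivity + specificity - 1
j = tpr - fpr
cutoff = thresholds[np.argmax(j)]
print("cut-off tube voltage:", cutoff, "kV")
```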
Discussion
This study investigated the relationships between the visibility of anatomic structures and image quality for 2 diagnostic tasks. The evaluation was performed using a single CBCT device and a real skull phantom. This study demonstrated that the priority of anatomic structures varied depending on the diagnostic task. For periapical diagnosis, the visibility of the periodontal ligament space was most important, while the sinus border and canal wall were less important structures. In contrast, the sinus border and canal wall were the most critical anatomic structures for assessing image quality for implant planning. These results imply that different evaluation methods may be needed for different diagnostic tasks.

Many studies have been conducted on CBCT image quality, but insufficient information has been published on the assessment of the relationship between image quality and the visibility of anatomic structures corresponding to specific diagnostic tasks. 2 Only 3 studies have evaluated 2 or more diagnostic tasks. 3,9,11 All 3 studies reported that the required image quality varied according to the diagnostic task. Lofthag-Hansen et al. and Choi et al. reported that a higher image quality was required for periapical diagnosis than for implant planning, 9,11 which corresponds with the results of this study. The results of Pauwels et al. also showed that image quality was related to diagnostic tasks, but they reported that image quality was device-dependent and that it was difficult to set a reference value. 3

Changes in tube voltage had a significant effect on the visibility of all anatomic structures and on image quality for the 2 diagnostic tasks. In particular, a higher tube voltage was required for periapical diagnosis of the mandible. In contrast, adjusting the tube current did not lead to significant differences in the visibility of the sinus border and canal wall or in image quality for implant planning. These results support the proposal that, by reducing tube current, it is possible to reduce the radiation dose without image quality degradation for implant planning, in agreement with previous studies. 2,13,14 However, reducing the tube current deteriorated the image quality for periapical diagnosis. Periapical diagnosis may require better image quality than implant planning because the periodontal ligament (PDL) space is an important structure for periapical diagnosis; however, it is a fine structure and is susceptible to small amounts of noise. Earlier studies have demonstrated that the PDL space and lamina dura are less visible than other anatomical structures across several protocols with different CBCT devices. 3,15 Therefore, optimization by lowering the tube current is considered to be difficult for periapical diagnosis, and other strategies would be needed.

In conclusion, the required visibility of anatomic structures varied depending on the diagnostic task. Tube voltage was a more important exposure parameter for image quality than tube current. Different protocols should be used for optimization and image quality evaluation depending on the diagnostic task, and these results can be a starting point for future research into the evaluation of the image quality of CBCT devices.
Estimation of Chlorophyll-a, TSM and Salinity in Mangrove Dominated Tropical Estuarine Areas of Hooghly River, North East Coast of Bay of Bengal, India Using Sentinel-3 Data

This study aims to explore the variations in spatio-temporal characteristics of water quality factors of three estuaries in the western portion of the Indian Sundarbans. Reliable retrieval of near-surface concentrations of parameters such as Chlorophyll-a, SST and TSM in various aquatic ecosystems with broad ranges of trophic conditions has always remained a complex issue. In this study the application of the C2RCC processor has been tested for its accuracy across different bio-optical regimes in inland and coastal waters. Satellite images for the same period were also collected and analysed using the C2RCC processing sequence to retrieve values of factors like the depth of water, surface reflectance, water temperature, inherent optical properties (IOPs), chlorophyll-a, salinity and total suspended matter (TSM) using the SNAP software. During the 2017-2020 season, in situ sampling from specific locations and laboratory water quality analysis were carried out. The OLCI-retrieved results were then trained and corroborated by means of the in situ datasets. It was observed that the highest amount of TSM was recorded in Diamond Harbour during the pre-monsoon, in the year 2018 (301.40 mg L⁻¹ in-situ value, and 308.54 mg L⁻¹ estimated value). Similarly, chlorophyll-a had higher concentrations through the monsoon season (3.03 mg m⁻³, in-situ, and 2.96 mg m⁻³, estimated) at the Fraserganj and Sagar South points. Very well fitted correlation results for all seasons between Chl-a, r = 0.829, and TSM, r = 0.924, remained established throughout the comparisons of OLCI and in situ results. The high level of correlation highlights the importance of both primary as well as secondary information in understanding any dynamic system properly. Finally, the result shows that the water quality model outperforms conventional techniques and OLCI Chl-a and TSM products. This paper empirically investigates a reliable remote sensing method for estimating coastal TSM and Chl-a concentrations and supports the use of OLCI data in ocean colour remote sensing.

Introduction
The first satellite operation to assess coastal aquatic quality and ocean efficiency from a remote platform was the Coastal Zone Color Scanner (CZCS), which was launched in October 1978 (Acker, 2013; Mondal et al., 2014; Kyryliuk et al., 2019).
This device was used to approximate global production yield, and it steered our growing understanding of the importance of oceanic and littoral phytoplankton production (Longhurst et al., 1995; Behrenfeld et al., 2006). Chlorophyll-a is one of the typical water quality factors observed in aquatic bodies. It is measured by water sampling and laboratory analysis and through in situ measurements using a water quality checker. However, the CZCS had difficulties in distinguishing between chlorophyll-a (Chl-a) and TSM. Successive missions by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA) have aimed to increase the accuracy of the retrieval of various water constituents from space technology. The launch of the OLCI (Ocean and Land Colour Instrument) on board Sentinel-3A by the ESA in 2016 has enabled better management of the environment and improved the collection of data influenced by the effects of climate change (Donlon et al., 2014; Bonekamp et al., 2016; Mondal et al., 2019, 2020). The OLCI mission is the follow-up of the MERIS mission (2002-2012) (Pavel et al., 2011) with improved capabilities, as its spectral configuration is designed for optically complex coastal and inland aquatic bodies (Mondal et al., 2018; ESA, 2019; Kyryliuk et al., 2019). Various studies have revealed that the OLCI is presently the most appropriate satellite instrument for water colour remote sensing in inland aquatic bodies (Mograne et al., 2019; Xue et al., 2019).

The Hooghly estuary and its tributaries are an extraordinarily complex study object for ocean colour remote sensing. The very high amount of coloured dissolved organic matter (CDOM) received from the Sundarban mangrove forest in the catchment area, along with anthropogenic inputs (Thakur et al., 2019), makes the water darker. As a result, the water-leaving signal is very small, necessitating the use of highly sensitive remote sensing devices as well as extremely precise atmospheric correction. In addition, due to fresh water input in the upper reaches and differential tidal influences, low salinity is common in some areas, while it is much higher in others. Similarly, there are stark differences in the TSM values along different stretches of the estuaries. This necessitates a good algorithm to study the entire network of estuaries criss-crossing the Indian Sundarbans. Eutrophication and hypoxia are regular phenomena in the Indian Sundarbans and, thus, environmentalists have been working to decrease the environmental pollution from runoff. Monitoring of chlorophyll concentrations is a very important contribution to evaluating these endeavours. Such conditions have been captured in the OLCI images with very good accuracy.

Most of the studies employing OLCI images have been taken up in Europe and South East Asia.
Studies along the eastern coast of India are very limited and, although the Sundarbans is one of the most studied areas in the world, no study using OLCI images has been attempted yet. Because of the fragility of these waters and the need for continuous monitoring, there is a need for real-time, surface-based studies of water quality parameters. However, due to accessibility and other logistics, that target has not been achieved on a regular basis. This study is an attempt to bridge the gap by using OLCI images from Sentinel-3 to study a select few parameters of the Hooghly, Saptamukhi and Muri Ganga estuaries.

In this research, we presented a novel empirical model for estimating TSM and Chl-a concentrations in the Sundarban coastal waters of the Hooghly Estuary by integrating OLCI data with in situ data. Finally, the model was run using the time-series Sentinel-3 data to chart the spatial dispersal of TSM and Chl-a concentrations, and then the geographical and temporal distribution features of Chl-a concentration in the Sundarban coastal deltaic waters were analyzed. The main objectives of the study are: to estimate water quality factors such as salinity, surface temperature, chlorophyll-a concentration, and TSM concentration through seasonal field surveys (in-situ data); to use Sentinel-3 OLCI images in estimating salinity, surface temperature, chlorophyll-a concentration, and TSM concentration during 2017-19 with the C2RCC processor; and to validate the results obtained by the OLCI estimations against the in situ data sets through regression equations.

Study area
The study area is part of the Sundarbans delta extending from 21°20'N to 22°40'N and 87°0'E to 89°0'E (Fig. 1) and is criss-crossed by many estuaries which are mostly distributaries of the Hooghly estuary (Thakur et al., 2020a, b; Mondal et al., 2016, 2021b; Bag et al., 2019; Bandyopadhyay et al., 2014). The delta is an ecologically delicate area that is affected by a consistent tidal ebb and flow of about 12 hours' duration each (Mondal et al., 2021a). The samples of water were collected at nine widely dispersed sampling stations along the mangrove-dominated banks of the Mooriganga, Saptamukhi, and Hooghly estuaries, three distributaries of the Ganges. The names of the stations are mentioned in Table 1. A map displaying the sampling stations along with their specific geographical positions is shown in Figure 1.

The region has a monsoonal climate with three main seasons, viz., pre-monsoon, monsoon and post-monsoon. Seasonal field observations were conducted at all nine sampling stations throughout the monsoon season (July-October), the post-monsoon season (November-January), and the pre-monsoon season (March-June), from November 2017 to January 2020. All water samples were collected using clean plastic buckets from the middle portion of the river using mechanized boats. Both field data and satellite data were collected on the same day and at the same time (9:32 a.m. IST ± 30 min), irrespective of the tidal conditions in the estuary. Collected water samples were kept in acid-cleaned dry polythene bottles and transported to the laboratory for the measurement of suspended particulate matter and salinity. The temperature of surface water was measured at each station using a hand-held thermometer. The transparency was measured using a metal Secchi disc of 20 cm in diameter. The mean of three Secchi disc depths was taken as the water transparency at each location (Preisendorfer, 1986; Lee et al., 2015).
All samples were delivered to the laboratory as soon as they were collected. Every time, triplicate samples were collected and analysed periodically to check the reproducibility of the results and to evaluate the precision of the measurements. Salinity was measured by argentometric titration following the Mohr-Knudsen method (Grasshoff et al., 1983). TSM was separated by filtering an aliquot of the water sample (1-2 litres) through a pre-weighed 0.45 µm Millipore membrane filter under vacuum, and weighing it with an accuracy of ±0.1 mg after drying at 60 °C or in desiccators in the presence of concentrated H2SO4.

Estimation of Salinity and Temperature
Salinity was estimated through the argentometric titration method, following Strickland and Parsons (1972). Water samples (15 ml) were titrated against a standard AgNO3 solution, using a K2CrO4 solution (3.5 g l⁻¹) as an indicator. The silver nitrate solution was previously standardized against a standard seawater solution (3.5 g NaCl in 96.5 ml distilled water), having a chlorinity of 19.375 ppt and a salinity of 35‰. The salinity and chlorinity are related by Knudsen's equation: S‰ = 0.03 + 1.805 Cl‰. This technique is precise to 0.05-0.1‰ of salinity. The water temperature was recorded during each sample collection using a laboratory thermometer.

Chlorophyll measurement
For the validation of water parameters, we collected water from around 15-20 cm under the river surface using Niskin bottles, and the samples were moved to the laboratory for further analysis. Water samples were filtered through 0.7 µm Whatman glass fiber filter (GF/F) paper under low vacuum and kept in 90% acetone at 0 °C for 24 h in the dark for complete extraction. The extracted solution was centrifuged at 10000 rpm for 10 min and the solvent was used to estimate the Chl-a concentration using a Shimadzu UV-vis 2450 spectrophotometer, following the technique described in Strickland and Parsons (1972). The precision of the Chl-a estimation is at the 5 µg level.

The organic and inorganic fractions of TSM (in mg L⁻¹) were quantified using the gravimetric technique described by Toming (2017) and the MERIS protocols. Whatman Glass Fiber Filters (GF/F) were washed with ultrapure water to remove any loose filter bits, and then combusted at 480 °C in order to burn off any conceivable organic contamination (Doerffer et al., 2002). The clean filters were weighed and kept, numbered, in folded squares of aluminium foil (0.020 × 100 × 100 mm) until their use for filtration. Water samples (1-2 L) were filtered in triplicate through the pre-weighed and pre-combusted filters (Kratzer et al., 2018). The funnel and filter were washed with clean water, and then 50 mL of ultrapure water was added to remove any residual salt. The filters were dried overnight at 60 °C and kept in a desiccator prior to weighing on a microbalance (±1 µg). TSM was obtained as the difference between the dry weight and the tare weight. Then, the samples were combusted at 480 °C in a furnace, followed by a second weighing stage. The mass of the inorganic TSM was then equal to the mass of the combusted filters minus the tare weight, and the organic fraction was the difference between the total and the inorganic TSM.
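The wet-chemistry arithmetic above reduces to two small formulas, sketched here in Python for clarity. The numeric inputs are hypothetical examples, not measurements from this study.

```python
# Minimal helpers mirroring the wet-chemistry arithmetic described above.
# Input values are hypothetical examples, not measurements from the study.

def salinity_from_chlorinity(cl_ppt: float) -> float:
    """Knudsen's equation: S (per mille) = 0.03 + 1.805 * Cl (per mille)."""
    return 0.03 + 1.805 * cl_ppt

def tsm_fractions(tare_mg, dry_mg, combusted_mg, volume_l):
    """Gravimetric TSM (mg/L): total from the dry-tare difference,
    inorganic from the post-combustion weight, organic as the remainder."""
    total = (dry_mg - tare_mg) / volume_l
    inorganic = (combusted_mg - tare_mg) / volume_l
    return total, inorganic, total - inorganic

print(salinity_from_chlorinity(19.375))            # ~35 per mille
print(tsm_fractions(120.0, 250.0, 200.0, 1.5))     # mg/L per fraction
```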
2.2.3. Sentinel-3 satellite data processing
2.2.3.1. Image pre-processing
The Sentinel-3 OLCI has sensor specifications following its precursor, the Medium Resolution Imaging Spectrometer (MERIS) on ENVISAT, with the capability to analyse bio-optical constituents in global coastal and inland regions (Doerffer et al., 1999; Moore et al., 1999; Merheim-Kealy et al., 1999). The Sentinel-3 OLCI bands (Table 2) are an inheritance from MERIS and are supplemented to improve the quality of ocean colour remote sensing measurements. The Sentinel-3 OLCI instrument swath covers around 1270 km and is tilted 12.6° across track, away from the sun, to minimize the potential effect of sun glint (ESA, 2019). Cloud-free images for the period 2017-2020, viz. the monsoon (June), post-monsoon (November), and pre-monsoon (April), were downloaded and used in the study. The Sentinel-3 satellite passed above the selected sampling area around 9:32 a.m., and satellite images with a ±6-hour time lag from the water sampling timings were collected on the following dates: 13.06.2018 & 19.06.2019 for the monsoon; 13.12.2017, 10.12.2018 & 2.01.2019 for the post-monsoon; and 21.04.2018 & 24.04.2019 for the pre-monsoon. Using the SNAP software, the images were corrected for atmospheric effects. The Case-2 Regional/Coast Colour (C2RCC) processor was used for atmospheric correction (Brockmann et al., 2016). The C2RCC processor is a processing chain that helps in recovering water quality factors such as Chl-a, TSM, SST and salinity. The processor comprises two neural nets: one net removes atmospheric and water surface effects (such as glint), while the other retrieves absorption and scattering coefficients from which optically active substance concentrations (such as Chl-a) are calculated. The processor is available freely through the SNAP (Sentinel Application Platform) software. Finally, the cloud-free pixel values corresponding to each sampling station's location were extracted from each thematic layer and verified against ground-truth data. The 'Extract Multi Values to Points' tool in ArcGIS was used to extract the Chl-a and TSM indices from the images at locations specified in a point feature class. For each input raster, a new field containing the cell values is added to the input point feature class, and the value of each pixel that covered the geographic location of a station is extracted. After that, the attribute table was exported to MS Excel to build a connection between the radiometric values and the in-situ Chl-a and TSM concentrations, as well as to analyse the models produced. Figure 2 shows a summary of the methods used in this study.

Methods
The water pixels are processed exclusively by the Sentinel-3 OLCI data processing system. A part of it is based on atmospheric correction, which produces water-leaving reflectances and, as a by-product, the aerosol load above the water; the other part is based on ocean colour processing, which is the derivation of the colour of the water body itself from the water-leaving reflectances and the suite of products describing it. The results of the laboratory analysis were then arranged in a systematic manner. Cloud-free OLCI C2RCC-processed pixel values conforming to the sampling location points were extracted. These values, giving Chl-a, TSM, SST and salinity, were also added to the database (Table 1; Fig. 2).
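As an open-source counterpart to the ArcGIS extraction step described above, the following sketch samples a C2RCC output raster at station coordinates with rasterio. The file name and coordinates are placeholders, and the snippet assumes the points are expressed in the raster's coordinate reference system.

```python
# An open-source equivalent of the 'extract values to points' step described
# above, using rasterio instead of ArcGIS. The file name and station
# coordinates are placeholders; real data must share the raster's CRS.
import rasterio

stations = {
    "Diamond Harbour": (88.19, 22.19),   # (lon, lat) - illustrative only
    "Sagar South":     (88.05, 21.65),
}

with rasterio.open("c2rcc_chl_a.tif") as src:      # hypothetical output layer
    coords = list(stations.values())
    for name, value in zip(stations, src.sample(coords)):
        print(name, float(value[0]))               # band-1 Chl-a at station
```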
Statistical analysis
The validation operations took place between 2017 and 2020 in the Hooghly estuary of the Bay of Bengal. In different seasons of the year, marine, coastal, and inland water bodies were covered, including in situ measurements of parameters such as chlorophyll-a, TSM, salinity, and sea surface temperature. After the datasets were assembled in one database, they were subjected to statistical analysis, and finally the OLCI-retrieved data were validated against the in situ results by applying regression and correlation analysis using Matlab software.

Results
The physico-chemical and biological features of a flowing river system largely depend on the tidal impacts, the intensity of contributions from point and non-point sources on both sides of the banks, and the anthropogenic activity in and around the riverine system. Thus, the variations of the studied components are ascribed to tidal, spatial, seasonal and annual variations. In this research, we have estimated the water quality factors from Sentinel-3 images along with multi-seasonal in-situ field data and have tried to validate and analyse the findings.

Summer has dominated the pre-monsoon, resulting in higher mean water temperature values at all sampling stations. In this season it varied between 34.65 ± 0.64 °C at Sagar South and 30.05 ± 0.07 °C at Beguakhali and Henry's Island, with 30.05 ± 0.35 °C at Bakkhali. In the same season, the mean water temperature at Babu Ghat was 32.90 ± 2.69 °C and at Diamond Harbour it was 33.90 ± 0.14 °C (Fig. 4). A point to be mentioned is that the water temperature variation largely depends on the tidal circulation and the sample collection time; hence, no important conclusion can be drawn directly from its variation.

Mapping Chl-a Concentration from the OLCI Images
The chlorophyll content of water is an excellent indicator of biotic production in any aquatic system. In estuaries like the Hooghly, chlorophyll-a concentrations significantly describe water circulation and dilution patterns. It is sometimes also an important indicator of anthropogenic input of nutrients into the nearshore areas. In our study, the monsoonal concentration of in-situ (field survey) Chl-a was lower than in the other two seasons. During the monsoon, the mean Chl-a concentration varied from 1.16 ± 0.10 mg m⁻³ at Henry's Island to 2.47 ± 0.36 mg m⁻³ at Diamond Harbour. Among the other stations, Namkhana exhibited 1.42 ± 0.16 mg m⁻³, Sagar South exhibited 1.86 ± 0.31 mg m⁻³, Babu Ghat exhibited 1.72 ± 0.68 mg m⁻³, Beguakhali exhibited 1.73 ± 0.22 mg m⁻³ and Bakkhali exhibited 1.40 ± 0.10 mg m⁻³ (Fig. 5). During the same season, the estimated (Sentinel-3) Chl-a values were lower than the in-situ values. The estimated values varied from 0.68 ± 0.03 mg m⁻³ at Namkhana to 1.47 ± 0.29 mg m⁻³ at Fraserganj (Fig. 6a).

During the post-monsoon the estimated Chl-a values were higher compared to the monsoon, with a highest of 1.96 ± 0.80 mg m⁻³ at Sagar South and a lowest of 1.05 ± 0.28 mg m⁻³ at Henry's Island. Other major contributors to the estimated mean Chl-a concentration in the post-monsoon season were 1.13 ± 0.05 mg m⁻³ at Babu Ghat, 1.22 ± 0.27 mg m⁻³ at Diamond Harbour, 1.16 ± 0.83 mg m⁻³ at Namkhana point, and 1.53 ± 0.40 mg m⁻³ at Bakkhali point (Fig. 6b). During the same season, the in-situ Chl-a levels were higher than the estimated levels. The in-situ Chl-a varied from 2.28 ± 0.70 mg m⁻³ at Sagar South to 1.26 ± 0.04 mg m⁻³ at Babu Ghat (Fig. 5).
When compared to the other two seasons, the pre-monsoon concentrations of in-situ Chl-a were the highest. During the pre-monsoon, the mean in-situ Chl-a concentration varied from 2.45 ± 0.50 mg m⁻³ at Bakkhali to 1.23 ± 0.14 mg m⁻³ at Namkhana. Higher mean in-situ Chl-a concentrations were also observed at Henry's Island, i.e. 2.15 ± 0.52 mg m⁻³, at Diamond Harbour, i.e. 1.80 ± 0.31 mg m⁻³, and at Babu Ghat. Again, in the pre-monsoon season, the estimated Chl-a concentration was lower than the in-situ values. The highest estimated Chl-a concentration was recorded at Bakkhali, i.e. 1.82 ± 0.44 mg m⁻³, and the lowest at Namkhana, i.e. 0.80 ± 0.16 mg m⁻³, in this season during 2018-19 (Fig. 6c). This trend of mean Chl-a concentration variation indicates that as the river flows downstream towards the Bay of Bengal, the parameter gets diluted and so the productivity drops.

Mapping of Total Suspended Matter (TSM) from the OLCI Images
The concentration of TSM depends upon a number of factors in the water column, e.g. turbidity, chlorophyll concentration, waste water dilution, and tidal fluctuations. Because of massive anthropogenic inflows, the upstream parts of the Hooghly estuary have shown higher TSM values than their Bay of Bengal-ward counterparts, with Beguakhali as an exception during the post-monsoon and monsoon seasons. During the post-monsoon season, the mean in-situ TSM concentration was the highest, 129.24 ± 27.08 mg L⁻¹, at Diamond Harbour, and the lowest, 25.53 ± 9.48 mg L⁻¹, at Babu Ghat. Other major contributors to the post-monsoonal mean in-situ TSM were Namkhana with 57.97 ± 23.01 mg L⁻¹, Kachubaria with 101.71 ± 32 mg L⁻¹, Henry's Island with 82.65 ± 18.21 mg L⁻¹ and Bakkhali with 69.87 ± 25.44 mg L⁻¹ (Fig. 7).

The estimated TSM was higher compared to the in-situ values for this season. The highest estimated TSM concentration was recorded at Diamond Harbour, i.e. 134.61 ± 30.55 mg L⁻¹, and the lowest was at Babu Ghat, i.e. 41.86 ± 22.24 mg L⁻¹ (Fig. 8a). During the pre-monsoon, the scenario changed. With a mean estimated TSM of about 114.06 ± 30.61 mg L⁻¹, Babu Ghat exhibited the lowest value, while Kachubaria acquired the highest position with about 281 ± 26.95 mg L⁻¹. Among other upstream sampling stations, Diamond Harbour exhibited 180.58 ± 25.83 mg L⁻¹ and Namkhana exhibited 102.93 ± 84.48 mg L⁻¹. Among downstream points, Sagar South exhibited 119.26 ± 11.39 mg L⁻¹ and Henry's Island exhibited 179.99 ± 18.68 mg L⁻¹ (Fig. 8b). This season, the highest in-situ TSM was measured at Kachubaria (273.05 ± 28.35 mg L⁻¹) and the lowest at Babu Ghat (108.03 ± 27.72 mg L⁻¹) (Fig. 7).

During the monsoon period, the highest mean estimated TSM, i.e. 150.41 ± 26.17 mg L⁻¹, was observed at Fraserganj, and the lowest value of 26.30 ± 2.49 mg L⁻¹ was observed at Babu Ghat. Similarly, the in-situ TSM was also highest at Fraserganj, i.e. 148.96 ± 25.36 mg L⁻¹.

The agreement between ground data and satellite data is significantly viable for larger populations and varied hydrological parameters. This could be bolstered further by a correlation study between the two. A significant correlation was observed between field data and Sentinel-3 data in all seasons along the Hooghly River for both parameters, i.e. Chl-a and TSM (Table 3). All r values for individual seasons were > 0.5, which indicates a significant relationship. This may further be supported by the p values, i.e.
all values ≤ 0.05. This validates the level of accuracy of the field sampling and also underlines the importance of both primary and secondary data in understanding any dynamic system properly. When studied for all seasons, for Chl-a, r = 0.829 and p = 0.000, and for TSM, r = 0.924 and p = 0.000. Both were again significantly supported by the strength of the satellite data and field data variables. Although there is a continuing argument amongst scholars about the reliability of primary field data versus secondary satellite data, this type of comparative study might work as a conduit between the two and will have future applications of interpretation in oceanic research.

The study's findings indicate the need to develop an algorithm specific to the Hooghly estuary, particularly for the C2RCC processor, because the current ones provide sensible reflectance outcomes only for certain chlorophyll and TSM conditions and correlations with in-situ data.

Discussion Of The Study
In the present scenario, the spectral and temporal resolution of OLCI makes it ideal for mapping and collecting water quality parameter information for large-scale coastal waters. The OLCI data have been acquired nearly daily under the Sentinel-3A and Sentinel-3B satellites' ocean monitoring programme. However, image quality is constantly affected by the weather, particularly in cloudy and rainy subtropical coastal regions such as the Sundarban delta. This limits the usefulness of the OLCI images for studying ocean colour, yet this climatic factor is unavoidable in optical remote sensing. To acquire additional ocean colour information in the future, we should explore combining multi-source satellite data.

Furthermore, atmospheric correction is important for retrieving ocean colour. The quality of the atmospheric correction has a substantial effect on the remote sensing inversion outcome. For ocean colour modelling, a good atmospheric correction technique can provide high-quality reflectance data. Despite the fact that ocean colour missions have given us a plethora of ocean colour products, their suitability for use in local environments has not been determined. In this study, we compared the Chl-a and TSM from a correlation matrix-based model built on in-situ measurements with the inherent optical properties (IOPs) product. While the spatial distributions of TSM and Chl-a concentrations from various techniques are usually comparable, the TSM and Chl-a values from each method have vastly different ranges. The primary reason for this is that field measurement data for other ocean colour models have been gathered from different ocean areas throughout the years, while the in-situ measurement data for the correlation matrix-based model were collected from the Hooghly estuary's coastal waters (Fig. 10). Our results show that the Sentinel-3 products are suitable for ocean colour monitoring in the Hooghly estuary of the Sundarban coastal waters, although the correlation matrix-based technique, which has shown positive results in the coastal waters of the Hooghly estuary, still requires testing for spatial application elsewhere. Finally, our approach offers significant reference value for the study of ocean colour in other coastal regions.
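The seasonal match-up statistics quoted above (r and p values) come from straightforward linear regression and correlation. A minimal sketch of that validation step, using toy match-up pairs rather than the study's data, is shown below; scipy reproduces the same r and p statistics reported from Matlab.

```python
# Sketch of the match-up validation: linear regression and Pearson
# correlation between in-situ and OLCI-derived values (the study used
# Matlab; scipy yields the same r and p statistics). Arrays are toy data.
import numpy as np
from scipy.stats import linregress, pearsonr

insitu_tsm = np.array([25.5, 57.9, 69.9, 101.7, 129.2, 148.9])
olci_tsm   = np.array([41.9, 60.2, 75.1,  98.4, 134.6, 150.4])

slope, intercept, r, p, stderr = linregress(insitu_tsm, olci_tsm)
print(f"TSM: r = {r:.3f}, p = {p:.4f}, fit: y = {slope:.2f}x + {intercept:.2f}")

r2, p2 = pearsonr(insitu_tsm, olci_tsm)            # identical r and p
```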
Conclusions And Outlook
The retrieval of the estuarine water quality parameters and its subsequent validation was generally successful, with moderately consistent results (comparatively low bias and scatter) for Chl-a, TSM, salinity, and water temperature. In general, the C2RCC does well in recovering a remote sensing spectrum with a distinctive deltaic shape for estuary waters, with a reflectance peak at 560 nm. The results revealed that the maximum concentration of chlorophyll-a was observed during the post-monsoon period (2.96 mg m⁻³, estimated, and 3.17 mg m⁻³, in-situ) at Sagar Island South. The amount of chlorophyll-a is moderately higher in the continental shelf zone than on the continental slope because of the abundance of phytoplankton; that is why the continental shelf zone is better for fishing activities. The in-situ results show that the total amount of TSM decreased progressively from inland towards the sea over the study period (2017-2020). The amount of TSM is very low at the most upstream sampling point, i.e. Babu Ghat, which showed a mean of 36.76 mg L⁻¹ in-situ and 58.04 mg L⁻¹ estimated TSM across all seasons of our study period. The results have also shown the highest amount of in-situ TSM at Fraserganj during the monsoon in 2019 (174.32 mg L⁻¹), much more in the monsoon period than in the post-monsoon period in the Hooghly estuaries.

The intention of this study was to check the performance of Sentinel-3 OLCI in coastal waters by appraising the results of atmospherically corrected ocean colour products produced by a standard processor (C2RCC), and the results point to very good performance in the test. Very well fitted correlation results for all seasons between Chl-a, r = 0.829, and TSM, r = 0.924, were obtained during the comparisons of OLCI with the in situ results. The high level of correlation highlights the importance of both primary and secondary data in understanding any dynamic system properly. This research could pave the way for future research into the estuarine waters of the Indian Sundarbans, which are difficult to access for laboratory-based studies. Real-time monitoring using OLCI images and the application of different algorithms could bring about a very good change in the environmental monitoring programs of the Indian Sundarbans.

Declarations
Competing Interests: This manuscript has not been published or presented elsewhere in part or in entirety and is not under consideration by another journal. There are no conflicts of interest to declare.

Availability of data and materials: The data that support the findings of this study are available from the author, [Ismail Mondal, ismailmondal58@gmail.com], upon reasonable request.
Funding: This work has been done under the project entitled "Water quality assessment using AVIRIS-NG satellite-derived data along the Hooghly (Ganges) River Estuary, Eastern part of India" [Reference No.: A/F/693/4S-2973/2016], a collaboration between the Space Applications Centre, ISRO, Ahmedabad, and the Department of Marine Science, University of Calcutta, Kolkata, India. The financial support from SAC, ISRO, Ahmedabad, is gratefully acknowledged. The help from the Port Trust Authority of India and the Shipping Corporation of India is duly and gratefully acknowledged for their support during the sampling periods by providing us with means of riverine transportation and assisting personnel. Finally, the authors are indebted to the University Grants Commission, Government of India, for the award of the DS Kothari Post-Doctoral Fellowship (ES/20-21/0009) to Dr. Ismail Mondal, and also to the European Space Agency (ESA) for providing the Sentinel-3 data to further extend our research work.

Table 1: Names of sampling stations from north to south. Table 2: OLCI bands with MERIS heritage bands, with additional bands highlighted in bold (source: ESA). Table 3: Seasonal correlation between Sentinel-3 observed data and field data.
Selectivity of mass extinctions: Patterns, processes, and future directions

A central question in the study of mass extinction is whether these events simply intensify background extinction processes and patterns versus change the driving mechanisms and associated patterns of selectivity. Over the past two decades, aided by the development of new fossil occurrence databases, selectivity patterns associated with mass extinction have become increasingly well quantified and their differences from background patterns established. In general, differences in geographic range matter less during mass extinction than during background intervals, while differences in respiratory and circulatory anatomy that may correlate with tolerance to rapid change in oxygen availability, temperature, and pH show greater evidence of selectivity during mass extinction. The recent expansion of physiological experiments on living representatives of diverse clades and the development of simple, quantitative theories linking temperature and oxygen availability to the extent of viable habitat in the oceans have enabled the use of Earth system models to link geochemical proxy constraints on environmental change with quantitative predictions of the amount and biogeography of habitat loss. Early indications are that the interaction between physiological traits and environmental change can explain substantial proportions of observed extinction selectivity for at least some mass extinction events. A remaining challenge is quantifying the effects of primary extinction resulting from the limits of physiological tolerance versus secondary extinction resulting from the loss of taxa on which a given species depended ecologically. The calibration of physiology-based models to past extinction events will enhance their value in prediction and mitigation efforts related to the current biodiversity crisis.

Introduction
Earth is currently undergoing a biodiversity crisis on a scale unprecedented in the history of the human species (Barnosky et al., 2011; Dirzo et al., 2014; McCauley et al., 2015), but crises of similar or greater magnitude have occurred at least five times across the 600-million-year history of animal life (Figure 1A) (Raup and Sepkoski, 1982; Barnosky et al., 2011). All major mass extinction events are associated with evidence of rapid environmental change. In some cases, such as the end-Permian (252 million years ago [Mya]) and end-Triassic (201 Mya) mass extinctions, there is evidence for rapid and pronounced climate warming (Kiessling and Simpson, 2011; Payne and Clapham, 2012; Blackburn et al., 2013; Burgess et al., 2014; Bond and Sun, 2021). By contrast, the Late Ordovician (443 Mya) and Late Devonian (372 Mya) extinctions occurred in association with climate cooling (Joachimski and Buggisch, 2002; Finnegan et al., 2011).

Figure 1. Extinction patterns in the fossil record. (A) Graph of marine animal diversity across the past 600 million years, illustrating the diversity declines associated with the five major mass extinction events (modified from Raup and Sepkoski, 1982). (B) Extinction selectivity with respect to geographic range, illustrating the preferential survival of broadly distributed genera during background intervals and the greatly reduced selectivity during mass extinction events (modified from Payne and Finnegan, 2007).
(C) Principal components analysis of logistic regression coefficients of ecological traits and body size selectivity of the Big Five mass extinction events and the modern oceans, demonstrating the unique selectivity of the modern extinction threat (modified from Payne et al., 2016b). (D) Extinction selectivity during the end-Permian mass extinction, illustrating the preferential extinction of heavily calcified marine animal classes with less complex respiratory and circulatory systems (modified from Knoll et al., 2007; Knoll and Fischer, 2011). (E) Extinction selectivity with respect to body size for major classes of marine animals, illustrating the general bias of background extinction against smaller-bodied genera versus the variable direction of selectivity for classes that exhibit distinct patterns during mass extinction (modified from Monarrez et al., 2021).

The end-Cretaceous extinction (66 Mya) was associated with an asteroid impact event whose aftermath resembled the consequences of a hypothetical global thermonuclear war (Turco et al., 1983). Due to the magnitude and global scale of the current "Sixth" extinction, these events from Earth's past provide historical reference points for predicting the long-term magnitude, ecological impact, and recovery timescale from the current crisis or other, potential, human-mediated catastrophes.

Extinction selectivity provides our most direct evidence of proximal kill mechanisms (Raup, 1986), but to date, most testing of observed extinction patterns against hypothesized kill mechanisms has been semi-quantitative, focused on establishing consistency between predicted and observed directions of selectivity under various hypothesized kill mechanisms. Recently, advances in paleontological databases, geochemical proxies, physiological experiments, and Earth system and ecosystem models have enabled the comparison of observed and predicted extinction patterns within quantitative, self-consistent frameworks (Figure 2) (Penn et al., 2018). Although quantitative model-data comparison between observed and predicted extinction patterns is still in its early days, the door for direct comparison of past and future biotic response to climate change is now open, increasing the value of the fossil record in the mitigation of the current biotic crisis.

Pattern
Analyses of selectivity for individual mass extinction events date back many decades (Jablonski, 2005). Studies synthesizing and comparing selectivity patterns across all major mass extinctions (and intervening background intervals) have emerged more recently, alongside publicly available databases of fossil occurrences and other traits (Alroy, 1999; Payne and Finnegan, 2007; Peters, 2008; Kiessling and Simpson, 2011; Payne et al., 2016b; Smith et al., 2018; Payne and Heim, 2020; Monarrez et al., 2021). Geographic range is one of the traits most commonly hypothesized to correlate with extinction risk due to its influence on the extent to which populations of a given taxon may avoid a regional disturbance or have broad enough physiological tolerance limits or ecological capacities to survive a global one. Analyses of fossil data have confirmed that widely distributed taxa survive preferentially during background intervals (Figure 1C) (Jablonski, 1986, 2005; Payne and Finnegan, 2007).
Broader geographic range is also significantly associated with survival during at least some major mass extinction events (Jablonski and Raup, 1995; Finnegan et al., 2016), but the strength of this association (i.e., the change in odds or probability of extinction per unit change in geographic range) is greatly reduced relative to background intervals (Figure 1C) (Kiessling and Aberhan, 2007; Payne and Finnegan, 2007). Due to the consistency of the association and the expectation of selectivity on total geographic range under most extinction scenarios, these patterns have rarely yielded direct insight into kill mechanisms. By contrast, the biogeography of extinction can be more informative. For example, end-Cretaceous echinoid extinction was significantly more severe in areas proximal to the Chicxulub impact site (Smith and Jeffery, 1998), and differences in extinction intensity across latitude often correspond with expectations due to climate change (Finnegan et al., 2012; Penn et al., 2018; Reddin et al., 2019, 2021). Quantifying the expected magnitude of spatial gradients in extinction intensity and differences in such gradients across higher taxa (or functional groupings) is the key to linking these findings with hypothesized kill mechanisms, and one that is already being partially realized (Penn et al., 2018).

The extinctions of large mammals during the Pleistocene (0.0117 Ma) and of large, non-avian dinosaurs during the Maastrichtian (66 Ma) have long prompted speculation that large-bodied animals are at systematically higher risk of extinction during times of environmental change (Wallace, 1889; Raup, 1986; Brown, 1995). Analyses of the fossil record reveal a more heterogeneous relationship, and one that may differ across taxa and habitats. For example, smaller body size is generally associated with greater extinction risk during background times for many classes of marine animals (Figure 1D) (Payne and Heim, 2020; Monarrez et al., 2021). By contrast, body size was not generally associated with extinction probability for terrestrial mammals until the Pleistocene (Alroy, 1999; Smith et al., 2018). End-Cretaceous extinctions preferentially eliminated larger-bodied fish, lizards, and snakes (Friedman, 2009; Longrich et al., 2012) but were unbiased in bivalves and gastropods (Jablonski and Raup, 1995). End-Permian extinctions preferentially affected larger foraminifera and brachiopods (Schaal et al., 2016). Many taxon-size combinations have yet to be examined systematically. In marine animals, size selectivity changes between background and mass extinction in many classes, but the direction and magnitude of the size bias during mass extinction differ among classes (Figure 1D) (Payne and Heim, 2020; Monarrez et al., 2021). The differences in responses among classes remain to be explained. Because body size correlates with many ecological and physiological traits (Peters, 1983), size bias on its own is insufficient to diagnose proximal kill mechanisms but may be useful in conjunction with other traits or in testing against predictions of specific kill mechanisms.
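The trait-based selectivity analyses cited above typically rest on logistic regression, with coefficients read as changes in the log-odds of extinction per unit change in a trait. The sketch below fits such a model to synthetic genus-level data (the coefficients and sample size are invented for illustration), mimicking a background-style regime in which broad geographic range is strongly protective.

```python
# Sketch of trait-based selectivity analysis: logistic regression of
# extinction (1) vs. survival (0) on geographic range and body size, with
# coefficients read as log-odds per unit trait change. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
log_range = rng.normal(0, 1, n)                    # standardized log range
log_size = rng.normal(0, 1, n)
# Background-style regime: broad range strongly reduces extinction odds
logit = -0.5 - 1.2 * log_range - 0.3 * log_size
extinct = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([log_range, log_size]))
fit = sm.Logit(extinct.astype(int), X).fit(disp=0)
print(fit.params)                                  # log-odds coefficients
print(np.exp(fit.params))                          # odds ratios
```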
Some mass extinction events exhibit selectivity patterns that can be mapped onto respiratory and circulatory anatomy, potentially reflecting underlying differences in susceptibility to metabolic stress from hypercapnia, anoxia, climate warming, or their interactive effects. For example, the end-Permian mass extinction preferentially affected heavily calcified marine animal genera with limited respiratory and circulatory systems (Figure 1B), suggesting a role for hypercapnia and/or direct and indirect fitness effects of acidification on shell dissolution (Calosi et al., 2017) in driving the extinction (Knoll et al., 1996). At the same time, the lack of sophisticated oxygen-supply mechanisms would also make these taxa more sensitive to temperature-dependent hypoxia (Deutsch et al., 2020; Endress et al., 2022), and metabolic differences among groups likely influence taxonomic selectivity patterns from changes in CO2, temperature, and O2. Patterns similar to those seen in the end-Permian apply to other extinction events, including the end-Triassic mass extinction (Kiessling and Simpson, 2011; Clapham, 2017), consistent with shared kill mechanisms. By contrast, the end-Cretaceous mass extinction exhibits the opposite pattern, with taxa thought to be more sensitive to ocean acidification surviving preferentially (Kiessling and Simpson, 2011), potentially reflecting differences in extinction patterns triggered primarily by volcanism versus impact events. The extent to which these patterns stand out from background extinction remains incompletely studied. A study controlling for differences between benthic versus planktonic and nektonic taxa indicates that many background intervals show the same selectivity, often of similar magnitude (Payne et al., 2016a). As discussed below, results of physiological experiments on living relatives of species in the fossil record are enabling quantitative prediction of biological responses to past environmental changes inferred from geological and geochemical proxies. This is currently an area of rapid progress.

Simultaneous analysis of extinction selectivity across multiple traits and time intervals enables quantitative comparison of selectivity patterns between background and mass extinction as well as among mass extinction events (Figure 1E). Such analyses generally confirm that mass extinction events differ in selectivity from background patterns (Figure 1C, E) (Payne and Finnegan, 2007; Kiessling and Simpson, 2011; Finnegan et al., 2012; Payne et al., 2016b; Monarrez et al., 2021) and that the pronounced size bias of the modern extinction makes it an outlier relative to major mass extinctions as well as to recent background intervals (Figure 1D) (Payne et al., 2016b; Smith et al., 2018). Overall, selectivity patterns accord with geological and geochemical data, indicating that mass extinction events are typically associated with large and rapid environmental perturbations rather than intensification of background extinction processes (Alvarez et al., 1980; Hallam and Wignall, 1997; Finnegan et al., 2011). Testing hypothesized kill mechanisms requires simultaneous consideration of selectivity across multiple variables because physiological and ecological traits are often linked in complex ways. For example, body size is related to the supply and demand of oxygen (Deutsch et al., 2015) and food (Gearty et al., 2018), as well as to trophic level (Romanuk et al., 2011).

Introduction
Understanding the causes of extinction selectivity in the fossil record requires additional information about the patterns of environmental change, the sensitivity of species to those changes, and disruptions in ecological networks.
The interpretation of extinction selectivity thus relies on geochemical reconstructions of climate, understanding of the ecological and physiological traits of living taxa and, increasingly, on models that incorporate all these aspects of ecological and Earth system dynamics into an internally consistent, quantitative framework (Figure 2). Patterns of extinction selectivity can arise simply from the fact that environmental changes can be highly variable in strength or even direction across space. Extinction selectivity could also arise from taxonomic or geographic differences in physiological sensitivity to environmental change, even if climate trends were globally uniform. In general, these factors are likely to be connected, as the tolerance limits of taxa to environmental conditions will shape the pre-extinction geographic distribution, which may confer greater or lesser sensitivity to environmental change in certain regions. Contemporary studies have advanced a mechanistic approach to investigating the causes of selectivity in mass extinctions by integrating many of these elements, from geochemical proxies of climate change, the modern diversity of ecophysiological traits, and the climate dynamics of Earth system models. In ocean studies, Earth system model (ESM) emphasis has been on integrating climate and physiological constraints (Penn et al., 2018; Stockey et al., 2021). Terrestrial studies, by contrast, have tended to focus on ecological (food web) mechanisms largely missing from marine analyses (Roopnarine, 2006; Roopnarine and Angielczyk, 2015). These dichotomous approaches have made significant advances in their respective domains, paving the way for more unified marine and terrestrial studies.

Figure 2. Workflow illustrating the use of geological and geochemical data to constrain Earth system models (ESMs), physiological experiments to constrain parameters used to populate models with species of different ecophysiotypes, and fossil occurrence data to conduct model-data comparison. Ecosystem structure remains to be incorporated into such models and can be used to predict extinction cascades. Calibration of models against selectivity patterns in ancient extinction events will improve their use in forecasting biotic response to current and future environmental change. Panels on the right showing CO2 emissions curves and future biodiversity projections are from Penn and Deutsch (2022).

Example: Metabolic Index
One promising avenue for examining physiological kill mechanisms for ancient extinction events is the Metabolic Index, which was initially developed to test whether the biogeographic distributions of species are physiologically limited by O2 supply and demand in the modern ocean (Deutsch et al., 2015). This ecophysiological model quantifies habitat viability for a species, in terms of its ability to carry out aerobic respiration, by taking a ratio of environmental oxygen supply to biological oxygen demand as a function of temperature and taxon-specific metabolic and O2 supply traits (Eq. (1)). The metabolic energy demands of water-breathing marine animals increase with water temperature and body size (Gillooly et al., 2001), raising corresponding biological O2 requirements. Temperature and body size also impact the rates of organismal O2 supply through diffusion, ventilation, and internal circulation (Endress et al., 2022), while warmer water holds less ambient O2.
The ratio of the temperature- and body size (B)-dependent rates of potential O2 supply and organismal metabolic demand, termed the Metabolic Index (ɸ), quantifies the metabolic viability of a habitat for a given species:

ɸ = A0 · B^ε · pO2 · exp[(E0/kB)(1/T − 1/Tref)]   (1)

where A0 (atm⁻¹) is the ratio of O2 supply to resting demand rate coefficients, or hypoxia tolerance at a reference temperature (Tref) and body size (B), with allometric scaling exponent ε and Arrhenius temperature sensitivity E0 (eV), kB is the Boltzmann constant, and pO2 and T are the oxygen partial pressure and temperature of ambient water, respectively (Figure 3) (Deutsch et al., 2015). These physiological traits and their distributions across taxa can be estimated from critical oxygen thresholds in respirometry experiments conducted for diverse marine biota over the past half century (Rogers et al., 2016; Chu and Gale, 2017). Critical oxygen thresholds define the Metabolic Index to be 1 (i.e., ɸ = 1), allowing the traits to be estimated for organisms in a resting state under laboratory conditions. In the environment, O2 requirements are elevated by more strenuous activities important for population persistence, such as growth, reproduction, feeding, defense, or motion. These additional energy demands require the O2 supply to be raised by a factor, ɸcrit, corresponding to sustained metabolic scope (Peterson et al., 1990). Stable aerobic habitat barriers thus arise in ocean regions where the Metabolic Index falls below ɸcrit, while the geographic positions of these barriers depend on the species' traits. The habitability of any given parcel of water can therefore be determined from the temperature and oxygen partial pressure, given the species values of A0, E0, and ɸcrit. Earth system models can be populated with hypothetical species by drawing combinations of values from the trait distributions (Figure 3). The promise of this framework for paleontological application is that trait distributions can be used to predict patterns of biodiversity, providing a means for testing the model against the fossil record. Indeed, the observed tropical dip in marine species richness for diverse animal groups in the modern ocean (Chaudhary et al., 2021) can be explained by the aerobic habitat limitation implied by modern species' Metabolic Index traits. Environmental temperature and oxygen concentration can be quantified using geochemical proxies for ancient events to calibrate Earth system models, and body size can be measured from fossil specimens. In principle, ecological interactions can be further incorporated into the model, allowing extinction cascades to be accounted for alongside direct, climate-driven habitat loss (Figure 4). During periods of climate warming, rising water temperatures can drive the metabolic O2 demand above a supply declining from ocean deoxygenation, leading to the loss of available aerobic habitat and eventually to species extinctions at local and global scales (Penn et al., 2018; Reddin et al., 2020). At regional scales, such as in the California Current System, aerobic habitat changes have been linked to multi-decadal fluctuations of anchovy populations, including near-extirpation of larvae from portions of their range (Howard et al., 2020). At global scales, aerobic habitat loss under the climate change simulated for the end-Permian mass extinction predicted a geographic selectivity of extinction consistent with the fossil record (Figure 5A): extinction risk was greater for species inhabiting higher latitudes.
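A minimal numerical sketch of Eq. (1) follows. The trait values (A0, E0, ε), the reference temperature, and the ɸcrit used here are illustrative placeholders rather than calibrated values for any real species; the example simply shows how warming pushes ɸ below the habitability threshold for a species with positive E0.

```python
# A minimal sketch of the Metabolic Index as defined in Eq. (1). All trait
# values below are illustrative, not calibrated to any species.
import numpy as np

KB = 8.617e-5          # Boltzmann constant, eV/K

def metabolic_index(po2_atm, t_celsius, a0=25.0, e0=0.4, eps=-0.1,
                    b=1.0, t_ref_celsius=15.0):
    """Phi = A0 * B**eps * pO2 * exp[(E0/kB) * (1/T - 1/Tref)]."""
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    return a0 * b**eps * po2_atm * np.exp((e0 / KB) * (1.0 / t - 1.0 / t_ref))

phi_crit = 3.0                      # sustained metabolic scope factor
for t_c, po2 in [(0, 0.21), (15, 0.20), (30, 0.18)]:
    phi = metabolic_index(po2, t_c)
    print(f"T={t_c} C, pO2={po2} atm -> Phi={phi:.2f}, "
          f"habitable={phi >= phi_crit}")
```

With these placeholder traits, the warm, slightly deoxygenated case falls below ɸcrit while the cooler cases remain habitable, mirroring the warming-driven habitat loss described in the text.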
This predicted geographic selectivity arises because species previously occupying the tropics would already have been adapted to warm, low-O2 conditions that became more widespread, whereas polar habitat niches disappeared more completely (Penn et al., 2018). In contrast to the geographic selectivity predicted for warming, periods of global cooling, such as during the Late Ordovician, are expected to generate extinctions focused on the low latitudes (Saupe et al., 2020), consistent with the patterns observed for that mass extinction (Finnegan et al., 2012); these may also occur through aerobic habitat loss if accompanied by deoxygenation (Finnegan et al., 2016) or through declining hypoxia tolerance in cold water in species with thermal optima (Boag et al., 2018; Endress et al., 2022). Aerobic habitat loss is also predicted to select against large-bodied species, with strong variability within size classes that depends on a species' temperature sensitivity. Extinctions driven by aerobic habitat loss may also explain the amplified background extinction rates observed for the early Phanerozoic, because of dramatically lower atmospheric O2 levels and thus species living closer to their ecophysiological limits (Stockey et al., 2021). Trait adaptation to different past climate states (Bennett et al., 2021) has the potential to buffer or amplify predicted extinction risks. The role of differences in ecophysiological traits across taxonomic groups in explaining observed patterns of extinction selectivity across higher taxa (Knoll et al., 1996, 2007) remains an open area of research. Primary extinctions driven by the loss of aerobic habitat have the potential to be amplified by secondary extinctions arising from food web effects (Figure 4) or co-occurring environmental stressors that exacerbate direct aerobic habitat loss (Figure 5J-O). Aerobically tolerant species could still be lost if they are ecologically tied to vulnerable ones, for example, through the food web (Figure 4) or other critical interactions. Ocean acidification (Figure 5M-O) has the potential to further deplete aerobic habitat through direct CO2 effects on critical oxygen thresholds, but the magnitude and direction of this effect is uncertain and variable across the limited available experimental studies (Figure 3E) (Rosa et al., 2013; Lefevre et al., 2015). On its own, the magnitude of primary extinction from climate warming and associated physiological stresses depends on the amount of habitat loss beyond which a species can no longer sustain a viable population (i.e., the extinction threshold) (Urban, 2015; Penn et al., 2018; Penn and Deutsch, 2022), even if population decline takes a long time to occur.

Figure 4. Hypothetical progression of a mass extinction highlighting sources of trait-based and geographic selectivity and potential ecological amplification. (A) An initial distribution of species (or "ecophysiotypes") defined by traits under selection by large-scale environmental conditions will likely result in systematic correlations between traits and geographic range. The range metric here can be considered overall range size (area and volume), or centroid (e.g., low-latitude versus high-latitude, shallow versus deep). (B) The initial biota are subjected to climate perturbation that poses a direct stress through a reduction in fitness whose magnitude depends on species traits and on local climate trends.
The resulting change in available habitat (ΔH; contours) presents an ecophysiological extinction risk that is geographically selective because it is trait selective (but may also be caused by climate patterns themselves). In this hypothetical case, habitat loss (ΔH < 0) selects against species with high values of two traits (habitat "Losers") and may even benefit species with low values of those traits (habitat "Gainers"; ΔH > 0). (C) Physiological extinction poses further ecological risks (or advantages) depending on the mutualistic or adversarial interactions with ecophysiotypes (nodes in graph) that are under trait-selective risk. Ecological risk is complex and for any particular species will depend on the physiological risk faced by the other species with which it interacts, which may be positive (green lines) or negative (brown lines), and strong (thick lines) or weak (thin lines). The results of these associations, which may be multiple and indirect, could alter extinction risk by either preserving ecological fitness ("+" symbol) or reducing it ("−" symbol). Changes in extinction risk are likely to be most pronounced for those in the neutral zone whose antagonists go extinct or who are buoyed by prey/mutualists that are under positive selection. (D) Post-extinction ecosystem, equal to the initial one (A) minus the ecotypes that have gone extinct from either primary (B) or secondary (C) effects.

From the caption of Figure 3: For species in a resting state, the aerobic habitat limit occurs when ɸ = 1, but in the environment a species' activity level or sustained metabolic scope (SMS) elevates the habitat limit to ɸcrit. For species with negative E0, aerobic habitat availability increases with temperature, whereas for those with positive E0 (i.e., most species; panel B), aerobic habitat declines with warming. A change in pO2 has the potential to lower aerobic habitat availability, and thus the amount of warming a species can withstand, as exemplified for two scenarios with different fractions of present atmospheric levels of O2 (PAL; yellow dots and arrows). A change in CO2 also has the potential to alter hypoxia tolerance, but the magnitude and direction of this effect is unknown across marine biota and is illustrated here from experimental data for a single species under ΔpH = +0.5 (Rosa et al., 2013). Arrows in A-C denote species traits in D and E.

Extinction thresholds may vary across species, but the average value at the global ecosystem level has been estimated by comparing end-Permian model simulations to the fossil record, assuming that a loss of habitat similar to the one that drove extinctions in the past would apply in the modern ocean. Calibration of this parameter from the fossil record has recently been used to project future extinction risk from climate changes resembling those of the end-Permian, which are arising today due to accelerating anthropogenic greenhouse gas emissions (Figure 5).

Example: Food webs

Terrestrial paleo-community dynamics are usually modeled according to trophic ecology and body size to investigate the role of food-web topology in the propagation of disruptions caused by environmental change.
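As a minimal illustration of how such disruptions can propagate, the sketch below simulates a bottom-up cascade on a randomly generated toy food web in R, assuming a consumer goes secondarily extinct once all of its prey are lost. This is a deliberately simplified stand-in for illustration only, not the CEG models cited in the surrounding text.

```r
# Toy bottom-up extinction cascade on a random food web. Assumption: a
# consumer goes secondarily extinct once all of its prey are extinct.
set.seed(1)
S <- 40                          # number of species
basal <- 1:8                     # primary producers (no prey)
consumers <- setdiff(1:S, basal)
prey <- vector("list", S)
for (i in consumers) {
  # each consumer feeds on up to three lower-indexed species
  prey[[i]] <- sample(1:(i - 1), size = min(3, i - 1))
}

cascade <- function(primary_losses) {
  extinct <- rep(FALSE, S)
  extinct[primary_losses] <- TRUE
  repeat {
    newly <- vapply(consumers,
                    function(i) !extinct[i] && all(extinct[prey[[i]]]),
                    logical(1))
    if (!any(newly)) break
    extinct[consumers[newly]] <- TRUE   # secondary extinctions propagate up
  }
  extinct
}

# Primary extinction of half of the producers:
out <- cascade(primary_losses = basal[1:4])
sum(out) - 4   # count of secondary extinctions
```

Even this toy version shows the key behavior discussed next: the extent of the cascade depends on where primary losses strike and on how redundant the links are.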
Models of extinction cascades suggest that responses can be complex, resulting from both bottom-up and top-down effects (Kaneryd et al., 2012), with debate about whether simple or complex communities are more susceptible to such cascades and whether trophic versus other ecological interactions are most important (Eklöf and Ebenman, 2006; Donohue et al., 2017). Explicit consideration of extinction cascades during mass extinctions has generally focused on the consequences of collapse in primary production (Tappan, 1968; Vermeij, 2004). Bottom-up models predict extinction of smaller-bodied species in both the marine and terrestrial realms, due to the correlation of body size with trophic level, and exacerbated paleo-community instability post-extinction, consistent with investigations of selectivity patterns in relation to body size (Dunne et al., 2002; Roopnarine, 2006; Roopnarine et al., 2007; Dunne and Williams, 2009; de Visser et al., 2011; Lotze et al., 2011). Interestingly, the end-Cretaceous mass extinction, for which we have the strongest evidence for collapse of primary production, is associated with preferential extinction of larger-bodied species in some clades (Friedman, 2009; Longrich et al., 2012) but not with the preferential extinction of smaller-bodied species, suggesting that physiology or other ecological factors (including top-down extinction cascades) were important in determining survivorship. Two challenges remain in the modeling of extinction via networks of ecological interactions. First, evidence that "primary" extinctions may often occur via environmental change that exceeds the physiological tolerance limits of species at many positions in the food web creates a need for further investigation of how food webs respond to such losses. Are extinction cascades more, or less, extensive when driven by primary extinctions occurring simultaneously at multiple trophic levels? Second, there is the challenge of integrating physiological and ecological models such that the full response of the marine or terrestrial ecosystem could be predicted in an integrated manner: from the modeling of climate change, to the loss of species that cannot physiologically tolerate the modified world, to the loss of species that depended on ecological interactions with species lost via primary extinctions (Figure 4). Differences in the timescale and level of biological organization at which physiological and ecological processes dominate add to this challenge.

Application to the sixth extinction

Mass extinction events provide our best source of information regarding the response of the biosphere to planetary-scale environmental disruption and the timescales and mechanisms of subsequent recovery. This information may be particularly important for the oceans, where observing biological response to environmental change is challenging and where the fossil record is particularly complete and diverse. Since the industrial revolution, the oceans have experienced substantial changes in ocean biogeochemistry, mainly because of rapid injection of CO2 into the atmosphere from anthropogenic sources. Under the accelerating future anthropogenic emissions scenario consistent with historical trends (Figure 5C), the oceans are expected to warm by 4-5°C and pH is expected to decrease, on average, by 0.44 pH units by the end of the 21st century, with changes increasing even further over the next few centuries (Figure 5E, N) (Kwiatkowski et al., 2020).
High temperatures are also expected to reduce the ocean's oxygen content while altering nutrient cycles (Sweetman et al., 2017). Unabated anthropogenic emissions could drive the oceans toward widespread oxygen deficiency over the rest of the 21st century and beyond (Figure 5H) (Breitburg et al., 2018). Such changes would have drastic consequences for marine ecosystems, as evident from declining fish stocks, expansion of marine dead zones, and reduced primary productivity across different parts of the globe (Figure 5K) (Blanchard et al., 2012). Efforts are already underway to project changes in species' ranges and abundances in response to climate change on land and in the oceans (Thuiller, 2004; Cheung et al., 2009; Chen et al., 2011; Pinsky et al., 2020). Extrapolating results from experiments and field observations over days or years to timescales of centuries, millennia, and beyond is challenging because different processes may dominate the biospheric response on different timescales, although there is emerging evidence that responses to some stresses are concordant across timescales (Reddin et al., 2020). Furthermore, the primary phase of extinction, dominated by physiology, may give way over time to a secondary phase of extinction, dominated by the effects of changing ecological interactions. Connecting the physiological and ecological processes driving extinction remains a research frontier. Studies from the fossil record show that the ecophysiological constraints on marine taxa due to global warming and ocean deoxygenation will play a key role in determining their risk of extinction under current and future emissions scenarios. The fossil record can even be used to calibrate the Earth system models used to predict future extinctions and changes in geographic range, just as paleoclimate records are used to calibrate models providing climate projections (Zhu et al., 2022). Under a high emissions scenario (Figure 5C), marine biological richness could be reduced to 65% of its current state by 2300 due to global warming and oxygen loss from the oceans. The combined climate-ecophysiological models indicate that the local loss of species is expected to be highest in tropical to temperate regions, where taxa are expected to undergo a significant loss of aerobic habitat at their warm/low-O2 range boundaries. In contrast, in terms of global habitat loss and extinction risk, equatorial taxa are expected to fare better overall in low-oxygen and warmer oceans compared to polar species, due to their higher tolerance of warm climates and their opportunities to expand their available habitats as the poles become more like the present-day tropics. This scenario has precedent in the fossil record with the end-Permian mass extinction, where a similar latitudinal extinction pattern unfolded (Figure 5A, B) (Penn et al., 2018; Reddin et al., 2019). Further work to integrate the effects of changes in pH, pCO2, salinity, and other key environmental variables into physiological performance models has the potential to make these models more general and accurate in reconstructing the causes of past extinction and predicting the consequences of future global change. The ecological functions disrupted by global warming and marine defaunation are also bound to have cascading effects which could lead to extinction of vulnerable taxa. Modeling such effects is challenging due to the complexity of the interactions involved.
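Setting ecological cascades aside, the threshold-based logic behind such primary-extinction projections can be sketched in a few lines of R; the habitat-loss distribution and threshold value below are illustrative assumptions, not the end-Permian-calibrated values discussed above.

```r
# Sketch of the extinction-threshold logic: species whose fractional loss
# of aerobic habitat exceeds a global threshold are scored extinct.
# Both the loss distribution and the threshold are illustrative.
set.seed(2)
n_species <- 1000
habitat_loss <- pmin(pmax(rnorm(n_species, mean = 0.4, sd = 0.25), 0), 1)
threshold <- 0.7   # assumed habitat loss beyond which a population is unviable

extinct <- habitat_loss > threshold
1 - mean(extinct)  # fraction of species richness surviving this scenario
```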
The fossil record is our only source of data on the effects of major environmental disturbance at a global scale. Fortunately, calibration of environmental change to physiologically expected extinction is becoming possible due to parallel advances in geochemistry, Earth system modeling, and physiological experimentation. The next decade will require the integration of food webs and other types of ecosystem models to extract the full value of the lessons from Earth's past in forecasting and guiding its future.

Open peer review. To view the open peer review materials for this article, please visit http://doi.org/10.1017/ext.2023.10.

Data availability statement. No data were collected or analyzed as part of this review paper.
The role of the Bali Province village community development service in increasing village potential

INTRODUCTION

Many studies have investigated the role of government in advancing the role of villages (T. S. Maharani et al., 2022; Pradnyani, 2019; Saleha et al., 2022; Solihah, 2021). However, there are still gaps that must be filled to improve the quality of related studies. Previous studies recruited Pok Darwis, Bumdes, and Karang Taruna to develop village potential. Therefore, the current study investigates the role of the Bali Province Village Community Development Service in increasing village potential. Increasing village potential is very important for developing employment opportunities and improving the quality of community income and local original income (F. G. Maharani & Malau, 2022). Increasing the potential of a village is considered successful if it improves the economy and welfare of the surrounding community (Amantha, 2021).

Based on Village Law Number 6 of 2014, village governments have autonomous rights and broad authority in regulating and administering their own government. The Village Government is therefore the level of government in direct contact with the community, the one that knows the social problems occurring among village communities. A village is a legal community unit that has territorial boundaries and the authority to regulate and manage the interests of its own community based on rights of origin and customs, in accordance with the initiative and dignity of the village, as regulated in Village Law Number 6 of 2014, Article 1, paragraph 1. Under these laws and regulations, villages have autonomous rights and the authority to regulate and administer their own areas (Liu et al., 2022).

Meanwhile, in Bali there are two village governments: the traditional village and the official village. Sugiantiningsih et al. (2019) note that the Bali Provincial Government has issued a very strategic policy by establishing Bali Province Regional Regulation Number 4 of 2019 concerning Traditional Villages in Bali. The duties of the Traditional Village are set out in the regional regulation as organizing, managing, and overseeing the implementation of the parahyangan, pawongan, and palemahan of the Traditional Village. This regional regulation is a concrete implementation of the Bali regional development vision "Nangun Sat Kerthi Loka Bali". Sad Kerthi is a Hindu concept of environmental preservation comprising six efforts: Atma Kerthi, the effort to purify the soul (atma); Segara Kerthi, the effort to preserve the ocean as a natural resource; Wana Kerthi, the effort to preserve forests; Danu Kerthi, the effort to preserve fresh water sources; Jagat Kerthi, the effort to preserve the harmony of dynamic and productive social relations based on truth; and Jana Kerthi, the effort to build individual human quality through a universal development pattern, planning towards a new era of Bali.
Bali Province Regional Regulation Number 4 of 2019 is a comprehensive basic legal guideline on the existence of Traditional Villages in Bali, and it gives strong authority to Traditional Villages. With the enactment of this regional regulation, all Traditional Villages in Bali must implement the contents of Regional Regulation Number 4 of 2019 concerning Traditional Villages. In general, the regulation has been in effect and is running in accordance with its contents; one example of its implementation is the formation of protective and higher institutions for Traditional Villages, called Traditional Village Councils, at the province, district, and sub-district levels (Abels et al., 2019).

Another is the formation of an operational regional apparatus (OPD) that is fully responsible for the administration and finances of Traditional Villages in the Bali Provincial government (Priono et al., 2019; Zeho et al., 2020). The regional apparatus gives Traditional Villages the authority to determine and appoint village heads (kelian, ulun village, or other names) and prajuru in accordance with chapter 6 of this regional regulation, by means of deliberation and consensus in sangkepan/paruman, confirmed by decree. Apart from that, the government has also issued supporting regulations on the implementation of Traditional Village governance (Sun et al., 2021). Meanwhile, implementation in old traditional villages depends on the conditions of the traditional village (Das, 2019).

Based on Bali Governor Regulation Number 2 of 2021 concerning Amendments to Bali Governor Regulation Number 58 of 2019 concerning the Position, Organizational Structure, Duties and Functions, as well as Work Procedures of Regional Apparatus within the Bali Provincial government, the Bali Province Indigenous Community Advancement Service has the task of assisting the Governor in carrying out government affairs in the area of indigenous community advancement, which is a regional authority, as well as carrying out deconcentration tasks until the Governor's Secretariat is established as a Representative of the Central Government, and carrying out assistance tasks according to its field of duties (Lowe et al., 2019).

The authority of the Department for the Advancement of Indigenous Peoples in developing community advancement accords with its functions: formulating technical policies for the advancement of indigenous communities, which is the authority of the Province; implementing policies for the advancement of indigenous communities, which is the authority of the Province; administering the Service; coordinating and facilitating the implementation of the activities of the Traditional Village Council (MDA); carrying out evaluation and reporting for the Department; and carrying out other functions assigned by the Governor related to its duties and functions. Thus, this study investigates two questions: What is the role of the Bali Province Indigenous Community Advancement Service in increasing village potential in Denpasar City? And what are the supporting and inhibiting factors for the Bali Province Indigenous Community Advancement Service in increasing village potential in Denpasar City?
METHOD

This research uses a qualitative descriptive research method. Satori (2011) stated that qualitative research is carried out when researchers want to explore phenomena that cannot be quantified and are descriptive in nature, such as the process of a work step, the formula of a recipe, the meanings of various concepts, the characteristics of goods and services, images, styles, cultural procedures, physical models of artifacts, and so on.

An informant must really know, or be an actor directly involved with, the research problem. An informant must be chosen for competence, not merely for availability (Barkhuizen, 2007). In this research, the informants who served as data sources were the Head of the Bali Province Indigenous Community Advancement Service, the Traditional Village Head of Pakraman Village, and the Bendesa of the Denpasar City MDA. Researchers used three data collection methods: observation, in-depth interviews, and documentation (Sugiyono, 2017). This research uses passive participant observation: the researcher comes to the location of the activity of the person being observed but is not involved in the activity. After making observations, the researcher conducted in-depth interviews with the informants, because this type of interview allows more freedom to explore the informant's information, so the information obtained is deeper (Sugiyono, 2001). Documentation was applied before the data analysis steps. While the data analysis process took place, the researcher simultaneously drafted the research report in the field, so that data felt to be lacking could be filled in immediately, and on leaving the field (research site) the draft was refined again so that the report was complete. Thus, the data analysis process covers analysis before, during, and after entering the field.

RESULT AND DISCUSSION

This research found that the Bali Province Indigenous Community Advancement Service has several roles in increasing village potential in Denpasar City (welfare, access, participation, and control).

Well-Being

Based on the researcher's interview on July 16, 2022 with I Gusti Agung Ketut Kartika Jaya Seputra, SH, MH, the Head of the Bali Province Indigenous Community Advancement Service, concerning the welfare of employees at the Service, prosperity is felt. This is because the performance allowance paid accords with the performance, implementation of duties, and attendance of civil servants within the Bali Province Indigenous Community Advancement Service. As for its duties, the PMA Service does not manage indigenous communities directly, but is tasked with facilitating, providing assistance, and giving guidance to Traditional Villages, with the aim of achieving the kasukretan of Traditional Villages (Veloutsou & Black, 2020). Additionally, the researcher interviewed informant A.A. Ngurah Rai Sudharma, SH, MH,
the Traditional Village Head of Pakraman Village, Denpasar City, regarding the welfare of employees and elements of the PMA Service, as well as the community in Traditional Villages throughout Denpasar City and even throughout Bali. According to him, the existence of the Bali Province Indigenous Community Advancement Service makes it possible to manage indigenous communities and realize the welfare of indigenous communities in Bali. This is supported by the regulations under which the PMA Service was formed: based on the mandate of Article 96 paragraphs (1) and (2) of Bali Regional Regulation 4/2019, through Bali Provincial Regulation Number 7 of 2019 concerning Amendments to Bali Provincial Regulation Number 10 of 2016 concerning the Formation and Structure of Regional Apparatus, as well as Bali Governor's Regulation Number 2 of 2021 concerning Amendments to Governor's Regulation Number 58 of 2019 concerning the Position, Organizational Structure, Duties and Functions and Work Procedures of Regional Apparatus within the Bali Provincial Government, the Regional Apparatus in charge of the advancement of indigenous communities was established, namely the Bali Province Indigenous Community Advancement Service. Based on Bali Governor Regulation Number 2 of 2021, the main task of the Bali Province PMA Service is to assist the Governor in administering Regional Government affairs in the field of Indigenous Community Advancement. As the main key in the advancement of indigenous communities, it is appropriate for the Service to improve its own welfare first, before the welfare of the community (Yuesti et al., 2020).

Additionally, in the researcher's interview with Dr. Drs. A.A. Ketut Sudiana, S.H., A.Ma., M.H., the Bendesa of the Denpasar City MDA, he stated that prosperity begins with the implementation of indigenous community welfare programs at the Bali Province Indigenous Community Advancement Service, not with simply handing out salaries or wages to the community, which has implications for public indulgence (Spenkuch et al., 2023). The key is to be rich in ideas and programs, knowing what needs to be done and how to gain support through the PMA Service (Chanana & Sangeeta, 2021). One activity program that strongly supports the welfare of traditional communities is the SIKUAT application, which aims to increase the efficiency and effectiveness of traditional village administration in terms of transparent and accountable traditional village financial management (Ansell et al., 2021).

Access

Based on the researcher's interview on July 16, 2022 with I Gusti Agung Ketut Kartika Jaya Seputra, SH, MH, the Head of the Bali Province Indigenous Community Advancement Service, regarding the access that indigenous communities gain from the existence of the Service since the enactment of Regional Regulation No. 4 of 2019 concerning Traditional Villages: access can be obtained through the Bali Province PMA Service website, and access also means trust placed in the various elements, such that the progress of Bali and its existence is pursued through coordination, communication, and the use of all opportunities and potential of the village (Burry et al., 2020). Meanwhile, according to A.A.
Ngurah Rai Sudharma, SH, MH, the Traditional Village Head of Pakraman Village, Denpasar City, the access that indigenous communities get through the existence of the Bali Province Indigenous Community Advancement Service is access to the development of the rural areas of Traditional Villages, including: (a) use and utilization of Traditional Village wewidangan in the context of determining development areas in accordance with the Regency/City spatial layout; (b) services provided to improve the welfare of rural communities; (c) infrastructure development, improvement of the rural economy, and development of appropriate technology; and (d) empowerment of Traditional Village krama to increase access to services and economic activities. Apart from that, it is hoped that indigenous communities, particularly those following custom, continue to make improvements and do not hesitate to study; a "library" should be prepared as a place to access various information, mainly containing the teachings of Hinduism, spirituality, philosophy, historical knowledge, medicine, architecture, agriculture, guidance on making offerings and ceremonies, and local wisdom (Arifin et al., 2020).

A statement related to access was also conveyed by Dr. Drs. A.A. Ketut Sudiana, S.H., A.Ma., M.H., the Bendesa of the Denpasar City MDA, in his interview on July 16, 2022. The access that indigenous peoples get through the existence of the Bali Province Indigenous Community Advancement Service comes through the great trust of the Bali Provincial Government, which provides as many opportunities as possible to village governments, especially traditional villages in Bali, to develop human resources and village potential by referring to Regional Regulation No. 4 of 2019. Traditional villages are expected to take advantage of opportunities to work and be creative as widely as possible in order to create superior human resources. This is also hoped to be supported by the IT skills of employees in the village government, as well as of residents who can hone their skills in IT (Phu & Thu, 2022).

Participation

Based on the researcher's interview on July 16, 2022 with I Gusti Agung Ketut Kartika Jaya Seputra, SH, MH, the Head of the Bali Province Indigenous Community Advancement Service, regarding the participation of the Service in the governance arrangements for traditional villages in Bali, the participation carried out by the Bali Province PMA Service takes the form of facilitation, assistance, and guidance on the governance of Traditional Villages, carried out in all Regencies/Cities throughout Bali.

Meanwhile, according to A.A. Ngurah Rai Sudharma, SH, MH, the Traditional Village Head of Pakraman Village, Denpasar City, the participation carried out by the Bali Province PMA Service in increasing potential can be illustrated where a village's potential lies in the tourism sector. Creating a tourist destination that generates a source of income for local communities will not automatically realize the preservation of local culture; that will be largely determined by the participation of local residents (stakeholders) (Holmes et al., 2019).
In order to find a solution for realizing sustainable tourism development, it is relevant for the village government, together with a team of experts, to conduct a study analyzing: (1) the influence of the role of government, the role of traditional villages, and social capital on community-based tourism; (2) the influence of the role of government, the role of traditional villages, and community-based tourism on sustainable tourism development; (3) the mediation of community-based tourism in the influence of the government's role on sustainable tourism development; (4) the mediation of community-based tourism in the influence of the role of traditional villages on sustainable tourism development; and (5) the moderation of social capital in the influence of the role of traditional villages on sustainable tourism development. This is an alternative for village government participation, for the community and for Bali. Meanwhile, Dr. Drs. A.A. Ketut Sudiana, S.H., A.Ma., M.H., as Bendesa of the Denpasar City MDA, in his interview on June 15, 2022 spoke about the participation of the Bali Province Indigenous Community Advancement Service in the governance arrangements for traditional villages in Bali: first, how village community participation is carried out in the administration of village government; and second, the implications of implementing Law No. 6 of 2014 concerning Villages for the development of community participation models in the administration of Village Government.

In social reality, community participation is realized, first, in the form of direct interaction through village meetings at the banjar level, as well as through representative elements such as the BPD, PKK, and youth organization activities (Greenhalgh et al., 2019). Second, community participation leads to a form of representation, so it is recommended to improve the quality of human resources in community institutions in the village. In implementing development, community participation is highly expected at every stage, from planning through implementation and evaluation. Through development based on community participation, regional development can be implemented that truly accords with the needs and aspirations of the community. Problems actually arise in line with changes in village governance arrangements; in other words, changes to village governance can have an impact on community attitude patterns and on the function of community institutions. One undesirable outcome is the emergence of community apathy and indifference towards the implementation of village government.

Community participation will arise if there is openness and interaction that involves the community in every activity, especially development activities. From year to year, the development process carried out by the government has been increasingly criticized by the public, and as a result, negative biases in society towards development processes that are being or will be carried out are growing. At the very least, there are people who do not care about the development process being or to be carried out. This clearly shows a symptom of a lack of community participation in the development agenda. It is hoped that there will be many opportunities for the community to participate in governance and village development (Jones, 2007).

Control

Based on the researcher's interview with I Gusti Agung Ketut Kartika Jaya Seputra, SH, MH, the Head of the Bali Province Indigenous Community Advancement Service, the role of control has been implemented by the Service since its formation. The control role regarding activities in Traditional Villages is carried out through monitoring and evaluating the implementation of Traditional Village activities. Citizen participation is one of the basic and key principles that must be upheld in a democratic country. The control function is closely related to community participation, where community participation is a form of real activity, seen as the result of habit, in the government arena, which aims to influence the decision-making process. Active community involvement in the political process is an absolute requirement, because that participation will give rise to community control over the running of government (Schneider, 2022). This was added to by A.A. Ngurah Rai Sudharma, SH, MH,
the Traditional Village Head of Pakraman Village, Denpasar City, who said in his interview that, regarding control of the role of the Bali Province Indigenous Community Advancement Service, human resources are also an aspect in the spotlight when looking at community participation. The village government considers that the main factor influencing community participation in every policy at the village and regional level is human resources: low levels of education and low economic levels result in low aspirations and weak control by indigenous peoples over policy formulation; besides that, there are no regional government regulations that guarantee this. What should be done is to exercise control over every program and activity implemented and to involve the community (Mao et al., 2021).

Meanwhile, based on an interview with Dr. Drs. A.A. Ketut Sudiana, S.H., A.Ma., M.H., the Bendesa of the Denpasar City MDA, in carrying out its main duties the PMA Service performs several functions: formulating technical policies for the advancement of indigenous communities, which is the authority of the Province of Bali (Situmorang et al., 2019); implementing policies for the advancement of indigenous communities, which is the province's authority (Shaffril et al., 2020); and administering the Department, which coordinates and facilitates the implementation of the activities of the Traditional Village Council. There are four patterns of participation carried out by the community: first, voice, which relates to community aspirations in influencing local government policy making; second, access, namely the opportunity and ability of the community to reach decision making and the management of local resources; third, ownership, namely the community's sense of ownership of, and responsibility for, policies, infrastructure, public services, and public goods; and fourth, control, namely the opportunity and capacity of the community to assess and exercise control over the running of local government and its policies. Sometimes this is what the government has to respond to: limited human resources (HR) at low levels result in community aspirations being accommodated less. It must be acknowledged that the government's development strategy does not guarantee the accommodation of community aspirations, so with a great deal of physical assistance the community becomes increasingly pampered by assistance programs. As a result, the participation that grows in society is mostly about physical implementation, while at the planning, control, and evaluation stages of government policies and programs the community tends not to be involved at all. This gives rise to much speculation and suspicion among the public, and so encourages a vote of no confidence (Ferguson et al., 2022).
Supporting Factors

Through the vision Nangun Sat Kerthi Loka Bali, the Department for the Advancement of Indigenous Peoples, as one of the Regional Apparatuses within the Bali Provincial Government, carries out in its duties the mission of "strengthening the position, duties and functions of Traditional Villages in carrying out Balinese krama life, which includes parahyangan, pawongan and palemahan". This mission is then outlined in programs and activities aimed at "realizing kasukretan Traditional Villages based on Sad Kerthi in Bali Province", which can be seen in traditional village life that is sukerta based on Sad Kerthi. Kasukretan Traditional Villages based on Sad Kerthi can be realized if the governance of the Traditional Village Government is of good quality, the governance of customary law is of good quality, the quantity and quality of Traditional Village economic institutions increase, and the role of Traditional Village krama in Traditional Village development increases (Peter et al., 2022).

The supporting factors for the Bali Province Indigenous Community Advancement Service in increasing the potential of the Denpasar Traditional Village are influenced by regulatory factors. Under the mandate of Article 96 paragraphs (1) and (2) of Bali Regional Regulation 4/2019, the Bali Provincial Government was to form a regional apparatus handling traditional village affairs no later than six months after the regulation was promulgated. Through this mandate, the Bali Provincial Government, through Bali Provincial Regulation Number 7 of 2019 concerning Amendments to Bali Province Regional Regulation Number 10 of 2016 concerning the Formation and Structure of Regional Apparatus, as well as Bali Governor Regulation Number 2 of 2021 concerning Amendments to Governor Regulation Number 58 of 2019 concerning the Position, Organizational Structure, Duties and Functions and Work Procedures of Regional Apparatus within the Bali Provincial Government, formed the regional apparatus tasked with the advancement of indigenous communities, namely the Bali Province Indigenous Community Advancement Service. Based on Bali Governor Regulation Number 2 of 2021, the main task of the Bali Province PMA Service is to assist the Governor in administering Regional Government affairs in the field of Indigenous Community Advancement (Farbotko & McMichael, 2019; Nguyen et al., 2020).

In carrying out its main duties, the PMA Service performs the following functions: formulating technical policies for the advancement of indigenous communities, which is the authority of the Province; implementing policies for the advancement of indigenous communities, which is the authority of the Province; carrying out the administration of the Service; and coordinating and facilitating the implementation of the activities of the Traditional Village Council.
Obstacle Factors

The inhibiting factor experienced by the Bali Province Indigenous Community Advancement Service in carrying out its role and main duties in increasing the potential of villages in the city of Denpasar is that the lives of the people of Denpasar City are more heterogeneous than those of the communities of other districts in Bali. City residents are broader in outlook and have very modern and varied jobs, and the potential in each village is difficult to identify and develop (Robson, 2019), partly because some residents are immigrants. This creates particular difficulty in developing human resources and village potential; in terms of livelihoods, it is also hampered by the modernization of social life in Denpasar City (Singh et al., 2022).

CONCLUSION

It can be concluded that four indicators were detected as the role of the Bali Province Indigenous Community Advancement Service (welfare, access, participation, control) in increasing village potential in Denpasar City. The Bali Province Indigenous Community Advancement Service implements the Bali Province government's work program through the vision Nangun Sat Kerthi Loka Bali. As one of the Regional Apparatuses within the Bali Provincial Government, the Service also strengthens the position, duties, and functions of Traditional Villages in organizing the life of krama Bali, which includes parahyangan, pawongan, and palemahan. The sukerta life of Traditional Village krama is also emphasized to the community in order to create kasukretan based on Sad Kerthi, because a Traditional Village based on Sad Kerthi can be realized if it has good-quality Traditional Village governance and customary law, if the quantity and existence of Traditional Village economic institutions are maintained, and if the role of Traditional Village krama in village development is increased.
Genetic Evaluation of Dual-Purpose Buffaloes (Bubalus bubalis) in Colombia Using Principal Component Analysis

Genealogy and productive information of 48621 dual-purpose buffaloes born in Colombia between 1996 and 2014 was used. The following traits were assessed using one-trait models: milk yield at 270 days (MY270), age at first calving (AFC), weaning weight (WW), and weights at the following ages: first year (W12), 18 months (W18), and 2 years (W24). Direct additive genetic and residual random effects were included for all the traits. Maternal permanent environmental and maternal additive genetic effects were included for WW and W12. The fixed effects were: contemporary group (for all traits), sex (for WW, W12, W18, and W24), and parity (for WW, W12, and MY270). Age was included as a covariate for WW, W12, W18, and W24. Principal component analysis (PCA) was conducted using the genetic values of 133 breeding males whose breeding-value reliability was higher than 50% for all the traits, in order to define the number of principal components (PCs) that would explain most of the variation. The highest heritabilities were for W18 and MY270, and the lowest for AFC, with 0.53, 0.23, and 0.17, respectively. The first three PCs represented 66% of the total variance. Correlation of the first PC with meat production traits was higher than 0.73, and it was -0.38 with AFC. Correlations of the second PC with the maternal genetic components of WW and W12 were above 0.75. The third PC had a 0.84 correlation with MY270. PCA is an alternative approach for analyzing traits in dual-purpose buffaloes and reduces the dimension of the traits.

Introduction

Buffalo herds are managed under dual-purpose production systems in Colombia, so farmers are interested in improving traits related to breeding, milk, and meat production. A strategy to improve herd productivity is to select animals according to their breeding values (BV), which allow mating to be programmed according to specific objectives. However, when BV are available for various traits it can be difficult to select the animals, especially when the traits have negative genetic correlations. Principal component analysis (PCA) is a multivariate technique that reduces a set of originally correlated variables into a smaller set of non-correlated variables, keeping most of the original variability and reducing the dimensionality to a new set of variables named principal components (PCs), under the constraint of losing the least possible amount of information. This technique creates orthogonal axes that are linear combinations of the original variables, based on the eigenvalues of the (co)variance matrix of the variables considered. The eigenvalues are generated in order from highest to lowest, and each eigenvalue is assigned a principal component, so each PC retains more variability than the following PC [1]. According to Meyer K [2], when the original variables are highly correlated the first PCs can explain most of the variation, thus allowing redundant information to be eliminated. Quantitative genetics has developed three uses for principal components (PCs): as a tool to visualize genetic variation patterns, to define the genetic parameters to be estimated, and to reduce the original number of variables into a smaller set of principal components and estimate the genetic parameters of these PCs [3]. The PCA technique has been incorporated into genetic evaluations in beef cattle [4-6] and dairy cattle [7], and used to analyze reproductive traits in different breeds [8-10].
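To make the mechanics concrete before turning to the data, the following is a minimal R sketch of PCA on standardized breeding values, using base R's prcomp rather than the FactoMineR routine applied later in this study; the breeding values are simulated purely for illustration, with trait names that merely mirror those analyzed below.

```r
# Minimal sketch of the PCA workflow on standardized breeding values.
# The data are simulated for illustration only; in the study, breeding
# values of 133 sires were analyzed with FactoMineR's PCA.
set.seed(3)
n  <- 133
bv <- data.frame(WW = rnorm(n), W12 = rnorm(n), W18 = rnorm(n),
                 W24 = rnorm(n), MY270 = rnorm(n), AFC = rnorm(n))

pc  <- prcomp(bv, center = TRUE, scale. = TRUE)  # standardize, then rotate
eig <- pc$sdev^2                                 # eigenvalues of the PCs
which(eig > 1)          # Kaiser criterion: retain PCs with eigenvalue > 1
summary(pc)             # proportion of variance explained per PC
cor(bv, pc$x[, 1:3])    # linear correlations of traits with the first PCs
```

The trait-component correlation matrix produced in the last line is what allows each PC to be interpreted in terms of the original traits, as done for PC1-PC3 in the Results.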
Recently, PCA was used for genetic evaluations of nine traits of economic interest in buffalo cattle in Brazil, concluding that four PCs are sufficient to explain the covariance structure of the traits [11]. The reviewed literature concludes, among other things, that PCA allows the dimensionality of the variables to be lowered, facilitates the interpretation of the data through a few PCs, and identifies the type of relationship between the original variables. The aim of this study was to explore the relationship between BV for growth, milk yield, and age at first calving in dual-purpose buffaloes by using PCA.

Materials

This study was approved by the Ethics Committee for Animal Experimentation of Universidad de Antioquia (approved on May, 2013, 83 minutes). The Colombian Association of Buffalo Breeders (ACB) provided the database used in this study. The traits evaluated were: weaning weight (WW), yearling weight (W12), weight at 18 months of age (W18, view S1 Dataset), weight at 2 years of age (W24), milk yield at 270 days (MY270), and age at first calving (AFC). The age ranges allowed for WW, W12, W18, W24 and AFC were 180 to 300, 330 to 390, 450 to 510, 680 to 760, and 760 to 1500 days, respectively. MY270 was estimated following the guidelines of the International Committee for Animal Recording (ICAR) [12]. Animals were grazing on pastures and received mineral supplementation. The breeding system consisted of controlled natural mating. Records were taken between 1996 and 2014. All herds are located in Colombia's Caribbean region in a rainforest zone (height above sea level: 80 m, temperature: 28°C, and annual precipitation: 2000 mm) [13]. All herds were managed as dual-purpose systems. The database (S2 Dataset, pedigree dataset available) included a relationship matrix with 48621 animals, predominantly Murrah crossbreds. An overview of the data is shown in Table 1.

For WW and W12 the random effects were: direct additive genetic (a), maternal additive genetic (m), maternal permanent environmental (pe), and residual (ε). The fixed effects were: sex (male or female), number of calving (1 to 14), and contemporary group (farm, year, and birth time: January to April, May to August, or September to December). Age at weighing was used as a covariate (linear effect). The matrix representation of the model is:

y = Xβ + Z1a + Z2m + Wpe + ε

where y is a vector of observations, β is the vector of fixed effects, and ε is the random residual vector. X, Z1, Z2, and W are the incidence matrices relating the fixed effects, direct additive genetic effects, maternal additive genetic effects, and maternal permanent environmental effects, respectively.

The following formula by Willham RL [15] was used in the estimation of total heritability for WW and W12:

h²t = (σ²a + ½σ²m + (3/2)σam) / σ²p

where h²t = total heritability, σ²a = direct additive genetic variance, σ²m = maternal additive genetic variance, σam = genetic covariance between direct and maternal effects, and σ²p = phenotypic variance.

For W18 and W24 the random effects were the direct additive genetic (a) and residual (ε) effects. The fixed effects were sex (male or female), number of calving (1 to 14), and contemporary group (defined as for WW and W12). The age at weighing was used as a covariate (linear effect). For AFC the random effects were the same as for W18 and W24, and the fixed effect of contemporary group was included (farm, year, and time of first birth: January to April, May to August, or September to December).
The matrix representation of the model was:

y = Xβ + Za + ε

For MY270 the random effects were: additive genetic (a), permanent environmental (pe), and residual (ε). The fixed effects were parity (1 to 14) and contemporary group (farm, year, and time of birth: January to April, May to August, or September to December). The matrix representation of the model was:

y = Xβ + Za + Wpe + ε

Principal components

PCA was developed using the BV from 133 males, selected from 961 males, with higher than 50% reliability for WW, W12, W18, W24, MY270, AFC, maternal genetic effect for weaning weight (MGWW), and maternal genetic effect for yearling weight (MGW12); data are also available in S3 Dataset. All BV were standardized to zero mean and unit variance. To select the number of principal components (PCs) explaining the highest percentage of variance, only those PCs with eigenvalues greater than one were taken into account [16]. The linear correlations of traits with each PC were estimated, and the significant traits in each PC were defined. This analysis was conducted using the PCA command of the FactoMineR library [17] of the R software [18].

Genetic parameters

The estimated heritability of the studied traits is presented in Table 2. The traits with the highest and lowest heritability were W18 and AFC, with 0.53 and 0.17, respectively. Heritability of the other traits ranged between 0.18 and 0.23. Heritability of the maternal genetic component included in WW and W12 was 0.04 and 0.08, respectively, indicating the need to include this effect in genetic assessments to obtain more accurate heritabilities for these two traits. Heritabilities of the permanent environment for WW, W12, and MY270 were 0.11, 0.16, and 0.25, respectively.

Principal component analysis

PCA was performed using the BV of WW, W12, W18, W24, MY270, AFC, MGWW and MGW12 from 133 breeding males chosen from 961 males. The first three PCs had eigenvalues greater than one and explained 65.78% of the original variance of the breeding values for the aforementioned traits (see Table 3; the PCA program is in S1 File). The distribution of traits in each of the first three components (PC1, PC2 and PC3) is shown in Fig 1. The lines represent eigenvectors indicating the strength and direction of each trait in each PC [19]. Traits WW, W12, W18 and W24 showed the greatest intensity in PC1 and related positively with this component. MY270 behaved in a similar way, but with less intensity. On the other hand, AFC and MGWW were negatively associated with PC1, while MGWW, MGW12 and MY270 related positively with PC2. The traits with the greatest intensity in PC3 were MY270 and AFC, and they were positively related; MGWW was negatively associated with this component and had low intensity (Fig 1). Table 4 shows the correlations of the significant traits with each of the first three PCs. PC1 presented correlations higher than 0.72 with WW, W12, W18, and W24, and of -0.38 and -0.30 with AFC and MGWW, respectively. The correlation of PC2 with WW and AFC was negative, while it was positive with MY270, MGWW and MGW12. The correlation of PC3 with MY270 and AFC was positive, and it was negative with MGWW.

Discussion

The values found in this study for WW and W12 were higher, and those for W18 and W24 lower, than previously reported in Colombia for those traits: 182, 201, 278 and 363 kg, respectively [20]. Milk yield was lower than the 2286.8 kg reported for buffaloes in Italy [21] and the 1594 kg reported for Murrah buffaloes in Brazil [22].
AFC was higher than the 1094 days reported for Murrah buffaloes in Brazil [22] and lower than the 1140 days reported for buffaloes in Colombia [23]. The performance of buffaloes for WW, W12, W18, W24, MY270, and AFC was better than the data reported for dual-purpose cattle in Colombia [24], indicating that the buffalo is a good livestock production alternative in this country. The estimated WW, W12 and W24 heritabilities were lower than the figures reported in Colombia by Bolivar et al. [23]: 0.42, 0.42 and 0.41, respectively. Heritability of W18 was 0.42 in that report, which is lower than estimated in the present study. In this study, the estimated heritability for milk yield was lower than previously reported for buffaloes in Brazil: 0.30, 0.25, and 0.28 [11,25,26], respectively, but was higher than that reported in Italy, 0.14 [21], Brazil, 0.22 [27], and Colombia, 0.22 [28]. The estimated heritability for AFC was higher than that reported in Nellore heifers, between 0.08 and 0.16 [29], but lower than the 0.47 estimated for buffaloes in Colombia [23]. The estimated maternal heritabilities for WW and W12 coincide with the values reported by Albuquerque and Meyer [30] for Nellore cattle. They evaluated this trait from birth to 600 days of age, reporting values between 0.01 and 0.08 that were statistically significant up to 390 days. In Brazil, Malhado et al. [31] estimated maternal heritability as 0.09 for weight at 205 days of age in buffaloes. In Colombia, Bolivar et al. [20] reported 0.28 for the same trait for weaning weight. These results suggest that inclusion of the maternal effect allows for a better estimation of heritability for WW and W12. Table 5 shows the heritability estimates for the studied traits alongside those obtained by other researchers.

The PCA results in this study are consistent with other reports, evidencing the usefulness of PCA to reduce dimensionality. According to the report by Val and Ferraudo [8], the first two PCs comprised 70.33% of the total variation of six traits associated with meat production and one trait associated with breeding in Nellore cattle. Also in Nellore cattle, three PCs accounted for 100% of the additive genetic variance of nine traits associated with meat production [5]. Oliveira et al. [11] evaluated seven productive and two reproductive traits of buffaloes in Brazil, concluding that a reduced-rank model with 3 or 4 PCs was sufficient to explain the largest percentage of the additive genetic variance for all the traits.

Conclusions

According to the heritability figures obtained, W18 and MY270 would be the most responsive traits to the selection process, while AFC would be less responsive. PCA facilitates and improves the efficiency of the animal selection process by using correlations between traits and components, hence reducing the dimension of the analysis. It is concluded that the traits studied in this work can be analyzed with the first three PCs.

Supporting Information

S1 Dataset. This file contains the productive information for weight at 18 months (W18); the columns correspond to: animal (id), father (sire), mother (dam), sex (sx), contemporary group (cg), calving number (N), weight (W18), and age (age). (XLSX)

S2 Dataset. This archive contains the genealogical information of the animals tested; each of the three columns corresponds to the renumbering of the animal, father, and mother, respectively.
This work was supported by the project entitled "Modelos de regresión aleatoria e índices de selección en ganado bufalino doble propósito en Colombia" (code: 8714-2013-5025) and by Sostenibilidad E1808. This paper is part of the PhD thesis of the first author, who received a scholarship from Colciencias (Colombia, convocatoria 528).
2018-04-03T05:08:49.603Z
2015-07-31T00:00:00.000
{ "year": 2015, "sha1": "14affa309a0ffef2dc6910cd8bb06f2364641999", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0132811&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14affa309a0ffef2dc6910cd8bb06f2364641999", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
210813042
pes2o/s2orc
v3-fos-license
Synthesis and Electromagnetic Interference Shielding Performance of Ti3SiC2-Based Ceramics Fabricated by Liquid Silicon Infiltration

In this work, Ti3SiC2-based ceramics were fabricated by the infiltration of liquid silicon into TiC preforms incorporating a small amount of Al. Al plays a catalytic role, promoting the formation of TiC twins before liquid silicon infiltration (LSI), which increases the transformation efficiency from TiC to Ti3SiC2 in the LSI process. When the Al content in the TiC preform increases to 9 wt.%, the volume content of Ti3SiC2 reaches 85 vol.%, yielding a high electromagnetic interference shielding effectiveness of 39 dB in the frequency range of 8.2-12.4 GHz. The results indicate that this is an effective way to synthesize Ti3SiC2-based ceramics with excellent electromagnetic shielding performance.

Introduction

Electromagnetic interference (EMI) shielding materials have attracted increasing attention due to their extensive applications in protecting electronic devices from electromagnetic interference [1-3]. Metal-based and highly conducting polymer-based composites are the two most common types of EMI shielding materials. However, metals corrode easily, and polymers cannot be applied at high temperatures. Ceramics have low density and high corrosion resistance and can be applied as high-temperature structural materials, but their low electrical conductivity limits their application as EMI shielding materials. Unlike conventional ceramics, MAX phases have metal-like high electrical conductivity, owing to the M-X metallic bonding in their lattice structure, which gives them superior application potential as EMI shielding materials [4-9]. Bulk Ti3SiC2 exhibited high complex permittivity, and its EMI shielding effectiveness (SE) reached 35-54 dB in the frequency range of 8.2-18 GHz (X-band and Ku-band) [4]. With the addition of Ti3SiC2 filler, the EMI SE of a polyaniline composite was greatly enhanced [5]. Bulk Ti3AlC2 with a high texture degree had an EMI SE above 30 dB from room temperature to 800 °C [6]. MAX-phase-modified ceramic matrix composites have also been prepared, with EMI SE higher than 30 dB in the frequency range of 8.2-12.4 GHz [10-13]. MXenes, a new family of two-dimensional materials, exhibit outstanding EMI performance, with SE values higher than 90 dB [14]. Ti3SiC2, one of the most studied MAX phases, can be prepared by several methods such as hot pressing [15], spark plasma sintering (SPS) [16], and reactive melt infiltration (RMI) [17-21]. Among these methods, RMI is an effective way to synthesize dense, near-net-shape Ti3SiC2-based ceramics. In the RMI process, a metal melt (Si/Al-Si) infiltrates porous TiC preforms driven by capillary force and then reacts with the TiC particles to form dense Ti3SiC2-based ceramics [18,19]. Carbon was added into the TiC preform to promote the precipitation of Ti3SiC2 in the liquid silicon infiltration (LSI) process, and the volume content of Ti3SiC2 reached 58 vol.% [19]. When an Al-Si alloy was employed to infiltrate the TiC preform, the volume content of Ti3SiC2 reached 52 vol.% [20]. The volume content of Ti3SiC2 in RMI-based composites is usually lower than 60 vol.% due to the formation of byproducts including SiC, which limits the improvement of the EMI shielding performance.
The formation of Ti3SiC2 in the LSI process includes three steps: the infiltration of liquid silicon, the formation of TiC twins, and the transformation from TiC twins to Ti3SiC2 [18]. In order to increase the volume content of Ti3SiC2, we tried to shift the transition from TiC to TiC twins to before LSI by introducing Al as a catalyst into the TiC preform, thereby promoting the transformation from TiC to Ti3SiC2. In this work, TiC preforms incorporating Al were first prepared, and then LSI was carried out to synthesize Ti3SiC2-based ceramics. The microstructure and EMI shielding performance of the as-obtained ceramics were studied systematically.

Sample Preparation

First, TiC and Al powders with different weight fractions were mixed homogeneously. Second, the mixed powders were put into a metal mold with dimensions of 75 mm × 15 mm, and a uniaxial pressure of 17 MPa was applied to the mold, yielding green TiC preforms. Third, the green TiC preforms were laid in an Al2O3 crucible with a SiC powder bed and heat-treated at 1400 °C for 1 h under flowing argon. Finally, the TiC preforms were infiltrated with liquid silicon at 1600 °C in a vacuum furnace.

Characterization

The as-fabricated samples were crushed into powders, and an X-ray diffractometer (XRD, Rigaku D/max-2400, Tokyo, Japan) with CuKα radiation was employed to analyze the phase composition. The voltage was 40 kV and the current 100 mA, and data were recorded over the angle (2θ) range from 5 to 80° at a scanning rate of 5°/min. The morphology of the polished and fracture surfaces was characterized by a scanning electron microscope (SEM, S-4700, Hitachi, Tokyo, Japan) at 15 kV and 10 mA. Back-scattered electron (BSE) imaging was employed to characterize the phase distribution according to atomic number contrast, and secondary electron imaging was employed to observe the fracture surface. An energy dispersive spectrometer (EDS, Genesis XM 2000, EDAX Inc., Berwyn, PA, USA) was used for elemental analysis. A transmission electron microscope (TEM, Talos F200X, FEI, Hillsboro, OR, USA) was employed to analyze the microstructure on an atomic scale. For the EMI shielding tests, bars with dimensions of 22.86 × 10.16 × 3.00 mm³ were cut from the as-fabricated samples, and the scattering parameters (S-parameters: S11, S12, S21, S22) in the frequency range from 8.2 to 12.4 GHz were measured with a vector network analyzer (VNA, MS4644A, Anritsu, Kanagawa, Japan) using the waveguide method, according to ASTM D5568-08. The reflected (R) and transmitted (T) power coefficients were then calculated by the following equations [22]:

R = |S11|² = |S22|²
T = |S21|² = |S12|²

with the absorbed coefficient given by A = 1 - R - T, and the total shielding effectiveness by SE_T = -10 log10(T).
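As a numerical illustration of these relations (not the authors' analysis code), the sketch below computes the power balance and shielding effectiveness from complex S-parameters. The input values are invented for illustration only and are chosen so that a transmittance of 0.01% maps to SE_T = 40 dB, close to the behavior reported below for sample TA9.

```python
import numpy as np

def shielding_from_s_params(s11: np.ndarray, s21: np.ndarray):
    """Power balance and shielding effectiveness from complex S-parameters."""
    R = np.abs(s11) ** 2                 # reflected power coefficient
    T = np.abs(s21) ** 2                 # transmitted power coefficient
    A = 1.0 - R - T                      # absorbed power coefficient
    SE_T = -10 * np.log10(T)             # total SE (dB)
    SE_R = -10 * np.log10(1 - R)         # reflection loss (dB)
    SE_A = -10 * np.log10(T / (1 - R))   # absorption loss (dB); SE_T = SE_R + SE_A
    return R, A, T, SE_T, SE_R, SE_A

# Illustrative values only (not measured data).
R, A, T, SE_T, SE_R, SE_A = shielding_from_s_params(
    np.array([0.97 + 0j]), np.array([0.01 + 0j]))
print(f"R={R[0]:.2%}, A={A[0]:.2%}, T={T[0]:.4%}, SE_T={SE_T[0]:.1f} dB")
```

Note that even when SE_A exceeds SE_R in this decomposition, the power balance can still be reflection-dominated, since the wave is reflected at the surface before any absorption can occur; this distinction is discussed with the measured data below.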
Phase Composition and Microstructure

The density and open porosity of the different samples after heat-treatment and LSI are listed in Table 1. As the Al content of the TiC preform increased, the Al melted during the heat-treatment and filled part of the pores, which changed the density and open porosity. During LSI, the liquid silicon infiltrated the TiC preform under capillary force and reacted with the TiC particles. The density of all four samples increased to more than 3.7 g/cm³. It is noted that the porosity of samples TA9 and TA12 increased slightly compared with that of samples TA3 and TA6. As reported, the transformation from TiC to Ti3SiC2 leads to a volume decrease of 11.7% [23]. Therefore, the variation of porosity may be related to the volume shrinkage accompanying the reaction of TiC with liquid silicon.

XRD patterns of the as-fabricated samples are shown in Figure 1. TiC particles prefer to react with liquid silicon to form TiSi2 and SiC, as reported in previous work [18]. With the addition of Al, diffraction peaks of Ti3SiC2 appear in the XRD patterns of the as-obtained ceramics. Samples TA3, TA6, TA9, and TA12 have the same phase composition of Ti3SiC2, TiSi2, and SiC. The addition of Al to the TiC preform can effectively decrease the formation energy of TiC twins [24,25], which promotes the formation of Ti3SiC2 in the LSI process.

Figure 2a shows a low-resolution TEM image of the Ti3SiC2-based ceramics, in which Ti3SiC2 and TiSi2 grains can be clearly seen. As shown in the high-resolution TEM image (Figure 2b), the periodic and stacking structure of Ti3SiC2 is clearly visible, and there is no crystallographic relationship between TiSi2 and Ti3SiC2. Typical selected area electron diffraction patterns of Ti3SiC2 and TiSi2 are displayed in Figure 2c; the corresponding incident beam directions are parallel to [1100] and [010] for Ti3SiC2 and TiSi2, respectively. A superlattice structure can be found, owing to the periodic stacking of Ti6C and silicon.

As shown in the high-angle annular dark field (HAADF) image (Figure 3a), the laminated structure of Ti3SiC2 can be clearly seen, and there are two kinds of particles inserted in the Ti3SiC2 grain. From the EDS mapping image, it can be deduced that the left one is Al and the other is SiC. As shown in Figure 3b, the TiSi2 and SiC can be clearly seen, and Al particles are distributed as a discontinuous phase. The SiC was formed by reaction during LSI, and the Al was formed by the condensation of residual Al melt. However, the Al cannot be detected by XRD, which indicates that its volume content is lower than 5 vol.%. In the LSI process, part of the Al dissolved into the Ti3SiC2 grains, and the rest condensed to form Al particles.

Figure 4 shows the BSE images of samples TA3, TA6, TA9, and TA12, in which the bright phase and the grey phase represent Ti3SiC2 and TiSi2, respectively. The content of each phase in all four samples was obtained by measuring the areas in the BSE images, using at least 10 images per sample. SiC and Al have similar average atomic numbers and therefore show the same contrast, so it is hard to distinguish these two phases by BSE. Since the Al occupies only a very small amount and is hard to distinguish from SiC, the SiC (Al) category in Figure 5 represents the total volume content of SiC and Al. With the addition of Al into the TiC preform, Ti3SiC2 and TiSi2 are the main phases, and the volume content of Ti3SiC2 is greatly promoted as the Al content increases. As shown in Figure 5, the volume content of Ti3SiC2 increases from 42 vol.% to 85 vol.% and then decreases to 68 vol.%. With the appearance of Ti3SiC2, the volume contents of TiSi2 and SiC decrease, which indicates that a large part of the Ti and carbon was consumed to form Ti3SiC2, and a smaller amount was consumed to form TiSi2 and SiC.
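The area-fraction measurement described above is straightforward to script. The sketch below is one plausible way to do it, assuming the three contrast levels can be separated by fixed grey-level thresholds; the thresholds and file names are hypothetical, and in practice the thresholds would be set per image from the grey-level histogram.

```python
import numpy as np
from skimage import io

# Hypothetical thresholds separating the three contrast levels
# (bright = Ti3SiC2, grey = TiSi2, dark = SiC/Al) on a 0-255 scale.
T_DARK, T_BRIGHT = 80, 170

def phase_fractions(path: str):
    img = io.imread(path, as_gray=True) * 255   # grayscale, 0-255
    total = img.size
    dark = np.count_nonzero(img < T_DARK)        # SiC + Al (same contrast)
    bright = np.count_nonzero(img >= T_BRIGHT)   # Ti3SiC2
    grey = total - dark - bright                 # TiSi2
    return bright / total, grey / total, dark / total

# Average over at least 10 BSE images per sample, as in the paper
# (file names are placeholders).
fracs = np.array([phase_fractions(f"TA9_bse_{i}.tif") for i in range(10)])
ti3sic2, tisi2, sic_al = fracs.mean(axis=0)
print(f"Ti3SiC2 {ti3sic2:.0%}, TiSi2 {tisi2:.0%}, SiC(Al) {sic_al:.0%}")
```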
It is interesting to note that no silicon can be detected in any of the four samples. A similar phenomenon was found for the infiltration of liquid silicon into TiC-C preforms: once Ti3SiC2 appeared in the final product, the silicon disappeared. Generally speaking, it is normal to detect residual silicon in LSI-based materials. In the LSI process, the silicon infiltrates the porous preform and fills the pores; part of the silicon is consumed by the reaction, and the rest remains. For high-temperature structural materials like C/C-SiC, silicon always remained after the infiltration of liquid silicon into porous C/C, which is harmful to the high-temperature strength [26]. However, a different phenomenon is found for Ti3SiC2-based ceramics fabricated by LSI, which indicates that the infiltration of liquid silicon was inhibited by the formation of Ti3SiC2 [23].

Table 2 summarizes the phase content of MAX phases in RMI-based ceramics. For the infiltration of Al melt into a TiC-TiO2 preform, the volume content of Ti3AlC2 reached 40 vol.% [17]. For the infiltration of liquid silicon into a TiC-C preform, the volume content of Ti3SiC2 reached 58 vol.% [19]. For the infiltration of Al-Si alloy into a TiC preform, the volume content of Ti3SiC2 reached 52 vol.% [20]. For the infiltration of Al-Si alloy into a TiC-TiO2 preform, the volume content of Ti3SiC2 reached 44 vol.% [21]. As listed in Table 2, the volume content of Ti3SiC2 in this work can reach 85 vol.%, much higher than in other works. The catalytic role of Al in the formation of Ti3SiC2 has been demonstrated, and the difference here is the introduction of Al into the TiC preform before LSI.

Growth Mechanism

In the LSI process, TiSi2 and SiC are formed by the reaction of TiC and Si. At the beginning, the reaction of Equation (6) proceeds, and then the TiSi2 dissolves into liquid silicon to form a Ti-Si-rich melt (Equation (7)). As reported, TiC twins are essential for the precipitation of Ti3SiC2 [24]: the TiC twins form first (Equation (8)), and then Ti3SiC2 is synthesized (Equation (9)).

TiC(s) + 3Si(l) → TiSi2(l) + SiC(s)   (6)
TiSi2(l) + Si(l) → Ti-Si-rich melt   (7)
TiC(s) → TiC_twin(s)   (8)
2TiC_twin(s) + TiSi2(l) → Ti3SiC2(s) + Si(l)   (9)

RMI is a competition between reaction and infiltration. When an Al-Si melt was employed, Al lowered the diffusion speed of liquid silicon and the reaction between liquid silicon and TiC was inhibited; therefore, TiC remained in the final product [20]. The effect of carbon content on the formation of Ti3SiC2 has been studied, revealing that the reaction of carbon with TiSi2 is essential to form TiC twins [18]. For the infiltration of liquid silicon into a TiC-C preform, Equation (6) starts first, and then Equation (8) takes place. In this work, Al was introduced into the TiC preform before LSI, which may promote the formation of TiC twins before LSI. Equation (8) then takes place before Equation (6), which increases the transformation efficiency from TiC to Ti3SiC2 and leads to the high volume content of Ti3SiC2. Therefore, the Ti3SiC2 phase content reaches 85 vol.% in this work. With the consumption of TiSi2 in the Ti-Si-rich melt, the TiSi2 preferred to infiltrate inward, and the infiltration of liquid silicon was inhibited. The LSI process was conducted under vacuum, so the liquid silicon also evaporated readily under the reduced pressure. Based on the above analysis, the disappearance of silicon with the appearance of Ti3SiC2 can be reasonably understood. It is noted that the maximum Ti3SiC2 content is found for sample TA9.
With a further increase of Al content in the TiC preform, the Ti3SiC2 content decreased from 85 to 68 vol.%. Al plays a catalytic role in promoting the formation of TiC twins; when too much Al is introduced into the TiC preform, it may agglomerate and hinder the contact between TiC particles and liquid silicon, which leads to the decrease of Ti3SiC2 content.

EMI Shielding Performance

The SE of all four samples with a thickness of 3 mm is shown in Figure 6a. The SE_T for samples TA3, TA6, TA9 and TA12 is 26, 31, 39, and 28 dB, respectively. All the samples have SE_T over 25 dB, which means more than 99% of the electromagnetic wave can be shielded; these materials meet the requirements of commercial application. The highest SE is found for sample TA9, which has the largest fraction of Ti3SiC2.

The power balance of all four samples is shown in Figure 6b. With the increase of Al content in the TiC preform, the percentage of reflected power increases from 83% to 96%, and the absorbed power decreases from 17% to 3.6%, which reveals that most of the power was reflected. For samples TA3 and TA6, 82.9% and 82.8% of the power was reflected, while above 95% of the power was reflected for samples TA9 and TA12. Especially for sample TA9, only 0.01% of the power can transmit, which reveals the excellent EMI shielding performance. Although the absorption loss is the dominant contribution to the shielding effectiveness, most of the power was reflected, since reflection takes place before absorption.
The electrical conductivities of all four samples are shown in Figure 7. The electrical conductivities of samples TA3, TA6, TA9, and TA12 are 5.34, 5.91, 8.53, and 5.67 S/cm, respectively. The high electrical conductivity is consistent with the high EMI SE value. The four samples have different phase compositions and phase distributions, which leads to the different electrical conductivities.

As reported in the references, the electrical resistivities of Ti3SiC2 [15], TiSi2 [27], and SiC [28] are 22 × 10⁻⁶, 13-16 × 10⁻⁶, and 10³-10⁹ Ω·cm, respectively. Ti3SiC2 and TiSi2 have high electrical conductivity, while SiC is a typical semiconductor. The volume contents of the high-electrical-conductivity phases (Ti3SiC2 + TiSi2) are 77, 86, 93, and 83 vol.% for samples TA3, TA6, TA9, and TA12, respectively. The higher the volume content of the (Ti3SiC2 + TiSi2) phases, the higher the electrical conductivity. The presence of SiC impedes electron migration, which inhibits the improvement of electrical conductivity. Sample TA9 has the lowest SiC content among the four samples, so it is reasonable that it has the best electrical conductivity; with this increase of electrical conductivity, sample TA9 exhibits the best EMI shielding performance.

The grain size of Ti3SiC2 also affects the EMI shielding performance. In the lattice structure of a Ti3SiC2 grain, the edge-sharing Ti3C2 layers are separated by hexagonal nets of the Si layer. The Ti-d-Ti-d bonding dominates the electronic density of states at the Fermi level, while Si does not contribute significantly at the Fermi level. Therefore, the electrical conductivity of MAX phases perpendicular to the c-axis is higher than that parallel to it [29,30]. In future work, Ti3SiC2 grains with a long c-axis could be designed to further increase the EMI shielding performance.

Conclusions

In this work, Ti3SiC2-based ceramics with high EMI shielding performance were synthesized. Al plays a catalytic role in promoting the formation of TiC twins before LSI, so that more TiC can be transformed into Ti3SiC2 in the LSI process. The volume content of Ti3SiC2 increases to 85 vol.% when the weight content of Al in the TiC preform increases to 9 wt.%, and then decreases with a further rise of Al content. The EMI shielding effectiveness of the Ti3SiC2-based ceramics can reach 39 dB, revealing good EMI shielding performance.
2020-01-16T09:06:01.598Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "30704b86e5af7cdc8101a1226668829d9ccdc4a8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ma13020328", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d5f9548fd08b0363150f2c5fe96a6245a06bb08c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
55400188
pes2o/s2orc
v3-fos-license
Evidence-Based Review of Therapeutic Approaches in Dementia with Lewy Bodies

Abstract

Dementia with Lewy Bodies (DLB) is estimated to affect around 15-20% of dementia cases globally. This places it as the second most common type of dementia after Alzheimer's disease (AD). Paradoxically, clinical trials addressing the complex symptomatology of DLB are sparse compared to AD. While substantial progress has been made in overcoming the diagnostic challenges, evidence-based treatment remains elusive. In this review we summarize the available placebo-controlled clinical trials and identify areas of need for developing treatment strategies.

Introduction

Our understanding of Dementia with Lewy Bodies is progressively updated and shaped every few years. The most recent consensus report of the DLB consortium proposes essential, core and supportive clinical features [1-3]. The core clinical features of the disease are cognitive fluctuations, visual hallucinations, parkinsonism and REM sleep behavior disorder (RBD) [3]. While the latter was previously considered only suggestive of the disease, it is now among the core features, built on accumulating longitudinal evidence. The most recent consensus has also re-assigned all of the suggestive features to the supportive clinical feature and indicative biomarker categories: severe sensitivity to neuroleptic agents is now one of the supportive clinical features, and low dopamine transporter uptake on imaging is among the indicative biomarkers [3].

Tools for the diagnostic suspicion of DLB have been lacking, which has limited the diagnosis of the disease outside expert centers. The Lewy Body Composite Risk Score [4,5] was developed to help differentiate DLB patients from those with other types of dementia (p<0.001). It can also assist physicians in distinguishing Mild Cognitive Impairment (MCI) due to Dementia with Lewy Bodies from MCI due to Alzheimer's disease (AD). The composite score relies on caregiver input and was designed as an easy assessment tool for clinicians, requiring only 3 min to perform. It is composed of ten structured questions, divided into motor disturbances (questions 1-4: bradykinesia, rigidity, falls, tremors) and non-motor features (questions 5-10: sleep disorders, staring spells, visual hallucinations, RBD, autonomic dysregulation). Positive answers to these questions are suggestive of dementia with Lewy bodies. A score of 3 or more is suggestive of Lewy body pathology, with a sensitivity of 94.2% and a specificity of 78.2% [4].
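The scoring logic of the composite score is simple enough to express directly. The sketch below is a schematic rendering of that logic only; the question wording is paraphrased from the description above rather than taken from the validated instrument, and a real implementation would use the published LBCRS items verbatim.

```python
# Paraphrased item prompts (not the validated instrument wording):
# Q1-Q4 cover motor disturbances, Q5-Q10 non-motor features.
QUESTIONS = [
    "Slowness of movement (bradykinesia)?",
    "Muscle rigidity or stiffness?",
    "Repeated falls?",
    "Tremor?",
    "Excessive daytime sleepiness?",
    "Episodes of staring into space (staring spells)?",
    "Visual hallucinations?",
    "Acting out dreams (possible RBD)?",
    "Autonomic dysregulation (e.g., orthostatic symptoms)?",
    "Other sleep disturbance?",
]

def lbcrs_suggestive(answers: list) -> bool:
    """True if the count of positive caregiver answers is 3 or more,
    the cutoff reported as suggestive of Lewy body pathology."""
    assert len(answers) == len(QUESTIONS)
    return sum(bool(a) for a in answers) >= 3

# Example: three positive answers -> suggestive of Lewy body pathology.
print(lbcrs_suggestive([1, 0, 1, 0, 0, 1, 0, 0, 0, 0]))  # True
```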
Once DLB is suspected, diagnostic certainty can be increased by the use of biomarkers. Biomarker development is an active field of research in all types of dementia, as early diagnosis is recognized as a prerequisite for an optimal therapeutic impact. In the basal ganglia, mesostriatal neuronal terminals secrete dopamine, which interacts with the post-synaptic dopamine receptors. Both SPECT and PET imaging are able to assess dopamine activity at the pre- and post-synaptic terminals with appropriate tracers [6,7]. Dopamine transporter (DAT) uptake at the pre-synaptic terminals of dopaminergic neurons is a broadly utilized approach and part of the diagnostic criteria; diminished DAT uptake demonstrated on such imaging is an indicative biomarker [3]. Polysomnography diagnostic of REM sleep behavior disorder and 123Iodine-MIBG myocardial scintigraphy are also accepted indicative biomarkers [3]. 18F-FDG PET has a role in the differential diagnosis of DLB from other types of dementia: in DLB, marked hypometabolism is seen mainly in the parieto-occipital cortex, and the involvement of the visual association cortex, along with hypometabolism in the pons, thalamus and amygdala, may explain the visual hallucinations in these patients [8]. Experimentally, new biomarker tracers with higher affinity are still under investigation [6]. Biomarkers examining alpha-synuclein deposition in peripheral tissues are also under investigation [9-12]. One study looked at the presence of alpha-synuclein in the nasal mucosa [12], which was pertinent in clinically diagnosed PD and DLB. The additional discovery of alpha-synuclein in the gastrointestinal tract, within the lower esophagus [10,11] and the submandibular glands [9,10], is of major interest: these two locations were the most affected non-neural tissues, and their easy accessibility makes them of high value for future biomarker development.

The cognitive deficit in Dementia with Lewy Bodies is attributed to the cholinergic deficit. In DLB, degeneration of cholinergic neurons of the basal forebrain projecting to the cortex has been identified in disease pathology [13]. The neuropsychiatric symptoms observed by patients and their caregivers have also been attributed to this cholinergic deficit; thus, cholinomimetics have been a focus of investigation in the past decade. Neuropsychiatric symptoms, however, are caused not only by the cholinergic deficit but also by an imbalance of other neurotransmitters, including but not limited to dopamine, glutamate and serotonin [14]. DLB is less studied than AD, as clinical trials traditionally face more challenges with diagnostic uncertainty and the complex array of symptoms experienced by these patients. The lower incidence and faster progression pose a challenge for subject enrollment in double-blind studies, and the complexity of the phenotype is a burden for assessment strategies and makes the identification of primary outcome measures difficult. This paper reviews the role of current therapeutic modalities for the treatment of DLB from an evidence-based perspective.

Methods

We performed an online search of PubMed, Google Scholar and ClinicalTrials.gov using the following keywords: "Dementia with Lewy bodies", "Donepezil", "Memantine", "double-blind, placebo-controlled trials", "Lewy body dementia", "Screening", "therapy", "antipsychotics", "REM sleep behavior disorder". The search yielded 2 consensus reports, 2 meta-analyses, 2 review papers, 5 placebo-controlled randomized trials, 6 uncontrolled studies, 3 open-label extensions, 2 case series, and 3 post-hoc analyses.
The results were stratified by level of evidence according to the US Preventive Services Task Force (USPSTF) recommendations [15]: Level 1, the highest level, based on well-designed randomized controlled trials; Level 2.1, controlled trials without randomization; Level 2.2, evidence from well-designed cohort or case-control studies; Level 2.3, evidence from multiple time series with or without intervention, or uncontrolled trials; and Level 3, the lowest level, based solely on clinical expertise and descriptive studies.

Cognition and cognitive fluctuations

Double-blind, randomized, placebo-controlled trials (RCTs) have tested the efficacy of cholinesterase inhibitors in DLB. Donepezil was tested in 3 RCTs and rivastigmine in 1 RCT; these, along with one post-hoc analysis of donepezil, are summarized in Table 1. With rivastigmine, the change in MMSE showed an improvement of 1.5 points on average compared to the placebo group (p=0.072) [16]. While one study reported significant improvements in MMSE with both the 5 and 10 mg doses of donepezil [17], with differences of 3.8 and 2.4 points respectively, the confirmatory study showed significance only with 10 mg donepezil compared to placebo, with a 2.2-point mean improvement [18]. A post-hoc multivariate regression analysis examined whether a higher dose of donepezil is beneficial in patients with Dementia with Lewy Bodies [19,20]. It showed that patients on higher doses of donepezil had higher plasma levels than those on a low dose, and that the change in MMSE correlated with higher plasma levels of the drug (p=0.040). Another study compared the treatment response of patients with mixed AD-DLB pathology and those with pure Dementia with Lewy Bodies, showing an enhanced response to acetylcholinesterase inhibitors in those with pure DLB compared with those with concomitant AD pathology [21-24].

Memantine, an NMDA receptor antagonist approved for AD, was tested in 2 RCTs. Emre et al. [25] showed a mean change in CGIC of 3.9 in the memantine-treated groups compared to 3.3 in the placebo group (p=0.023); this was the largest-scale study of memantine in PDD/DLB. The initial study by Aarsland et al. [21] found a mean difference in CGIC of 0.7 (95% CI 0.04-1.39; p=0.03) between the memantine-treated groups and placebo [21]. After the original study [21], a post-hoc analysis by Wesnes et al. [22] showed a statistically significant improvement in the attentional deficit of these patients, as measured by simple reaction time (SRT) and choice reaction time (CRT). In a follow-up open-label extension phase, patients who had been started on memantine in the original RCT showed better 3-year survival than those on placebo (p=0.045) [23,24].

Visual Hallucinations/Delusions

Treatment of the neuropsychiatric symptoms, probably the most challenging symptoms of DLB and a source of anxiety and caregiver burden, remains elusive. Almost all of the RCTs performed included a version of the Neuropsychiatric Inventory (NPI-2, NPI-4 or NPI-plus) among the secondary outcome measures. Cholinesterase inhibitors have been reported to show some efficacy.
In the initial study designed by Mori et al. [17] and the confirmatory phase 3 trial, the NPI-2, the sum of the hallucinations and cognitive fluctuations items, was significantly improved in the 5 mg donepezil group, with a linear dose-dependent improvement. The NPI-4, assessing delusions, hallucinations, apathy and depression, also showed significant improvement in the 5 mg donepezil group; however, the confirmatory phase 3 trial failed to show a significant improvement compared to placebo [18]. In an uncontrolled study, administering donepezil to 13 patients with LBD improved the visual symptoms (p=0.009) [25,26]. In a case series, donepezil was effective in patients with Capgras syndrome [27-30]. Not only does donepezil cause regression of neuropsychiatric symptoms such as visual hallucinations; increasing its dose has also been shown to treat relapses [27,28].

Atypical antipsychotics offer an alternative approach; however, sensitivity to these drugs poses a challenge, including the risk of neuroleptic malignant syndrome. In a post-hoc analysis of an RCT investigating the role of olanzapine in AD patients, a subgroup of patients who met criteria for DLB (n=29) showed a reduction in hallucinations as recorded by the NPI-NH [31]. Another study looked at quetiapine and its effect on dementia-related psychosis in 10 male patients, observing a reduction in the NPI overall score and the NPI subscale scores. The major limitation of these studies is the small patient groups and the uncontrolled designs, hence the inability to draw definite conclusions based solely on them. The black box warning of the atypical antipsychotics is a major drawback.

Memantine's effect on the NPI scores was tested in patients with PDD and DLB: Emre et al. showed a significant improvement in the DLB memantine group but none in the PDD group [25].

Parkinsonism

Four uncontrolled trials investigated the use of L-dopa in patients with Lewy body pathology, i.e., DLB and PDD. One uncontrolled study, measuring change with the Unified Parkinson's Disease Rating Scale in 19 patients who met criteria for probable LBD, showed motor improvement without causing psychosis in 22% of patients [32]. Two trials studying the effect of levodopa in patients with Parkinson's disease, PDD or DLB also showed improvement in motor symptoms [33,34].

REM sleep behavior disorder (RBD)

The hallmark of the diagnosis of REM sleep behavior disorder is the absence of atonia during REM sleep and "dream enactment" [35-40]. A reliable history has to be obtained from caregivers in order to make this diagnosis, because not all patients are aware of having the disorder. A cohort study found that RBD was in fact one of the earlier symptoms, developing before the cognitive symptoms in patients who went on to develop LBD [31]. Other researchers have backed this up by following patients with idiopathic RBD, concluding that 25% of these patients converted to an overt synucleinopathy within 3 years [40] or, in a similar study, 17.7% over 5 years [41]. Interventional, large-scale trials targeting RBD are still lacking. The highest level of evidence is provided by an uncontrolled trial in patients with DLB (n=7).
RBD in general, outside the context of DLB, was studied in a placebo-controlled, randomized crossover study in patients with idiopathic RBD (iRBD) [36] and in an observational study in iRBD patients [37]. In the uncontrolled study, 7 patients with DLB-associated RBD, among a group of 14 patients with various neurological diseases, were started on melatonin 3-12 mg [35]. Among the 14 patients, the average duration of use was 14 months; most patients showed improvement over the course of a year, and 6 were controlled. In the observational study, 39 patients with iRBD underwent treatment with clonazepam over a follow-up of 28.8 ± 13.3 months. Over that period, injury to self or to the partner as a consequence of RBD was diminished in two-thirds of the treated patients. The RCT had a crossover design with two groups of patients, each receiving placebo or melatonin for 4 weeks before switching to the other. Patients receiving melatonin had fewer episodes of REM sleep without atonia (RSWA) compared to baseline (p=0.012) and decreased sleep-onset latency (p=0.05) [36]. In a case series, ramelteon, a synthetic melatonin agonist, showed efficacy in 4 patients after failure to improve with other therapies, including acetylcholinesterase inhibitors [38]. These two reports warrant further trials of melatonin or its synthetic analogues for the treatment of RBD.

Anti-dementia drugs may have a role as well. In an RCT, the use of memantine in PDD and DLB patients had some effect on the physical activity seen with RBD: it decreased movement during sleep, while symptoms in the placebo group were worse at the end of the 24-week study (p=0.006) [36]. Along with bedroom safety rules, pharmacologic treatment is necessary for better control of the symptoms of this disorder. Melatonin dosing of 3-12 mg is effective in enhancing the physiologic REM sleep phase, decreasing periods without atonia during REM sleep [36,42]. Clonazepam is also a first-line therapy for RBD [42]; it seems to alter dream content and consequently ameliorate the vigorous verbal and physical activity [37,42].

Discussion

Dementia with Lewy bodies is a tremendous burden to patients and caregivers. The symptoms experienced over the course of the disease are disheartening and a source of anxiety and caregiver burnout. The motor and neuropsychiatric symptoms are also a cause of disability, resulting in a high cost of care for these patients. Even in the recent past, Dementia with Lewy bodies was consistently underdiagnosed or, most commonly, misdiagnosed as AD. FDA-approved therapeutics are not available for DLB, and the field relies mainly on consensus opinions and off-label use of medications. Most experts agree that using cholinesterase inhibitors as a first-line treatment to ameliorate cognitive decline in these patients is reasonable [2,3]. Despite the evidence presented above, donepezil use remains off-label in the US. Rivastigmine is the only approved medication for Parkinson's disease dementia but remains off-label for DLB in the US. Japan was the first country to approve donepezil for the treatment of DLB [14]. Cholinesterase inhibitors may also improve visual hallucinations and delusions at a better risk/benefit ratio than the atypical antipsychotics. Sometimes atypical antipsychotics are warranted, but discussion of the black box warning is necessary at the time of initiation.
Quetiapine (Seroquel) may be the safest option, and close monitoring is required to identify potentially life-threatening side effects such as neuroleptic malignant syndrome. A newer antipsychotic, pimavanserin, was recently approved for visual hallucinations and psychosis in Parkinson's disease dementia, and we eagerly anticipate its role in DLB [43-45]. Larger-scale RCTs would need to be conducted to favor one drug over another for this symptom. Memantine has less supportive evidence for its off-label use; however, given its relative safety, it is reasonable to use it with close monitoring for side effects.

Motor symptoms are treated with levodopa, with less efficacy than in PD [43]. While L-dopa improves the parkinsonian motor features, the increased dopaminergic activity may induce psychosis; a low dose and slow titration are favorable. While the psychosis risk exists, DLB trials thus far have not reported an increase in psychosis.

REM sleep behavior disorder is frequent and may confer a risk to the patient and the bed partner. Treatment strategies include melatonin and clonazepam. Melatonin decreases RSWA and thus may be more relevant from a pathomechanistic point of view, affording a safer approach. Nelotanserin, an inverse agonist of serotonin receptors, is currently in an RCT for the treatment of DLB-associated RBD [46].

The field has made major strides in developing tools to increase the diagnostic sensitivity and specificity of DLB. The composite risk score assists primary care in identifying patients, while the improved diagnostic criteria are a foundation for research. RCTs are needed to improve outcomes. The complex symptomatology and the involvement of multiple neurotransmitter systems make measuring efficacy challenging. Validated assessment tools are needed to capture the complexity of the phenotype and optimize the selection of outcome measures. Moving forward, well-designed RCTs are needed to optimize therapy and put an end to disease progression.

Results of the memantine trials and their extensions (summary table):
Aarsland et al. [21]: improvement in CGIC scores compared to placebo (p=0.03); no significant differences in secondary outcome measures.
Wesnes et al. [22]: improvement in outcome measures, with statistical significance on the CRT (p=0.0086) and the word recognition tests (IWR, p=0.0176; DWR, p=0.0161) in the memantine-treated groups.
Johansson et al. [23]: recurrence of symptoms was more common in patients originally in the memantine group (p=0.04).
Stubendorff et al. [24]: memantine-treated patients had significantly improved 3-year survival (p=0.045).
Emre et al. [25]: improvement in the CGIC (p=0.023); improvement in the NPI in DLB patients (p=0.041) but not in PDD patients; no significant improvement in other test scores.
2019-03-17T13:10:43.176Z
2017-12-13T00:00:00.000
{ "year": 2017, "sha1": "c1c603453e19c9a94b142a01cf29cec8092dc2b8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2161-0460.1000406", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c6a395bd9e0ce631c2922dbaddaf8136a1acd7ef", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
235712234
pes2o/s2orc
v3-fos-license
Dairy Product Intake and Long-Term Risk for Frailty among French Elderly Community Dwellers

Dairy products (DP) are part of a food group that may contribute to the prevention of physical frailty. We aimed to investigate DP exposure, including total DP, milk, fresh DP and cheese, and their cross-sectional and prospective associations with physical frailty in community-dwelling older adults. The cross-sectional analysis was carried out on 1490 participants from the Three-City Bordeaux cohort. The 10-year frailty risk was examined in 823 initially non-frail participants. A food frequency questionnaire was used to assess DP exposure. Physical frailty was defined as the presence of at least 3 out of 5 criteria of the frailty phenotype: weight loss, exhaustion, slowness, weakness, and low physical activity. Among others, diet quality and protein intake were considered as confounders. The baseline mean age of participants was 74.1 y and 61% were females. Frailty prevalence and incidence were 4.2% and 18.2%, respectively. No significant associations were observed between consumption of total DP or DP sub-types and frailty prevalence or incidence (OR = 1.40, 95%CI 0.65-3.01 and OR = 1.75, 95%CI 0.42-1.32, for a total DP consumption >4 times/d, respectively). Despite the absence of beneficial associations of higher DP consumption with frailty, older adults are encouraged to follow the national recommendations regarding DP.

Introduction

In recent years, the world has been experiencing a steady increase in the aging population. It is expected that by 2050, one in six people will be over the age of 65, including one in four in Europe and northern America [1]. This increased life expectancy is associated with a higher risk of morbidities. In fact, nearly a quarter (23%) of the overall global burden of death and illness is in people aged over 60, and much of this burden is attributable to long-term illnesses [2]. Advancing age is indeed accompanied by common geriatric syndromes, such as frailty [3]. Frailty is characterized by a depletion of the functional reserves of physiological systems, which limits the capacity to adapt to changes in the environment over time, leading to falls, hospitalization, disability, and death [4]. Nevertheless, frailty can be prevented, and diet appears to be a major determinant of its development [5,6]. Several studies have reported that particular macronutrients [7,8], food groups [9-11] and dietary patterns are associated with frailty [12-17]. In particular, our group has previously reported the relevance of protein intake (>1 g/d being associated with a lower prevalence of frailty) [18], of fruit and vegetable intake (>5 servings/d being associated with a lower risk of frailty) [9], and of the Mediterranean diet (higher adherence being associated with a lower frailty risk) [17]. In line with our findings, several other longitudinal studies have shown that a higher protein intake is protective against frailty [19-21]. Dietary sources of protein include dairy products (DP), which are also important sources of calcium and vitamin D. Interestingly, recent studies have shown that higher DP consumption was associated with better age-related health outcomes, and particularly lower risks of type 2 diabetes [22,23], cardiovascular diseases and mortality [24,25]. The type of DP (i.e., milk, fresh DP and cheese) appears to be a key component of such associations.
In fact, a meta-analysis of 938,415 participants and 93,518 mortality cases reported an absence of association of total dairy (high- or low-fat) and milk with the risk of death, while total fermented dairy (including sour milk products, yogurt or cheese; +20 g/day) was associated with a significant 2% reduction in the risk of all-cause mortality and cardiovascular diseases [26]. While two systematic reviews also observed that higher DP intakes were associated with higher appendicular muscle mass, improved balance-test scores, and an attenuation of the loss of muscle strength [27,28], the direct potential benefit of DP on frailty as a whole has scarcely been studied. To the best of our knowledge, a single prospective study, implemented in the Spanish Seniors-ENRICA cohort [29], reported that consuming seven or more servings per week of low-fat milk was associated with a significantly lower risk of frailty compared with consuming less than one serving per week. The external validity of such results remains uncertain. Indeed, the SHARE database demonstrated significant heterogeneity in DP consumption across Europe, with higher levels in central and northern countries and in Spain, and the lowest prevalence of dairy intake in eastern European countries [30]. Of note, high cheese consumption is a hallmark of French dietary habits, and France is also characterized by low milk consumption. Finally, several sociodemographic, nutritional and lifestyle factors have been associated with DP consumption in France, with specificities for each DP sub-type [31]. Altogether, it is conceivable that the distinctive consumption of DP sub-types among French older adults could be differentially associated with frailty. Therefore, our objective was to assess the cross-sectional and prospective associations between the consumption of total DP and of DP sub-types (milk, fresh DP and cheese) and the 10-year frailty risk among older adults of the Three-City (3C) Bordeaux cohort.

Study Overview

The 3C-study is a French population-based prospective study initiated in 1999-2000 to study vascular risk factors for dementia [32]. Its protocol was approved by the Consultative Committee for the Protection of Persons participating in Biomedical Research at Kremlin-Bicêtre, and all participants gave written informed consent. Participants were randomly sampled from the electoral rolls of three French cities (Bordeaux, Dijon, and Montpellier). Eligible participants had to be 65 years or older at the time of recruitment and not institutionalized. Among the 9294 participants included at baseline, 2104 were from the Bordeaux center, which completed the initial data collection in 2001-2002 (wave 1). A comprehensive dietary survey of 1597 participants was also performed; this dietary survey served as the baseline for the present study, at which DP frequency of consumption and frailty were assessed.

Assessment of Dairy Products

Dietary data were obtained from a semi-quantitative Food Frequency Questionnaire (FFQ) administered during face-to-face interviews by dietitians. This allowed the assessment of the daily frequency of consumption of 148 foods and beverages (with frequencies assessed in 11 classes, from "never or less than once a month" to "7 times per week") during each of the six meals/snacks of the day, as previously detailed [33]. Data from the FFQ were validated against a 24-h dietary recall in an independent subsample of the 3C-study [34].
DP consumption was considered using the frequency of consumption of milk, fresh DP, and cheese. The milk consumption variable included the consumption of "milk", "coffee with milk", "tea with milk", "chocolate", "chicory", and "natural milk or with cereal". Consumption of "yogurt and cottage cheese" was classified as fresh DP, while frequency of consumption of "cheese" was considered as the cheese category. As already described by Pellay et al. (2020), we considered the DPs' frequency of consumption as four main exposures, including total DP, milk, fresh DP, and cheese [31]. For each DP component, three categories were created based on the quartile distribution of consumption (low frequency: first quartile; intermediate frequency: quartiles 2 and 3; high frequency: fourth quartile). This classification ensured the differentiation between the most infrequent and frequent consumers, as previously described [31].

Assessment of Frailty

At baseline and at the 10-year follow-up, frailty was defined following the Cardiovascular Health Study frailty index [4], the tool recommended by the International Conference of Frailty and Sarcopenia Research [35]. Nevertheless, minor modifications were made to adapt this tool to the available data in our cohort study, as already published [17,18]. Briefly, (1) weight loss was defined as self-reported unintentional loss of 3 kg or more or, if missing, as a body mass index (BMI) <21 kg/m²; (2) exhaustion was evaluated using the following statements from the Center for Epidemiologic Studies-Depression scale (CES-D): "I felt that everything I did was an effort" and "I could not get going". Participants were considered frail for this criterion when they answered "a moderate amount of the time" or "most of the time" to either of these statements [36]; (3) walking speed was determined based on a 6-m walking test, adjusting for height and gender, and participants in the slowest quintile were considered slow. When this information was missing, participants were considered frail for this criterion when they reported being unable to walk between 500 m and 1 km or to walk up and down a flight of stairs based on the Rosow-Breslau scale [37]. This proxy has been shown to be strongly associated with walking [38]; (4) weakness was identified in different ways at baseline and at the 10-year follow-up, depending on availability of data. At the 10-year follow-up, weakness was identified using the handgrip strength quartiles stratified by sex and BMI, as recommended [4]. At baseline, weakness was identified using the chair standing method, shown to be a good proxy for handgrip strength [39]; (5) physical activity was assessed in a face-to-face interview via an open-ended questionnaire, and low physical activity was defined as less than 1 h of sports activities or less than 3.5 h of leisure activities per week, as previously described [17,18]. Older adults with three or more criteria out of five were considered frail; otherwise they were considered non-frail. Prevalent frail participants at baseline were excluded from the prospective analyses.

The FRAIL scale was also used to define frailty in sensitivity analyses [40]. The FRAIL scale includes five self-reported components: Fatigue, Resistance, Ambulation, Illnesses and Loss of weight. Fatigue and weight loss were evaluated similarly to those of the frailty index. Resistance and Ambulation were evaluated using the Rosow-Breslau scale, as recommended.
Resistance was assessed by asking participants if they could walk up and down a flight of stairs and Ambulation by asking if they could walk between 500 m and 1 km; "no" responses were each scored as 1 point. Lastly, Illnesses was scored 1 for respondents who reported 5 or more chronic conditions out of 13, including hypertension, diabetes, hypercholesterolemia, cardio- and cerebro-vascular diseases (myocardial infarction or cardiac and vascular surgery, or arteritis or stroke), Parkinson's disease, cognitive decline and dyspnea. Cancer was considered when reports were available, i.e., at the 10-year follow-up. The FRAIL score ranged from 0-5, with those scoring three or more considered as frail and those scoring two or less as non-frail.

Assessment of Disability

Dependency in basic Activities of Daily Living (ADLs) was assessed using the five following items of the Katz scale: bathing, dressing, toileting, transferring from bed to chair, and eating [41]. An individual was considered dependent if they could not perform at least one activity without a given level of assistance, as defined in the original instrument. All identified dependent participants at baseline and at the 10-year follow-up were excluded from the analyses because frailty is considered a risk factor for dependency [35].

Covariates

The covariates included age, sex, marital status, education, smoking status, polypharmacy (dichotomous variable with 6 medications/d as a cut-off), multimorbidity (dichotomous variable with 2 chronic diseases or more as a cut-off point), and global cognitive performances using the Mini-Mental State Examination (MMSE) [42] (0-30 points; higher scores indicate better cognitive status). A diet quality score was also computed. This score included seven components: pulses, raw fruits, raw vegetables, cooked fruits and vegetables, fish, alcohol and olive oil. Each component was dichotomized into meeting the current dietary recommendations versus not. The total score (out of 7) was also dichotomized into having a good diet quality (score > 3) versus not (score ≤ 3). Finally, total protein intake was evaluated from a single 24-h dietary recall that was administered at home in addition to the FFQ [43].

Statistical Analysis

Baseline demographic, clinical and dietary characteristics were compared between prevalent frail and non-frail (i.e., the sample used in the cross-sectional analysis) and incident frail and non-frail (i.e., the sample used in the longitudinal analysis) older adults using Student's t-test or the chi-square test, depending on variable type. Logistic regression models were used to estimate odds ratios (OR) and 95% confidence intervals (95% CI) for the association between consumption of total DP or a DP sub-type (milk, fresh DP, or cheese) and frailty, both cross-sectionally and prospectively. For each DP exposure, intermediate frequency consumption (quartiles 2 and 3) and high frequency consumption (quartile 4) were compared to the reference category of low frequency consumption (quartile 1). Model 1 was adjusted for age, sex, education and marital status. Model 2 was additionally adjusted for smoking status, multimorbidity, polypharmacy, diet quality score, total protein intake and global cognitive performances. Finally, two sets of sensitivity analyses were performed.
First, we assessed frailty using the FRAIL scale and applied the same multivariate models, except that multimorbidity was excluded as a covariate from model 2, as this variable is a component of the FRAIL scale. Second, we retained all ADL-dependent individuals, in both the cross-sectional and the prospective analyses, as we assumed that those who are ADL dependent might already be frail. All statistical analyses were performed with the SAS statistical package (Version 9.4, SAS Institute) and statistical significance was set at p < 0.05. (A minimal code sketch of the frailty classification and the base regression model is given after the sample description below.)

Sample Characteristics

Among 1597 participants who answered the dietary survey at baseline, 107 were excluded from all analyses for the following reasons: 20 were ADL dependent at baseline, 67 could not be classified for frailty, 9 had missing information about DP consumption and 11 participants had missing information for covariates. Therefore, the final sample for the cross-sectional analysis comprised 1490 participants (including 1427 non-frail). Among those participants, 979 (69%) were followed up at 10 years (during the follow-up, 355 participants died). An additional 156 participants were excluded from longitudinal analyses (n = 79 participants were identified as dependent and n = 77 had missing frailty status at 10 years). Thus, 823 participants were prospectively analyzed (Figure 1).

In cross-sectional analyses, the studied sample (n = 1490) consisted mainly of females (n = 906, 60.8%) and had an average age of 74.1 ± 4.9 (standard deviation) years (Table 1). Over half the sample was married (57%), almost half reported multimorbidity (48%), and a third (32%) was taking 6 medications/d or more. Prevalence of frailty was 4.2% (n = 63). The most prevalent frailty criterion was low physical activity (n = 234, 20.1%) followed by slow walking speed (n = 281, 19%), while the least prevalent frailty criterion was muscle weakness (n = 77, 5.3%) followed by weight loss (n = 82, 5.5%). Those included in prospective analyses (n = 823) were non-frail participants at baseline, mainly females (65.0%), and were on average 72.8 ± 4.4 years old. A total of 150 participants (18.2%) exhibited frailty at the 10-year follow-up, and the most incident frailty criterion was low physical activity (n = 473, 58.2%) followed by muscle weakness (n = 199, 26.6%). The least incident frailty criterion was weight loss (n = 69, 8.4%) followed by exhaustion (n = 138, 17.6%).

Prevalent and incident frail older adults were significantly older, more likely to be depressed, to take 6 medications/day or more, and to have comorbidities at baseline compared with prevalent non-frail participants and with participants free from frailty over time, respectively (Table 1). Moreover, prevalent frail participants exhibited a significantly higher BMI on average than non-frail participants, and the daily consumption of proteins was not significantly different between the frail and non-frail participants at baseline (i.e., cross-sectional sample). Regarding the sample enrolled in prospective analyses, incident frail participants had a similar BMI compared to those who remained free from frailty, while a higher percentage of incident frail participants had a lower diet quality score compared with participants who remained free from frailty (53% vs. 44%, p = 0.045).

Frequencies of consumption of DP (total DP, milk, fresh DP, and cheese) are presented in Table 2 for both the cross-sectional and prospective samples. No significant differences were observed between prevalent frail and non-frail or incident frail and non-frail participants regarding the frequency of total DP and DP sub-type consumption at baseline.

Table 2. Frequency of consumption of dairy products (total and sub-types) according to the frailty prevalence (cross-sectional sample, n = 1490 in 2000) and incidence (prospective sample, n = 823 between 2000-2010) of older adults from the Three-City study, Bordeaux (France). All data are presented as n (%); (a) baseline differences between prevalent frail and non-frail (n = 1490) and incident frail and non-frail (n = 823) participants tested by t-tests or chi-square tests depending on the type of the variable; m = missing.

Associations between Spectrum of DP Exposure and Prevalence of Frailty

In models adjusted for age, sex, marital status and education, we did not observe any significant association between total DP or DP sub-types and frailty prevalence when comparing the highest frequency to the lowest frequency consumption of DP (Table 3). In models additionally adjusted for smoking status, multimorbidity, polypharmacy, protein intake, diet quality and global cognitive score, all associations with the prevalence of frailty remained non-significant for all DP exposures: total DP (OR = 1.08, 95% CI = 0.54-2.17 and OR = 1.40, 95% CI = 0.65-3.01 for intermediate and high consumption vs. low, respectively), milk (OR = 1.13, 95% CI = 0.56-2.31), fresh DP (OR = 1.13, 95% CI = 0.54-2.33), and cheese (OR = 0.89, 95% CI = 0.43-1.88) for high vs. low frequency of consumption.

Associations between Spectrum of DP Exposure and Incidence of Frailty

When focusing on the 10-year risk for frailty, we observed that baseline frequencies of consumption of total DP and DP sub-types were not significantly associated with the frailty risk when we compared the highest frequency to the lowest frequency of consumption of total DP (OR = 0.74, 95% CI = 0.42-1.30), milk (OR = 0.80, 95% CI = 0.48-1.35), fresh DP (OR = 0.68, 95% CI = 0.38-1.20) and cheese (OR = 1.19, 95% CI = 0.68-2.10) in fully adjusted models (Table 4).

Sensitivity Analyses

The FRAIL scale was also implemented to alternatively identify prevalent and incident frail participants. Sixty out of 1552 participants (3.9%) were considered as frail at baseline according to this scale.
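To make the preceding steps concrete, the sketch below shows, in Python rather than the SAS package used in the study, how the five-criterion frailty phenotype, the quartile-based DP exposure categories, and the model-1 logistic regression fit together. All column names and the simulated data are hypothetical illustrations of the analysis structure, not the study's actual data.

```python
# A minimal sketch of the frailty classification and base regression model
# described in the Methods (the study itself used SAS 9.4). All column names
# and simulated values are hypothetical; criterion rates are chosen only to
# give the toy model enough frail cases to fit.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1490

def binary(p):
    """Simulate a binary criterion/covariate present with probability p."""
    return (rng.random(n) < p).astype(int)

df = pd.DataFrame({
    # the five frailty-phenotype criteria (1 = criterion met)
    "weight_loss": binary(0.15), "exhaustion": binary(0.20),
    "slowness": binary(0.25), "weakness": binary(0.15),
    "low_activity": binary(0.30),
    # daily frequency of total dairy product consumption from the FFQ
    "dp_freq": rng.uniform(0, 6, n),
    # model-1 covariates
    "age": rng.normal(74, 5, n), "female": binary(0.61),
    "higher_education": binary(0.5), "married": binary(0.57),
})

# Frailty phenotype: frail if at least 3 of the 5 criteria are present
criteria = ["weight_loss", "exhaustion", "slowness", "weakness", "low_activity"]
df["frail"] = (df[criteria].sum(axis=1) >= 3).astype(int)

# Exposure categories from the quartile distribution of consumption:
# Q1 = low (reference), Q2-Q3 = intermediate, Q4 = high
q1, q3 = df["dp_freq"].quantile([0.25, 0.75])
df["dp_cat"] = pd.cut(df["dp_freq"], bins=[-np.inf, q1, q3, np.inf],
                      labels=["low", "intermediate", "high"])

# Model 1: logistic regression of frailty on DP exposure category, adjusted
# for age, sex, education and marital status ("low" is the dropped reference
# level; model 2 would simply add the remaining covariates the same way)
X = pd.get_dummies(df["dp_cat"], drop_first=True).astype(float)
X[["age", "female", "higher_education", "married"]] = (
    df[["age", "female", "higher_education", "married"]])
fit = sm.Logit(df["frail"], sm.add_constant(X)).fit(disp=False)

# Odds ratios and 95% confidence intervals for each model term
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```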
In fully adjusted models (i.e., model 2), none of the associations with the prevalence of frailty was significant for any DP exposure: total DP (OR = 1.42, 95% CI = 0.64-3.13), milk (OR = 1.11, 95% CI = 0.54-2.32), fresh DP (OR = 0.96, 95% CI = 0.46-1.98), and cheese (OR = 0.86, 95% CI = 0.39-1.88) for high vs. low frequency of consumption. Among 1492 non-frail, non-dependent participants, 1006 were followed at the 10-year follow-up (lost to follow-up n = 486). Among those, an additional 87 were excluded from the analysis because they were ADL dependent and another 23 were excluded because they were unclassified for the FRAIL scale, leading to a final sample size of 896, with 45 (5.0%) classified as frail on the FRAIL scale. Regarding the spectrum of DP exposures, we only observed that the highest, compared with the lowest, frequency of consumption of fresh DP was associated with lower frailty risk in the fully adjusted model (OR = 0.35, 95% CI = 0.13-0.97, p = 0.04, p global = 0.13), while all other associations were non-significant, for instance, total DP (OR = 0.66, 95% CI = 0.26-1.67), milk (OR = 1.21, 95% CI = 0. .98), and cheese (OR = 1.25, 95% CI = 0.49-3.21) for high vs. low frequency of consumption.

Second, when all ADL dependent individuals were retained in the analytic samples, 1501 and 885 participants were included for the cross-sectional and prospective analyses, respectively. Among those 1501 participants, 68 (4.5%) were identified as frail. None of the total DP or DP sub-type exposures was associated with frailty prevalence in the fully adjusted models: total DP (OR = 1.44, 95% CI = 0.68-3.04), milk (OR = 1.04, 95% CI = 0.52-2.1), fresh DP (OR = 1.21, 95% CI = 0.59-2.48), and cheese (OR = 0.85, 95% CI = 0.42-1.74) for high vs. low frequency of consumption. Among those 1433 non-frail at baseline, 449 were lost at the 10-year follow-up and 99 were unclassified for frailty incidence, leading to a final sample size of 885 for the prospective analyses. At the 10-year follow-up, 195 participants (22%) were identified as frail. Regarding the spectrum of DP exposures, none of the frequencies of consumption of total DP or DP sub-types was significantly associated with frailty risk in the fully adjusted models: total DP (OR = 0.65, 95% CI = 0.39-1.09), milk (OR = 0.61, 95% CI = 0.38-1.00), fresh DP (OR = 0.64, 95% CI = 0. .08), and cheese (OR = 1.4, 95% CI = 0.83-2.41) for high vs. low frequency of consumption.

Table 3. Multivariate association between baseline frequencies of consumption of total DP, milk, fresh DP, and cheese and frailty prevalence among older adults in the Three-City Study, Bordeaux (n = 1490, 2000).

Discussion

In the present analysis of French community-dwelling older adults enrolled in the 3C-Bordeaux study, the frequency of DP consumption was not significantly associated with frailty, assessed using proxies of the frailty phenotype, in either cross-sectional or prospective analyses. In particular, total DP, milk, fresh DP and cheese were not associated with frailty prevalence at baseline. Similarly, these food groups were not associated with frailty risk at 10 years. Similar results were observed when frailty was assessed using the FRAIL scale, strengthening our conclusions. Several studies have evaluated the association between DP and age-related chronic diseases and mortality [22,23,44]. Nevertheless, to our knowledge, very few studies have evaluated the relationship between DP and frailty, and their results have been mixed.
A cross-sectional study evaluated the association between dairy intake and physical function among 1456 older women aged 70 to 85 years [45]. The authors observed that, compared to those in the lowest tertile of dairy consumption, those in the highest tertile had significantly higher handgrip strength and lower odds of a poor Timed Up and Go test, while no differences were observed in the prevalence of falls. In a sample of 1871 Spanish older adults enrolled in the Seniors-ENRICA cohort [29], greater consumption of low-fat dairy products, and low-fat milk in particular, was associated with lower frailty risk over 3.5 years of follow-up, while no significant results were observed for whole milk, yogurt, cheese and low-fat yogurt. In contrast to these studies, our findings did not show any association with frailty prevalence or risk at the 10-year follow-up. Interestingly, our results are similar to a recent analysis of the InCHIANTI study whose main objective was to evaluate the associations between adherence to a Mediterranean-type diet (MeDi) and the frailty index at baseline and at the 10-year follow-up [46]. In a sub-analysis, the authors investigated the effect of individual components of the MeDi on frailty and observed that DP intake was not significantly associated with frailty in either the cross-sectional or the prospective analyses.

Several possible explanations could justify the absence of associations between DP and frailty in our sample. First, the FFQ used to collect dietary data assessed the frequency of consumption only, while information about quantities, which would have been informative, was only captured by a single 24-h dietary recall (limiting its usefulness). Therefore, even at higher consumption frequencies, intakes might have been below what is recommended, which could have affected our results. In fact, in a recent analysis of the 3C-study participants describing their DP intake at baseline, it was observed that participants with the highest frequency of total DP per day consumed less than the recommended intakes [31]. These results were in line with a previous national report in which 64% of participants aged 55 to 79 years reported consumption below recommendations [47]. Second, in the 3C-study, total DP consumption and its sub-types have previously been shown to be associated with different eating patterns [31]. Although we adjusted for diet quality in our analyses, we cannot exclude the possibility of some residual confounding that may have led to an absence of significance. This is noteworthy as it was observed that higher total DP consumption was associated with a higher consumption of biscuits, sweets and cooked vegetables, and higher frequency of milk consumption was significantly associated with higher intakes of biscuits and sweets, a dietary pattern described as "biscuits and snacking" in the 3C-study. Moreover, it was observed that the highest frequency consumers of fresh DP had a low total energy intake. Finally, the highest frequency of cheese consumption was associated with a high consumption of cereals and grains, sweets, charcuterie, meat, poultry, and alcohol [31]. These results showed that the higher consumption of total DP or sub-types was associated with less-than-optimal diets, rich in sugar and saturated fatty acids, part of a western-type diet [48] and potential risk factors for frailty [14,49,50].
For instance, in a cross-sectional NHANES study including 4062 participants ≥50 y of age, a higher percentage of saturated fatty acid intake was associated with higher frailty prevalence [50]. Therefore, we speculated that the null association between highest DP consumption and frailty observed here might reflect possible positive effects of some favorable nutrients on frailty (i.e., higher protein and energy intake) being attenuated by the possible negative effects of saturated fatty acid intake and the overall diet quality, although we controlled for components of the diet in the analyses. Third, no information was available about the quality of DP, whether they were natural or sweetened, or fermented or not. In fact, flavored milk, whole yogurt and fermented milk, dairy desserts and sweetened cheeses are all sources of added sugars, which were shown to be associated with an increased risk of frailty in an analysis of the Seniors-ENRICA cohort [49]. The highest tertile of added sugars consumption was associated with a higher frailty risk (i.e., multiplied by 2.3) compared to the lowest tertile. Finally, unlike the analyses from the Seniors-ENRICA cohort study [29], we were not able to differentiate between types of DP consumed based on their fat content. Nevertheless, the 24-h dietary recall administered at baseline of the 3C-Bordeaux study showed that only 7% of the participants had whole-fat milk and, among those, only 10% reported regular consumption of whole-fat milk, while up to 25% of the sample consumed whole fresh DP and 19% consumed flavored fresh DP or yogurt with fruits (unpublished data). This implies that factors other than the fat content of DP might play a role in the association between DP and frailty. Altogether, we speculated that the observed null association between highest DP consumption (whatever the sub-type) and frailty might be the result of interactions between the different concentrations of beneficial and harmful ingredients, leading to an unbalanced quality of DP and of related dietary patterns.

The present study has some methodological limitations. First, as previously stated, we had no detailed information about portion sizes, and this could have affected our results as national recommendations emphasize the quantity consumed rather than the frequency. Moreover, a high frequency of consumption does not necessarily mean reaching the recommended levels, as older adults might have frequent but smaller intakes. Therefore, the inability to evaluate portion sizes might have obscured any potential association of DP with frailty status. We also did not assess DP intake from mixed dishes, and this might have led to underestimation of the DP consumption frequency. This is an important issue to consider in future studies as milk is a recurrent constituent of several French recipes. Another limitation is that we did not adjust for important micronutrients related to DP and associated with frailty, namely vitamin D [51,52]. Furthermore, recall bias cannot be excluded as it can lead to under- or overestimation of DP intakes despite meticulous data collection. Regarding the assessment of frailty, we complemented slowness and handgrip strength with the Rosow-Breslau scale and the chair stand test, respectively, to minimize the loss of participants due to missing data. Indeed, the Rosow-Breslau scale has been shown to be strongly associated with walking [38] and the chair stand test was shown to be a good proxy for handgrip strength [39,53].
Furthermore, we were not able to check frailty incidence over 10 years at different waves of follow-up because the frailty phenotype could not be calculated at each time interval. Nevertheless, this limitation was mitigated by using the FRAIL scale, which identified a lower number of frail participants but provided similar results on the DP-frailty associations in both the cross-sectional and prospective analyses. Despite these similarities, we speculate that the imbalance between frail and non-frail groups might have led to underpowered comparisons, hindering the observation of real differences, if any existed. In addition, a selection bias cannot be dismissed, since participants not included (cross-sectional sample) were older, had lower educational levels and cognitive performance, and had more frequent depressive symptoms, multimorbidity, polypharmacy, and worse diet scores than included participants (data not shown). Finally, although we adjusted for several major covariates, some residual confounding factors cannot be dismissed. We also acknowledge that the dietary data were collected in 2000, which might affect the relevance of our results. Nevertheless, the French RDA in force at the time of data collection (2000-2001) still applies today, and it has been previously reported that intakes of major food groups appeared to be relatively stable during follow-up in 3C Bordeaux [54].

Despite these limitations, the current study has several strengths. First, we focused our analyses on a large sample of French elderly consumers, known to exhibit distinctive DP consumption, notably cheese [31], within a population-based setting while adjusting for major confounders (note that less than 0.1% of 3C-Bordeaux participants were consumers of food supplements at baseline, which precluded using these data as a confounder). Second, survival analyses were performed to check for any competing risk with death (data not shown). We observed that DP exposures were not significantly associated with mortality, reducing the survival-related selection bias often faced in prospective studies involving older adults. Moreover, we confirmed our main results using a different scale to assess frailty and when keeping participants who exhibited dependency in both the cross-sectional and prospective samples, which further reduced selection bias (frailty being considered a pre-dependency stage and a risk factor for disability [35,55]).

In conclusion, we did not observe any association between DP consumption, whatever the sub-type, and frailty prevalence or incidence among this sample of French older adults. Studies on this topic are scarce, and future studies are still needed that take into consideration the identified limitations, such as the potential benefit/risk ratio of DP nutrient contents. In the meantime, and beyond frailty, older adults are encouraged to follow French nutritional recommendations for DP consumption (2 to 3 times/d), as their benefits for the general well-being of older adults, notably in preventing osteoporosis and malnutrition, are well established, and recent large-scale studies have also suggested a protective effect against chronic diseases and mortality.
Understanding IGF-II Action through Insights into Receptor Binding and Activation

The insulin-like growth factor (IGF) system regulates metabolic and mitogenic signaling through an intricate network of related receptors and hormones. IGF-II is one of several hormones within this system that primarily regulates mitogenic functions and is especially important during fetal growth and development. IGF-II is also found to be overexpressed in several cancer types, promoting growth and survival. It is also unique in the IGF system as it acts through both IGF-1R and insulin receptor isoform A (IR-A). Despite this, IGF-II is the least investigated ligand of the IGF system. This review will explore recent developments in IGF-II research, including a structure of IGF-II bound to IGF-1R determined using cryo-electron microscopy (cryoEM). Comparisons are made with the structures of insulin and IGF-I bound to their cognate receptors. Finally discussed are outstanding questions in the mechanism of action of IGF-II, with the goal of developing antagonists of IGF action in cancer.

Introduction

The insulin-like growth factor (IGF) system controls metabolic and mitogenic responses in mammalian cells and importantly regulates embryonic growth and development as well as adult growth [1]. The IGF system is regulated by three structurally similar ligands, IGF-I, IGF-II and insulin (Figure 1). These ligands act via one or more of the three related receptor tyrosine kinases: the two splice variants of the insulin receptor (IR-A and IR-B) and the type 1 insulin-like growth factor receptor (IGF-1R). IR-B signaling is responsible for the classic IR metabolic activities. IGF-II is unique in that it can activate both IGF-1R and IR-A to promote cell growth and survival. However, of the three ligands, the molecular mechanisms underlying IGF-II action are the least understood. For this reason, this review will focus on IGF-II. There is some evidence to suggest that IGF-II, IGF-I and insulin can promote shared and unique signaling outcomes through IGF-1R and IR [2,3]. However, IGF-II specific actions are generally attributed to tissue specific expression. This review will highlight new discoveries regarding IGF-II, including a cryo-electron microscopy (cryoEM) structure of IGF-II bound to IGF-1R that has provided vital information on the structure and function of IGF-II.

IGF-II plays important roles in fetal growth and development, when it is most abundant [4,5]. Notably, IGF-II fetal plasma concentrations are severalfold higher than those of IGF-I [6]. Knockout of Igf2 reduces birth weight to approximately 60% of normal [7]. IGF-II serum concentrations in many mammalian species decline rapidly after birth [8-10]. Interestingly, in adult mice, IGF-II serum levels are barely detectable.

At the tissue level, IGF-II promotes cell growth and survival. It regulates bone growth by promoting proper timing of chondrocyte maturation and perichondrial cell differentiation and survival [15]. Overexpression of Igf2 in smooth muscle and pancreatic beta cells results in the development of cardiovascular defects and type 2 diabetes [16,17]. Conversely, knockout of placental Igf2 leads to reduced placental growth and fetal growth restriction [18]. IGF-II is most abundant in the fetal and adult brain, primarily produced by the choroid plexus but also the leptomeninges and endothelial cells [19-23].
IGF-II has been identified in cerebrospinal fluid and has been found to promote neurogenesis in the subventricular and subgranular zones of the adult brain [24-26]. Several investigations have also shown that IGF-II promotes stem cell self-renewal through activation of IR-A. For example, IGF-II:IR-A signaling supports neural stem cell maintenance and the expansion of neural progenitor cells [27]. This role in stem cell renewal extends to other tissues, as shown by stem cell-specific knockout of Igf2 in young adults, which also inhibits the growth of intestinal stem cells [28]. IGF-II action is highly regulated by its interaction with soluble IGF binding proteins, including the IGF-II specific IGFBP-6. IGFBPs retain IGF-II in circulation and deliver it to target tissues [29]. In addition, the type 2 IGF receptor (IGF-2R, also called the cation-independent mannose-6-phosphate receptor) is responsible for the control of circulating IGF-II levels, by binding to IGF-II with high affinity and targeting it for lysosomal degradation [30,31].

IGF-II and Cancer

It is well established that abnormal function of the IGF system promotes growth and metastasis of the three most commonly diagnosed cancers: breast, prostate and colorectal [32-34]. It also promotes growth and survival of brain, thyroid and ovarian cancers, among others [11,14]. Specifically, dysregulation of IGF-II expression has been associated with cancer progression [11]. IGF-II expression is often upregulated in these cancers [33-35] and frequently results in both autocrine and paracrine effects [36]. For example, in the MDA-MB-157 breast cancer cell line, autocrine production of IGF-II stimulates cell growth through IR-A activation, while expression in stromal and epithelial tissue of breast cancer specimens acts in both autocrine and paracrine manners [37]. Loss of imprinted IGF-II expression has been documented in many forms of cancer, leading to increased levels of intratumoural IGF-II, thereby promoting cell growth and tumorigenesis [34,38,39]. Interestingly, the mechanism by which loss of imprinting occurs has recently been investigated and found to involve overexpression of miR-483-5p, an intronic miRNA within the IGF2 gene [40]. miR-483-5p increases IGF-II transcription at the fetal promoter [40].

In cancer, IGF-II can act via IGF-1R and/or IR-A, and these autocrine/paracrine signaling loops are regularly observed [41]. IGF-1R, which promotes cell growth and survival, is also commonly upregulated in cancers such as breast, colorectal and prostate cancer [35,41,42]. In contrast to IR-B, which signals through metabolic pathways, IR-A has mitogenic signaling capabilities that are important during development, when IR-A is most abundantly expressed [33]. IR-A is only expressed at very low levels in most adult cells [43]. However, in malignant cells, including breast, thyroid, colon and prostate cancer, IR is overexpressed, and IR-A is the predominant isoform [33,44]. IGF-II:IR-A signaling also supports maintenance of tumour stem and progenitor cells [45,46]. Concomitant upregulation of both IGF-II and IR-A signaling thus provides cancer cells and tumour stem cells with an additional growth and survival mechanism [11].

IGF-II Signaling

The biological processes that IGF-II promotes result from activation of signaling pathways through its binding to the extracellular region of IR-A or IGF-1R. The overall mechanisms of binding of IGF-II, IGF-I and insulin to IGF-1R and IR are conserved.
Receptor binding results in structural rearrangement of the receptor (discussed further below), causing autophosphorylation of the tyrosine kinase (TK) domains in the intracellular region of the receptor [47,48]. Extensive studies conducted by Cabail et al. [49] have determined that in the unbound state, each monomer is autoinhibited by self-interaction of the activation loop within its TK active site, thereby precluding the binding of ATP. Upon ligand binding, structural rearrangement occurs, allowing the juxtamembrane (JM) domain of one monomer to interact with the TK domain of the opposite monomer. This releases the autoinhibited state and allows for the binding of ATP and subsequent substrate phosphorylation.

How does IGF-II Bind and Activate IGF-1R and IR-A?

In order to understand how IGF-II promotes normal cell growth and survival and to develop ways to inhibit its action in cancer, a detailed knowledge of the molecular mechanisms underlying IGF-II receptor binding and activation is required. Our understanding so far has largely been derived through site-directed mutagenesis and comparative structural studies, with a recent cryoEM study revealing the structure of IGF-II bound to IGF-1R. The details of our current understanding now follow.

IGF-II Structure

IGF-II is a 67-amino-acid single-chain polypeptide with sequence and structural similarity to IGF-I (70 amino acids) and insulin (a 51-amino-acid two-chain peptide) (Figure 1). Sequence alignments of the IGFs and insulin (Figure 1a) reveal 50% sequence homology between the B- and A-domains of the IGFs and the equivalent domains of insulin [1]. Three intrachain disulfide bonds hold together the specific three-dimensional structure, which comprises three α-helices (Figure 1b). IGF-I and IGF-II each comprise four domains: B, C, A and D [55]. Insulin, in contrast, is a two-chain mature protein composed of A and B domains joined together by two inter-chain disulfide bonds and having one intra-chain disulfide bond within the A chain (Figure 1b) [56]. IGF-II contacts the receptor through two surfaces, originally defined by site-directed mutagenesis, that are named site 1 and site 2. Equivalent residues of IGF-I and insulin are involved in binding IGF-1R and IR, respectively (Table 1).

Receptor Structure, Mechanism of Binding and Activation

IR-A, IR-B and IGF-1R are similar in amino acid sequence and structure (Figure 3a). The two IR isoforms differ by the expression of exon 11, which consists of 12 amino acids that are absent in the IR-A splice variant. The receptors are disulfide-linked (αβ)2 homodimers, and the extracellular domains of each αβ monomer assemble in an anti-parallel, Λ-shaped conformation, generating two equivalent ligand binding regions. In the apo (unbound) state, the sites of membrane entry are situated far apart, thereby holding the intracellular tyrosine kinase in an inactive monomeric state (Figure 3b, left) [57,58]. Site-directed mutagenesis and structural studies have identified two binding surfaces within each binding region (site 1 and site 2) that represent high- and low-affinity binding sites, respectively. Upon ligand binding, the receptors undergo extensive structural change, whereby the FnIII stalks come close together, permitting dimerization of the intracellular region to release the autoinhibition of the TK domains (Figure 3b, right). Notably, such a conformation was predicted by Kavran et al. [59] to be essential for IGF-1R activation.
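The 50% B- and A-domain homology quoted above boils down to a percent-identity calculation over aligned residues. The short sketch below illustrates that calculation; the two aligned strings are hypothetical placeholders, not the real IGF-II and insulin sequences, which should be taken from a curated source (e.g., UniProt) and properly aligned before any real comparison.

```python
# A toy percent-identity calculation of the kind underlying the ~50%
# B-/A-domain homology figure quoted above. The "aligned sequences" below are
# hypothetical placeholders; substitute real, properly aligned IGF-II and
# insulin domains to reproduce the published comparison.
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent of identical residues between two gap-aligned sequences of
    equal length ('-' marks a gap and never counts as a match)."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    matches = sum(a == b != "-" for a, b in zip(aligned_a, aligned_b))
    return 100.0 * matches / len(aligned_a)

# hypothetical placeholder alignment (one-letter amino acid codes)
igf_domain     = "GPETLCGAELVD-ALQFVCGD"
insulin_domain = "FVNQHLCGSHLVEALYLVCGE"
print(f"identity: {percent_identity(igf_domain, insulin_domain):.0f}%")
```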
Molecular detail of receptor binding in the extracellular domain has been derived from a series of crystallographic and cryoEM studies of IGF-1R and IR in the holo and soluble ectodomain forms. The α-chain C-terminal (αCT) helices of each monomer lie on the L1 surface of the opposing monomer to form site 1 [57,58]. The αCT shifts to accommodate the ligand, which makes contact via its site 1 residues [60,61]. As defined by Weis et al. [62], site 1 comprises the contacts made between the ligand and the receptor L1 and αCT domains [62-64]. In the case of IGF-II and IGF-I binding IGF-1R, as well as insulin binding IR, the residues identified in site-directed mutagenesis studies correspond to those involved in this site 1 interaction (Table 1). Several additional residues were revealed in these structures to contact the L1 and αCT and can now be defined as site 1 residues (Table 1).

Recently, a structure of the IGF-II:IGF-1R complex was determined using cryoEM to an average maximum resolution of 3.2 Å (Figure 4b) [63]. The site 1 ligand binding interaction is similar to the previous insulin:IR and IGF-I:IGF-1R structures [64,69,70]. The IGF-II molecule contacts the L1, L2, αCT', and FnIII-1' domains within the head region of the receptor (Figure 4c) [63]. The L1-CR + (αCT') module folds to the top of the receptor, permitting sparse interactions between IGF-II and the membrane-distal loops of FnIII-1', facilitated by an outward rotation of domain L2 from its location in the apo ectodomain. The αCT' helix on the L1 domain surface threads through the IGF-II C-domain loop (residues 33-40). The C-terminal segment of the IGF-II B-domain is displaced from its position in the core of the unbound ligand and engages with the receptor to make the site 1 interaction. The B-domain of IGF-II is stabilized by an interaction between IGF-II residue Arg30 and the hydroxyl group of IGF-1R residue Tyr28, and possibly a salt bridge between IGF-II residue Arg38 and IGF-1R residue Glu305. The ligand forms a 'clip' on the extended αCT helix in the active conformation, stabilizing a tight interaction between L1-CR-L2 and αCT', with only sparse interactions between the ligand and FnIII-1' (Figure 4c) [63].

The IGF-II:IGF-1R complex structure [63] was determined using a similar leucine-zippered receptor (IGF-1RZip) to that of the insulin receptor used in the Weis et al. study [62]. The general topology of the IGF-1RZip:IGF-II and IRZip:insulin structures also reflects that of the recently reported holoIGF-1R:IGF-I structure [64], providing further evidence that this is the common activated conformation. The asymmetry observed in the activated structure is necessary for negative co-operativity, a hallmark of both IGF-1R and IR ligand binding, summarized in a 'harmonic oscillator model' by Kiselyov et al. [71], whereby binding of a second ligand (to the unoccupied receptor binding pocket) accelerates the dissociation of the first bound ligand.

The major difference in the structures of IGF-1R ectodomain-bound IGF-II and IGF-I occurs in the respective growth factor C-domains. In the receptor complex, the IGF-II C-domain residues 33-36 are disordered, as are the adjacent receptor CR domain residues 258-265, suggesting that the C-domain is too short to form stable interactions with the receptor in this region (Figure 5a) [63]. By contrast, the C-domain of IGF-I in holoIGF-1R:IGF-I is relatively well ordered, with IGF-I residue Tyr31 in its distal loop engaging receptor residues Pro5 and Pro256 (Figure 5a) [64].
Although the resolution of the structure is low at IGF-I residues Arg36 and Arg37, they appear to contact the IGF-1R L2 domain. With no equivalent to Tyr31, the IGF-II C-domain instead appears to be stabilized by self-interactions (a salt bridge with IGF-II residue Glu45 near the N terminus of the first helix of the IGF-II A-domain, and a polar interaction with IGF-II residue Ser39). An as yet unexplained observation is the limited correlation between the site-directed mutagenesis data for IGF-II site 2 residues (Table 1) and their involvement in binding in the IGF-II:IGF-1R complex structure (Figure 5b). Of the residues defining site 2 by site-directed mutagenesis, only Glu12 appears to contact the receptor FnIII-1' in this activated conformation (Figure 5b). In addition, IGF-II B-domain residues Glu6, Thr7, Cys9 and A-domain residues Cys47 and Phe48 are seen to contact the FnIII-1' in the IGF-II:IGF-1R complex structure, thereby completing the definition of site 2 (Table 1). A similar conundrum was revealed by the IGF-I:IGF-1R and insulin:IR complex structures and their corresponding site-directed mutagenesis data (Table 1).

For both IGF-I:IGF-1R and insulin:IR complexes, additional structures have been described that have led to a proposed transient interaction of the ligand with a different site on the receptor. This may represent the first site of contact for the ligand or an intermediate site to facilitate conformational change of the ligand and receptor (schematically represented in Figure 3b, middle panel). In the case of insulin, cryoEM structures of insulin-saturated IR constructs [69,70] identified potential transient binding sites on the FnIII-1' spanning residues Tyr477-488 and 552-554, involving all insulin site 2 residues (Table 1). Such a site has not been reported for IGF:IGF-1R complexes. For IGF-I:IGF-1R, the first ligand-bound ectodomain structure was determined by X-ray crystallography by Xu et al. [58]. This structure was determined by ligand soaking in apo crystals, resulting in an induced fit of IGF-I to the L1-αCT' binding site. In this structure, the receptor remained in the "apo/legs apart" conformation without the major J-shaped rearrangement. Additional FnIII-2' contacts (residues 788-792) were observed that involved essentially all IGF-I site 2 residues identified by site-directed mutagenesis except Glu9 and Asp12. It is possible that this interaction represents an IGF-1R transient binding site and suggests a major difference in the activation mechanism between the two receptors. Whether these transient interactions also occur for IGF-II on IGF-1R and IGF-II on IR-A remains to be determined.

In summary, whilst IGF-II binds and activates IGF-1R through a similar mechanism to IGF-I, there are some significant differences that likely explain their different binding affinities. Notably, the C-domain interactions are quite different: IGF-II barely makes receptor contact through its C-domain, whereas the IGF-I C-domain contributes to binding affinity through several contacts. How this influences ligand-specific signaling outcomes is still not understood. Importantly, no structure of IGF-II bound to IR-A has been reported.

Conclusions and Implications of Structural Information for Developing Treatments for Disease

IGF-II plays a fundamental role in mammalian growth and fetal development. It is an important regulator of bone growth and promotes cellular growth and survival.
While IGF-II is the least investigated ligand of the IGF system, the recently determined structure of IGF-II bound to IGF-1R has certainly advanced our understanding of the mechanism of IGF-II binding and activation. This structural information has confirmed that upon IGF-1R engagement, the receptor undergoes major structural rearrangement, from an open Λ-shaped conformation to a J-shaped structure in which the legs of the receptor are brought into contact, forming the active signaling conformation. Comparison of IGF-I and IGF-II bound to IGF-1R confirmed that the C-domain of IGF-I contacts the receptor, whereas IGF-II lacks an equivalent contact. While the site 1 contacts of IGF-II are in accordance with mutagenesis data, only one site 2 residue (Glu12) is seen to contact the receptor, as observed for IGF-I (Glu9) and for insulin bound to IR (HisB10). The remaining residues identified by mutagenesis as contacting the receptor may be involved in transient interactions with the receptors. The same transient interaction is expected with IGF-II binding; however, this is yet to be observed.

A detailed understanding of how IGF-II engages with its receptors and confers downstream signaling activation is essential in developing drug therapies that target IGF action in cancer. The relatively minor role of IGF-II in adult cell function means that blocking this pathway as a cancer therapy may have little effect on healthy adult cells whilst slowing cancer cell growth. Currently, most approaches target IGF action by directly blocking binding to IGF-1R: IGF-1R antibodies inhibit ligand binding and stimulate receptor internalisation [34]. Such inhibitors have been shown to reduce growth of IGF-II-dependent cancers. However, increases in IGF-II:IR-A signaling can give rise to resistance to treatment [72,73], highlighting the need for inhibitors of IGF-II acting via both IGF-1R and IR-A and the need for structural data of IGF-II bound to IR-A. Such studies will further inform how IGF-II is uniquely capable of binding and activating both IR-A and IGF-1R with high affinity and will suggest strategies to design inhibitors or allosteric regulators for the treatment of IGF-1R/IR-A regulated disease.
A restoration ecology perspective on the treatment of inflammatory bowel disease

Abstract

The human gut can be considered an ecosystem comprised of a community of microbes and nonliving components such as food metabolites and food additives. Chronic diseases are increasingly associated with disruption of this ecosystem. The science of restoration ecology was developed to restore degraded ecosystems, but its principles have not been applied widely to gut medicine, including the treatment of inflammatory bowel disease (IBD). One principle of ecological restoration is that 'passive' restoration, which involves removing an ecosystem disturbance, should occur before attempting additional 'active' interventions. We discuss evidence that poor diet is a principal source of disturbance in IBD and therefore requires greater attention in research and clinical care. Another restoration principle is that higher biodiversity may improve ecosystem behavior, but this idea has not been tested for its possible importance in donor stool during fecal microbiota transplants.

Lay summary: In patients with chronic disease the gut microbiome behaves like a disturbed ecosystem. Principles borrowed from the science of restoration ecology identify a need to better understand the influence of diet on treatment of inflammatory bowel disease and the importance of donor diversity in fecal microbiota transplants.

Box 1. Terms and abbreviations used in this article
Active restoration - Interventions taken to restore a degraded ecosystem that go beyond mere removal of the disturbance(s) leading to degradation. For example, active restoration in a natural ecosystem may involve reseeding native plants; active restoration of a gut ecosystem may involve adding microbes via probiotics or FMT.
CDED - Crohn's disease exclusion diet; developed for passive restoration of IBD [8].
EEN - Exclusive enteral nutrition; replaces whole foods exclusively with elemental liquid nutrients.
FMT - Fecal microbiota transplant; an active restoration measure that moves microbes from a donor's gut to a diseased recipient.
IBD - Inflammatory bowel disease; includes Crohn's disease and ulcerative colitis.
Passive restoration - Interventions taken to restore an ecosystem that remove the original disturbance. For example, passive restoration in a natural ecosystem may involve fencing cattle or removing an anthropogenic dam. Passive restoration in the gut may involve improved diet.
PEN - Partial enteral nutrition; replaces whole foods partially with elemental liquid nutrients.

Active efforts to repair the gut ecosystem increasingly include interventions such as probiotics and fecal microbiota transplants (FMTs; Box 1) [3,4]. The science of restoration ecology manipulates the health of natural ecosystems, but there exists little communication between restoration ecology and medicine, even though shared principles may illuminate both disciplines [2]. Inflammatory bowel disease (IBD) presents a complex problem in medicine requiring a multifactorial approach to improve patient health. Here we discuss how principles of restoration ecology generate multiple testable, but generally undertested, hypotheses that address the role of diet and gut diversity in the treatment of IBD.

A basic principle of ecological restoration is that an ecosystem cannot be repaired until the underlying disturbance causing degradation has been removed. Removal of disturbance is known as 'passive restoration', and can be as simple as fencing cattle away from a stream where they have trampled the bank, denuded vegetation, and caused the stream to act erosively against itself [2]. If passive restoration proves insufficient for complete recovery, then 'active restoration' is implemented by, for instance, planting woody vegetation [5]. Just as it would be difficult to revegetate a streambank still disturbed by livestock, it may prove difficult to restore beneficial gut microbes using active interventions such as probiotics and FMT if underlying sources of gut disturbance go untreated [2]. This line of reasoning elevates the importance of identifying environmental disturbances that cause IBD. What might they be? Genetic factors explain only 19-26% of the hereditary variance of IBD [summarized in 4], which leaves ample potential for environmental influences. Levine et al. [6] reviewed the following lines of evidence identifying diet as a key underlying disturbance in IBD. First, epidemiological studies associate increased risk of IBD with red meat, fatty food, processed food and desserts, and decreased risk with a diet high in fiber. Second, many of the same foods that are associated with IBD in human epidemiological studies also promote IBD symptoms in animal models.
Third, exclusive enteral nutrition (EEN), which replaces whole foods with elemental liquid nutrients, leads to clinical remission in a high proportion of Crohn's disease patients (40-80%), but partial enteral nutrition (PEN), which consists of enteral nutrition plus a regular diet of whole foods, generally does not [7,8], a difference widely attributed to continued intake of a regular diet [6-9]. Fourth, when the whole foods component of PEN consists of a Crohn's disease exclusion diet (CDED), patients show clinical remission similar to subjects consuming 100% EEN [9]. The CDED excludes foods that are associated with microbiome alteration, increased intestinal permeability, impairment of innate immunity and degradation of the gut mucous layer and epithelial barrier, such as dairy, wheat, processed food, sauces, emulsifiers, canned food, packaged snacks, soda, juice, sweetened beverages, candy and baked sweets [6,9]. Fifth, patients on 100% CDED exhibit remission similar to patients on 100% EEN [9]. Such findings have led to a model of IBD in which gut disturbance caused by poor diet is followed by microbiota disruption, then inflammation [10].

Evidence that diet is a fundamental disturbance in IBD, combined with the primacy of passive restoration in ecosystem repair, generates the hypothesis that active approaches to treating IBD such as FMT, probiotics, prebiotics, and pharmaceuticals will be more effective if diet is controlled [2]. This hypothesis is poorly tested. Few if any studies have examined interactions between specific diet regimens and other IBD interventions. Possibly because of this lack of study, the medical literature does not prioritize diet in treatment recommendations. Despite acknowledging evidence for a fundamental role of diet in IBD [11], the American College of Gastroenterology (ACG) clinical guideline for the management of Crohn's disease in adults [12] recommends that diet may be considered as an adjunct to other therapies, not that it should be prioritized in conjunction with other therapies or better tested before other therapies are utilized. Moreover, the ACG guideline recommends dietary manipulation only in patients with low-risk, but not moderate-to-high-risk, disease [12]. It further minimizes diet by stating that its benefits are not 'durable' because symptoms recur upon resumption of an unrestricted diet, which is akin to stating that the benefits of diet in treating high blood pressure are not 'durable' because symptoms will recur if diet lapses.
Strictly speaking, benefits of a restricted diet would lack durability only if symptoms were to recur while remaining on the diet, but long-term studies of the effects of diet on IBD, or of long-term interactions between diet and other treatments, are largely nonexistent [6,11]. Recent literature reviews found no evidence that probiotics induce or maintain remission of IBD [3], whereas FMT holds some promise [4]. Although both reviews called for further research, neither they nor the studies they referenced acknowledged the possible limitations of implementing FMT or probiotics without first addressing underlying dietary disturbance. Another review and meta-analysis by Asto et al. [13] found that probiotics containing Bifidobacterium improve symptoms of ulcerative colitis, but the majority of studies it addressed administered probiotics in conjunction with pharmaceutical therapies (e.g., mesalazine, hydrocortisone), making it difficult to know the efficacy of probiotics alone. Asto et al. [13] also found that few reliable studies of prebiotics or synbiotics (probiotics plus prebiotics) exist, and called for further work in this area.

From a restoration ecology perspective, future work on prebiotics or synbiotics should account for the fact that the mechanism(s) of diet-based disturbance in the etiology of IBD are incompletely understood. If diet-based disturbance is largely caused by an absence of food substrates that support healthy microbiome function, then prebiotics alone may ameliorate that disturbance and provide effective treatment of IBD. If, however, diet-based disturbances arise not only from an absence of beneficial nutrients but also from compounds that promote dysbiosis, then prebiotics, by failing to remove all disturbance, will be less effective. Assessment of initial conditions is another principle of restoration ecology that may inform research in prebiotics and synbiotics. Restoration ecologists assess the initial conditions of a disturbed system in order to better identify the steps needed to restore it [14]. If a diseased gut contains healthy microbes, then they may primarily lack food substrates needed to perform their beneficial functions, and prebiotics alone may improve IBD. If, on the other hand, the microbe community has been severely compromised, then it may be necessary to administer probiotics in conjunction with prebiotics. To our knowledge, no study has compared the efficacy of prebiotics versus synbiotics in conjunction with variation in the quality of the gut microbe community.

Long-term studies of diet and IBD are lacking in part because of the difficulty of adhering to restrictive diets. To test for dietary effects on IBD and translate findings into clinical practice, patient eating behavior must be addressed. Many IBD patients think diet plays a role in their disease [11] and are responsive to dietary recommendations, but believe that their doctors underemphasize diet, a perception practitioners do not share [15]. Better physician-patient communication about diet would be beneficial because alignment among restoration stakeholders fosters project success [2,16] and physician behavior influences the likelihood that patients will adhere to medical treatment [17]. To successfully enlist patients as stakeholders in their gut restoration, it may be necessary to consider psychological factors that promote unhealthy eating [18,19].
To better support physician-patient communication, foods whose presence or absence causes IBD need to be better identified [6], with awareness that results may vary among individuals because of genetic differences or the inherent variability of gut ecosystems [2]. In the same way that stream restoration may require collaboration among experts in fisheries, botany and hydrology, studying and restoring gut ecosystems may require collaboration among physicians, dieticians and psychologists. Ecological models generate additional testable hypotheses about the role of diet in IBD. Ecologists study relationships between ecosystem structure (i.e. the identity and diversity of species present) and their effects on how natural ecosystems function (e.g. biomass production, decomposition rates and nutrient flows). Mounting evidence suggests that specific microbe taxa in the gut influence health functions such as immunity, obesity, psychology and digestion [2,20,21]. Numerous different microbe species likely provide redundant support for each of these health functions [2], and therefore even individuals with a depleted microbiome may retain enough species to remain healthy. Mathematical models indicate that FMT success is compromised when depleted but seemingly healthy individuals are chosen as FMT donors [2], a finding we call the donor diversity hypothesis. To our knowledge, the donor diversity hypothesis has not been rigorously tested. FMT recipients often are examined for stool microbiota diversity, but donor stool tends to be screened only for pathogenic risk factors [22]. A recent literature review written to promote more uniform methodology and reporting of FMT did not mention donor diversity [22]. We are aware of only two studies that measured donor stool diversity in FMT. One found that fecal samples pooled from multiple donors were more diverse than samples from single donors [23]; however, only pooled donor samples were used to treat patients, and therefore their efficacy compared with single-donor samples is unknown. Another small experiment involving 13 patients tested the donor diversity hypothesis using retrospective evidence. Donors who provided transplants to responders had higher microbiota species richness than donors who provided transplants to nonresponders [24]. However, responders also appeared to have had higher baseline species richness than nonresponders, which obfuscates any effect of donor diversity. We developed an interactive online version of the mathematical model underlying the donor diversity hypothesis that allows for manipulation of parameters such as FMT donor diversity and the probability that a microbe species establishes in an FMT recipient [25]. The interactive model reveals that large reductions in FMT success caused by using a depleted donor can be drastically improved under some circumstances with small increases in the probability that microbe species transferred during FMT establish in the recipient [25]. It is likely that a healthy diet in the recipient would promote microbe establishment given the relationship between diet and microbe diversity [2], and thus a therapeutic recipient diet could mitigate reductions in FMT success caused by poor donor diversity. Studies testing the donor diversity hypothesis should control for recipient diet to better understand its interaction with donor diversity and to minimize confounding influences on FMT success.
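Because the donor diversity model is described only verbally here, a minimal toy simulation may help make the hypothesis concrete. This is our own illustrative sketch, not the published model of reference [25]; the pool size, number of health functions, redundancy level and the fmt_success helper are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FUNCTIONS = 10    # health functions needing microbial support (assumed)
POOL = 200          # species in a healthy regional pool (assumed)
REDUNDANCY = 0.05   # chance a given species supports a given function (assumed)

# Species-by-function support matrix: redundancy means several different
# species can cover the same health function.
support = rng.random((POOL, N_FUNCTIONS)) < REDUNDANCY

def fmt_success(donor_richness, p_establish, n_trials=2000):
    """Fraction of simulated FMTs in which every health function ends up
    covered by at least one established donor species."""
    hits = 0
    for _ in range(n_trials):
        donor = rng.choice(POOL, size=donor_richness, replace=False)
        established = donor[rng.random(donor_richness) < p_establish]
        if support[established].any(axis=0).all():
            hits += 1
    return hits / n_trials

for richness in (30, 120):        # depleted vs. diverse donor
    for p_est in (0.3, 0.5):      # establishment probability
        print(richness, p_est, fmt_success(richness, p_est))
```

Under these assumptions, success collapses for a depleted donor, and raising the establishment probability (as a therapeutic recipient diet might) partially compensates, mirroring the qualitative behavior reported for the interactive model.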
Higher diversity does not correlate with better health in some body sites, such as the female reproductive tract [26], and therefore the ideas raised here do not apply universally. With respect to the gut, however, insights gained from ecological restoration can (i) motivate studies that test whether passive restoration of gut health via diet improves patient outcomes from FMT, probiotics and pharmaceuticals; (ii) support research to identify the dietary factors that contribute to IBD and their possible variation among individuals; (iii) promote alignment of physicians and patients as partnering stakeholders in gut health; and (iv) justify randomized controlled tests of the donor diversity hypothesis. Scientists studying both natural and gut ecosystems have claimed that ecology is harder than rocket science [2]. Principles applied to the successful restoration of natural ecosystems merit attention for their possible contribution to the understanding and treatment of IBD.
A double stellar generation in the Globular Cluster NGC 6656 (M 22). Two stellar groups with different iron and s-process element abundance AIMS. In this paper we present the chemical abundance analysis from high resolution UVES spectra of seventeen bright giant stars of the Globular Cluster M 22. RESULTS. We obtained an average iron abundance of [Fe/H] = −1.76 ± 0.02 (internal errors only) and an α enhancement of 0.36 ± 0.04 (internal errors only). Na and O, and Al and O follow the well known anti-correlation found in many other GCs. We identified two groups of stars with significantly different abundances of the s-process elements Y, Zr and Ba. The relative numbers of the two group members are very similar to the ratio of the stars in the two SGBs of M 22 recently found by Piotto (2009). Y and Ba abundances do not correlate with Na, O and Al. The s-element rich stars are also richer in iron and have higher Ca abundances. The results from high resolution spectra have been further confirmed by lower resolution GIRAFFE spectra of fourteen additional M 22 stars. GIRAFFE spectra show also that the Eu (a pure r-process element) abundance is not related to the iron content. We discuss the chemical abundance pattern of M 22 stars in the context of the multiple stellar populations in GC scenario. Introduction Globular Clusters (GCs) are generally chemically homogeneous in their Fe-peak elements, while they show star-to-star abundance variations in light elements like C, N, O, Na, Mg, Al among others. In some cases, these chemical inhomogeneities result in well defined anti-correlations. For example, all GCs for which Na and O abundances have been measured show a well defined NaO anti-correlation (see Carretta et al. 2006, 2008 for an update), often associated with an anti-correlation between Mg and Al contents. The origin of these variations has not yet been well understood, since both a primordial and an evolutionary explanation, or a combination of both, have been proposed (see Gratton et al. 2004 for a recent review). Interestingly enough, abundance anomalies have been found also among stars in the lower part of the red giant branch (RGB) or even in the main sequence (MS). E.g., Cannon et al. (1998) found a CN bi-modality in MS and sub giant branch (SGB) stars in 47 Tuc, and the NaO anti-correlation was observed at the level of the MS Turn Off (TO) and SGB in M 13 (Cohen & Melendez 2005), NGC 6397 and NGC 6752 (Carretta et al. 2005; Gratton et al. 2001), and NGC 6838 (Ramírez & Cohen 2002). By themselves, these results suggest the possibility of a primordial origin for the abundance inhomogeneities. More recent results, made possible by a significant improvement of the photometric precision on HST images and indicating the presence of multimodal sequences in the color-magnitude diagrams (CMDs) of some GCs (Bedin et al. 2004, Piotto et al. 2007), further confirm that at least in some GCs there is more than one generation of stars formed from material chemically contaminated by previous generations. Among the clusters with multiple stellar populations, ω Centauri is the most complex and interesting case. This object is the only one for which variations in iron-peak elements have been certainly identified (Freeman & Rodgers 1975; Norris et al. 1996; Suntzeff & Kraft 1996).
The Fe multimodal distribution is at least in part responsible for the multiple RGB (Lee et al. 1999, Pancino et al. 2000) of ω Cen. Also the MS of this GC splits into three sequences, as shown by Bedin et al. (2004). Among the two principal MSs, the bluest one is more metal rich than the redder one (Piotto et al. 2005). So far, the only way we have to explain the photometric and spectroscopic properties of the MS of ω Cen is to assume that the bluest MS is also strongly He-enhanced. Recently, Villanova et al. (2007) showed that also the SGB splits into at least four branches, with a large age difference (larger than 1 Gyr) among the different populations. NGC 2808 is the second (in time) cluster in which a MS split into three branches was found (Piotto et al. 2007). Also in this case, the MS multimodality was associated with different helium values and with the observed multimodal distribution of the stars along the NaO anti-correlation, where three groups of stars with different O (and Na) content were found by Carretta et al. (2006). In NGC 1851, Yong & Grundahl (2008) found abundance variations in various elements by studying a sample of 8 RGB stars. Sodium and oxygen follow the NaO anti-correlation. There is also some evidence for the presence of two groups of stars with different s-process element content, as well as of two groups of stars with different CN-strength, which are possibly related to the two sequences photometrically identified by Milone et al. (2008) along the SGB. The split SGB of NGC 1851 can be explained as due to the presence of two stellar populations. The stars in the fainter SGB could either be part of a population about 1 Gyr older than the bright SGB one, or could indeed be slightly younger than the bright SGB ones, but strongly enriched in total C+N+O content (Cassisi et al. 2008). Another recent piece of evidence for a primordial origin of abundance variations related to the presence of different populations of stars comes from Marino et al. (2008). By studying a large sample of RGB stars in the GC M 4, they found two distinct groups of stars with a different sodium content, which also display a remarkable difference in the strength of the CN-band. These two spectroscopic groups were found to populate two different regions along the RGB. They also noted that the RGB spread is present from the base of the RGB to the RGB-tip, suggesting that the spread must be related to the presence of two distinct stellar generations. At the basis of the present investigation, there is a very recent result by our group, who identified a bimodal distribution of the stars in the SGB of the GC NGC 6656 (Piotto 2009; see Fig. 18 in the present paper). Located at a distance from the Sun of ∼3.2 kpc (Harris 1996), NGC 6656 (M 22) is a particularly interesting GC, because a large number of photometric and spectroscopic studies suggested a complex metallicity spread, similar to, albeit significantly smaller than, that found in ω Centauri. In particular, it has been often suggested, though never convincingly confirmed, that M 22 may also have a spread in the content of iron peak elements. The first evidence for a spread in metallicity comes from the significant spread along the RGB (Hesser et al. 1977; Peterson & Cudworth 1994) observed both in (B − V) and in Strömgren colors. However, it is still uncertain whether this spread can be attributed to a metallicity spread or to reddening variations.
Due to its location on the sky, close to the Galactic plane and toward the Galactic Bulge [(b, ℓ) ≃ (10°, 7°)], M 22 is affected by high and spatially varying interstellar absorption, with a reddening in the interval 0.3 < E(B − V) < 0.5. This differential reddening creates a degeneracy in measuring metallicity when the atmospheric parameters of the stars are derived from their colors. Spectroscopic studies are divided between those which conclude that no significant metallicity variation is present in M 22 (Cohen 1981, based on 3 stars; Gratton 1982, 4 stars) and studies claiming a spread in iron, with −1.9 < [Fe/H] < −1.4 (Pilachowsky et al. 1984, 6 stars; Lehnert et al. 1991, 4 stars). Particularly interesting are the findings on CN-band strengths. Norris & Freeman (1983) showed that CN variations in M 22 were correlated with Ca H and K line variations, similar to those in ω Cen. By studying a sample of 4 stars, Lehnert et al. (1991) found Ca and Fe variations that also correlated with variations in CH and CN band strength. However, Brown & Wallerstein (1992) found no Ca abundance differences between CN-strong and CN-weak stars, though they observed differences in [Fe/H] correlating with the CN-strength. More recently, Kayser et al. (2008) found some indications of a CN-CH anti-correlation in SGB stars, possibly diluted by the large uncertainties introduced by differential reddening. In the present study we analyze high resolution UVES spectra of a sample of seventeen RGB stars in M 22 in order to study the chemical abundances and possible relations with the recently found SGB split. In order to increase the statistical significance and reinforce our findings, we also added the results from a sample of fourteen RGB stars with medium resolution, high S/N GIRAFFE spectra. In Section 2 we provide an overview of the observations and of the data analysis, and in Section 3 we describe the procedure used to derive the chemical abundances. Our results on the chemical composition of M 22 are presented in Section 4, and a discussion of them is provided in Section 5. In Section 6 we look for possible connections between our spectroscopic results and the two stellar populations photometrically observed by Piotto (2009). A comparison between the results of this paper and those of Marino et al. (2008) on the GC M 4 is provided in Section 7. In Section 8 we present a brief discussion of the results obtained from the GIRAFFE spectra. Section 9 summarizes the most relevant properties of the two stellar populations of M 22. Observations and membership analysis Our data set consists of spectra of seventeen RGB stars retrieved from the ESO archive. The observations were obtained using UVES (Dekker et al. 2000) and FLAMES@UVES (Pasquini et al. 2002). The spectra cover the wavelength range 4800-6800 Å, have a resolution R ≃ 45000, and have a typical S/N ∼ 100-120. Data were reduced using the UVES pipelines (Ballester et al. 2000), including bias subtraction, flat-field correction, wavelength calibration, sky subtraction, and spectral rectification. The membership of the analyzed stars was established from the radial velocities obtained using the IRAF@FXCOR task, which cross-correlates the object spectrum with a template.
As template, we used a synthetic spectrum obtained through the spectral synthesis code SPECTRUM (see http://www.phys.appstate.edu/spectrum/spectrum.html for more details), using a Kurucz model atmosphere with roughly the mean atmospheric parameters of our stars: T_eff = 4500 K, log(g) = 1.3, v_t = 1.6 km/s, [Fe/H] = −1.70. At the end, each radial velocity was corrected to the heliocentric system. We obtained a mean radial velocity of −146 ± 2 km/s from all the selected spectra, which agrees well with the values in the literature (Peterson & Cudworth 1994). Within 2σ, where σ is our measured velocity dispersion of 10 km/s, all our stars are members. The list of the analyzed stars, their coordinates, radial velocities, and magnitudes is reported in Table 1. Figure 1 shows the location of the target stars in the CMD of M 22. We also analysed a sample of stars observed with the GIRAFFE HR09, HR13, and HR15 set-ups at a resolution of R ∼ 20000-25000. These spectra were reduced by using the pipeline developed by the Geneva Observatory (Blecha et al. 2000). More details on these data and their analysis are provided in Section 8. Abundance analysis Abundances for all elements, with the exception of oxygen, were measured from an equivalent width analysis using the Local Thermodynamical Equilibrium (LTE) program MOOG (freely distributed by C. Sneden, University of Texas at Austin). The atmospheric parameters, i.e. temperature, gravity, and microturbulence, were determined from Fe lines by removing trends in the Excitation Potential (EP) and Equivalent Widths (EW) vs. abundance, respectively, and by satisfying the ionization equilibrium. At odds with the other elements, we measured the O content by comparing observed spectra with synthetic ones, because of the blending of the target O line at 6300 Å with other spectral features. More details on the line-list, atmospheric parameters and abundance measurements can be found in Marino et al. (2008). In Table 2 we report the reference solar chemical abundances used in this paper. The stellar parameters obtained for the analyzed M 22 stars are listed in Table 3. It is important to remark here that the method employed for the measurement of the atmospheric parameters is based on the spectra, and hence our temperatures are not color dependent. This is an important advantage in analyzing GCs, such as M 22, affected by high differential reddening. In Fig. 1 we marked with red symbols the location of our seventeen UVES target stars on the B vs. (B − I) CMD. Photometry has been obtained with the Wide Field Imager (WFI) camera at the ESO/MPI 2.2 m telescope by Monaco (2004), and stars with the highest photometric quality were carefully selected. Magnitudes have been corrected for sky concentration (i.e., the increase of the background level in frames near the center due to light reflected back from the detector to the optics) by using the best available solution (Bellini et al. 2009). Since the photometry is affected by spatially variable interstellar reddening, we have corrected the CMD for this effect (by using the method described in Sarajedini et al. 2007). To separate probable asymptotic giant branch (AGB) stars from RGB stars, we have drawn by hand the black dashed line of Fig. 1. Stars that, on the basis of their position in the CMD, are possible AGB stars will be marked from here on with triangles, while RGB stars with circles. The star #200083 is not plotted because the B magnitude is not available for this star. The stars observed with GIRAFFE are shown by blue crosses.
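The excitation/ionization balance procedure described above is iterative. The following schematic sketch shows the general shape of such a loop; the abund_model callback stands in for a real LTE code such as MOOG (a hypothetical interface), and the step sizes and sign conventions are illustrative only, not calibrated values.

```python
import numpy as np

def fit_parameters(lines, abund_model, teff=4500.0, logg=1.3, vt=1.6,
                   max_iter=100, tol=1e-3):
    """Schematic excitation/ionization balance loop.

    lines: dict of arrays 'ep' (eV), 'ew' (mA), 'ion' (1 = FeI, 2 = FeII).
    abund_model(teff, logg, vt): returns per-line Fe abundances; stands in
    for a full LTE spectral analysis code (assumed interface).
    """
    fe1 = lines['ion'] == 1
    fe2 = lines['ion'] == 2
    for _ in range(max_iter):
        a = abund_model(teff, logg, vt)
        ep_slope = np.polyfit(lines['ep'][fe1], a[fe1], 1)[0]
        ew_slope = np.polyfit(lines['ew'][fe1], a[fe1], 1)[0]
        ion_diff = a[fe1].mean() - a[fe2].mean()
        if max(abs(ep_slope), abs(ew_slope), abs(ion_diff)) < tol:
            break
        teff -= 1000.0 * ep_slope  # flatten abundance vs. excitation potential
        vt += 5.0 * ew_slope       # flatten abundance vs. equivalent width
        logg += 2.0 * ion_diff     # force FeI and FeII means to agree
    return teff, logg, vt
```

Because every quantity in the loop comes from the spectra themselves, the resulting temperatures are independent of photometric colors, which is exactly the property exploited above for a differentially reddened cluster.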
Internal errors associated with chemical abundances The main goal of this paper is to study the intrinsic variation of the chemical abundances of M 22 stars. The star-to-star differences in the measured chemical abundances are a consequence of both measurement errors and intrinsic variations in chemical composition. In this section, our final goal is to disentangle internal errors, due to measurement uncertainties, from real variations in the chemical content of the stars. To this aim, we will compare the observed dispersion in the chemical abundances, σ_obs, listed in Table 4, with that produced by internal errors, σ_tot. Since we are interested in the study of star-to-star intrinsic abundance variations, we do not treat possible external sources of error that do not affect relative abundances. Two sources of error mainly contribute to σ_tot: the uncertainties in the EWs, and the uncertainties in the atmospheric parameters. In order to derive the typical error in the EWs, we consider two stars (#200101 and #71) with similar atmospheric parameters and roughly the same iron abundance. In this way, any difference in the EWs of the iron spectral lines can be attributed to measurement errors. The dispersion of the distribution of the differences between the EWs of the iron lines of the two selected stars, that is 2.3 mÅ, has been taken as our estimate of the typical error in the EW measurement. The corresponding error in the chemical abundances has been calculated by varying the EWs of a star (#200101) at intermediate temperature, representative of our sample, by 2.3 mÅ. The variations in the obtained abundances for each chemical species, listed in Table 5 (column 7), have been taken as our best estimate of the internal errors introduced by uncertainties in the EWs. We used the same procedure described in Marino et al. (2008) in order to estimate the uncertainties associated with the atmospheric parameters, and the corresponding errors in the chemical abundances. From our analysis, the obtained uncertainties in the atmospheric parameters are: ∆T_eff = ±50 K, ∆log(g) = ±0.14, and ∆v_t = ±0.13 km/s. These internal errors in the atmospheric parameters translate into the errors in chemical abundances listed in Table 5 (columns 2, 3 and 4). We also investigated the influence of a variation in the total metallicity ([A/H]) of the model atmosphere on the derived abundances. By varying the metallicity of the model by 0.10 dex, that is, the observed iron dispersion, the element abundances change by the amounts listed in Col. 5 of Table 5. A variation in the metallicity of the model atmosphere mainly changes the ionization equilibrium, and hence the values of log(g), since we used the ionization equilibrium between FeI and FeII to derive gravities. We calculated that by increasing [A/H] by 0.10 dex, log(g) decreases by ∼0.06, while temperature and microturbulence do not change significantly. Since we are interested in the search for possible small star-to-star variations of the iron abundances, we measured the effect on the [Fe/H] abundances due to this change in gravity in a model atmosphere with increased metallicity, and verified that it does not affect the derived iron abundances by more than 0.01 dex. By increasing the total metallicity by 0.2 dex, the FeII abundances change by ∼0.06 dex, and we have to decrease log(g) by ∼0.12 to re-establish the ionization equilibrium between FeI and FeII.
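The error terms derived above are combined in quadrature below; a small numerical sketch of that bookkeeping follows. The numbers used here are placeholders for illustration, not the paper's actual Table 5 entries.

```python
import numpy as np

# Placeholder sensitivities (dex) of one element's abundance to the varied
# quantities; the paper's real values are tabulated in Table 5.
sens_atm = {'dTeff': 0.05, 'dlogg': 0.02, 'dvt': 0.03, 'dAH': 0.01}
sigma_atm = np.sqrt(sum(v ** 2 for v in sens_atm.values()))
sigma_ew = 0.04                            # term from the 2.3 mA EW uncertainty
sigma_tot = np.hypot(sigma_atm, sigma_ew)  # quadratic sum of the two terms

# Uncertainty attached to a mean abundance: rms scatter / sqrt(N_stars - 1)
abund = np.array([0.28, 0.35, 0.31, 0.24, 0.40])  # placeholder per-star values
sigma_obs = abund.std()
err_mean = sigma_obs / np.sqrt(len(abund) - 1)
print(round(float(sigma_tot), 3), round(float(err_mean), 3))
```

An intrinsic abundance spread is then suggested whenever σ_obs clearly exceeds σ_tot, which is the comparison made for each element in what follows.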
Also for this larger metallicity change, the effect on the iron abundances is smaller than 0.01 dex. Column 9 of Table 5 reports the quadratic sum (σ_tot) of the errors coming from the EWs (σ_EW) and from the atmospheric parameter (σ_atm) uncertainties. Column 8 gives the observed dispersion (σ_obs). Since the oxygen abundance was calculated from the same spectral line and the spectra have similar S/N, we assume as an estimate of the error related to [O/Fe] the σ_tot calculated in Marino et al. (2008) for M 4 red giants. This error, for M 4 stars, was calculated as the dispersion in O of the O-rich stars, i.e. the Na-poor group, assumed to be homogeneous in oxygen content. Iron-peak and α elements The wide spectral range of the UVES data allows us to obtain chemical abundances for fifteen chemical species. Table 4 gives the mean abundance for each element (Col. 2), the rms scatter of the abundances, σ_obs (Col. 3), and the number of stars (N_stars) used to calculate the mean (Col. 4). In Table 4, to each average abundance we associated an uncertainty which is the rms scatter (σ_obs) divided by √(N_stars − 1), although some of the distributions are clearly not Gaussian. A plot of our measured abundances is shown in Fig. 2, where, for each box, the central horizontal line is the mean value for each element, and the upper and lower lines contain 68.27% of the distribution around the mean. The points represent individual measurements. We measured the chemical abundances for five α elements: O, Mg, Si, Ca and Ti. The corresponding abundances are listed in Table 4. Here we consider only Mg, Si, Ca, and Ti; the results for oxygen will be discussed in Section 4.2. These four α elements are all overabundant with respect to solar values, with an average enhancement of [α/Fe] = +0.36 ± 0.04. For calcium we obtained a mean value of [Ca/Fe] = +0.31 ± 0.02, similar to that found in other GCs. Interestingly enough, our stars show quite a large dispersion (σ_obs = 0.07, see Table 4) in the Ca abundance. This spread will be discussed in more detail in Section 5. NaO anti-correlation As displayed in Fig. 3, sodium and oxygen show the typical NaO anti-correlation found in RGB stars in all GCs where Na and O have been measured so far (see Carretta et al. 2006). The [Na/Fe] values range from ∼−0.25 to ∼0.7 dex, with a dispersion σ_obs = 0.30, and the [O/Fe] abundances cover the interval from ∼−0.10 to ∼+0.5 dex, with a dispersion σ_obs = 0.20. As will be discussed in Section 6, Piotto (2009) have shown that the SGB of M 22 is split into two separate branches, indicating the presence of two stellar populations. In Fig. 4 we compare the NaO anti-correlation of M 22 with that of NGC 2808 (Carretta et al. 2006). Table 5. Sensitivity of the derived UVES abundances to the atmospheric parameters and EWs. We report the error σ_atm due to the uncertainties in the atmospheric parameters (∆T_eff, ∆log(g), ∆v_t, and ∆[A/H]), the error due to EW measurements (σ_EW), the quadratic sum of the two (σ_tot), and the observed dispersion (σ_obs) for each element. On the contrary, the MS of M 22 is narrow, and a spread or split, if any, must be smaller than 0.02 magnitudes in m_F435W − m_F814W color (see Piotto 2009). Figure 4 shows that NGC 2808 stars cover almost the same range of Na abundances as M 22, but span a range of O abundance at least two times larger. In M 4, the presence of multiple populations is inferred from the bimodal distribution of the Na abundance (Marino et al. 2008). In addition, stars from the two groups with different Na content populate two distinct RGB sequences in the U vs. (U − B) plane.
As in M 22, in NGC 1851 (Milone et al. 2008) and NGC 6388 (Piotto 2008; Moretti et al. 2008) the presence of multiple populations is inferred from a split of the SGB. Unfortunately, for both these clusters, the available chemical measurements from high resolution spectroscopy are limited to seven RGB stars for NGC 6388 (Carretta et al. 2007) and eight RGB stars for NGC 1851 (Yong & Grundahl 2008). In the case of NGC 6388, the stars are located in a portion of the NaO plane that is not populated by any M 22 star of our sample, the NGC 6388 stars being systematically O-poorer. On the contrary, the range of the NaO anti-correlation in NGC 1851 matches quite well that of M 22. Aluminum and Magnesium The abundances of aluminum and magnesium have been determined from the Al lines at 6696 Å and 6698 Å, from the Mg doublet at 6318-6319 Å, and from the Mg line at 5711 Å. Figure 5 shows the [Mg/Fe] ratio as a function of [Al/Fe]. There is no clear MgAl anti-correlation, despite the presence of a well defined NaO anti-correlation and of a clear AlNa correlation, as shown in Fig. 6. Assuming that the Na enhancement comes from proton capture processes at the expense of Ne, we expect to observe also a MgAl anti-correlation, since Al forms at the expense of Mg. This means we expect a decrease of the Mg abundance with increasing Al content. We do not observe such an effect but, given our uncertainties, it could be too small to be detected. The lack of such a clear correlation was observed also in M 4 by Marino et al. (2008). However, they found a small difference in Mg content among stars characterized by large differences in Na content, in line with the scenario proposed by Ivans et al. (1999), who predicted that a drop of only 0.05 dex in Mg is needed to account for the increase in the abundance of Al (see their Section 4.2.2). s-process elements We have measured abundances for three s-process elements: yttrium, zirconium, and barium. All of them span a wide range of abundance values, despite the small estimated internal errors (σ_tot ≤ 0.1, see Table 5). Figure 7 suggests that we can isolate an s-process element rich group and an s-process element poor one, by selecting stars with [Y/Fe] greater and smaller than 0, respectively. Stars rich in s-process elements are represented by red filled symbols, while s-poor stars by black empty ones. Red and black crosses with error bars indicate the average abundances for the stars of each group. In Table 6 we have listed the mean abundances of all the elements studied in this paper, calculated separately for each of these two groups. There are a few remarkable differences in the average abundances of the two groups (see, as an example, Fig. 8). The bi-modality in s-process elements in M 22 resembles the case of NGC 1851. In NGC 1851, Yong & Grundahl (2008) noted that the abundances of the s-process elements Zr and La appear to cluster around two distinct values. They suggested that the two corresponding groups of stars should be related to the two stellar populations photometrically observed by Milone et al. (2008) along the SGB. In NGC 1851 the s-element abundance appears also to correlate with the Na, Al and O abundances. Note that barium, yttrium, and zirconium can be considered as signatures of the s-process that occurs in intermediate mass AGB stars (Busso et al. 2001), whose winds could have polluted the primordial material from which the second generation of stars in M 22 and NGC 1851 formed.
Also the fact that the s-element rich stars seem to be rich in Na and Al as well could be due to the pollution, by intermediate mass AGB stars, of the material from which a second generation of stars formed, though the lack of a clear correlation is more difficult to interpret. We note also the results of Villanova et al. (2007), who observed that, in ω Centauri, SGB metal-poor stars have a Ba content lower than the intermediate-metallicity ones by about 0.2 dex. The relation between the s-process element abundances and the iron content in M 22 is the argument of the following section. The spread in Fe of M 22 As discussed in Section 1, the existence of an intrinsic Fe spread in M 22 has long been debated in the literature (see Ivans et al. 2004 for a review), since photometric and spectroscopic studies have yielded conflicting results. Some spectroscopic studies found no significant variations (Gratton 1982, Ivans et al. 2004), whereas others seem to find a variation in [Fe/H] up to ∼0.5 dex (Pilachowsky et al. 1984). Photometric studies gave similarly controversial results. Undoubtedly, the RGB of M 22 has a large spread in color. This spread is observed both in BVI and in Strömgren photometry, but has been interpreted either in terms of metallicity variations or differential reddening, or a combination of the two. We want to emphasize again that the [Fe/H] measurements presented in this paper are not based on photometric data, and therefore do not suffer from the effects of differential reddening. For this reason, they constitute an appropriate tool to clarify the issue of [Fe/H] variations in M 22. From Table 5 we see that the observed dispersion σ_obs in iron is comparable with the estimated internal error σ_tot, i.e. the observed star-to-star metallicity scatter could be interpreted as due to measurement errors only. This fact demonstrates how difficult it is to establish the statistical significance of any intrinsic spread in [Fe/H]. In this paper, we can tackle the problem of the iron dispersion in M 22 in a different way. The observed dispersion in iron content alone could be a poor indicator of any intrinsic metallicity dispersion, first of all because of the abundance measurement errors. In addition, we note that if anomalies in [Fe/H] affect only a small fraction of M 22 stars, their effect on the iron dispersion of the whole sample of stars could turn out to be negligible. A visual inspection of some spectra reinforces the suggestion that there may be star-to-star iron variations. As an example, in Fig. 9 we show spectra of two pairs of stars with very similar stellar atmospheric parameters. The two spectral lines in Fig. 9 are iron lines. The upper panel shows the spectra of stars #200068 and #200083, for which we have measured iron abundances of [Fe/H] = −1.84 and [Fe/H] = −1.63, respectively (see Table 3). The line depths differ significantly and, because of the similarity of the atmospheric parameters, must indicate an intrinsic iron difference. As a comparison, in the bottom panel we plot the same spectral region for two stars (#200101 and #71) with almost the same iron content. Our results on the iron dispersion cannot be conclusive by themselves. However, it is rather instructive to look for possible correlations between [Fe/H] and the other chemical abundances (mainly the ones showing a spread or bimodal distribution, such as the s-process elements).
Figure 10 shows that the Fe abundance is not significantly correlated with the Al, Na, and O abundances, elements which, on the other hand, are involved in a well defined anti-correlation, as discussed in Section 4.2. However, when we compare the iron abundances with the s-process element abundances, we find a strong correlation, with s-process element rich stars having systematically higher [Fe/H], as shown in the three panels of Fig. 11. We want to emphasize that the significance of the Fe variation can be appreciated only when we consider the average iron content of the two groups characterized by a different s-element content. This is further demonstrated by a simple test we ran, whose results are summarized in Fig. 12. We simulated 100,000 stars with the iron content and iron abundance dispersion of the observed stars, i.e. 41% of the sample are s-process element rich stars (red Gaussian) and 59% are s-process element poor ones (blue Gaussian), with mean [Fe/H] of −1.68 and −1.82 dex, respectively. The dispersion in iron content of each group was taken equal to 0.09 dex, that is, the error we estimated for the iron abundance (see Table 5). The resulting metallicity dispersion for the total sample is 0.11 dex, which is close to, and only marginally greater than, the dispersion expected from the measurement errors. As shown in Fig. 13, the iron abundances are also well correlated with the calcium abundances, and the Ca content correlates with the s-process element abundance. Again, a similar behavior is present in ω Centauri, where a spread in Ca has been known since Freeman & Rodgers (1975). More recently, Villanova et al. (2007) showed that the SGB population with [Fe/H] ∼ −1.2 has a mean [Ca/Fe] larger by ∼0.1 dex than that found in the more metal-poor population. In M 22, evidence for a calcium spread was already noted, and Norris & Freeman (1983) showed a correlation between CN variations and Ca, similar to those in ω Centauri. Lehnert et al. (1991), by studying a sample of 4 stars, found both Ca and Fe variations correlated with the CN-band strengths. Our abundance measurements show that both calcium and iron correlate with the s-process elements (Fig. 14 and Fig. 11). Moreover, as shown in Fig. 15, we found that calcium, like iron, is not clearly correlated with Na, although the stars with a higher Ca content seem to be slightly Na rich. In Fig. 7, Fig. 11, Fig. 13, and Fig. 14, six out of seven probable AGB stars, represented by triangles, belong to the s-poor group. We want to emphasize here that our selection of the probable AGB stars is based only on a visual inspection of the stars on the CMD, without considering photometric errors. In any case, assuming that all these stars are indeed real AGB members, our s-poor sample would include both RGB and AGB stars. If in our s-poor sample there is really a group of AGB stars, they could be the evolution of low mass stars of the primordial population, not able to activate the third dredge-up and enrich their surfaces with s-process elements. Hence they should trace the primordial composition of the cluster. Possibly, due to their larger iron content, stars with a higher s-element content have systematically redder colors with respect to s-process element poor stars, and have a low probability of being pushed by photometric errors into the AGB region. The s-rich stars also show slightly higher Mg and Si abundances (see also Table 6). This seems to suggest that core collapse SNe (CCSNe) are the best candidates to produce the iron excess of the second generation of stars.
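A minimal sketch of the two-Gaussian test summarized in Fig. 12 above, using the group fractions, group means and the 0.09 dex internal error quoted in the text (the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
s_rich = rng.random(n) < 0.41               # 41% s-process rich stars
feh = np.where(s_rich,
               rng.normal(-1.68, 0.09, n),  # s-rich group
               rng.normal(-1.82, 0.09, n))  # s-poor group
print(round(float(feh.std()), 2))           # ~0.11 dex total dispersion
```

The total dispersion of the mixture (~0.11 dex) is indeed only marginally above the 0.09 dex measurement error, which is why the bimodality is visible in the group averages rather than in the overall scatter.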
Indeed, if the iron excess were produced by Type Ia SNe, we would expect a lower [Mg/Fe] and [Si/Fe] ratio for the group of stars enriched in s-process elements with respect to the first generation of (s-process element poor) stars. In fact, SNeIa events selectively enrich the medium in iron (although modest quantities of Si are also produced). On the contrary, CCSNe produce, along with iron, also Mg and Si in larger quantities than SNeIa. With a present day mass of ∼5 × 10^5 M_⊙ (Pryor & Meylan 1993), NGC 6656 is one of the most massive GCs of the Milky Way. The s-element rich population, with a mass of ∼2 × 10^5 M_⊙ and an iron abundance [Fe/H] = −1.68 ± 0.02 dex, includes ∼1.5 M_⊙ of fresh iron if we assume Z_Fe,⊙ = 0.0013. On average each CCSN produces ∼0.07 M_⊙ of iron (Hamuy 2003); therefore about twenty SNe are needed to produce the fresh iron of the second stellar population. In this scenario, the fainter SGB (and TO) of this second generation of stars is attributed to their different chemical mixture, rather than to an age difference. Figure 18, from Piotto (2009), shows that the SGB of M 22 is split into two branches. In the previous section, we have shown that in M 22 there are two groups of stars with different s-process element contents and with two different average iron contents. In this section, we want to investigate whether the different iron content can explain the split of the SGB. Can the iron spread account for the SGB split? In Fig. 19 we compare two isochrones from Pietrinferni et al. (2004) in the ACS/WFC plane m_F606W vs. m_F606W − m_F814W of Fig. 18. Both of them have an age of 14 Gyr, but different metallicities. The black line corresponds to the mean metallicity of the group of s-process element poor stars ([Fe/H] = −1.82) and the dashed red line is an isochrone with the average [Fe/H] = −1.68 of the s-process element rich stars. The different metallicity mainly reflects in a split of the RGB and of the SGB. In the inset, we show a zoom of the SGB region. At m_F606W − m_F814W = 0.85 the difference in magnitude m_F606W between the two isochrones is δm_F606W = 0.10, about 0.07 magnitudes smaller than that observed by Piotto (2009). We conclude that the observed difference in [Fe/H] can contribute to producing the split of the SGB observed by Piotto (2009), but it is not sufficient. On the other hand, the entire shape of the turn off-SGB-RGB region is difficult to reproduce with standard, alpha-enhanced isochrones. Likely, this is due to the fact that the origin of the split may be much more complicated and involve also the NaCNO abundances, as suggested by Cassisi et al. (2008) for the case of NGC 1851. Comparison with M 4 In this section, we present a comparative analysis between the chemical abundances obtained in this paper for M 22 and the abundance measurements on M 4 RGB stars by Marino et al. (2008), to better outline the complexity of the multiple population appearance in different clusters. Such a comparison is rather instructive, first of all because M 4, similarly to M 22, is affected by high differential reddening (Lyons et al. 1995, Ivans et al. 1999), and moreover because the spectra were analysed employing the same procedure. We note also that the UVES spectra of M 4 were collected with the same set-up and have almost the same S/N ratio as the M 22 spectra analysed in this work. Marino et al. (2008) have shown that M 4 hosts two distinct stellar populations, characterized by different Na content and different CN-band strength.
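Before turning to the M 4 comparison in detail, the iron budget quoted above can be verified with a short calculation, interpreting the "fresh" iron as the excess of the s-rich group over the s-poor mean metallicity (an assumption on our part, which does reproduce the quoted ~1.5 M_⊙):

```python
M_pop, Z_fe_sun = 2e5, 0.0013            # Msun; adopted solar Fe mass fraction
feh_rich, feh_poor = -1.68, -1.82        # mean [Fe/H] of the two groups
fresh_fe = M_pop * Z_fe_sun * (10 ** feh_rich - 10 ** feh_poor)
n_sne = fresh_fe / 0.07                  # ~0.07 Msun of Fe per CCSN (Hamuy 2003)
print(round(fresh_fe, 2), round(n_sne))  # ~1.5 Msun of fresh iron, ~21 SNe
```

This reproduces both quoted numbers: roughly 1.5 M_⊙ of fresh iron, requiring on the order of twenty core-collapse supernovae.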
In M 4, these two groups of stars also define two sequences along the RGB, but there is no SGB split. At variance with M 22, M 4 does not show any evidence of an intrinsic Fe spread: Marino et al. (2008) set an upper limit of 0.05 dex (1σ) on the [Fe/H] spread in M 4. In the case of M 22, as discussed in Section 4.2, we identified a well defined NaO anti-correlation, but we have no evidence of a dichotomy in the Na distribution, as in M 4. Instead, we found a dichotomy in the s-process elements, and the SGB is split into two branches. As a comparison, we show in Fig. 17 the Fe abundances as a function of Ba and Ca in M 4, using the same scale as the corresponding plots for our M 22 targets (Fig. 11 and Fig. 13); the M 4 stars show no comparable trends with [Fe/H]. Note that the rms of the calcium abundance in M 4 is 0.04 dex, to be compared with the σ_obs = 0.07 (see Table 5) we found for the same element in M 22. We find an rms in the calcium abundance almost equal to the one found in M 4 when we divide our stellar sample into the s-process element rich and poor groups (σ_obs = 0.04 for both s groups). Apparently, the two stellar populations in M 4 and M 22 have different origins, or the mechanisms responsible for this dichotomy must have been acting with different intensities in the two clusters. An independent check of the results In this paper we have presented clear evidence of a spread in [Fe/H] and of the presence of a bimodal distribution in s-process element content among the M 22 stars. These results are based on high resolution UVES spectra of seventeen stars. Because of the high relevance of these results in the context of the ongoing lively debate on the multi-population phenomenon in star clusters, we further searched the ESO archive for additional spectra of M 22 stars in order to strengthen the statistical significance of the results from the UVES data. In fact, we found GIRAFFE spectra for 121 stars. We reduced all of them, but only fourteen were in the appropriate RGB location, had high enough S/N to pass all of our quality checks (see the following discussion), and were useful for the abundance measurements. In any case, we could thereby nearly double the original UVES sample of stars. Each star was observed with the HR09, HR13, and HR15 set-ups, which give a resolution of about R ∼ 20000-25000 in the 514-535, 612-640, and 660-696 nm ranges, respectively. Data were reduced using the latest version of the pipeline developed by the Geneva Observatory (Blecha et al. 2000), and were bias-subtracted, flat-field corrected, and extracted, applying a wavelength calibration obtained from ThAr lamps. Each spectrum was then normalized. Finally, for each star we obtained the radial velocity and applied a membership criterion as done for the UVES data. The spectra of each member star were then shifted to rest-frame velocity and combined. We also performed a test to verify the influence of scattered light on the spectral line shapes (which could alter the final abundances). To this aim we reduced some spectra covering the whole GIRAFFE CCD with and without scattered-light subtraction and compared the spectral features. We found that scattered light has no significant influence on the lines. For the GIRAFFE data analysis, we wanted to follow the same procedure used for UVES, i.e. obtaining the atmospheric parameters only from spectroscopy and not from photometry, which can be altered by differential reddening. This approach has the advantage of putting the abundance determinations from the two data-sets on the same abundance scale, avoiding systematic effects.
However, this choice, coupled with the selection of stars located only on the RGB, came at the cost of rejecting 103 spectra because of their low S/N (≤70-80). Indeed, a high S/N ratio is necessary to measure a sufficient number of isolated FeI/II lines in GIRAFFE spectra, which have lower resolution and cover a smaller wavelength range with respect to the UVES ones. In addition, the brightest stars were not analyzed because of their very low temperature (T_eff < 4000 K). In this T_eff regime even metal poor stars (as in the case of M 22) show very strong lines, which are blended in GIRAFFE data, not allowing a reliable EW measurement. Once a star passed our selection criteria, the atmospheric parameters (and Fe content) were obtained by the EW method, as done for UVES and using the same line-list for the spectral lines in common between the two set-ups. Because of the two different data-sets, some comparisons are needed between the results obtained from the two spectrographs. Since we have only one star (#51) in common between the two data-sets, we could not compare the results directly (apart from this star). In Tab. 7 we list the chemical abundances in common obtained for this star from the two data-sets. We note that the atmospheric parameters are in agreement within the errors calculated for UVES, and the values for the Fe and Y abundances are within the σ_tot listed in Tab. 5, while for the other elements there are larger discrepancies, probably due to the errors associated with the GIRAFFE results (which we have not considered here). Since the comparison of one star is not enough to verify the compatibility of the two sets of abundances, we compared the atmospheric parameters and the mean metallicity. Figure 20 summarizes our tests. Filled circles represent GIRAFFE results, while open squares represent the UVES ones. In the upper left panel we plot log(g) vs. T_eff, while the upper right panel shows v_t vs. log(g). Both gravity as a function of temperature and microturbulence velocity as a function of gravity follow the same general trend for the two data-sets, with similar dispersions. Further tests are shown in the two lower panels. The lower left one shows the Fe abundance vs. V magnitude (i.e. vs. the evolutionary state of the star along the RGB). No correlation appears, meaning that no systematic errors due to the different evolutionary phases are present. The lower right panel reports T_eff vs. B − V color. The line is the empirical relation by Alonso et al. (1999), obtained assuming a reddening E(B − V) = 0.34 (Harris 1996). Also in this case good agreement was found, not only for the zero-point of the relation (i.e. the absolute average reddening of the cluster), but also for its shape. An important test comes from the comparison of the mean iron content obtained from the two data-sets: from the fourteen GIRAFFE stars we obtain a mean iron abundance that agrees perfectly, within 1σ, with the UVES value. A rough estimate of the errors on the atmospheric parameters can be obtained as in Marino et al. (2008), assuming that stars with the same V magnitude (corrected for differential reddening) have the same parameters. In this way we obtain an upper estimate of the errors (the dispersion in metal content can also contribute in part to the dispersion of the stellar parameter values at a given luminosity along the RGB), namely ∆T_eff = ±65 K, ∆log(g) = ±0.20, and ∆v_t = ±0.11 km/s. We can see that these errors are a bit larger than, but still comparable with, the UVES ones.
In the GIRAFFE data, all the other elements, with the exception of Ti, were measured by spectral synthesis because of severe blends with other lines. In addition to Fe and Ti we measured O (from the forbidden line at 630 nm), Na (from the doublet at 615 nm), Y (from the doublet at 520 nm), Ba (from the line at 614 nm), Nd (from the line at 532 nm), and Eu (from the line at 665 nm). Having verified the good agreement between the atmospheric parameters and the iron content obtained from the two data-sets, we can proceed to verify whether the GIRAFFE data confirm the UVES results. For this reason, in Fig. 21 we show some of the trends discussed in the previous sections. Filled circles are GIRAFFE measurements, while open circles are UVES ones. It is clear from this comparison that the UVES results are fully confirmed. In particular, we can confirm the Y-Ba bimodality (central panel), as well as the different Fe content of the s-element rich and s-element poor groups of stars (see the leftmost and rightmost middle panels). In addition, from the GIRAFFE data we also measured Nd (a combined s and r element) and Eu (a pure r element) lines. Their abundances as a function of [Fe/H] are shown in the central and right lower panels. There is no trend for [Eu/Fe], while [Nd/Fe] clearly correlates with [Fe/H]. This is further evidence that the iron enrichment of the s-process rich group is due to core-collapse SNe. Two stellar populations in M 22 In the present paper we presented a high resolution spectroscopic analysis of a sample of seventeen RGB stars in the GC M 22 from UVES and FLAMES+UVES data. We confirm that M 22 is a metal poor GC, with a mean iron content [Fe/H] = −1.75 ± 0.02 (weighted mean between UVES and GIRAFFE) and a mean α-enhancement [α/Fe] = +0.36 ± 0.04. Sodium and oxygen follow the well known anti-correlation, while no evidence for a Mg-Al anti-correlation was found. A clear correlation was found between Na and Al. We find a strong dichotomy in the distribution of the s-process elements barium, yttrium and zirconium. Most importantly, we find that the abundance of these elements is correlated with the iron abundance. Stars enriched in s-process elements also show larger values of [Fe/H], by ∼0.14 dex. The s-process element abundance correlates with calcium, and calcium with iron. No clear correlation is present between s-process elements and sodium, or between iron and sodium, but we note that the s-element and Fe enriched stars show higher values of sodium. These stars also show an overabundance of magnesium and silicon. The correlation of the s-process element and Ca abundances with [Fe/H] is the strongest argument in favor of the presence of two groups of stars with different Fe content in M 22. All these results have been confirmed by a sample of fourteen lower resolution GIRAFFE spectra, which allowed us to nearly double the original UVES sample. According to the most recent theoretical models by Pietrinferni et al. (2004), a difference in metallicity of 0.14 dex should cause a difference of ∼0.10 mag in the F606W ACS/WFC band at the level of the SGB. Piotto (2009) have indeed found that the SGB of M 22 is separated into two distinct branches. However, the average separation in the F606W band is 0.17 magnitudes: it appears that the SGB split cannot be attributed to a difference in [Fe/H] alone. The fraction of stars on the bright SGB (bSGB) corresponds to 62% ± 5% of the total SGB population, while the faint SGB (fSGB) includes the remaining 38% ± 5% of the SGB stars (Piotto 2009).
In the stellar sample of the present paper, the fraction of Ba-strong, Y-strong, Zr-strong stars is ∼41%. It is therefore tempting to connect the s-process element poor sample to the bright SGB stars, while the faint SGB stars could be the ones with enhanced s-process elements. The correct reproduction of the two SGBs needs an accurate determination of the NaCNO abundances, as shown by Cassisi et al. (2008) for the analogous case of NGC 1851. Also He variations between the two populations can affect the SGB morphology. Indeed, we note that M 22 shares similarities with ω Centauri and NGC 1851: these clusters, where multiple stellar populations have been photometrically identified along the SGB, exhibit a large range not only in C, N, O, Na, Al, but also in s-process element abundance. M 22 is the only globular cluster, apart from ω Centauri, where some evidence of an intrinsic spread in iron has been observed. Some hints (even if very uncertain due to the low number statistics of the analyzed sample) of some iron spread in NGC 1851 were suggested by Yong & Grundahl (2008). NGC 1851, ω Centauri, and M 22 show a large variation in the Strömgren index traditionally used as a metallicity indicator. All three of these GCs have a split SGB. From our observations and from the results of Piotto (2009), it is tempting to speculate that the SGB split could be related to the presence of two groups of stars with different s-process element content and a difference, albeit small, in iron. According to this scenario, the s-process element poor stars are those populating the brighter SGB and constitute the first M 22 population. The second stellar generation should have formed after the AGB winds of this first stellar generation polluted the protocluster interstellar medium with s-process elements. This second generation may have formed from material which was also enriched by core-collapse supernova ejecta, as indicated by the higher iron, magnesium, and silicon content, and by the lack of correlation of the iron content with a pure r-process element (Eu). A detailed analysis of the C, N, O abundances of the SGB stars in M 22 is strongly needed in order to properly settle this problem.
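A quick consistency check of the comparison drawn above, between the ~41% spectroscopic s-rich fraction and the 38% ± 5% faint-SGB fraction, can be made with binomial statistics. The sample size used below (~30 giants, UVES plus GIRAFFE) is our assumption for illustration:

```python
import math

p_spec, n_spec = 0.41, 30      # s-rich fraction; ~30 analyzed giants (assumed n)
p_sgb, err_sgb = 0.38, 0.05    # faint-SGB fraction from the photometry

err_spec = math.sqrt(p_spec * (1 - p_spec) / n_spec)  # binomial standard error
diff = p_spec - p_sgb
err_diff = math.hypot(err_spec, err_sgb)
print(f"{diff:.2f} +/- {err_diff:.2f}")   # 0.03 +/- 0.10: fully consistent
```

Under this assumption, the two fractions agree well within their combined uncertainties, supporting the tentative identification of the s-rich stars with the faint SGB.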
GRPR versus PSMA: expression profiles during prostate cancer progression demonstrate the added value of GRPR-targeting theranostic approaches Introduction Central to targeted radionuclide imaging and therapy of prostate cancer (PCa) are prostate-specific membrane antigen (PSMA)-targeting radiopharmaceuticals. Gastrin-releasing peptide receptor (GRPR) targeting has been proposed as a potential additional approach for PCa theranostics. The aim of this study was to investigate to what extent and at what stage of the disease GRPR-targeting applications can complement PSMA-targeting theranostics in the management of PCa. Methods Binding of the GRPR- and PSMA-targeting radiopharmaceuticals [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617, respectively, was evaluated and compared on tissue sections of 20 benign prostatic hyperplasia (BPH), 16 primary PCa and 17 progressive castration-resistant PCa (CRPC) fresh frozen tissue specimens. Hematoxylin-eosin and alpha-methylacyl-CoA racemase stains were performed to identify regions of prostatic adenocarcinoma and potentially high-grade prostatic intraepithelial neoplasia. For a subset of primary PCa samples, RNA in situ hybridization (ISH) was used to identify target mRNA expression in defined tumor regions. Results The highest median [177Lu]Lu-NeoB binding was observed in primary PCa samples, while median and overall [177Lu]Lu-PSMA-617 binding was highest in CRPC samples. The highest [177Lu]Lu-NeoB binding was observed in 3/17 CRPC samples, of which one sample showed no [177Lu]Lu-PSMA-617 binding. RNA ISH analyses showed a trend between mRNA expression and radiopharmaceutical binding, and confirmed the distinct GRPR and PSMA expression patterns in primary PCa observed with radiopharmaceutical binding. Conclusion Our study emphasizes that GRPR-targeting approaches can contribute to improved PCa management and complement currently applied PSMA-targeting strategies in both early and late stage PCa.
Introduction Prostate cancer (PCa) is the second most common male cancer type, with 1.4 million new cases and 375,000 deaths worldwide in 2020 (1). In recent years, nuclear medicine has rapidly gained an important status in PCa management. Central to targeted radionuclide imaging and therapy of PCa are prostate-specific membrane antigen (PSMA)-targeting radiopharmaceuticals (2). PSMA is a type II transmembrane glycoprotein that is expressed in normal prostate tissue with significantly increased expression in PCa, especially in advanced stages of the disease (3-6). Following the success of various clinical studies, the radiopharmaceuticals [68Ga]Ga-PSMA-11 and [18F]F-DCFPyL for positron emission tomography (PET) and [177Lu]Lu-PSMA-617 for radionuclide treatment have recently been approved by the FDA and EMA for PCa patients (7-9). PSMA PET has been shown to detect pelvic lymph nodes and metastatic lesions with higher sensitivity and specificity compared to conventional imaging methods such as computed tomography (CT) and bone scintigraphy (10, 11). Regarding treatment with [177Lu]Lu-PSMA-617, impressive results were obtained in PSMA-positive metastatic castration-resistant PCa (mCRPC) patients who received [177Lu]Lu-PSMA-617 plus standard of care versus standard of care treatment alone, with significantly prolonged progression-free survival and overall survival of 8.7 vs. 3.4 months and 15.3 vs. 11.3 months, respectively (12). Importantly, a proportion (<10%) of prostate carcinomas has low PSMA expression, and ~30% of mCRPC patients do not respond to treatment with [177Lu]Lu-PSMA-617, when response is defined as any decrease in prostate-specific antigen (PSA) levels (13, 14). Moreover, serious side effects have been frequently reported, such as xerostomia as a consequence of unwanted but specific binding to PSMA in the salivary glands. The impact of xerostomia on patients' quality of life is the main reason for treatment discontinuation, especially when [225Ac]Ac-PSMA-617 is applied (15). All of the above underline the need for new developments with improved efficacy and safety. NeoB, formerly called NeoBOMB1, is a GRPR theranostic agent that has been extensively validated in preclinical and initial clinical studies with promising results (27, 29-31). Although studies with GRPR radiopharmaceuticals, including NeoB, have demonstrated high uptake not only in the tumor but also in the GRPR-expressing pancreas, multiple studies have shown that the pancreas is not expected to be a dose limiting organ for GRPR-mediated treatment (28, 30, 32). The relatively low estimated absorbed dose of the pancreas is most likely due to the rapid washout of the radiopharmaceutical from this organ (29). GRPR-targeting radiopharmaceuticals may therefore offer an advantage over [177Lu]Lu-PSMA-617 with respect to safety. The use of GRPR targeting may be of particular importance when radionuclide treatment is considered in earlier stages of PCa, which is currently an active area of research.
Taken together, GRPR-targeting nuclear approaches may complement PSMA targeting in the management of PCa. To date, however, few clinical studies have been initiated making a direct comparison between these two approaches. Since clinical studies are often costly, resource-intensive and time-consuming, investigations are limited to a specific patient population. We believe that preclinical studies can therefore greatly contribute to exploring the potential role of GRPR-targeting applications in the context of the currently applied PSMA targeting for detection and treatment of PCa, by studying a broad patient population using the same methodology. To this end, we evaluated and compared ex vivo binding of the GRPR- and PSMA-targeting radiopharmaceuticals [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617, respectively, to patient tissue sections obtained from benign prostatic hyperplasia (BPH), primary PCa and progressive CRPC lesions.

Human prostate specimens
This study adhered to the Code of Conduct of the Federation of Dutch Medical Scientific Societies. Fresh frozen BPH and primary PCa tissue specimens were retrieved from the Erasmus MC Tissue Bank. BPH tissues (adenomyomatous hyperplasia) from 20 patients (mean age ± standard deviation (SD): 69 ± 10 years) obtained after transurethral resection of the prostate (TURP), and primary PCa tissues from 16 patients (65 ± 7 years) obtained from radical prostatectomy, were retrieved. Seventeen CRPC fresh frozen samples were selected from the Erasmus MC Urology Department Tissue Biobank; these were obtained by TURP from progressive patients treated in hospitals in the Rotterdam region (73 ± 6 years). Patients were included as CRPC when they presented with biochemical or radiological progressive disease after surgical or medical castration. For the BPH and CRPC sample sets, 3 different fragments of one TURP per patient were included to study a larger tissue area. Clinicopathological characteristics, such as Gleason score (GS) and PSA, of all PCa patients are summarized in Tables S1 and S2.
Tissue sectioning and staining
Each specimen was cut into 10 μm thick sections and mounted on SuperFrost slides (VWR). Adjacent sections were successively used for autoradiography studies with [177Lu]Lu-NeoB (2 sections) and [177Lu]Lu-PSMA-617 (2 sections); 1 section was used for hematoxylin-eosin (H&E) staining; in the case of primary PCa, 1 section was used for alpha-methylacyl-CoA racemase (AMACR) staining; and 1 section was used for RNA in situ hybridization (ISH) analysis (only in a subset). H&E staining was performed according to a standard protocol in order to determine the presence of cancerous areas. Tumor regions were manually drawn by an experienced pathologist (GvL) and graded according to the ISUP 2014 GS (Table S1). Immunohistochemistry for AMACR was conducted by the Erasmus MC Pathology Research and Trial Service to identify regions of prostatic adenocarcinoma and potentially premalignant high-grade prostatic intraepithelial neoplasia (PIN), although the definitive cytological atypia required for the diagnosis of PIN cannot be established well on frozen sections (33). Staining was performed with an automated, validated and accredited staining system (Ventana Benchmark ULTRA, Ventana Medical Systems) using the UltraView Universal DAB Detection Kit. In brief, heat-induced antigen retrieval was performed using the Ventana CC1 solution for 8 min. The tissue samples were then incubated with a monoclonal rabbit anti-AMACR antibody (clone 13H4) for 32 min at a concentration of 1.27 μg/mL. Incubation was followed by hematoxylin II counterstain for 12 min and then a blue coloring reagent for 8 min, according to the manufacturer's instructions (Ventana). High-resolution images of the H&E- and AMACR-stained sections were acquired using a NanoZoomer digital slide scanner (Hamamatsu Photonics) and analyzed using NDP View 2 software (Hamamatsu Photonics).

Radiopharmaceuticals
The GRPR antagonist NeoB (Advanced Accelerator Applications, a Novartis company) and the PSMA inhibitor PSMA-617 were labeled with lutetium-177 (LuMark, IDB Holland) as described previously (29,34). Quenchers (ascorbic and gentisic acids) were used to prevent radiolysis (35). High-pressure liquid chromatography and instant thin-layer chromatography on silica gel were used to determine the radiochemical purity (>95%) and radiolabeling yield (>95%) of all labelings. A molar activity of 40 MBq/nmol was used for all in vitro autoradiography experiments.

In vitro autoradiography
To compare radiopharmaceutical binding, in vitro autoradiography was performed on frozen human prostate sections. Frozen sections of cell line-derived PC-3 (GRPR-positive) and patient-derived PC295 (PSMA-positive) xenograft tumors were used as positive controls (36,37). In short, tissue sections were incubated for 10 min at room temperature (RT) with washing buffer (167 mM Tris-HCl pH 7.6, 5 mM MgCl2) containing 0.25% bovine serum albumin (BSA) to prevent non-specific binding to the glass slides. Tissue sections were subsequently incubated for 1 h at RT with 100 μL of incubation buffer (washing buffer with 1% BSA) containing 1 nM [177Lu]Lu-NeoB or [177Lu]Lu-PSMA-617 (i.e., total binding). To assess binding specificity, parallel sections were co-incubated with an excess (1 μM) of unlabeled Tyr4-bombesin (Merck Life Science NV) or PSMA-I&T (Huayi Isotopes Co.
via ATT Scintomics), respectively. Following incubation, slides were washed and dried before exposure to super-resolution (<50 micron) phosphor screens (Perkin Elmer) for >24 h. Screens were read using the Cyclone (Perkin Elmer) and data were processed in OptiQuant software (Perkin Elmer). [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 binding in the tumor regions, as identified on the adjacent H&E-stained sections, was quantified and expressed as digital light units per surface area (DLU/mm²). Standards consisting of 1 μL drops of incubation buffer were quantified to determine the percentage of added activity per mm² (%AA/mm²). DLU/mm² was then converted to %AA/mm² by normalizing the data to the standards. Specific binding was defined by subtracting the nonspecific binding, as measured on the sections blocked with an excess of unlabeled ligand, from the total binding.

RNA in situ hybridization
An RNA ISH assay was performed to determine cellular GRPR and PSMA mRNA expression levels, which could be correlated to radiopharmaceutical binding. To detect GRPR and PSMA mRNA simultaneously in one sample, ISH was performed using the RNAscope 2.5 HD Duplex Reagent Kit (cat. #322430; Advanced Cell Diagnostics (ACD)) in accordance with the manufacturer's instructions for the manual chromogenic assay for fresh frozen tissue, using optimized sample preparation and pretreatment conditions. Tissues were fixed in pre-chilled 10% neutral buffered formalin (Sigma-Aldrich) for 2 h at 4°C and then rinsed twice with phosphate buffered saline (PBS; ThermoFisher Scientific). After fixation, tissues were dehydrated using a series of ethanol washes and air dried. For tissue pretreatment, sections were exposed to kit-provided hydrogen peroxide for 10 min at RT and then rinsed in PBS. Immediately after, slides were placed in a dry incubator (37°C) for 30 min before being treated with kit-provided protease IV for 30 min at RT. Hybridization of the GRPR probe Hs-GRPR (cat. #460411; ACD) and the newly designed and synthesized PSMA probe Hs-PSMA1-C2 (cat. #311251-C2; ACD) to the respective target mRNA sequences was performed by incubation in the HybEZ Oven (cat. #321720; ACD) for 2 h at 40°C. Hybridization was followed by standard signal amplification steps and fast red and green chromogenic detection. Tissues were then counterstained with Gill's Hematoxylin I (Polysciences Inc.), air dried, mounted and imaged using a NanoZoomer digital slide scanner (Hamamatsu Photonics).

From the whole slide images, regions of interest (ROIs) were selected covering 10% of the tumor area, plus 5 additional ROIs outside the tumor area to identify non-tumor cell staining. ROIs were manually drawn using QuPath software (38) by a researcher who was blinded to the study results. These ROIs were analyzed using in-house written Python-based scripts and a graphical interface (Tumor Microenvironment (TME) Analyzer, H.E.
Balcioglu, manuscript in preparation), blinded to the clinical information (Figure S1). Briefly, the bright field images were converted into pseudo-fluorescent images through image inversion, followed by manual identification of signal intensity patterns and using these patterns to assign a pixel intensity per signal (i.e., PSMA probe, GRPR probe and nucleus). To remove nonspecific signal detection, signals low in intensity were removed. Additionally, for the GRPR probe, areas larger than 500 px in size were removed to overcome the wrong assignment of brown and blue artifacts to this channel. Nuclei were detected by applying the StarDist algorithm '2D_versatile_fluo' (39), and cell regions were assigned through Voronoi segmentation up to 50 pixels from the nucleus. Probe-positive regions were detected, and clusters were defined as large, elongated regions with a size of at least 100 px² and an eccentricity of at least 0.7. Cells were then assigned to 5 bins: bin 0, no probe positivity; bin 1, probe positivity but no clusters; bin 2, 1 probe cluster; bin 3, 2 probe clusters; bin 4, at least 3 probe clusters. H-scores were calculated as follows: H-score = Σ (bins 0-4) (bin number × percentage of cells per bin).

Statistics
All statistical analyses were carried out using GraphPad Prism software, version 9 (GraphPad Software Inc.). The distribution of radiopharmaceutical binding within disease stages was depicted in violin plots. When multiple samples from the same patient were available, only one randomly selected sample was used for statistics. As the Shapiro-Wilk test indicated that the data were not normally distributed, the nonparametric Kruskal-Wallis test followed by Dunn's post-hoc test for multiple comparisons was performed to compare mean GRPR or PSMA radiopharmaceutical binding between disease stages, and between ISUP grades for primary PCa. A p value of <0.05 was considered statistically significant. Pearson's correlation coefficient was determined to measure the association between radiopharmaceutical binding and mRNA expression.

The percentage specific binding of [177Lu]Lu-PSMA-617 in 4/20 BPH samples was within or above the upper limit of the 95% CI of binding in primary PCa, indicating relatively high PSMA expression levels in these samples. In contrast, this was not true for [177Lu]Lu-NeoB binding in any of the BPH samples. The highest variation in binding of both radiopharmaceuticals was observed within the CRPC sample set, illustrating that there is a high degree of heterogeneity in GRPR and PSMA expression between patients with advanced disease. Although [177Lu]Lu-PSMA-617 binding was generally high in CRPC samples, 6/17 samples showed no or very low binding (i.e., binding below the lower limit of the 95% CI). Interestingly, one of these samples showed relatively high [177Lu]Lu-NeoB binding. Moreover, 3/17 CRPC samples showed a high level of [177Lu]Lu-NeoB binding that falls outside the 95% probability limit of binding in primary PCa.
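To make the group comparison described in the Statistics section above concrete, the following is a minimal Python sketch of a Kruskal-Wallis omnibus test followed by Dunn's post-hoc comparisons. The binding values are placeholders rather than the study's data, and the scikit-posthocs package is an assumption standing in for the Dunn's test that was actually run in GraphPad Prism.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # assumption: Dunn's test via the scikit-posthocs package

# Hypothetical specific-binding values (%AA/mm^2), one randomly selected
# sample per patient, grouped by disease stage (placeholder numbers).
bph = [0.02, 0.05, 0.03, 0.04, 0.01]
primary_pca = [0.20, 0.35, 0.15, 0.28, 0.22]
crpc = [0.01, 0.60, 0.00, 0.45, 0.30]

# Omnibus nonparametric comparison across the three stages.
h, p = stats.kruskal(bph, primary_pca, crpc)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise post-hoc comparisons with a multiple-testing correction
# (Prism's Dunn's test uses its own adjustment; Holm is used here for illustration).
if p < 0.05:
    dunn = sp.posthoc_dunn([bph, primary_pca, crpc], p_adjust="holm")
    dunn.index = dunn.columns = ["BPH", "primary PCa", "CRPC"]
    print(dunn)
```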
Intrapatient heterogeneity
Next to interpatient heterogeneity, we observed large intrapatient heterogeneity for both [177Lu]Lu-PSMA-617 and [177Lu]Lu-NeoB binding across the various disease states. This intrapatient heterogeneity was reflected in differences in signal intensity between various locations within the prostate and within the tumor region of one section (Figure 2). In order to study the heterogeneity between various locations within the prostate, fragments from different sites in the prostate or prostate tumor of the same patient were analyzed for the BPH and CRPC stages, respectively. It was observed that not all samples from the same patient showed binding, or showed the same degree of binding.

Tumor specificity
Analysis of tumor specificity was conducted in the relatively larger sections of the primary PCa samples using H&E and AMACR stains to detect prostatic adenocarcinoma cells. This analysis showed that binding of [177Lu]Lu-PSMA-617 and [177Lu]Lu-NeoB occurred in a focal pattern that corresponded with AMACR staining intensity (Figure 3). Small areas of AMACR-positively stained cells were observed within and outside the tumor regions identified by H&E-supported pathology. In the majority of cases, these AMACR-positive areas outside of the tumor region also showed relatively high radiopharmaceutical binding. In addition, high [177Lu]Lu-PSMA-617 binding was observed in some areas without tumor involvement; here, histological evaluation revealed the presence of normal epithelial cells lining glandular ducts. The high [177Lu]Lu-PSMA-617 binding was target-specific, as no binding was observed in the blocked section (Figure S2).

mRNA expression
An RNA ISH assay was conducted to evaluate the relation between mRNA expression and radiopharmaceutical binding (Figure 4, Figure S3). Although significance was not observed, probably due to the low number of samples, PSMA mRNA expression levels (expressed as H-score) showed a trend that correlated positively with radiopharmaceutical binding (n = 5; r = 0.64; p = 0.25). No strong trend was found for GRPR (n = 5; r = 0.34; p = 0.57). In line with the results of the autoradiography studies, the H-score for PSMA was significantly higher than for GRPR (mean ± SD: 72.4 ± 18.1 for PSMA vs. 10.0 ± 5.4 for GRPR; p < 0.01), indicating higher PSMA expression (Table S3). PSMA mRNA levels in non-tumor areas were also significantly higher than GRPR mRNA levels (p < 0.05) (Table S4). Moreover, the analysis showed that of all target-positive cells there was a considerable proportion of single-positive cells (range: 76.7-92.8% and 12.4-43.9% for PSMA- and GRPR-positive cells, respectively), reflecting the distinct expression patterns of PSMA and GRPR.
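As an illustration of the H-score readout used above, the sketch below computes an H-score from per-cell cluster counts following the five-bin scheme defined in the image-analysis methods (bin 0, no probe signal; bin 1, signal without clusters; bins 2-4, one, two, or at least three clusters). Names and inputs are hypothetical; the published analysis used the in-house TME Analyzer scripts.

```python
from collections import Counter

def assign_bin(has_signal: bool, n_clusters: int) -> int:
    """Map a cell to bins 0-4 as described in the image analysis."""
    if not has_signal:
        return 0                      # bin 0: no probe positivity
    if n_clusters == 0:
        return 1                      # bin 1: probe signal, no clusters
    return min(n_clusters, 3) + 1     # bins 2-4: 1, 2, or >=3 clusters

def h_score(cells):
    """cells: iterable of (has_signal, n_clusters) per segmented cell.

    Returns H-score = sum over bins of (bin number x percentage of cells
    in that bin); the possible range is 0-400.
    """
    bins = Counter(assign_bin(s, c) for s, c in cells)
    total = sum(bins.values())
    return sum(b * 100.0 * n / total for b, n in bins.items())

# Example: 100 cells, mostly negative, a few with clusters.
cells = [(False, 0)] * 70 + [(True, 0)] * 20 + [(True, 1)] * 7 + [(True, 4)] * 3
print(h_score(cells))  # 1*20 + 2*7 + 4*3 = 46.0
```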
Discussion
PSMA-targeting radiopharmaceuticals have emerged as powerful agents for PCa management. However, not all PCa patients have PSMA overexpression, and a considerable proportion of PSMA-positive patients does not respond to treatment with [177Lu]Lu-PSMA-617. Moreover, PSMA targeting comes with serious side effects as the result of unwanted but specific binding to PSMA on the salivary glands. This calls for improved PCa theranostics by, for example, using other targets. In this study, we examined whether GRPR-targeting radiopharmaceuticals can complement PSMA-targeting theranostic approaches and where to position them in the progression of PCa. We compared [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 binding in patient samples of BPH, primary PCa and CRPC using the same methodology across all stages. Whilst some clinical research has been carried out on such a comparison, in this preclinical study we were able to cover a broad range of prostate conditions; thereby, our study contributes to an increased understanding of the link between radiopharmaceutical binding and disease stage.

The data showed the highest median binding of [177Lu]Lu-NeoB in primary PCa samples, while the highest median binding of [177Lu]Lu-PSMA-617 was observed in CRPC samples. This finding confirms the work of others, in which high GRPR and PSMA expression were linked to the respective disease stages (5,18). Although, in contrast to our previous report on breast cancer (40), we did not observe a statistically significant relationship between radiopharmaceutical binding and target mRNA expression, we expect this to be due to sample size rather than the different tissue type. Even though we found the highest median binding of [177Lu]Lu-NeoB in primary PCa, the 3 samples with the highest [177Lu]Lu-NeoB binding belonged to the CRPC sample set. This may suggest that GRPR-targeting approaches may be relevant in a small proportion of patients with late stage disease.

We observed that 1/6 PSMA-negative CRPC samples showed high [177Lu]Lu-NeoB binding, indicating a potential complementary role of GRPR targeting to PSMA theranostics. Although these are low numbers, Baratto et al. (41) also underlined the complementary value of GRPR targeting, as they identified 7 additional lesions with [68Ga]Ga-RM2 (a potent GRPR-targeting agent) PET that were not visible on [68Ga]Ga-PSMA11/[18F]F-DCFPyL PET in 4/50 biochemically recurrent PCa patients. Kurth et al. (28) investigated 35 patients with metastatic CRPC who had insufficient PSMA expression or showed lower tumor accumulation after previous cycles of [177Lu]Lu-PSMA-617 treatment. They identified 6 patients with high uptake on [68Ga]Ga-RM2 PET/CT who thus qualified for [177Lu]Lu-RM2 therapy. In their study, the absorbed doses delivered to the tumor lesions by [177Lu]Lu-RM2 were found to be therapeutically relevant. Taken together, our findings support that a subset of metastatic CRPC patients might benefit from GRPR-mediated radionuclide therapy.

Although [177Lu]Lu-PSMA-617 radionuclide therapy is currently only available for patients with metastatic CRPC, further studies are ongoing to explore its use in earlier stages of PCa. In our study, we observed binding of both [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 in all but one primary PCa sample. This finding is consistent with that of Mapelli et al.
(42), who reported detection of primary PCa in 18/19 patients with both GRPR- and PSMA-targeting radiopharmaceuticals separately. A limitation for the use of PSMA radiopharmaceuticals in primary PCa is the relatively high binding of [177Lu]Lu-PSMA-617 to BPH samples and to normal tissue surrounding primary PCa with ISUP grade 2 and 3, as observed in our study. Our results indicate that PSMA-targeting applications, in contrast to GRPR, may not always distinguish cancerous tissue from benign or normal tissue, reducing the tumor-specific value. This is one of the pitfalls of PSMA PET that has also been described before (43).

Analyzing radiopharmaceutical binding in primary PCa within the ISUP grades for PCa classification revealed no significant differences between grades. However, prior preclinical studies evaluating the expression of GRPR and PSMA in PCa samples have reported higher GRPR expression for low-grade PCa specimens. Faviana et al. (44) found that the number of cells expressing GRPR, as determined by immunohistochemistry, was significantly higher in low-grade tumors, and Schollhammer et al. (45) demonstrated higher [111In]In-RM2 binding in primary PCa samples with Gleason score 6 (i.e., ISUP grade 1) using autoradiography studies. The absence of a correlation in our study may be due to the unequal distribution of samples across the 5 ISUP grade groups, in combination with the small sample size reducing the statistical power. Unlike the aforementioned studies, our primary objective was to compare expression levels across the different disease stages, and thus ISUP grade was not taken into account when samples were selected. Just as our study contrasts with other preclinical reports, clinical studies have reported contradictory results as well: Gao et al. (46) noted higher uptake in low-ISUP PCa, while Schollhammer et al. (47) saw no differences in uptake between ISUP grades for GRPR-mediated PET/CT. More research is needed to obtain a clear answer on the association between ISUP grade and GRPR expression levels in primary PCa.

For primary PCa, the complementary value of GRPR targeting may also be found in the fact that we observed binding of [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 to overlapping, but also to different, tumor areas within the tumor region of one sample. This finding of distinct GRPR and PSMA expression patterns was further supported by our RNA ISH results indicating differential mRNA expression of these targets per cell. There was a considerable proportion of single-positive cells that showed mRNA signal only for GRPR or PSMA, although GRPR positivity might be underestimated due to color overlap with the cell nucleus. This complementary role for GRPR-targeting radiopharmaceuticals based on the different expression patterns has also been suggested in previous studies (48,49). The observed intrapatient heterogeneity of GRPR and PSMA suggests that future theranostics for primary prostate tumors may benefit from an approach in which GRPR- and PSMA-targeting radiopharmaceuticals are combined. One such approach, currently being explored by various research groups, is the use of GRPR/PSMA-targeting heterodimers (50-52). The results reported so far are preliminary, and the value of such heterodimers remains to be investigated. Of note, if GRPR/PSMA heterodimers are applied for radionuclide therapy, the off-target organ toxicity should be critically evaluated, as GRPR and PSMA are physiologically expressed in different background organs, which may increase toxicity.
The generalizability of these results is subject to limitations. While autoradiography studies provide a direct measurement of radiopharmaceutical binding and allow for high-resolution visualization of binding, they do not reflect in vivo pharmacokinetics. The large difference in specific binding of [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 observed in our study could partly be attributed to this, as the difference in SUVmax in PET studies is generally much smaller. Furthermore, because of the heterogeneous CRPC sample set, correlations with clinicopathological parameters were not addressed in this study. With regard to the RNA ISH analysis, only a single primary PCa section was analyzed per patient for a limited number of patients, limiting the statistical significance of our findings. Therefore, given the general knowledge of tumor heterogeneity, our results should be interpreted with caution. Despite these limitations, our study contributes to the knowledge of GRPR and PSMA expression profiles across PCa disease stages and their use as potential targets for theranostic applications.

Conclusion
Our study demonstrates that GRPR-targeting radiopharmaceuticals may have complementary value for a theranostic approach in both early and late stages of PCa. Furthermore, we showed that relatively high PSMA binding, in contrast to GRPR binding, may be non-tumor-specific in early stage PCa. Our study contributes to a better understanding of how to position GRPR targeting in the context of PSMA-directed PCa theranostics, to ultimately advance clinical care for PCa patients.

FIGURE 2 Hematoxylin-eosin (H&E) stains (top row), binding of [177Lu]Lu-NeoB (middle row) and [177Lu]Lu-PSMA-617 (bottom row) to representative samples of benign prostatic hyperplasia (BPH), primary prostate cancer (PCa) and castration-resistant PCa. Encircled tissue sections belong to the same patient. The black marking in the H&E-stained sections indicates tumor area(s) as identified by a pathologist. The scale selected for the autoradiography sections shows optimized contrast. DLU, digital light unit.

FIGURE 3 Tumor specificity of [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 in primary prostate cancer sections of four representative patients. From left to right: hematoxylin-eosin (H&E) staining, immunohistochemical staining of alpha-methylacyl-CoA racemase (AMACR) expression, [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 binding. The black marking in the H&E-stained sections indicates the tumor area(s) as identified by a pathologist. For the AMACR-stained section, a 10x magnification of two specified areas (black/pink box numbered 1 or 2) is shown (scale bar = 250 µm). The presented autoradiography sections are displayed at an optimal scale for each tissue to show optimized contrast.
FIGURE 4 Detection of gastrin-releasing peptide receptor (GRPR) and prostate-specific membrane antigen (PSMA) mRNA expression in one representative primary prostate cancer section using RNA in situ hybridization (ISH). The hematoxylin-eosin (H&E)-stained section with black marking indicates the tumor area as identified by a pathologist. The corresponding autoradiography images for [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 binding are displayed, and the selected color scale shows the optimized contrast for the sections. 40x magnifications of the RNA ISH section are shown for a region with background [177Lu]Lu-NeoB and [177Lu]Lu-PSMA-617 binding in non-tumor tissue (I) and for regions with relatively low (II) and high (III) binding within the tumor. Each dot represents a single GRPR (blue) or PSMA (red) mRNA molecule. Nuclei were counterstained with hematoxylin (purple). DLU, digital light unit.
2023-09-02T15:14:14.263Z
2023-08-31T00:00:00.000
{ "year": 2023, "sha1": "d6709b4ee2ff19b74120e3cd655d5ef5eca0b5b3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2023.1199432/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a69a88a24b5f99b7a018a25ebce4554e6d56203", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247416779
pes2o/s2orc
v3-fos-license
Total Wrist Arthroplasty—A Systematic Review of the Outcome, and an Introduction of FreeMove—An Approach to Improve TWA

The Swanson silicone prosthesis was one of the first devices to realize total wrist arthroplasty (TWA). It has been used regularly since the early 1960s. This systematic review of the literature evaluated the status quo of TWA. The present study was conducted according to the PRISMA guidelines. A literature search was performed in the Medline, PubMed, Google Scholar, and Cochrane Library databases. The focus of the present study was on implant survivorship and related functional outcomes. Data from 2286 TWA (53 studies) were collected. Fifteen studies were included for the analysis of implant survivorship. Fifteen studies were included for the analysis of pain. Twenty-eight studies were included for the analysis of the Disabilities of the Arm, Shoulder, and Hand (DASH) score. Grip strength was tracked in 16 studies. The range of motion (RoM) was evaluated in 46 studies. For supination and pronation, 18 articles were available. Despite some methodological heterogeneity, TWA may be effective and safe in reducing pain and improving function and motion. There is still room for future improvement of the procedure.

Introduction
Total wrist arthroplasty (TWA) is still a controversial issue, but it has become a challenger to total (and sometimes also partial) wrist arthrodesis (TWAD/PWAD). Even today, TWA has not found widespread acceptance, and most surgeons prefer to recommend a TWAD to their patients [1]. For patients who present with advanced joint degeneration and a painful wrist, TWA and TWAD/PWAD are appropriate options for reconstruction [2,3]. TWA in particular has been shown to be effective in improving quality of life in patients with wrist rheumatoid and osteoarthritis (RA/OA) [4-7], in cases where conservative means have not provided adequate pain relief and other motion-preserving procedures are impossible, hopeless, or have failed [3]. Patients eligible for TWA should report chronic pain (RA/OA, or posttraumatic arthritis) and a low-activity lifestyle, should desire to preserve wrist motion, and should have adequate bone stock and good quality of the soft tissue [7,8]. Thus, TWA has been considered an option only for certain individuals with specific needs and desires for motion who clearly understand the risks and benefits (Adams, 2001). Themistocles Gluck performed the first TWA (an ivory ball-and-socket device) in 1891 [9]. The evolution of wrist implants has been slower than that of, e.g., the hip, knee, and spine [10]. The lower prevalence of symptomatic wrist RA/OA and the use of other treatments, such as TWAD/PWAD, dampened the interest in the development of wrist implants.

Inclusion Criteria
Studies were included in the analysis if (1) the design was a comparative study, (2) patients underwent primary TWA/TWR, and (3) at least one quantifiable pre-specified outcome measure was reported.

Exclusion Criteria
We excluded papers about cadaveric studies, biomechanical studies, studies not accessible in journals, and books or online reviews without primary data. Double publications and articles with an overlap of cases were relative exclusion criteria. Articles not written in English or German were evaluated based on an English abstract, if available.

Study Reviews
Two reviewers (JE and FM) independently analyzed the resulting articles and conducted an initial review for eligibility based on title and abstract. Studies that were not related to our research question were immediately excluded.
The remaining studies were then divided among the two reviewers such that both reviewers independently assessed each to confirm final eligibility. We developed and piloted a standardized form for collecting data related to study methodology, participant characteristics, and outcomes of interest. Data extraction was independently performed by both reviewers. For the statistical analysis, the tools of MS Excel (Microsoft, Office package 2016) were used.

Quality Assessment and Handling of Data
The focus was on, e.g., the number of cases, the duration of TWA, and the observation period. TWA duration was evaluated based on papers mentioning the keyword implant survival, without any restriction. Function was evaluated by validated and relevant outcome measurement tools such as the Disabilities of the Arm, Shoulder, and Hand (DASH/QuickDASH) score, or the worst pain reported on a Visual Analog Scale (VAS). Table 1 is a summary of the overall patient demographics. The majority of the data are based on RA cases, although other diagnoses are increasingly represented in recent publications.

Table 1. Patient demographics-overview (right side number in brackets: references).

Statistical Analysis
Summary statistics including mean values were calculated. The studies in this systematic review partly comprise small case series with nonrandomized designs and are largely retrospective. This level of evidence contains inherent biases, making statistical testing inappropriate [34]. Therefore, mean values were calculated to highlight general trends. The limitation here is that no conclusion can be reached as to whether statistically significant differences exist.

Study Selection
Figure 1 shows the study selection flow diagram of the systematic literature search for TWA.

Figure 1. Study selection flow diagram of the systematic literature search for TWA. A total of 54 articles were included for a qualitative evaluation of the clinical outcome (n = number of papers).

Inclusion and exclusion criteria were determined before the literature search. Studies from the literature search that were excluded through title and abstract review were studies of wrist arthrodesis, proximal row carpectomy, and fusion interventions.

Selected Publications
More than 42,000 papers were eligible as the outcome of the literature search (Figure 1).
The screening of the publications led to the exclusion of more than 600 articles. We checked the full text of roughly 200 papers, which led us to 54 studies that provided input for the analysis. We found four systematic reviews about TWA [5,22,34,35]. The eligible studies represent a maximum of 2286 cases (Table 1).

Included Prosthesis Models
Table 2 gives an overview of the included types of prostheses.

Elos prosthesis [26] (Swemac, Linkoping, Sweden). The Elos prosthesis:
• with its different versions was a series of preliminary types of the Gibbon prosthesis.
• version 1 had a short metacarpal screw that was fully threaded, as was the radial screw.
• in later versions, the metacarpal screws were longer, the diameter smaller, and the heads lower.

Gibbon prosthesis [26] (Swemac, Linkoping, Sweden). The Gibbon prosthesis:
• is a modular (4-component) prosthesis.
• articulation is a cobalt-chrome-molybdenum alloy treated with chromium nitride.
• stem is made of titanium alloy, blasted and coated with a resorbable calcium phosphate combination.
• was CE-marked in late 2005 and changed its name to Motec in 2010, without any change to the prosthesis.

Destot implant [47]. The Destot implant:
• is a non-constrained, metal-polyethylene condylar prosthesis.
• has carpal components made of 316L steel.
• stems have a sandblasted/porous-coated surface to eliminate the need for cement and to enhance osseointegration.
• has a concave articular surface of the radial component, which is made of UHMW polyethylene.
• The stem of the radial component is V-shaped and has grooves at either side for bone growth.
• The surface is corundum rough-blasted. The ball head is coated with titanium nitride.
• The cup inset is made of UHMW polyethylene.
• The special design of the prosthesis, with two sizes in right- and left-hand versions, helps to center and balance it.
• is designed so that it can be cemented or uncemented.

Anatomic physiologic wrist prosthesis (APH) [50] (Implant-Service Vertriebs-GmbH, Hamburg, Germany). The Anatomic physiologic wrist prosthesis:
• is an uncemented, hydroxyapatite-coated cobalt-chrome prosthesis with a titanium coating of the articular surfaces (titanium/titanium articulation).
• The radial component has an articular surface inclination of 10° toward the ulna.
• The carpal component is anchored with its tip in the third metacarpal bone and the distal carpal bones. It has a mobile bearing surface with a radial inclination of 10°.
• The radial component is made in four sizes, and the carpal component is available in one standard size.

RWS Prosthesis [51] (Howmedica, Pfizer Hospital Products Group, The Netherlands). The RWS Prosthesis:
• is a semi-constrained device that has three components: a radial component consisting of a UHMW polyethylene insert in a Vitallium tray, and a metacarpal component.
• The design allows for a mechanical arc of 100° of motion in the anteroposterior plane, 40° of radio-ulnar deviation, and minimal axial rotation.
• The center of rotation is located at the proximal pole of the capitate and is placed slightly palmar and ulnar to the long axis of the radius by off-setting the intramedullary stem of the radial component.
• is an uncemented implant.
• it includes screw fixation into the carpus, is bone-preserving, has a deep radial articulation (to prevent subluxation), and is designed with a mobile-bearing ellipsoidal polyethylene component.
• Resection of bone is required upon prosthesis insertion, which preserves the ligamentous and soft-tissue attachments of the wrist.
• is an elliptical ball-and-socket design of radial and carpal Cr-Co components that are titanium-coated, with an intercalated polyethylene component that mainly articulates with the radial component but also permits a rotational articulation of 20 degrees with the carpal plate.
• The carpal plate is fixated to the carpus by its stem and two screws, of which only the most radial may penetrate the metacarpal for a very short distance, even though many advocate not doing so; fixation is aimed to be in the carpus and only minimally in the metacarpals. The fixation is often performed without cement.

• is designed to replace the distal end of the radius as part of a TWA to treat a severe bone fracture or degenerative disease.
• is made of titanium or cobalt-chrome and is implanted into the radial bone proximally.
• interfaces with a polyethylene (PE) spacer distally.
• can be implanted with or without bone cement.

Total modular wrist prosthesis [69] (Micromed, Germany). The Total modular wrist prosthesis:
• is available as a constrained or non-constrained device consisting of four components.
• comes with a titanium radial component that articulates with a titanium carpal plate, with a variable-thickness polyethylene insert in between.
• has separate shapes of the insert to provide a constrained or non-constrained version.
• the carpal plate is fixed to the second, third, and fourth metacarpal bones by titanium screws of variable length.
• comes along with an optional ulna component consisting of a proximal screw; a blunt tip at the distal end articulates with the radial component to form a ball-and-socket type joint.
• components are coated with hydroxyapatite, and an uncoated radial component is available for cemented use.

Modular Physiological Wrist prosthesis (MPW) [70] (Link Company, Hamburg, Germany). The Modular Physiological Wrist prosthesis:
• is a modularly designed, cementless, implantable Titanobium endoprosthesis.
• a special feature is the encapsulated sliding pairing of the distal olive, which is intended to imitate the mobility of the intercarpal joint line.
• has a solution for bad bone quality, and various components are available, including a coupled implant.

Resurfacing Capitate Pyrocarbon Implant (RCPI) [20,21] (Tornier, Grenoble, France). The Resurfacing Capitate Pyrocarbon Implant:
• contains a central core of graphite resurfaced with pyrocarbon.
• has good biochemical and biomechanical compatibility, excellent wear resistance, and an extremely low coefficient of friction.
• comes along with a modulus of elasticity comparable with that of bone.
• is a single block, with a 15° tilt between the stem and head.
• is a cementless prosthesis.
• has commercially available head diameter sizes of 14 and 16 mm.

• is a single/double-stemmed prosthesis.
• is made of CoCr metacarpal and radial components.
• includes a polyethylene articular component proximally.

Trispherical total wrist prosthesis [74]. The trispherical total wrist prosthesis:
• consists of metacarpal and radial components articulated with a polyethylene bearing and an axle restraint.
• the metacarpal component has a central stem for the third metacarpal, with an offset stem for the base of the second metacarpal and scaphoid.
• the radial component has a stem for the radius, and the articulation is offset ulnarward so that the instant center of the wrist is within the capitate.
• the radial component has a 12-degree palmar tilt. The high-density polyethylene bearing fits into the metacarpal component and forms a ball-and-socket joint with the radial sphere.
• is designed to provide 15 degrees of radial and ulnar deviation, 90 degrees of flexion, and 80 degrees of extension without constraint.

Amandys [16-18] (Tornier, Bioprofile). The Amandys implant:
• is a non-restrictive implant made of pyrocarbon, termed Amandys. Pyrocarbon possesses excellent biocompatibility and an elasticity modulus close to that of bone tissue, and it virtually does not wear out due to a very low friction coefficient against these structures, thus causing no wear to the bone.
• comes in eight sizes, with two widths (24 and 26) and four different thicknesses (S, M, L, and XL).
• has an almond shape with two surfaces of different convexity, the most convex coming into contact with the radial projection and the other into contact with the capitate bone.
• is cementless, monoblock, and mushroom-shaped, with a central core of graphite (99 percent) covered by a thin layer of pyrocarbon (1 percent).

Swanson Wrist Joint Implant [31,75,76] (Wright Medical, Memphis, TN, USA). The Swanson Wrist Joint Implant:
• is a one-piece intramedullary stemmed implant fabricated from implant-grade silicone elastomer.
• is designed for use in implant resection arthroplasty of the radiocarpal joint.
• is available in five sizes to satisfy most anatomical requirements.
• has a wide mid-section to match the width of the radius.
• comes along with a shorter distal stem that extends through the carpus into the base of the third metacarpal.

DARTS Total Wrist System [77] (Teijin Nakashima Medical Co., Ltd., Okayama, Japan). The DARTS Total Wrist System:
• is a new semi-constrained total wrist prosthesis that positions the joint line at the midcarpal joint to limit stress on the surrounding soft tissues.
• consists of UHMWPE radial and titanium-6 aluminum-4 vanadium (Ti-6Al-4V) carpal components, Ti-6Al-4V bone screws, and a cobalt-chromium-molybdenum (Co-Cr-Mo) carpal head.
• has a radial component that is offset volarly and radially.
• includes an articular surface of the carpal component that forms an ovoid, to reproduce the physiological movements of the wrist.
• The carpal component for the base of the third metacarpal bone has a volar flange that was added to resist the posterior and rotational displacement forces thought to contribute to early carpal loosening; it is augmented by two cancellous screws placed in the second and fourth metacarpals.
• the flexion-extension axis is rotated outwardly by 10° around the line of intersection of the horizontal plane and the distal articular surface of the radial component, to provide wrist movement from radial-extension to ulnar-flexion.
• is available in three sizes; the appropriate size was determined using preoperative templating on radiographic images of the radius and metacarpals and the intraoperative findings.

Table 3 gives an overview of the duration of the included prosthesis models. The Kaplan-Meier approach (Table 3) is one of the best options to measure the fraction of subjects (in our case, the duration of the implant) surviving for a certain amount of time after treatment.

Primary Outcome-Duration of Implants
In clinical investigations, the effect of a therapy is assessed by measuring the number of subjects that survived after that therapy over a period of time. The time starting from a defined point to the occurrence of a given event, e.g., the revision of the implant, is called the survival time, and the analysis of group data is called survival analysis. The life span (Table 4) means the period of time between the implantation of the prosthesis and its failure (revision).
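As a brief illustration of the Kaplan-Meier approach referred to above, the following minimal Python sketch estimates implant survivorship from follow-up times and revision events. The data and the censoring convention are hypothetical; the reviewed studies applied their own, often unreported, conventions.

```python
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up in years per implant; events: True if revised.

    Returns (time, survival) steps of the Kaplan-Meier estimator;
    implants without an event are treated as censored at their time.
    """
    t = np.asarray(times, dtype=float)
    e = np.asarray(events, dtype=bool)
    surv, curve = 1.0, []
    for ti in np.unique(t[e]):              # event (revision) times only
        d = int(np.sum(e & (t == ti)))      # revisions at time ti
        n = int(np.sum(t >= ti))            # implants still at risk
        surv *= 1.0 - d / n
        curve.append((float(ti), surv))
    return curve

# Example: 10 implants, revisions at 2, 4, and 7 years; the rest censored.
times = [2, 4, 7, 5, 5, 8, 10, 10, 12, 12]
events = [True, True, True, False, False, False, False, False, False, False]
print(kaplan_meier(times, events))  # [(2.0, 0.9), (4.0, 0.8), (7.0, ~0.67)]
```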
Secondary Outcome-Patient-Reported Measures of Pain
Pain is a critical outcome because it is the symptom that most often leads patients to seek surgical intervention [34]. Reporting was more complete for postoperative pain than for preoperative pain, and it was still limited by inconsistent measures. The Visual Analogue Scale (VAS) (Table 5) is a measurement instrument that tries to measure a characteristic or attitude that is believed to range across a continuum of values and cannot easily be measured directly [78,79]. In epidemiologic and clinical research, the VAS is used to measure the intensity or frequency of various symptoms [78,80].

The DASH score (Table 6) is a questionnaire for orthopedic patients and was developed in 1996 by the Council of Musculoskeletal Specialty Societies, the American Academy of Orthopaedic Surgeons, and the Institute for Work and Health Canada. The DASH score was designed to be a standardized assessment of the functional impact of a variety of musculoskeletal diseases and injuries of the upper extremity [81]. The reported DASH scores covered a large range, in the postoperative situation from 60.7 (highest) to 11.4 (lowest).

Secondary Outcome-Patient-Reported Measures of Function
Grip strength (Table 7) is the force applied by the hand to pull on or suspend from objects. It can be assessed through standard methods and is a specific part of hand strength. Grip strength is a general term also used to refer to the physical strength of a patient, i.e., to the muscular power and force that can be generated with the hands. This parameter depends on the physical condition of the patient. Tables 8 and 9 show the range of motion (RoM) of the wrist and the pronation and supination of the forearm.

Introducing the Concept of FreeMove
Early generations of implants had high complication and failure rates [84]. The common modes of failure have been fracturing, loosening, pain on pronation and supination at the level of the distal radioulnar joint, and muscle and soft-tissue imbalance. Problems with existing prostheses, for example distal component loosening and wrist imbalance, were the impetus for developing, e.g., the Universal total wrist implant [85]. Total wrist arthrodesis for the salvage of a failed TWA results in a complete loss of wrist flexion-extension (FE) and radial-ulnar deviation (RUD). It has been suggested that attempts to recreate the natural joint should be avoided, and that different materials and methods of fixation should be considered for new implants [86,87]. To overcome these limitations, the new approach FreeMove was developed (see Figure 2). We developed the new wrist prosthesis from 2018 to 2021. Our intention with the new approach is an implantation requiring only minimal bony resection, an uncemented (optionally cemented) radial component, firm and reliable cemented distal fixation covering the proximal carpal row bones, and a prosthesis that involves simple instrumentation.
The Principal Idea of FreeMove
The design of wrist prostheses has evolved based on clinical experience and on kinematic and biomechanical studies [85]. The new implant design of FreeMove differs from the reported total wrist prostheses (Table 2) by transforming the wrist into an ellipsoid joint with a polyether-ether-ketone (PEEK) bearing and a variable center of articulation. The ellipsoidal design was found to accommodate a greater width of the concave proximal component, resulting in better capture and prosthetic stability [85]. The articulating surfaces of the carpal and radial components create a dual-axis articulation that is best suited for radial and ulnar motions [85]. PEEK was chosen because wear, metallosis, and the systemic influence of metallic ions are suspected problems. Press-fit fixation of the radial part secures primary stability. The distal part covers the proximal carpal row, and modularity on both sides of the joint simplifies replacement. Furthermore, the intention of using PEEK is to reduce wear and the need to remove bone. If the prosthesis fails, a second TWA or a wrist arthrodesis should be straightforward because so little bone needs to be removed. With this approach, the current fixation technique of the distal part of the wrist prosthesis with a screw in, e.g., the third metacarpal bone is avoided. This decreases the risk of screw loosening and may eventually also decrease the risk of loosening of other prosthesis parts by enabling a more physiological movement of the implant. The design includes a PEEK-on-PEEK coupling with an ovoid surface interaction, resulting in a more or less elliptical articulation.
The elliptical concept has been stable and has resulted in a good range of motion. Furthermore, to avoid luxation, a protection was built in via an artificial ligament. This should improve the stability of the joint. The radial component includes an inclination of 20° to mimic the physiological orientation of the articular surface of the normal distal radius [85]. In the normal wrist, the center of rotation for FE and RUD should be located in the head of the capitate, which is slightly distal to the center of the prosthesis [89-91]. The introduced prosthesis has no fixed center of rotation; the distal part can slide and rotate on the proximal (radial) part depending on the external load. The manufacturing of the new prosthesis is addressed via 3D printing. This allows a patient-specific design and adaptation, respectively. From a CT scan, the geometry of the wrist can be reconstructed and transferred to an individual prosthesis design. The inclusion of a luxation protection via a surrounding rope (the artificial ligament) increases the function of the implant and has, to the best of our knowledge, never been introduced before.
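To illustrate how a patient-specific design could be derived from a CT scan, the sketch below estimates the medio-lateral width of the distal radius from a wrist CT and maps it onto discrete implant sizes. Everything here is an assumption for illustration: the bone threshold, the orientation convention, the size table, and the helper names are not part of the FreeMove workflow described above.

```python
import numpy as np
import SimpleITK as sitk

def distal_radius_width_mm(ct_path: str, bone_hu: float = 300.0) -> float:
    """Crude estimate of the medio-lateral width of the distal radius.

    Assumes (hypothetically) that the scan is oriented so that the most
    distal axial slice containing bone approximates the radial articular
    surface; a real workflow would segment the radius explicitly.
    """
    img = sitk.ReadImage(ct_path)
    vol = sitk.GetArrayFromImage(img)   # array axes are (z, y, x)
    x_spacing = img.GetSpacing()[0]     # in-plane voxel spacing in mm
    bone = vol > bone_hu                # simple bone segmentation by HU threshold
    distal = max(z for z in range(bone.shape[0]) if bone[z].any())
    ys, xs = np.nonzero(bone[distal])
    return float((xs.max() - xs.min()) * x_spacing)

def select_component_size(width_mm: float) -> str:
    """Map the measured width onto hypothetical implant sizes."""
    for label, max_width in (("S", 22.0), ("M", 26.0), ("L", 30.0)):
        if width_mm <= max_width:
            return label
    return "XL"

# e.g. size = select_component_size(distal_radius_width_mm("wrist_ct.nii.gz"))
```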
Conclusion and Future Work Concerning FreeMove
This new implant differs from most of the reported total wrist prostheses by transforming the wrist into an ellipsoid floating joint with a PEEK-on-PEEK bearing and a flexible center of articulation. Considering the fact that the wrist articulates with six other bones (radius, ulna, capitate, trapezoid, trapezium, and hamate) and shows rotational as well as translational motion, our impression is that any wrist prosthesis must replicate, more or less patient-specifically, the original shape of the joint surface as precisely as possible to minimize non-physiological kinematics and wear. This requires a patient-specifically adapted implant. As future steps, several experiments with this concept are planned on cadaver wrists.

Discussion
The wrist was one of the first joints treated by a prosthesis. Given the lower prevalence of symptomatic wrist OA/RA and the ease and predictability of TWAD, the evolution of TWA has lagged behind the advancements made in large joint replacements [64,92]. The main potential advantage of TWA over TWAD is the preservation of movement for patients with painful wrist OA/RA. This study adds to the current evidence in support of the use of TWA across patient groups and prosthesis types. This overview should allow an impression of the performance of TWA to be obtained. However, the limited available data restrict the current spread of such implants, and future studies are required to overcome the current limitations. The first experiences with TWA are based on developments by Meuli and Volz [93,94]. Early outcomes showed a high rate of complications at an early stage, with malpositioning, dislocation, and loosening of the components [93,94]. In their original form, these devices are no longer implanted [94]. Because of the complex intervention and the semi-optimal results, TWA is not a routine procedure. The majority of the data are based on rheumatoid cases (59.5%), although other diagnoses are increasingly represented in recent publications. The strength and advantage of the presented systematic review are the comprehensive literature search and the assessment of the methodological quality of the available data.

Duration
Based on the currently available evidence comparing outcomes following TWA/TWR, we cannot conclude the superiority of such an intervention. Articles providing Kaplan-Meier survivorship curves are shown in Table 3, and one paper provided the life span of the implants (Table 4). There was a wide variation in survival, from 42% [31] for the Swanson silicone prosthesis, to 57% after five years [26] for the Elos prosthesis, to 94% after 10 years [32] for the Remotion prosthesis, as shown in Table 3. The Elos prosthesis displayed a very steep failure rate on the Kaplan-Meier curve over the first 4 years before reaching a plateau [22,26]. Because of the heterogeneity of the studies, it could not be decided which implant is the best. In the end, the conclusion must be that an improvement of the existing procedure of TWA, including the currently used implants, has to be one future goal. In comparison to the success of total hip and total knee arthroplasty, TWA has to be considered a field for further research.

Pain
Pain is a complex and patient-specific experience, and attempts to make valid assessments of it have been fraught with difficulties. Pain is influenced by different factors and depends on the personal constitution of the individual patient. Fifteen articles reported pain. The mean value preoperatively was 7.5, and the postoperative mean value was 2. A decrease in pain could be seen, and thus an increase in the quality of life for the patients. The problem with pain as a valid parameter to benchmark the intervention outcome is its subjectivity: patients handle pain more or less individually, and the outcome depends on the individual sensation of each patient.

Disabilities of the Arm, Shoulder, and Hand (DASH)
Functional scores as measured by DASH appear to improve at follow-up post-TWA. The DASH score is one of the most established questionnaires for disorders of the upper limb, and the collection and analysis of the results are easy to perform and interpret. The mean value for the preoperative DASH score was 58, and for the postoperative situation it was 36. There was, on average, an improvement in the DASH score. This shows that supporting the damaged wrist joint with an artificial implant leads to an increase in the quality of life for the patients. In consideration of the duration of the included implants, to date this is only a temporary solution with a high risk of revision interventions.

Grip Strength
It is difficult to objectively quantify grip strength improvement. The reasons for this were inconsistencies in the pre- and postoperative measurements, as well as varying means of measurement and different acquisition methods, which lead to confusion. We focused on articles that acquired grip strength in kg. The mean value for the preoperative grip strength was 12 kg, and for the postoperative situation it was 18 kg. There was, on average, an increase in grip strength. While grip strength alone does not predict patient outcomes, periodic measurement of grip strength could be beneficial in terms of patient performance and injury prevention. A snapshot of the postoperative situation alone does not show the future development of the patient's condition. Additionally, the influence of grip strength as a parameter of success is not clear. In Table 7, the preoperative grip strength shows the diversity of this parameter: the lowest grip strength was 2.1 kg, going up to 21 kg, and in the postoperative situation it starts at 7.9 kg and goes up to 32 kg. This shows the large range of this parameter. A correlation with the physical/training condition of the patient must be considered when judging the measurement results. The establishment of baseline data in the context of grip strength would be a valuable approach to rating therapy outcomes.
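For reference, the DASH values discussed above are derived from a fixed scoring rule: 30 items answered on a 1-5 scale, converted to a 0-100 disability score. The following is a minimal implementation of this standard formula (by convention, the score is not computed when more than three items are missing):

```python
def dash_score(responses):
    """responses: 30 item answers, each 1-5, with None for missing items.

    DASH = ((mean of answered items) - 1) * 25, giving 0 (no disability)
    to 100 (most severe); invalid if more than 3 items are missing.
    """
    answered = [r for r in responses if r is not None]
    if len(answered) < 27:
        raise ValueError("DASH not computable: more than 3 missing items")
    return (sum(answered) / len(answered) - 1) * 25

# Example: all 30 items answered "2" (mild difficulty) -> score 25.0
print(dash_score([2] * 30))
```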
Range of Motion
The results for the RoM suggest that, with TWA, the postoperative RoM is preserved compared with the preoperative RoM. There exists a functional range of wrist motion (based on activities of daily living) that has been defined as 5° of flexion, 30° of extension, 10° of radial deviation, and 15° of ulnar deviation [95-98]. Of the included articles in our study, 46 papers analyzed the RoM. The mean postoperative RoM was 32° for flexion and 31° for extension, with a mean overall flexion-extension arc of 63°. Furthermore, the mean RoM was 9° for radial deviation and 10° for ulnar deviation, with a mean overall radial-ulnar deviation of 28°. When the mean postoperative RoM is compared to the functional range of wrist motion, flexion, extension, and ulnar motion fit well; only the radial motion is too small. Of the included articles in our study, 18 papers additionally analyzed the range of supination and pronation. The mean postoperative range was 72° for pronation and 72° for supination, with a mean overall motion of 155°. This is an improvement compared to the preoperative situation, where pronation was 67° and supination was 61°, with a mean overall motion of 137°. When the mean postoperative range is compared to the preoperative range, the motion increased by nearly 20°. Not all papers compared the preoperative with the postoperative situation; some articles provided only a range in the data, and others split the data in detail by single motions. There is a wide RoM presented across all studies, with a wide spread of data. The question is how valuable this parameter is for obtaining an impression of how good the outcome of the therapy is.

Limitations in General
Any review of the literature is limited by the quality of the published reports. The presented study is limited by the inability to perform a quantifiable meta-analysis of patient-reported pain and function, because randomized clinical trials comparing TWA to TWAD are missing. Moreover, given the variability of the outcome measures, a detailed discussion of the pros and cons of the intervention was not possible. The available evidence is limited, and the current literature would surely benefit from further biomechanical and clinical investigations. Given the limited number of papers analyzing TWA, we decided against establishing an exclusion cut-off based on study design and eliminating potentially useful data from our review. This led to the inclusion of some studies of poor methodological rigor that likely introduce bias. Standard statistical testing requires the input of high-quality data obtained through standardized methods and detailed reporting of all outcomes. Our statistical analysis was limited to the calculation of mean values, which provide a summary estimate of the results. Furthermore, the inclusion of complication rates, revision rates, the Patient-Rated Wrist Evaluation (PRWE), the explicit results for each prosthesis model, the explicit results for each pathology, satisfaction, and radiological output was beyond the scope of this paper; we plan to address these in an additional publication.
Methodological Quality of Included Studies
The included studies sometimes demonstrate only moderate methodological quality and a likelihood of (systematic) error. There are inaccuracies, e.g., in distinguishing the included patients vs. procedures, in reporting the exact numbers of complications, and in reporting the numbers of analyzed procedures at each time point. It was sometimes difficult to determine the correct numbers for these parameters.

Conclusions
Despite advances in the field of arthroplasty, TWA significantly lags behind, e.g., total knee or hip arthroplasty. Nevertheless, some general conclusions are possible: it seems that TWA has a strong potential for improvement of function through pain reduction and preservation of mobility [5]. It also seems that TWA is a possible alternative to total wrist arthrodesis in patients with painful, debilitating degenerative pathologies of the wrist [92]. The multitude of implants with varying designs indicates a lack of universal consensus on wrist anatomy and biomechanics. There is a need for additional research. The focus should be on long-term results achieved through large retrospective and prospective studies. Furthermore, the initiation of a surveillance register of implants, which is not available to date, should be a next step [5]. This investigation emphasizes the need for methodologically rigorous, multi-centered, prospective, randomized controlled trials with predefined reporting, standardized follow-up intervals, outcome measures, anesthesia and rehabilitation protocols, and reporting of the preoperative indication [5]. In reviewing the different designs of the prostheses and the recent outcomes of the different implants, only time will tell whether these implants will further the advances in TWA [92]. Furthermore, the causes and consequences of periprosthetic loosening must be exposed by multiple methods to improve the outcome [5]. Another improvement for a better comparison of TWA outcomes would be better standardization of data acquisition and of the investigation methods for the different parameters used to benchmark TWA results.
Antibacterial Activity of Juglone against Staphylococcus aureus: From Apparent to Proteomic

The proportion of foodborne disease caused by pathogenic microorganisms is rising worldwide, with staphylococcal food poisoning being one of the main causes of this increase. Juglone is a plant-derived 1,4-naphthoquinone with confirmed antibacterial and antitumor activities. However, the specific mechanism underlying its antibacterial activity against Staphylococcus aureus remains unclear. To elucidate this mechanism, the isobaric tags for relative and absolute quantitation (iTRAQ) method of quantitative proteomics was applied to analyze the 53 proteins that were differentially expressed after treatment with juglone. Combined with verification experiments, such as detection of changes in DNA and RNA content and quantification of oxidative damage, our results suggested that juglone effectively increased the protein expression of oxidoreductases and created a peroxidative environment within the cell, significantly reducing cell wall formation and increasing membrane permeability. We hypothesize that juglone binds to DNA and reduces DNA transcription and replication directly. This is the first study to adopt a proteomic approach to investigate the antibacterial mechanism of juglone.

Introduction
Food-borne diseases (FBD) are defined by the World Health Organization (WHO) as "diseases of infectious or toxic nature caused by, or thought to be caused by, the consumption of food or water" [1]. Numerous food-borne diseases, including staphylococcal food poisoning (SFP), are caused by ingestion of microbial and plant toxins [2], and SFP is mainly caused by Staphylococcus aureus [3]. SFP causes various symptoms, including copious vomiting, diarrhea, abdominal pain, and nausea [4], owing to the production of staphylococcal enterotoxins (SEs). Although approximately 22 SEs are known [3], only a few of these proteins, such as SEA and SEB, are related to FBD [5]. Hence, it is imperative to control the spread of S. aureus to ensure food safety. Natural products with pharmacological properties often exhibit broad-spectrum antibacterial activity and have unique advantages. Naphthoquinones, such as juglone, lawsone, plumbagin, and lapachol, are natural products with high antibacterial activity. In particular, juglone (5-hydroxy-1,4-naphthoquinone) (Figure 1) has been used for centuries in folk medicines to treat acne, allergies, gastrointestinal disorders, intestinal parasitosis, cancer, fungal infections, bacterial infections, and viral infections [6]. Our previous study revealed that juglone shows antibacterial activity against S. aureus, Escherichia coli, Bacillus subtilis, Penicillium sp., Aspergillus sp., and Hansenula sp. [7]. According to previous studies, naphthoquinones exert their antimicrobial, antiparasitic, and cytotoxic activities via several mechanisms, including inhibition of electron transport, uncoupling effects during oxidative phosphorylation, intercalation into the DNA double helix, alkylation of biomolecules, and production of reactive oxygen species (ROS) under aerobic conditions [6]. In recent years, however, most investigations of juglone have focused on its antitumor activity and the related molecular mechanisms, and a more in-depth understanding of how juglone acts against bacteria, especially S. aureus, is still lacking.
Therefore, to elucidate the possible mechanism of action of juglone on S. aureus, we adopted a proteomic approach owing to its suitability for high-volume data processing. Furthermore, proteomics can reveal changes in the whole proteome after juglone treatment, in contrast to currently used methods such as superoxide dismutase activity assays, malondialdehyde evaluation, and electron microscopic analysis. In this study, we investigated the proteomic alterations in S. aureus following treatment with juglone using isobaric tags for relative and absolute quantitation (iTRAQ) technology, and then identified the altered proteins to reveal the antibacterial mechanism of juglone.

iTRAQ Analysis of the Proteome after Treatment with Juglone
Compared to the initially popular gel-based proteomic technology, MS-based proteomic analyses are now widely used because of their high-throughput capacity, repeatability, and high success rate for protein identification. In the current study, normal S. aureus and S. aureus treated with juglone for 2 h were collected for protein extraction, digestion, and iTRAQ labeling during the exponential growth phase. As a mainstream MS-based proteomics technology, iTRAQ provides multiplexing of up to 8-plex isobaric tags, each consisting of a reporter group, a balance group, and a peptide-reactive group. Once the isobaric tags have reacted with the proteolytic peptides, the balance group is removed to identify the differentially expressed peptides at the second mass spectrometry (MS2) level. In a search using the Mascot 2.2 program, we identified 9834 unique peptides (FDR ≤ 0.1), corresponding to 1379 protein groups including 1376 proteins that were quantified by Proteome Discoverer 1.4 in each channel. In total, the expression levels of 53 proteins were shown to be significantly different (>1.2-fold change, p < 0.05) between treated and untreated cells. Among these proteins, 22 were up-regulated and 31 were down-regulated in the treated cells compared to the untreated cells.

Functional Annotation Analysis of Proteomic Differences
To determine the function of the 53 differentially expressed proteins, we performed annotation analysis using Blast2GO. The proteins were grouped into six categories (Table 1): oxidative damage, DNA replication and transcription, protein synthesis, stress response, cell wall synthesis and cell division, and membrane permeability.
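The thresholds used above (fold change > 1.2 or < 0.83 with a t-test p < 0.05; see also the Statistical and Bioinformatics Analysis section) lend themselves to a simple filtering step. The following is a minimal sketch of such a filter, assuming per-replicate intensities are already tabulated; the column names and example values are hypothetical, not the study's data:

```python
# Minimal sketch of the differential-expression filter described in the
# text: fold change > 1.2 or < 0.83, combined with a t-test p < 0.05.
# Protein names, replicate values and column names are placeholders.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "protein": ["glyoxalase", "thioredoxin", "clpB"],
    "treated": [[1.9, 2.1, 2.0], [0.6, 0.7, 0.65], [1.5, 1.6, 1.4]],
    "control": [[1.0, 1.1, 0.9], [1.0, 0.95, 1.05], [1.0, 1.0, 1.1]],
})

def classify(row):
    t, c = row["treated"], row["control"]
    fold = sum(t) / sum(c)            # ratio of group means (equal n)
    p = stats.ttest_ind(t, c).pvalue  # two-sample t-test
    if p < 0.05 and fold > 1.2:
        return "up-regulated"
    if p < 0.05 and fold < 0.83:
        return "down-regulated"
    return "unchanged"

df["status"] = df.apply(classify, axis=1)
print(df[["protein", "status"]])
```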
Upregulation of Glyoxalase, Potassium Uptake Protein, and Nitroreductase
After 2 h of treatment, juglone induced the upregulation of the proteins glyoxalase, potassium uptake protein, and nitroreductase, which belong to the oxidoreductase protein family, resulting in subsequent cell collapse. Additionally, the formation of superoxide radicals is often triggered by metal ions (mostly iron, but also copper, cobalt, and titanium), which act as cofactors. With these transition metal ions, superoxide radicals can be converted into hydroxyl free radicals, which are the strongest known oxidizing agents. Such activity may be suggested by the observed up-regulation of a serine-rich-repeat-containing protein, which has a calcium-binding function. As shown in Figure 2a, in the groups treated with 12.5, 25, and 37.5 µg/mL juglone, the superoxide dismutase (SOD) activity decreased before 2 h, from 91.07 to 83.15, from 89.81 to 80.34, and from 91.59 to 76.79 U/mgprot, respectively. This result suggests that S. aureus had not adapted to the excess of superoxide anions before 2 h and consumed the original SOD to generate hydrogen peroxide, causing catalase (CAT) activity to increase from 8.72 to 8.91, from 8.7 to 9.42, and from 8.56 to 10.12 U/mgprot before 2 h, as shown in Figure 2b. At 4 h, the superoxide anion concentration had far exceeded the capacity of SOD, resulting in the decrease observed after 4 h. Correspondingly, CAT was quickly consumed starting at 4 h. Combined with our proteomic results, these findings suggest that juglone accelerates the redox process, leading to oxidative damage in S. aureus, and that the cell's own CAT and SOD were not sufficient to cope with the oxidative damage starting at 4 h.

Significance of the Downregulation of Thioredoxin, Threonine Dehydratase, and Ribulose-5-phosphate 3-Epimerase
To survive in a peroxidative environment, organisms produce several natural antioxidants, including vitamin C [8], glutathione (GSH) [9], and carotenoids [10], among others. Carotenoids show antioxidative activity based on their ability to trap peroxyl radicals and quench singlet oxygen. Here, 4,4′-diaponeurosporenoate glycosyltransferase, which plays a major role in carotenoid biosynthesis, was down-regulated after treatment with juglone. This result suggested that production of 4,4′-diaponeurosporenoate glycosyltransferase was inhibited by juglone, resulting in fewer carotenoids in the cells [11]. Moreover, thioredoxin was down-regulated. This protein is a cell redox homeostasis regulator, and its role is to maintain stable cellular levels of ROS. Threonine dehydratase was also down-regulated, possibly because of a shortage of iron [12]. Furthermore, ribulose-5-phosphate 3-epimerase was found to be down-regulated. In E. coli, this molecule was reported to be rapidly damaged by hydrogen peroxide [13]; therefore, its down-regulation might have been directly caused by hydrogen peroxide. The change in expression levels of these proteins supports the proposition that oxidative damage was the main mechanism of activity against S. aureus.
Downregulation of Proteins Related to DNA Replication and Transcription
All proteins related to DNA replication and transcription were down-regulated after treatment with juglone for 2 h. The DNA-binding response regulator, a family transcriptional regulator, and the transcriptional regulator MraZ regulate DNA-dependent transcription, while the queuosine biosynthesis protein plays a major role in the tRNA modification process. Queuine is one of the most radically modified nucleosides known to occur in tRNA [14], and its expression level is regulated by queuine tRNA-ribosyltransferase. Other proteins related to DNA transcription, including uridine kinase and the urease accessory protein UreG, participate in guanosine triphosphate (GTP) and cytidine triphosphate (CTP) biosynthesis, which plays a major role in the formation and phosphorylation of RNA. Our results suggest that anaerobic ribonucleoside-triphosphate reductase and single-stranded DNA-binding protein, which are associated with DNA replication, were also down-regulated. Similarly, as shown in Figure 2d,e, the fluorescence intensity of DNA and RNA in the treated groups maintained a sustained decreasing trend from 0 to 6 h. It was previously reported that juglone induces scission of isolated DNA in vitro in the presence of glutathione and Fe(II) ions [15]. In addition, 1,4-naphthoquinones induce oxidative damage to DNA base pairs and accumulation of DNA breaks [16]. The best-known mechanism is intercalation between two base pairs of DNA or RNA through the ring of the polycyclic chromophore of the quinone. Anthracyclines, which are quinone compounds, exhibit antitumor activity by forming bonds between positively charged amino sugars and the sugar phosphate backbone of DNA [17]. In this manner, several vital biological processes, such as replication and transcription, are blocked. Thus, based on these experimental results, we hypothesize that juglone binds to DNA and directly causes DNA damage.

Role of Juglone in Stimulating the Stress Response in S. aureus
DNA damage and oxidative damage can stimulate the stress response in S. aureus. The triglyceride lipase, lipase 1, was also upregulated. Lipase 1 can catalyze the use of triacylglycerols (TAG) as a carbon and energy source for survival during starvation conditions [18]. This result suggested that energy metabolism was induced to protect against stress. However, we did not identify upregulation of any proteins related to energy metabolism. This might be attributed to limitations of the database, as 14 of the upregulated proteins could not be annotated and need to be studied further. ATP guanido phosphotransferase and the heat resistance chaperone protein ClpB were both upregulated. CtsR-dependent genes were previously shown to be weakly induced in response to oxidative stress; CtsR is a repressor of heat- and stress-specific proteins [19]. However, ATP guanido phosphotransferase acts as a modulator of CtsR repressor activity under oxidative stress [20]. Hence, protein chaperones might be positively regulated by ATP guanido phosphotransferase. The iron-sulfur cluster repair di-iron protein is a di-iron-containing protein involved in repairing iron-sulfur clusters damaged by oxidative and nitrosative stress [21,22].
Chang et al. [23] found that the iron-sulfur cluster repair di-iron protein could be induced by hydrogen peroxide, which also supports that juglone acts via oxidative damage. Only one protein related to the DNA damage response, single-stranded DNA-binding protein, was found to be upregulated.

Role of Juglone in Protein Synthesis, Cell Wall Formation, and Permeability
In addition, we noted the impact of juglone on protein synthesis. Three structural components of ribosomes, i.e., 50S ribosomal protein L36, 50S ribosomal protein L14, and 50S ribosomal protein L33-2, participate in translation and were down-regulated after treatment with juglone. Proteins associated with cell division, i.e., the iron-sulfur cluster repair di-iron protein and the transcriptional regulator MraZ, were also down-regulated. These results indicate that protein synthesis and cell division were inhibited after treatment with juglone. Figure 2f shows that in the juglone-treated group, the peptidoglycan content was markedly lower than that in the control group, increasing before 2 h and decreasing starting at 4 h. The accessory Sec system glycosylation protein is an N-acetyltransferase that is part of the SecA2/SecY2 system for synthesis of serine-rich cell wall proteins, and its up-regulation suggested that the formation of the cell wall was not inhibited before 2 h of treatment. Similarly, we observed upregulation of N-acetylmuramoyl-L-alanine amidase, an enzyme that cleaves the link between N-acetylmuramoyl residues and L-amino acid residues in certain cell-wall glycopeptides. These results suggest that cell wall formation was weakly inhibited by 2 h and strongly inhibited starting at 4 h. Moreover, two types of protein related to cell membrane synthesis, thioredoxin and the biotin carboxylase subunit of acetyl-CoA carboxylase, were down-regulated. Thioredoxin participates in the metabolism of glycerol ether, which is an important component of cell membranes. The biotin carboxylase subunit has acetyl-CoA carboxylase activity, and multi-subunit acetyl-CoA carboxylase catalyzes the first step in fatty acid biosynthesis [24], promoting cell membrane formation. As shown in Figure 2c, in the groups treated with 12.5, 25, and 37.5 µg/mL juglone, the malondialdehyde (MDA) content steadily increased from an initial 0.9 to a final 3.78, from 0.89 to 4.55, and from 1.19 to 5.43 nmol/mgprot, respectively. These results suggest that cell membrane formation was mainly inhibited through oxidative damage. In addition, potassium uptake protein and the chaperone protein ClpB, which have cation transmembrane transporter activity, were up-regulated, indicating an increase in permeability. Moreover, gas chromatography analysis (Table 2) showed a decrease in the ratio of saturated fatty acids (SFA) to unsaturated fatty acids (UFA), from 1.3 to 1.21 and ultimately to 1.17 as the juglone concentration increased. This result suggests that membrane fluidity was increased [25], resulting in increased permeability, consistent with our proteomic analysis. In Table 2, Cn1:n2 denotes a fatty acid in which n1 and n2 represent the number of carbon atoms and olefinic bonds, respectively; SFA is shown as the sum of C16:0 and C18:0, and UFA as the sum of C16:1 and C18:1.
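As a worked example of the ratio just described, the following sketch computes SFA/UFA from relative fatty acid contents using the Table 2 definitions; the percentage values are placeholders, not the measured data:

```python
# Minimal sketch of the SFA/UFA ratio defined for Table 2:
# SFA = C16:0 + C18:0, UFA = C16:1 + C18:1. The relative contents below
# are illustrative placeholders, not the measured values.
fatty_acids = {"C16:0": 30.0, "C18:0": 22.0, "C16:1": 18.0, "C18:1": 22.0}

sfa = fatty_acids["C16:0"] + fatty_acids["C18:0"]
ufa = fatty_acids["C16:1"] + fatty_acids["C18:1"]
print(f"SFA/UFA = {sfa / ufa:.2f}")  # lower ratio -> higher membrane fluidity
```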
In conclusion, this work describes the investigation of the mechanism of the antibacterial activity of juglone, a plant-derived 1,4-naphthoquinone. In particular, iTRAQ technology was applied for analysis of the 53 proteins found to be differentially expressed after juglone treatment. Combined with verification experiments, the results suggest that oxidative damage was the primary S. aureus cell-killing mechanism. In the induction process, juglone up-regulated oxidoreductases, thereby enhancing the redox process and subsequently creating a peroxidative environment in the cell. In addition, juglone significantly decreased cell wall formation, inhibited cell membrane formation, and increased membrane permeability. However, this study had several limitations. Only one strain, ATCC 6538, was used because of funding limitations. Juglone may show a different mechanism of action against other species of bacteria (e.g., Escherichia coli, Bacillus subtilis, Penicillium sp., Aspergillus sp., and Hansenula sp.), and future studies should investigate the effects of juglone on these species. In addition, these data do not comprehensively reveal the antibacterial mechanisms of juglone against S. aureus and have not, for example, identified potential drug targets. Future studies could employ methods such as co-crystallization, AutoDock, and subcellular proteomic analysis to fully reveal the mechanism of action.

Strain and Juglone
S. aureus ATCC 6538 was purchased from the American Type Culture Collection. The minimal inhibitory concentration (MIC) of juglone (Sigma, St. Louis, MO, USA) against S. aureus is 37.5 µg/mL [7].

Culture Preparation
Juglone (dissolved in anhydrous ethanol) was incubated with S. aureus during the exponential growth phase at final concentrations of 0 µg/mL, 12.5 µg/mL, 25 µg/mL, and 37.5 µg/mL in beef extract peptone medium for 24 h at 37 °C; the control group was incubated with anhydrous ethanol. Each concentration group included three biological replicates.

Protein Preparation
To quantify changes to the proteome after treatment with juglone, the cultures were incubated with juglone during the exponential growth phase at a final concentration of 18.75 µg/mL for 2 h at 37 °C. Then, quartz sand and 1 mL of SDT lysate (4% SDS, 1 mM DTT, 100 mM Tris-HCl, pH 7.6) were added to each group, and the samples were subjected to 10 rounds of homogenization. The homogenate was sonicated on ice. After 5 min of incubation in boiling water and 5 further rounds of homogenization, the crude extract was incubated in boiling water for 10 min and clarified by centrifugation at 13,400 × g for 30 min. The supernatant was filtered through a 0.22-µm membrane, and proteins were quantified using a bicinchoninic acid (BCA) protein assay (Beyotime, Shanghai, China).

Protein Digestion and iTRAQ Labeling
Total protein from each sample was digested using the filter-aided sample preparation (FASP) method as described previously by Wisniewski et al. [26], and the peptide mixture was labeled with 8-plex iTRAQ reagent (AB SCIEX, Framingham, MA, USA) according to the manufacturer's instructions.
LC-ESI-MS/MS
A quantity of 5 µg of peptide mixture was separated on an Easy nLC system using a C18 reversed-phase column (Thermo Scientific Easy Column, 100 mm × 75 µm, 3 µm). The separation was achieved using a linear gradient of buffer B (80% acetonitrile and 0.1% formic acid) at a flow rate of 250 nL/min over 140 min. Data acquired on a Q Exactive mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) were filtered by choosing the most abundant precursor ions, within a range of 300-1800 m/z, for higher-energy collisional dissociation (HCD) fragmentation. The target value was determined based on predictive Automatic Gain Control (pAGC). Dynamic exclusion was applied with a duration of 1 min. Survey scans were acquired at a resolution of 70,000 at m/z 200, and HCD spectra at a resolution of 17,500 at m/z 200. The normalized collision energy was 30 eV, and the underfill ratio, which specifies the minimum percentage of the target value likely to be reached at maximum fill time, was defined as 0.1%. The instrument was run with peptide recognition mode enabled. The Mascot search results for each SCX elution were further processed using Proteomics Tools (Matrix Science, Boston, MA, USA) (version 3.05), which includes the programs BuildSummary, Isobaric Labeling Multiple File Distiller, and Identified Protein iTRAQ Statistic Builder. The BuildSummary program was used for assembling protein identifications based on a target-decoy search in shotgun proteomics. All reported data were based on 99% confidence in protein identification as determined by a peptide false discovery rate (FDR) ≤ 1% [27]. The programs Isobaric Labeling Multiple File Distiller and Identified Protein iTRAQ Statistic Builder were used to calculate protein ratios. The final protein ratios were then normalized by the median average protein ratio to correct for unequal amounts of the labeled samples.

Statistical and Bioinformatics Analysis
Comparisons between treatment and control groups were performed using t-tests. Differentially expressed proteins were classified using the following criteria: more than 1.2-fold change (p < 0.05) or less than 0.83-fold change (p < 0.05) [28]. Gene ontology (GO) analysis was performed using Blast2GO version 3.0; the detailed procedure was described by Stefan et al. [29].

Detection of Changes in DNA and RNA Content
Culture samples (2 mL, collected at 0 h, 3 h, 6 h, 9 h, 12 h, 15 h, 18 h, 21 h, and 24 h) were centrifuged at 4,000 × g for 10 min. The precipitates were washed with PBS buffer three times using centrifugation at 4,000 × g for 5 min, and then dissolved in sterile water to prepare 1-mL bacterial suspensions. Next, 0.3 mL of each bacterial suspension was mixed with 0.9 mL of DAPI, and absorbance was detected at 364 nm (DNA) and 400 nm (RNA).

Peptidoglycan Content Determination
Glucosamine standard solution (50 µg/mL) was aliquoted into 6 burettes (0 mL, 0.5 mL, 1.0 mL, 1.5 mL, 2.0 mL, and 2.5 mL), and 1 mL of acetylacetone was added, along with sterile water to 6 mL. After 30 min of incubation in boiling water, 4 mL of anhydrous ethanol and 1 mL of Ehrlich's reagent were added to the cooled solution. The resulting mixture was then incubated in 60 °C water for 1 h, and absorbance was detected at 530 nm. Absorbance values were used to construct a standard curve.
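A standard curve of this kind is typically a linear fit of absorbance against glucosamine amount, inverted to read sample amounts off measured absorbances. The sketch below illustrates this under that linearity assumption; all numeric values are placeholders, not the study's calibration data:

```python
# Minimal sketch of constructing and inverting a linear standard curve
# (absorbance at 530 nm vs. glucosamine amount). Values are placeholders.
import numpy as np

glucosamine_ug = np.array([0, 25, 50, 75, 100, 125])  # 0-2.5 mL of 50 ug/mL
absorbance = np.array([0.02, 0.11, 0.21, 0.30, 0.41, 0.50])

slope, intercept = np.polyfit(glucosamine_ug, absorbance, 1)  # A = m*x + b

def amount_from_absorbance(a530):
    """Invert the calibration line to estimate glucosamine amount (ug)."""
    return (a530 - intercept) / slope

print(f"sample with A530 = 0.25 -> {amount_from_absorbance(0.25):.1f} ug")
```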
Then, 10-mL culture samples (0 h, 2 h, 4 h, and 8 h) were subjected to five repeated cycles of freezing and thawing, and sonicated on ice. The resulting mixture was centrifuged at 1,000 × g for 10 min, and the supernatant was centrifuged at 10,000 × g for 20 min. After three rounds of washing, the crude peptidoglycan pellets were collected, and 2 mg of dried material was added to 1.5 mL of hydrochloric acid (6 mol/L) and incubated in boiling water for 1 h. Sodium hydroxide was added to the cooled solution to reach pH 7, and sterile water was added to a total volume of 10 mL. Finally, the concentration of peptidoglycan was calculated using the standard curve as described in the previous paragraph, with the following modification: 2 mL of the hydrolysate mixture was used rather than the glucosamine standard solution.

Determination of the Extent of Oxidative Damage
Cultures (0 h, 2 h, 4 h, 8 h, and 24 h) were centrifuged at 4,000 × g for 10 min and then washed with PBS three times. After resuspension in 2 mL of normal saline (NS), the mixture was sonicated on ice and centrifuged (12,000 × g, 20 min, 4 °C), and the supernatant was collected. Concentrations of SOD, CAT, MDA, and proteins were determined according to the manufacturer's instructions (Beyotime, Shanghai, China).

Phospholipid Extraction
Cultures (2 h) were centrifuged at 10,000 × g for 5 min, and the cell pellets were washed with PBS three times. After resuspension in NS, samples were sonicated on ice, two volumes of chloroform-methanol (2:1, v/v) were added, and the mixture was vortexed for a further 30 min. After centrifugation (2,500 × g, 10 min), the lower phase was transferred to a new tube. A quarter volume of methanol-water (1:1, v/v) was added to the mixture, and the lower phase was concentrated to 1 mL. Finally, the concentrate was vacuum-dried and stored until use.

Gas Chromatography Analysis of Fatty Acids
Dried phospholipids (0.02 g) were added to 0.5 mL of benzene-petroleum ether (1:1, v/v), and then 1.5 mL of potassium hydroxide-methanol (0.4 mol/L) was added. After incubation in 50 °C water for 15 min, hexane was added to the cooled solution to a total volume of 10 mL. Samples were vortexed, and the lower phase was collected for gas chromatography analysis. The following parameters were used for gas chromatography: a 30 m × 0.32 mm × 0.25 µm CP-Sil 19 CB silica capillary column was used; the temperature program ramped from 40 °C (2 min hold) to 260 °C (1 min hold) at 3 °C per minute; the injector and detector were held at 250 °C and 300 °C, respectively, with nitrogen used as the carrier gas; the flow rate was 2 mL/min, and the injection volume and split ratio were 1 µL and 35:1, respectively. Finally, a standard mixture that included 37 types of fatty acid methyl esters (Sigma, St. Louis, MO, USA) was used to determine the relative content of fatty acids through area normalization processing.
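Area normalization simply expresses each identified fatty acid peak as a fraction of the total peak area. A minimal sketch follows; the peak areas are illustrative placeholders, not chromatogram data from this study:

```python
# Minimal sketch of area-normalization processing for GC peaks:
# relative content of each fatty acid = its peak area / total area.
peak_areas = {"C16:0": 1520.0, "C16:1": 880.0, "C18:0": 1130.0, "C18:1": 1040.0}

total = sum(peak_areas.values())
relative = {name: 100.0 * area / total for name, area in peak_areas.items()}

for name, pct in relative.items():
    print(f"{name}: {pct:.1f}%")
```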
The Bendability of Ultra High Strength Steels

Automotive manufacturers have been reducing the weight of their vehicles to meet increasingly stringent environmental legislation that reflects public demand. A common strategy is to use higher strength materials for parts with reduced cross-sections. However, such materials are less formable than traditional grades, and the frequent result is increased processing and piece costs. 3D roll forming is a novel and flexible process: it is estimated that a quarter of the structure of a vehicle can be made with a single set of tooling. Unlike stamping, this process requires material with low work hardening rates. In this paper, we present results on ultra high strength steels that have low elongation in tension but display high formability in bending through the suppression of the necking response.

Introduction
The UK manufactures over 1.6 million vehicles per annum, a figure predicted to rise 9% annually. The sector accounts for 7.3% of manufacturing output and 5.2% of manufacturing employment [1]. Automotive manufacturers, however, are facing increasing demands to minimise the environmental impact of their products through lightweighting while improving safety standards. For example, the UK government passed the Climate Change Act (2008) to reduce its emissions by 80% from 1990 levels by 2050. These contradictory environmental and safety demands have led to increasing use of high strength materials such as ultra-high strength steel (UHSS) that have a high strength-to-weight ratio. These materials have ultimate tensile strengths above 1000 MPa but low elongations, making them difficult to form with the conventional cold stamping process. One way to adopt low formability materials is to use alternative processing routes such as roll forming [2]. Roll forming increases formability by deforming material through incremental, localised bending [3]. Sheet material flows through a series of rollers rigidly mounted on stands that bend the sheet as it passes through. The number of stands depends on the complexity of the geometry and can range from 4 to 35. Roll forming is cheap to operate, involves high material utilisation and is environmentally friendly, but it is limited to producing parts with a single cross-section along their length. Even though the geometry of this cross-section can be more complex than what can be achieved by stamping, forming a fixed cross-section along the length of a part limits its appeal for processing automotive parts, which frequently require a varying cross-section along their length. This limitation is being addressed by the recent development of the flexible roll forming process [4]. With this method, the individual rolls are mounted on actuators that can change the position and orientation of the rollers as a sheet flows through. With correct control, parts with continually varying cross-sections along their length can be produced. Flexible roll forming, therefore, opens up the possibility of roll forming automotive components with high strength materials such as UHSS. Recent work on roll forming has looked into the effect of residual stresses on the springback of the final part. For example, Mendiguren et al. [5] and Weiss et al. [6] tried to relate the bending behaviour of dual phase steels and aluminium, respectively, to their springback. Hosseini et al. [2] developed a strain gauge-based technique to measure residual stresses of the order of the yield stress in UHSS.
In this work, the behaviour of a UHSS dual phase grade is investigated by comparing its bendability to its stretch performance and by characterising its through-thickness strain distribution during a bending test. The results show that the material can sustain significantly higher strain in bending compared to stretching.

Method
The chemical composition of the dual phase UHSS is given in Table 1. Bending formability was characterised with plane strain bending tests that were carried out on a purpose-built bending rig mounted on an Instron 5800 frame. To assess formability, a DIC device was used to capture strains along the thickness of the sheet (Fig. 1).

Fig. 1 Photo of the bending rig mounted on an Instron 5800 frame

Results
The results of the tensile tests are given in Table 2. The results illustrate a material with a high strength-to-weight ratio (Table 1) but low formability, particularly in the plane strain region (Fig. 2). Away from the plane strain path (Fig. 2), formability increases, so that in the biaxial region failure takes place at around 0.25 strain. Its low formability is anticipated by a low work hardening rate of 0.14 (or a high yield ratio of 0.67) in all directions with respect to the grain. In plane strain bending, the material in the outer ligament of the bend was able to support greater strain than in the stretch mode experienced in the tensile and FLC tests (Fig. 3). Compared to the plane strain failure strain, which was measured to be about 0.1-0.11 (Fig. 2), the strain the bending sample supported was 0.16-0.17 (Fig. 3). Unlike the FLC samples, which necked and failed, analysis of the bending samples using scanning electron microscopy (SEM) did not reveal significant damage (Fig. 4). With the current design of the bending apparatus, bending of the sample was limited to about 80°. As a result, it was not possible to explore the limits of the bending formability of the material.

(a) Side view of the bending rig; (b) contour plot of major strain

Conclusions
Tensile tests and plane strain bending tests were carried out on a dual phase UHSS. The results showed that the material was able to support higher major strains in bending than in stretch. The full-field major strain map showed that the high strain levels were found, as expected, in the outer ligament of the sample. However, despite the higher strains in bending, little evidence of damage was found in the microstructure of the material, suggesting that it is able to sustain even higher strains.
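The quoted work hardening rate (n ≈ 0.14) and yield ratio (≈ 0.67) can be extracted from standard tensile data. The sketch below shows one common way to do so, assuming Hollomon-type hardening (σ = Kεⁿ); this is an assumption of the illustration, not a statement of the authors' actual procedure, and all input values are placeholders:

```python
# Minimal sketch: estimating the work hardening exponent n (Hollomon fit,
# sigma = K * eps^n) and the yield ratio from tensile data. The
# stress/strain values below are illustrative placeholders.
import numpy as np

true_plastic_strain = np.array([0.01, 0.02, 0.04, 0.06, 0.08])
true_stress_mpa = np.array([720.0, 790.0, 865.0, 915.0, 950.0])

# Linear fit in log-log space: log(sigma) = n*log(eps) + log(K)
n, log_k = np.polyfit(np.log(true_plastic_strain), np.log(true_stress_mpa), 1)
print(f"hardening exponent n ~ {n:.2f}, K ~ {np.exp(log_k):.0f} MPa")

yield_strength, uts = 700.0, 1045.0  # placeholder values, MPa
print(f"yield ratio = {yield_strength / uts:.2f}")
```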
Strong Scaling of Numerical Solver for Supersonic Jet Flow Configuration

Acoustic loads are rocket design constraints that push researchers and engineers to invest effort in the aeroacoustic phenomena present on launch vehicles. Therefore, an in-house computational fluid dynamics tool has been developed in order to reproduce high-fidelity results of supersonic jet flows for aeroacoustic analogy applications. The solver is written using the large eddy simulation formulation, which is discretized using a finite-difference approach and an explicit time integration. Numerical simulations of supersonic jet flows are very expensive and demand efficient high-performance computing. Therefore, non-blocking message passing interface protocols and parallel input/output features are implemented into the code in order to perform simulations that demand up to one billion degrees of freedom. The present work evaluates the parallel efficiency of the solver when running on a supercomputer with a maximum theoretical peak of 127.4 TFLOPS. Speedup curves are generated using nine different workloads. Moreover, the validation results for a realistic flow condition are also presented in the current work.

Introduction
One of the main design issues related to launch vehicles lies in the noise emission originating from the complex interaction between the high-temperature/high-velocity exhaust gases and the atmospheric air. These emissions, which have high noise levels, can damage the launching structure or even be reflected onto the vehicle structure itself and the equipment onboard at the top of the vehicle. The resulting pressure fluctuations can damage the solid structure of different parts of the launcher or of the onboard scientific equipment through vibrational acoustic stress. Therefore, it is strongly recommended to consider the loads resulting from acoustic sources on large launch vehicles during take-off and also during transonic flight. The authors are interested in studying unsteady property fields of compressible jet flow configurations in order to eventually understand the acoustic phenomena, which are important design constraints for rocket applications. Experimental techniques used to evaluate such flow configurations are complex and require considerably expensive apparatus. Therefore, the authors have developed a numerical tool, JAZzY [17], based on the large eddy simulation (LES) formulation [11], in order to perform time-dependent simulations of compressible jet flows. JAZzY is a parallel compressible LES code for the calculation of supersonic jet flow configurations. The large eddy simulation approach has been successfully used by the scientific community and can provide high-fidelity numerical data for aeroacoustic applications [3,36,21,37,18]. The numerical tool is written in the Fortran 90 standard coupled with Message Passing Interface (MPI) features [8]. The HDF5 [9,10] and CGNS [30,19,27,26] libraries are included in the numerical solver in order to implement a hierarchical data format (HDF) and to perform input/output operations efficiently. Large eddy simulations require a significant amount of computational resources to provide high-fidelity results. The scientific community has been using up to hundreds of millions of degrees of freedom in simulations of turbulent flow configurations [6,12,18,32].
Researchers and engineers need to be certain that calculations are run with maximum parallel efficiency when allocating computational resources, because access to supercomputers is often restricted and limited. Therefore, it is of major importance to perform scalability studies, regarding an optimal choice of computational load and resources, before running simulations on supercomputers. The present work addresses the computational performance evaluation of the code when using a Hewlett Packard Enterprise (HPE) cluster from a computational center [5]. The high-performance computing (HPC) solution provides a maximum theoretical peak performance of 127.4 TFLOPS using CPUs, Nvidia GPUs and Xeon Phi accelerators. Simulations of a pre-defined perfectly expanded jet flow condition are performed using different loads and resources in order to study the strong scalability of the solver. More specifically, nine mesh configurations are investigated, running on up to 400 processors in parallel. The number of degrees of freedom starts at 5.8 million points and scales to 1.0 billion points. The speedup and computational efficiency curves are measured for each grid configuration. The supercomputer is described after the introduction section. Then, the numerical formulation and the implementation aspects of the tool are discussed. In the sequence, the strong scalability results are presented to the reader, followed by a discussion of the validation of the solver. At the end, one can find the concluding remarks and acknowledgements.

Computational Resources
The current work is included in the CEPID-CeMEAI [5] project of the Applied Mathematics department of the University of São Paulo. This project addresses four main research subjects: optimization and operational research; computational intelligence and software engineering; computational fluid mechanics; and risk evaluation. The CEPID-CeMEAI provides access to a high-performance computer server located at the University of São Paulo, named Euler. The system presents a maximum theoretical peak performance of 127.4 TFLOPS using a hybrid parallel processing architecture with 144 CPU nodes, 4 fat nodes, 6 GPU nodes and 1 Xeon Phi node, with a total of 3428 computational cores. The detailed description of each node configuration is presented in Tab. 1. Two storage systems are available, with 175 TB each: the network file system (NFS) and the Lustre file system [23]. Network communication is performed using InfiniBand and Gigabit Ethernet. Red Hat Enterprise Linux [28] is the operating system of the cluster, and Altair PBS Pro [1] is the available job scheduler.

Large Eddy Simulation Formulation
The numerical simulations of supersonic jet flow configurations are performed based on the large eddy simulation formulation [11]. This set of equations is based on the principle of scale separation applied to the governing equations of fluid dynamics, the Navier-Stokes formulation. A filtering procedure can be used to describe the scale separation in a mathematical formalism. The idea is to model the small turbulent structures and to resolve the larger ones. Subgrid scale (SGS) closures are added to the filtered equations in order to model the effects of the small-scale turbulent structures.
The Navier-Stokes equations are written in the current work using the filtering procedure of Vreman [35] as

$$\frac{\partial \bar{\rho}}{\partial t} + \frac{\partial}{\partial x_j}\left(\bar{\rho}\tilde{u}_j\right) = 0 ,$$

$$\frac{\partial}{\partial t}\left(\bar{\rho}\tilde{u}_i\right) + \frac{\partial}{\partial x_j}\left(\bar{\rho}\tilde{u}_i\tilde{u}_j\right) + \frac{\partial \bar{p}}{\partial x_i} - \frac{\partial \tilde{\tau}_{ij}}{\partial x_j} + \frac{\partial \sigma_{ij}}{\partial x_j} = 0 ,$$

$$\frac{\partial \bar{e}}{\partial t} + \frac{\partial}{\partial x_j}\left[\left(\bar{e} + \bar{p}\right)\tilde{u}_j\right] - \frac{\partial}{\partial x_j}\left(\tilde{\tau}_{ij}\tilde{u}_i\right) + \frac{\partial q_j}{\partial x_j} = 0 , \qquad (1)$$

in which $t$ and $x_i$ are independent variables representing time and the spatial coordinates of a Cartesian coordinate system, $\mathbf{x}$, respectively. The components of the velocity vector, $\mathbf{u}$, are written as $u_i$, with $i = 1, 2, 3$. Density, pressure and total energy per unit volume are written as $\rho$, $p$ and $e$, respectively. The $\overline{(\cdot)}$ and $\widetilde{(\cdot)}$ operators are used in order to represent filtered and Favre-averaged properties, respectively. It is important to remark that the System I filtering procedure [35] neglects the double correlation term, $\widetilde{u_i u_j}$, which is present in the total energy per unit volume equation, in order to write

$$\bar{e} = \frac{\bar{p}}{\gamma - 1} + \frac{1}{2}\,\bar{\rho}\,\tilde{u}_i\tilde{u}_i .$$

The heat flux, $q_j$, is given by

$$q_j = -\left(\kappa + \kappa_{sgs}\right)\frac{\partial \tilde{T}}{\partial x_j} ,$$

where $\tilde{T}$ and $\kappa$ stand for the static temperature and the thermal conductivity coefficient, respectively. The latter can be expressed as

$$\kappa = \frac{\mu\, C_p}{Pr} ,$$

in which $C_p$ is the specific heat at constant pressure, $\mu$ is the dynamic viscosity coefficient and $Pr$ is the Prandtl number, which is equal to 0.72 for air. The SGS thermal conductivity coefficient, $\kappa_{sgs}$, is written as

$$\kappa_{sgs} = \frac{\mu_{sgs}\, C_p}{Pr_{sgs}} ,$$

where $Pr_{sgs}$ is the SGS Prandtl number, which is equal to 0.9 for static SGS closures, and $\mu_{sgs}$ is the eddy viscosity coefficient calculated by the SGS model. In the present work, the dynamic viscosity coefficient is calculated using the Sutherland law, which in its standard form reads

$$\mu(\tilde{T}) = \mu_\infty \left(\frac{\tilde{T}}{T_\infty}\right)^{3/2} \frac{T_\infty + S_1}{\tilde{T} + S_1} ,$$

with the reference viscosity $\mu_\infty$, the reference temperature $T_\infty$ and the Sutherland constant $S_1 = 110.4\,$K. One can use an equation of state to correlate density, static pressure and static temperature,

$$\bar{p} = \bar{\rho}\, R\, \tilde{T} ,$$

in which $R$ is the gas constant, written as $R = C_p - C_v$, and $C_v$ is the specific heat at constant volume. The shear-stress tensor, $\tilde{\tau}_{ij}$, is written as

$$\tilde{\tau}_{ij} = 2\mu\left(\tilde{S}_{ij} - \frac{1}{3}\delta_{ij}\tilde{S}_{kk}\right) ,$$

where the components of the rate-of-strain tensor, $\tilde{S}_{ij}$, are given by

$$\tilde{S}_{ij} = \frac{1}{2}\left(\frac{\partial \tilde{u}_i}{\partial x_j} + \frac{\partial \tilde{u}_j}{\partial x_i}\right) .$$

The SGS stress tensor components can be written using the eddy viscosity coefficient [31] as

$$\sigma_{ij} = -2\mu_{sgs}\left(\tilde{S}_{ij} - \frac{1}{3}\delta_{ij}\tilde{S}_{kk}\right) + \frac{1}{3}\delta_{ij}\sigma_{kk} .$$

In the present article, the eddy viscosity coefficient, $\mu_{sgs}$, and the components of the isotropic part of the SGS stress tensor, $\sigma_{kk}$, are not considered in the calculations. Previous validation results performed by the authors [17,18] indicate that the characteristics of the numerical discretization of JAZzY can overcome the effects of subgrid scale models; the same conclusion can be found in the work of Li and Wang [20]. Therefore, the large eddy simulation set of equations can be written in a more compact form as

$$\frac{\partial Q}{\partial t} = RHS , \qquad (12)$$

where $Q$ stands for the conservative properties vector, given by

$$Q = \left[\,\bar{\rho},\; \bar{\rho}\tilde{u},\; \bar{\rho}\tilde{v},\; \bar{\rho}\tilde{w},\; \bar{e}\,\right]^T ,$$

and $RHS$, the right-hand side of the equation, represents the contribution of the inviscid and viscous flux terms from Eq. 1,

$$RHS = -\frac{\partial}{\partial x_j}\left(E_j - F_j\right) ,$$

in which $E_j$ and $F_j$ are the components of the inviscid and viscous flux vectors, respectively, given by

$$E_j = \left[\,\bar{\rho}\tilde{u}_j,\; \bar{\rho}\tilde{u}_1\tilde{u}_j + \bar{p}\,\delta_{1j},\; \bar{\rho}\tilde{u}_2\tilde{u}_j + \bar{p}\,\delta_{2j},\; \bar{\rho}\tilde{u}_3\tilde{u}_j + \bar{p}\,\delta_{3j},\; \left(\bar{e} + \bar{p}\right)\tilde{u}_j\,\right]^T ,$$

$$F_j = \left[\,0,\; \tilde{\tau}_{1j} - \sigma_{1j},\; \tilde{\tau}_{2j} - \sigma_{2j},\; \tilde{\tau}_{3j} - \sigma_{3j},\; \left(\tilde{\tau}_{ij} - \sigma_{ij}\right)\tilde{u}_i - q_j\,\right]^T .$$

Spatial derivatives are calculated in a structured finite-difference context, and the formulation is rewritten for a general curvilinear coordinate system [17]. The numerical flux is computed through a central difference scheme with the explicit addition of the anisotropic scalar artificial dissipation model of Turkel and Vatsa [34]. The time marching method is an explicit 5-stage Runge-Kutta scheme developed by Jameson et al. [16].
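As an illustration of this class of time integrator, the sketch below advances $\partial Q/\partial t = RHS(Q)$ with a Jameson-type explicit 5-stage Runge-Kutta scheme. The coefficients are the commonly quoted values for this family of schemes, and the trivial right-hand side is a placeholder, so this is a minimal sketch rather than the solver's actual Fortran implementation:

```python
# Minimal sketch of a Jameson-type explicit 5-stage Runge-Kutta update
# for dQ/dt = RHS(Q). Coefficients are the commonly quoted values for
# this family of schemes; the RHS below is a placeholder, not the LES
# flux/dissipation operator of the solver.
import numpy as np

ALPHA = (1.0 / 4.0, 1.0 / 6.0, 3.0 / 8.0, 1.0 / 2.0, 1.0)

def rhs(q):
    """Placeholder right-hand side (linear decay) for illustration."""
    return -q

def rk5_step(q, dt):
    q0 = q.copy()        # Q at time level n
    for a in ALPHA:      # five stages; the last (alpha = 1) yields n+1
        q = q0 + a * dt * rhs(q)
    return q

q = np.ones(8)
for _ in range(100):
    q = rk5_step(q, dt=0.01)
print(q[0])  # ~exp(-1) for the decay placeholder
```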
Boundary conditions for the LES formulation are imposed in order to represent a supersonic jet flow in a 3-D computational domain with a cylindrical shape. Figure 1 illustrates the computational domain used in the present work and the positioning of the entrance, exit, centerline, far-field, and periodic boundary conditions. A flat-hat velocity profile is implemented at the entrance boundary condition through the use of the 1-D characteristic relations for the 3-D Euler equations in order to create a jet-like flow configuration. The set of properties is then determined from within and from outside the computational domain. Riemann invariants [22] are used in order to calculate properties at the far-field surfaces, where the normal-to-face components of the velocity are computed by a zero-order extrapolation from inside the computational domain. The flow angle at the far-field boundary is assumed fixed. The remaining properties are obtained as a function of the jet Mach number, which is a known variable. The flow configuration is assumed to be subsonic at the exit plane of the domain. Therefore, the pressure is obtained from the outside, i.e., it is assumed given, while the internal energy and the components of the velocity are calculated by a zero-order extrapolation from the interior of the domain. Then, the density, ρ, and the total energy per unit volume, e, are computed at the exit boundary using the extrapolated properties and the pressure imposed at the output plane. The first and last points in the azimuthal direction are superposed in order to close the 3-D computational domain and create a periodicity boundary condition. An adequate treatment of the centerline boundary is necessary, since it is a singularity of the coordinate transformation. The conserved properties are extrapolated from the adjacent longitudinal plane and averaged in the azimuthal direction in order to define the updated properties at the centerline of the jet. Furthermore, the fourth-difference terms of the artificial dissipation scheme of Turkel and Vatsa [34] are carefully treated in order to avoid five-point difference stencils at the centerline singularity. The reader can find further details about the spatial discretization, the time marching scheme and the implementation of boundary conditions in the work of Junqueira-Junior [17] and Junqueira-Junior et al. [18], which present the validation of the large eddy simulation solver.

Parallel Implementation Aspects
The solver is developed to calculate the LES set of equations, Eq. 12, for supersonic jet flow configurations using the Fortran 90 standard. The spatial discretization of the formulation is based on a centered finite-difference approach with the explicit addition of the anisotropic scalar artificial dissipation model of Turkel and Vatsa [34], and the time integration is performed using an explicit 2nd-order 5-stage Runge-Kutta scheme [16]. Parallelism is achieved using the single program multiple data (SPMD) approach [7] and the exchange of messages provided by MPI protocols [8]. The algorithm of the LES solver is structured in two main steps. First, a pre-processing routine reads a mesh file and performs a balanced partitioning of the domain. Then, in the processing routine, each MPI rank reads its corresponding grid file and starts the calculations. The pre-processing routine is run separately from the processing step. It reads an input file with the partitioning configuration and a 2-D grid file. Next, the pre-processing code calculates the number of points in the axial and azimuthal directions in order to perform the partitioning and the extrusion in the third direction for each sub-domain. The segmentation of the grid points is illustrated in Figure 2(a). A matrix index notation is used in order to represent the position of each partition, where NPX and NPZ denote the number of partitions in the axial and azimuthal directions, respectively. In the case of a non-exact domain division, the remaining points are spread among the partitions in order to have a well-balanced task distribution. Algorithm 1 presents the details of this division, and Fig. 2(b) illustrates the balancing procedure in one direction, where LocNbPt, TotNbPt, NbPart, and PartIndex stand for the local number of points in one direction, the total number of points in one direction, the number of partitions in one direction, and the index of the partition in one direction, respectively. The same algorithm is used to perform the partitioning procedure in both the axial and azimuthal directions; a minimal sketch of this balanced division is shown below.
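The sketch assumes the rule described for Algorithm 1, i.e., integer division with the remainder spread over the first partitions; the function name is illustrative, not taken from the solver:

```python
# Minimal sketch of the balanced 1-D partitioning described in Algorithm 1:
# integer division of TotNbPt by NbPart, with the remainder spread over the
# first partitions so the loads differ by at most one point.
def partition_sizes(tot_nb_pt: int, nb_part: int) -> list[int]:
    base, remainder = divmod(tot_nb_pt, nb_part)
    return [base + 1 if part_index < remainder else base
            for part_index in range(nb_part)]

# Example: 1003 axial points over 4 partitions -> [251, 251, 251, 250]
print(partition_sizes(1003, 4))
assert sum(partition_sizes(1003, 4)) == 1003
```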
This pre-processing part of the code is executed sequentially. The mesh files for each sub-domain are written after the optimized partitioning procedure using the CFD General Notation System (CGNS) standard [19,26,27,30]. This standard is based on the HDF5 libraries [9,10], which provide tools for a hierarchical data format (HDF) and can perform input/output operations efficiently. The authors have chosen to write one CGNS grid file for each partition in order to have each MPI rank performing I/O operations independently, i.e., in parallel, during the processing step of the calculations. Moreover, each MPI rank can also write its own time-dependent solution to a local CGNS file. Such an approach avoids synchronizations during checkpoints, which can be a significant drawback in HPC applications. After the pre-processing routine, the solver can start the simulation. A brief overview of the computing part of the LES code is presented in Alg. 2 (Implementation of the large eddy simulation formulation). First, every MPI process reads the same ASCII file with input data such as flow configurations and simulation settings, as indicated in line 2 of Alg. 2. In the sequence, lines 3 and 4 of the same algorithm, each rank reads a local-domain CGNS file and calculates the Jacobian and metric terms, which are used for the general curvilinear coordinate transformation. Ghost points are added to the boundaries of the local mesh in the axial and azimuthal directions in order to carry information of neighbor partition points, line 5 in Alg. 2. The artificial dissipation scheme of Turkel and Vatsa [34] implemented in the code [15] uses a five-point stencil, which requires information from the two neighbors of a given mesh point. Hence, two-layer ghost points are created at the beginning and at the end of each partition. Figure 3 presents the layers of ghost points used in the present code: the yellow and black layers represent the axial and azimuthal ghost points, respectively, while the green region represents the local partition grid points. The initial conditions of the flow configuration are imposed following the sequence of tasks of the processing algorithm. They are calculated using data from a previous checkpoint or from the input data, depending on whether it is a restart simulation or not, as presented in lines 5 to 9 of Alg. 2. An asynchronous MPI communication of the metric terms, Jacobian terms, and conservative properties, set as initial conditions, is performed between partitions which share neighbor surfaces in order to update all ghost points before starting the time integration; a minimal sketch of such a non-blocking halo exchange is shown below.
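The sketch uses mpi4py to illustrate a two-layer, non-blocking ghost-point (halo) exchange between axial neighbors; it is a simplified stand-in for the solver's Fortran MPI calls, and the array sizes, tags and 1-D decomposition are illustrative:

```python
# Minimal sketch (mpi4py) of a non-blocking two-layer halo exchange
# between axial neighbors, standing in for the solver's Fortran MPI
# calls. Array sizes, tags and the 1-D decomposition are illustrative.
# Run with e.g.: mpirun -np 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

nloc, nghost = 16, 2                    # interior points, halo width
q = np.zeros(nloc + 2 * nghost)         # local array with ghost layers
q[nghost:-nghost] = rank                # fill interior with the rank id

send_l = np.ascontiguousarray(q[nghost:2 * nghost])    # first interior layer
send_r = np.ascontiguousarray(q[-2 * nghost:-nghost])  # last interior layer
recv_l = np.empty(nghost)
recv_r = np.empty(nghost)

reqs = [comm.Isend(send_l, dest=left, tag=0),
        comm.Isend(send_r, dest=right, tag=1),
        comm.Irecv(recv_l, source=left, tag=1),
        comm.Irecv(recv_r, source=right, tag=0)]
MPI.Request.Waitall(reqs)               # computation could overlap here

if left != MPI.PROC_NULL:
    q[:nghost] = recv_l                 # update left ghost layer
if right != MPI.PROC_NULL:
    q[-nghost:] = recv_r                # update right ghost layer
```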
The core of the code consists of the while loop indicated in line 11 of Alg. 2. This specific part of the code is evaluated in the scalability study presented in Sec. 5, Computational Performance Study. This loop performs the Runge-Kutta time integration iteratively, for all grid points of the computational domain, until reaching the requested number of iterations. A succession of computing subroutines is performed at each call of the time marching scheme. It starts with a synchronization of the MPI ranks, in order to avoid race conditions, followed by the calculation of the inviscid flux vectors, the artificial dissipation terms, the viscous flux vectors and the right-hand side vector, chronologically. The boundary conditions and the dynamic viscosity coefficient are updated at the end of each time-marching step, followed by a non-blocking communication of the conservative properties at the partition boundaries. If necessary, a time-dependent solution is appended to the CGNS output file at the end of the main loop iteration.

Computational Performance Study
The numerical discretization approach used in the present article requires very refined grids in order to reproduce high-fidelity results for the simulation of supersonic jet flow configurations. Therefore, parallel computing with efficient inter-partition data exchanges is mandatory. The parallel efficiency of the code is measured using different computational loads, and the results are presented in this section. The calculations performed in the current article are run using the 104 nodes of the Euler supercomputer with the Intel Xeon E5-2680v2 architecture and 128 GB of DDR3 rapid access memory. The Intel Composer XE compiler, version 15.0.2.164, is used in the present work. A set of compiling flags which have been tested in previous work [17] is used in the present paper: -O3 -xHost -ipo -no-prec-div -assume buffered_io -override-limits, where O3 enables aggressive optimizations such as global code scheduling, software pipelining, predication and speculation, prefetching, scalar replacement and loop transformations; xHost tells the compiler to generate instructions for the highest instruction set available on the compilation host processor; ipo uses an automatic, multi-step process that allows the compiler to analyze the code and determine where it can benefit from specific optimizations; no-prec-div enables optimizations that give slightly less precise results than full division; assume buffered_io tells the compiler to accumulate records in a buffer; and override-limits deals with very large, complex functions and loops.

Scalability Setup
Simulations of an unheated perfectly expanded jet flow are performed using different grid sizes and numbers of processors in order to study the strong scalability of JAZzY. The jet entrance Mach number is 1.4. The pressure ratio, PR = Pj/P∞, and the temperature ratio, TR = Tj/T∞, between the jet entrance and the ambient freestream conditions are equal to one, i.e., PR = 1 and TR = 1, where the j subscript identifies properties at the jet entrance and the ∞ subscript stands for properties in the far-field region. The Reynolds number of the jet is Re = 1.57 × 10^6, based on the jet entrance diameter, D. The time increment, ∆t, used for the present study is 1 × 10^-4 dimensionless time units. A stagnated flow is used as the initial condition for the simulations. The same geometry is used for the computational evaluation, where the 2-D surface of this computational domain, as presented in Fig. 1, is 30 dimensionless units in length and 10 dimensionless units in height. Figure 4 illustrates a 2-D cut of the geometry coloured by velocity contours.
The present work uses nine different mesh configurations whose total number of points doubles every time. Table 2 presents the details of each grid design, where the first column presents the name of the mesh while the second and third columns present the number of points in the axial and radial directions, respectively. The last column indicates the total number of points of the mesh. The number of grid points in the azimuthal direction is fixed at 361. The smallest grid, named Mesh A, has approximately 5.9 million points, while the largest grid has approximately 1.0 billion points. The solver can use different partitioning configurations for a fixed number of subdomains, since the division of the mesh is performed in both the axial and azimuthal directions. Therefore, different partitioning configurations are evaluated for a given number of processors. Each simulation performs 1000 iterations or 24 hours of computation, whichever is reached first, and the average CPU time per iteration of the while loop indicated in Alg. 2 is measured in order to calculate the speedups of the solver on the Euler supercomputer. The partitioning configuration which provides the fastest calculation is used to evaluate the performance of the solver. Table 3 presents the number of partitions in the azimuthal direction for each number of processors used to study the scalability of the solver. The first column stands for the number of computational cores, while the second column represents the number of zones in the azimuthal direction used to evaluate the effects of the partitioning on the computation. Calculations performed in the present study are run using from one single core up to 400 MPI processes. A sequential computation is the ideal starting point, s, for a strong scalability study. However, frequently, the computational problem cannot be allocated on one single cluster node due to hardware limitations of the system. Therefore, it is necessary to shift the starting point to the minimum number of resources on which the code can be run. The LES solver has proven capable of allocating the five smallest grids, from mesh A to mesh E, on one single node of the Euler computer. This indicates that it is possible to run a simulation with 94.6 million grid points using 128GB of RAM. The starting points of mesh F and mesh G are 40 cores, allocated in two nodes, and 80 cores, allocated in four nodes, respectively. Meshes H and I start the strong scalability study using 200 cores in 10 nodes of the Euler computer. Table 4 presents the minimum number of computational cores used by each mesh configuration for the strong scaling study.

Strong scalability study

The speedup, Sp, is used in the present work to measure the strong scaling of the solver and compare it with the ideal theoretical case. Different approaches are used by the scientific community to calculate the speedup [33,14]; it is written in the current article as

Sp_m(N) = s · T_m(s) / T_m(N),

in which T_m stands for the time spent by mesh m to perform one thousand iterations, N represents the number of computational cores, and s is the starting point of the scalability study. The strong scaling efficiency of a given mesh configuration, as a function of the number of processors, is written, considering the law of Amdahl [2], as

E_m(N) = Sp_m(N) / N.

More than 300 calculations are performed when considering all the partitioning configurations and different meshes evaluated in the present paper.
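As an illustration of these definitions, the short sketch below computes the speedup and strong-scaling efficiency with a shifted starting point s. The timing values in the example are invented for demonstration and are not measurements from the Euler machine.

```python
# Sketch of the speedup and efficiency definitions with a shifted starting
# point s. Timing values are invented for illustration only.

def speedup(T, N, s):
    """Sp(N) = s * T(s) / T(N), so the ideal curve is Sp(N) = N."""
    return s * T[s] / T[N]

def efficiency(T, N, s):
    """E(N) = Sp(N) / N; 1.0 corresponds to ideal strong scaling."""
    return speedup(T, N, s) / N

# T maps the number of cores to the measured time for 1000 iterations (s).
T = {40: 5200.0, 80: 2610.0, 200: 1070.0, 400: 560.0}
s = 40  # e.g., a mesh whose smallest feasible allocation is two nodes
for N in sorted(T):
    print(N, round(speedup(T, N, s), 1), round(efficiency(T, N, s), 3))
```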
The averaged time per iteration is calculated for all numerical simulations in order to study the scalability of the solver. The evolution of speedup and efficiency, as functions of the number of processors, for the nine grids used in the current work is presented in Figs. 5 and 6. The investigation indicates a good scalability of the code. Meshes with more than 50 million points present efficiencies greater than 75% when running with 400 computing cores in parallel. Moreover, mesh E, which has ≈ 95 million degrees of freedom, presented an efficiency equivalent to the ideal case, ≈ 100%, when using the maximum number of resources evaluated. One can notice a superlinear scalability for the cases evaluated in the present article. This behavior can be explained by the fact that cache memory can be accessed more efficiently when increasing the total number of processors for a given grid configuration, since more computational resources for the same load mean fewer cache misses [29,13]. Increasing the size of a computational problem can also generate a better scalability study: as the problem grows, the time spent on computation becomes more significant compared to the time spent on communication. One can notice such an effect for meshes A, B, C, D and E, for which the speedup and the efficiency increase with the growth of the mesh size. However, such scalability improvement does not happen from mesh E to meshes F, G, H and I. This behavior originates from the fact that the reference used to calculate speedup and efficiency is not the same for all grid configurations. The studies performed using meshes F, G, H and I do not use the serial computation as a reference, which is not the case for the calculations performed using meshes A, B, C, D and E.

Compressible Jet Flow Simulation

This section presents a compilation of results achieved from the simulation of a supersonic jet flow configuration. This calculation was performed in order to validate the LES code, and it is included here simply to demonstrate that the numerical tool is indeed capable of presenting physically sound results for the problem of interest. Results are compared to numerical [24,25] and experimental [4] data. The details of this particular simulation are published in the work of Junqueira et al. [18]. A geometry is created using a divergent shape whose axis length is 40 times the jet entrance diameter, D. Figure 7 illustrates a 2-D cut of the geometry and the grid point distribution used in the validation of the solver: (a) a two-dimensional surface, colored by velocity magnitude contours, extracted from the full geometry; (b) a two-dimensional surface extracted from the full domain, superimposed with the grid point distribution. The mesh presents approximately 50 million points. The calculation is performed using 500 computational cores. An unheated perfectly expanded jet flow is studied for the present validation. The jet entrance Mach number is 1.4. The pressure ratio, PR = Pj/P∞, and the temperature ratio, TR = Tj/T∞, between the jet entrance and the ambient freestream conditions are equal to one, i.e., PR = 1 and TR = 1. The j subscript identifies the properties at the jet entrance and the ∞ subscript stands for properties in the farfield region. The Reynolds number of the jet is Re = 1.57 × 10^6, based on the jet entrance diameter, D. The time increment, ∆t, used for the validation study is 1 × 10^−4 dimensionless time units.
The boundary conditions previously presented in the Large Eddy Simulation Formulation section are applied in the current simulation. The stagnation state of the flow is set as the initial condition of the computation. The calculation runs for a predetermined period of time until reaching the statistically steady flow condition. This first pre-simulation is important to assure that the jet flow is fully developed and turbulent. Computations are restarted and run for another period after the statistically stationary flow state is achieved, and data are then extracted and recorded at a predetermined frequency. Figure 9 presents profiles of the averaged axial component of velocity, U, at 2.5D and 5.0D from the entrance: solid lines indicate the results obtained using the JAZzY code, while square and triangular symbols represent numerical [24,25] and experimental [4] data, respectively. The averaged profiles obtained in the present work correlate well with the reference data at the two positions compared here. It is important to remark that the LES tool can provide good predictions of supersonic jet flow configurations when using a sufficiently fine grid point distribution. Therefore, efficient massive parallel computing is mandatory in order to achieve good results. Figures 11 and 12 present a lateral view of an instantaneous visualization of the pressure contours, in greyscale, superimposed with 3-D velocity magnitude contours and vorticity magnitude contours, respectively, in color, calculated by the LES tool discussed in the present paper. A detailed visualization of the region indicated in yellow, at the jet entrance, is shown in Fig. 12. The resolution of the flow features obtained from the jet simulation is more evident in this detailed plot of the jet entrance. One can clearly notice the compression waves generated at the shear layer, and their reflections at the jet axis. Such resolution is important in order to observe the details and behavior of this flow configuration and to understand the acoustic phenomena which are present in supersonic jet flow configurations.

Concluding Remarks

The current work is concerned with the performance of a computational fluid dynamics tool for aeroacoustics applications when using a national supercomputer. The HPC system, Euler, from the University of São Paulo presents more than 3000 computational cores and a maximum theoretical peak of 127.4 TFLOPS. The numerical solver is developed by the authors to study supersonic jet flows. Simulations of such flow configurations are expensive and need efficient parallel computing. Therefore, strong scalability studies of the solver are performed on the Euler supercomputer in order to evaluate whether the numerical tool is capable of efficiently using computational resources in parallel. The computational fluid dynamics solver is developed using the large eddy simulation formulation for perfectly expanded supersonic jet flows. The equations are written using a finite-difference centered spatial discretization with the addition of artificial dissipation. The time integration is performed using a five-step Runge-Kutta scheme. Parallel computing is achieved through non-blocking message passing interface protocols, and inter-partition data are allocated using ghost points. Each MPI partition reads and writes its own portion of the mesh, which is created by the pre-processing routine.
A geometry and a flow condition are defined for the scalability study performed in the present work. Nine grid-point distributions and different partitioning configurations are used in order to evaluate the parallel code under different workloads. The grid configurations start at 5.9 million points and rise up to approximately 1.0 billion points. Calculations perform 1000 iterations or 24 hours of computation using up to 400 cores in parallel. The CPU time per iteration is averaged when the simulation is finished in order to calculate the speedup and scaling efficiency. More than 300 simulations are performed for the scalability study when considering the different workloads and partitioning configurations. The code presented good scalability for the calculations run in the current paper. The averaged CPU time per iteration decays with the increase in the number of processors in parallel for all computations performed by the large eddy simulation solver evaluated in the present work. Meshes with more than 50 million points indicated an efficiency greater than 75%. The problem with approximately 100 million points presented a speedup of 400 and an efficiency of 100% when running on 400 computational cores in parallel. Such performance is equivalent to the theoretical ideal parallel behavior. It is important to remark on the ability of the parallel solver to treat very dense meshes, such as the one tested in the present paper with approximately 1.0 billion points. Large eddy simulation demands very refined grids in order to have a good representation of the physical problem of interest. Therefore, it is important to perform simulations of such configurations with good computational efficiency, and the present scalability study can be seen as a guide for future simulations using the same numerical tool on the Euler supercomputer.
Stock Market Reactions on Shariah Indices Following Sukuk Issuances: CAAR Analysis on the 2008 Financial Crisis

Abstract

The purpose of this study is to look at how stock markets in Malaysia reacted to sukuk issuance announcements from 2004 to the post-2008 financial crisis period by examining 50 selected companies listed in the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI), the FTSE Bursa Malaysia Emas Shari'ah Index (FBM EMAS), the FTSE Bursa Malaysia Hijrah Shari'ah Index (FBM HIJRAH), and the Dow Jones Islamic Market Index (DJIM). The data gathered from DataStream, Bloomberg, the Securities Commission Malaysia, and the Bursa Malaysia stock exchange were used to compile this study. A three-year estimation period was adopted to investigate market reactions using the cumulative average abnormal return (CAAR). To investigate market efficiency, this study looked at symmetric and asymmetric event windows. This study discovers that markets responded favourably before the crisis but negatively and significantly during and after the crisis. The findings of this study provide advice to policymakers on how to direct regulators, investors, and issuers to the most stable sukuk during a crisis, as well as useful information and suggestions to issuers, policymakers, regulatory organisations, and investors in Islamic bonds.

Introduction

Since the 2008 financial crisis, the COVID-19 pandemic has posed the greatest challenge to the global financial system (Dali et al., 2021). Though the economic picture will only become clearer once COVID-19 has been completely eradicated, the first half of 2021 saw the sukuk market remain resilient (IIFM, 2021). Many factors have contributed to the strong growth trend of the sukuk market. The first is favourable expectations for the global economy, relatively stable commodity prices, and the ongoing increase in sovereign sukuk issuance. The strong trend is also driven by the rising interest in sukuk following the recent issuances of Formosa Sukuk from Taiwan and the sukuk issuance by an Egyptian business in January 2020, as well as the growing investor base (Ahmad et al., 2021). The recent modernisation of Islamic finance, which has changed the dynamics of the Islamic financial industry, has also caused the demand for sukuk to increase in the last few years, resulting in them gaining universal acceptance.

Looking at both the increasing expectations of this industry and the growing investor base, the study investigates the performance of the sukuk industry before, during, and after the 2008 global financial crisis. In doing so, the main focus of this research is to investigate stock market reactions following sukuk announcements in Malaysia using the event study methodology. This is because sukuk and equity have similar characteristics (Modirzadehbami & Mansourfar, 2011). Sukuk does not pay interest but generates returns through the commoditisation of capital gain. Therefore, it cannot be classified exclusively as debt because it also shares some stock features. The structure of the paper is organised as follows. Following the introduction is the section which discusses the background of the study and the relevant literature. Section 3 presents the research methodology and research findings. Finally, Section 4 brings the discussion to a close and offers some recommendations.
Literature Review

Definition of Sukuk

Sukuk is a prominent element in the Islamic financial system, contributing approximately 90 per cent to the Islamic capital market (Haider & Azhar, 2010). The Islamic Development Bank (IDB) defines sukuk as "an asset-backed bond which was designed or structured in accordance with the Shari'ah and which might be traded in the market" (IDB, 2006). The Accounting and Auditing Organization for Islamic Financial Institutions (AAOIFI, 2008), in its Shari'ah Standard 17 (2), defines sukuk as "certificates of equal value representing undivided shares in the ownership of tangible assets, usufructs and services or (in the ownership of) the assets of particular projects or special investment activity". Meanwhile, the Islamic Financial Services Board (IFSB, 2007), in its Capital Adequacy Standard (IFSB 2), defines sukuk as "certificates that represent the holder's proportionate ownership in an undivided part of an underlying asset where the holder assumes all rights and obligations to such asset". Sukuk is defined by the Securities Commission Malaysia (2011) as "certificates of equal value which evidence undivided ownership or investment in the assets using Shari'ah principles and concepts approved by the Shari'ah Advisory Council (SAC)". Having considered these numerous definitions, the present study employs the definition of sukuk issued by the Securities Commission Malaysia.

Background of Sukuk

Sukuk, also referred to as an "Islamic bond", is a capital market instrument that enables its holder to obtain income from an underlying asset by virtue of the right of ownership over that asset, allows funds to be raised from public investors, and is among the fastest-growing instruments in the world (Alpaslan, 2014). It is the most active Islamic debt market instrument in Malaysia because it covers almost 90 per cent of the local Islamic capital market (Haider, 2010). Sukuk was first issued in Malaysia in 1990 by Shell MDS Private Limited, a foreign-owned non-Islamic company. Since the world's first ringgit sukuk was issued, various forms of sukuk have been issued, such as sukuk mudharabah in 1994, sukuk ijarah in 2001, sovereign sukuk in 2002, sukuk musyarakah in 2005 and exchangeable sukuk in 2006 (Said, 2011). The deterioration of sukuk issuance in Malaysia, especially after the 2008 global financial crisis, has created a complicated situation among sukuk issuers (Ahmad & Radzi, 2011). Ahmad and Radzi (2011) also noted that the reactions of the stock markets in Malaysia were never consistent, and prices were unpredictable during the crisis. Alongside the decreased number of sukuk issuances in Malaysia during the 2008 financial crisis, stock market prices also fluctuated, displaying either positive or negative reactions. Negative reactions, as reflected by low stock market index values, indicate a lack of confidence among sukuk investors to invest in sukuk. Therefore, the growth rate of sukuk issuance also deteriorated during the financial crisis. Based on the previous literature, it is found that the growth rate of sukuk issuance decreased after the 2008 financial crisis, from 1.89 per cent in 2007 to -0.64 per cent in 2008. However, the growth rate for conventional bonds increased from 0.11 per cent in 2007 to 1.58 per cent after the crisis in 2008. This suggests that the crisis had a higher impact on sukuk than on conventional bonds. The situation raises several questions.
First, why did Malaysia experience the hardest hit in terms of sukuk issuance during the financial crisis? Second, why was the growth rate of sukuk issuance lower than that of conventional bonds? The deterioration of sukuk issuance and its impact on the confidence level among sukuk investors during the crisis are the important issues investigated in this research. Since empirical work on sukuk with respect to stock market reactions and confidence effects is relatively scarce, this research contributes to the literature by providing new information to address the gap. The findings would be significant to regulators, policymakers, industry players, issuers, investors and researchers in the industry.

Overview of Sukuk Development

A look at the 2021 sukuk issuance pipeline as well as the current issuances suggests a good year for the sukuk market. Sukuk has continued to attract new issuers, with a greater emphasis on ESG-related issuances, an increase in issuances by relatively new entrants such as Nigeria and Egypt, and an expanding investor base, all of which are positively contributing to the market's development. Sukuk is now widely accepted as a viable source of financing for project financing, general-purpose corporate needs, capital adequacy, sovereign budgetary and fiscal requirements, liquidity management, and other purposes (IIFM, 2020). Global sukuk issuance increased by around 19.84 per cent, from USD 145.702 billion in 2019 to USD 174.641 billion in 2020. The steady issuance volume during 2020 was mainly due to sovereign sukuk issuances from Asia, the GCC, Africa and certain other jurisdictions. Malaysia continued to dominate the sukuk market even though the share of countries like Indonesia, the UAE, Saudi Arabia and Turkey increased with good volume (IIFM, 2021). According to Table 2, around 57.62% of the international issuance in 2020 came from 5 out of 6 GCC countries. Malaysia was the most active region in international sukuk issuances with a share of 29.36%, followed by Turkey with 7.01% and Indonesia with 5.90% market share.

Previous Studies on Stock Market Reactions

Many studies have been conducted to examine how market participants react to bond announcements and how they affect firm value. A substantial amount of literature has focused on the group of bonds that have both equity and bond components. A noteworthy first finding was by Mikkelson and Partch (1986), who recorded the absence of any significant reaction of the stock markets to conventional bond announcements. This was evidence that stock markets do not react to debt announcements, including bond issuances, even if some studies also found evidence for a negative reaction, as the reaction of stock markets to the issue of bonds is affected by opposing influences (Spiess & Affleck-Graves, 1999). Brown and Warner (1980) stated that event studies are frequently used to test market efficiency. An event study is a statistical method used to gauge the impact of a corporate event, such as stock splits, earnings announcements and acquisition announcements. Several studies for the United States market document a significantly negative (on average -1.5 per cent) market response to convertible bond issues, confirming the hybrid nature of these financial instruments. The announcement effect of different corporate securities has been the subject of numerous studies, such as Mikkelson and Partch (1986) for equity, Eckbo (1986) for bonds, and Dann and Mikkelson (1984) for convertible securities.
These findings support the models proposed by Myers and Majluf (1984). However, the results of the issuance effects analysed in several studies present a mixed picture. For example, Dann and Mikkelson (1984), Mikkelson and Partch (1986) and Billingsley et al (1990) found significantly negative stock market reactions on the issuance date for the United States domestic market. However, Kang and Stulz (1996) discovered a significantly positive market reaction in the Japanese market. In general, the stock market does not appear to react very strongly on the date of issue. Miller and Rock (1985) showed that larger-than-expected external financing reveals a lower-than-expected operating cash flow, which is negative news to investors. This implied a negative stock price effect of an unanticipated debt issue as well as a negative correlation between the price effect and the amount of unanticipated new financing. Thus, the difference between the market's reactions to straight debt and equity issues was broadly consistent with the Myers and Majluf (1984) model. In their framework, the market reacted negatively to unanticipated external financing, as relatively uninformed investors account for the possibility that the firm is attempting to take advantage of a situation in which it knows the security offered is priced above its "intrinsic" value. Eckbo (1986) found significantly negative average abnormal returns to firms offering convertible debt. However, straight debt offerings, with the exception of a subsample of public utility offerings, were on average associated with zero abnormal performance. Shaheen (2006) recorded preliminary evidence showing that acquiring firms did not experience significant abnormal returns around the announcement date. Market participants received no signal on the acquisition announcement day regarding the acquiring firm. Cakir and Raei (2007), who examined the risk-reduction advantage of issuing sovereign sukuk, found that adding sukuk to a portfolio of fixed-income securities reduced the VaR, demonstrating that these investment certificates created diversification benefits for investors. They suggested that there was no significant market reaction to conventional bond issues but a significant negative stock market reaction to sukuk issues. Between 2000 and 2006, Ibrahim and Minai (2009) found that the market reaction was significantly positive during the event windows [-3, 0] and [-3, 3] around announcements of Islamic debt issuance in Malaysia. The wealth effect of Islamic bond issuance announcements was positively influenced by the issuer's investment opportunity and negatively influenced by the size of the issue, the size of the firm and whether the announcement was accompanied by Securities Commission (SC) approval. The finding implies that the positive reaction was not due to investors' preference for Islamic compliant activities, but was due to factors similar to those found in studies on conventional bonds. The negative influence of SC approval on the wealth effect indicates that many listed companies issuing Islamic debt were not complying with the information disclosure requirement. Ashhari et al (2009) found that there was a wealth effect from the announcement of Islamic bonds issued over the period 2001 to 2006 in Malaysia. The early market reaction to Islamic bond announcements was positive.
Regardless of the reactions, a possible reason for the early response could be the fact that information about Islamic bond offerings often leaks out to the market before the announcement. Ameer and Othman (2010) found significant negative abnormal returns near the announcement days in Malaysia over the period of 2001 to 2007. They found that the average abnormal return of subordinated bonds was significantly positive compared to other types of bonds. The average abnormal return (AAR) for subordinated bonds was significantly positive and larger than the AAR for medium-term and straight bonds, whereas zero-coupon bonds had the most significant negative returns. Since there was no risk of expropriation from the current bondholders, the stock market would react positively to such announcements. According to Abdul Qoyum (2011), there was a significant positive market reaction just prior to firms' positive surprise earnings announcements. When a firm announced positive surprise earnings, investors appeared to perceive a positive signal about the firm's future, which resulted in an increase in the firm's stock price. Therefore, positive surprise earnings announcements did indeed send a positive signal about the profitability and future success of a firm. Consequently, stock prices rose, and the market reacted quickly to the available information. On the other hand, according to Modirzadehbami and Mansourfar (2011), a significant negative abnormal return occurred one day before the announcement date in a sample of 45 companies listed on Bursa Malaysia that were involved in the issuing of Islamic debts during 2005 to 2008. The event window was -15 to +15 days around the announcement date (22 working days). The negative abnormal return of the day before the announcement was highly significant at the 5% level and insignificant on day +1. The significant negative abnormal return on the day before indicated that announcements of Islamic bonds reflected bad news in the Malaysian market between the years 2005 and 2008. According to the MENA Sukuk Report (2009), the sukuk market remained positive because of the existing strong demand for sukuk. It had also been supported by the higher level of surplus savings and reserves in Asia. The recovery of the sukuk market depended largely on the global financial industry rehabilitation process. The recent crisis in the financial industry led to calls to rely more on Islamic principles, as Islamic financial institutions were impacted less than conventional institutions during the crisis. The restrictions placed by Islamic laws on financial transactions had a cushioning effect on Islamic institutions.

The Impact of the Financial Crisis and Stock Market Reactions

According to Ashhari et al (2009), news of the Malaysian economy entering a recession following second-quarter gross domestic product (GDP) figures did not profoundly weaken the stock market. This investigation could provide additional insights and further evidence on the effects of debt announcements on stock returns in the emerging capital market in Malaysia. The evidence obtained was useful to international investors who wished to invest and could help to reduce investment risk. Standard and Poor's (2009) reported that the market's anticipated growth of sukuk, including those for infrastructure and project finance, failed to materialise in 2008, with total sukuk issuance falling 56 per cent compared to the previous year.
Sukuk fund structures provided an alternative to traditional bank financing that showed no immediate signs of a return in the financial markets. The global financial crisis in 2008 had a significant impact on stock market reactions. Therefore, it was hard to predict the sukuk markets. It was critical for sukuk holders and investors in other fixed-income financial products to have access to hedging solutions to counter challenges during the crisis. In line with the growth of financial products in the primary market, more attention must be given to the development of Shari'ah-compliant hedging solutions.

Theoretical Framework

Efficient Markets Theory

The efficient markets theory covers how the market price reflects the available information, and whether the price adjusts quickly and accurately in response to news. According to Frederic (2001) in his book "The Economics of Money, Banking, and Financial Markets", efficient markets theory is the application of rational expectations to the pricing of securities in financial markets. Current security prices will fully reflect all available information because, in an efficient market, all unexploited profit opportunities are eliminated. Efficient markets theory also views expectations of future prices as equal to optimal forecasts that use all currently available information. In other words, the market's expectations of future securities prices are rational, which implies that the expected return on a security is equal to the optimal forecast of the return. In this theory, current prices in a financial market will be set such that the optimal forecast of a security's return, when all available information is used, is equal to the security's equilibrium return. Consequently, in an efficient market, a security's price fully reflects all available information. Frederic (2001) says that the term 'random walk' describes the movements of a variable whose future changes are unpredictable because the variable is just as likely to fall as it is to rise. An important implication of efficient markets theory is that stock prices should approximately follow a random walk, which means that future changes in stock prices should, for all practical purposes, be unpredictable. Market efficiency refers to how quickly and precisely security prices adjust to news. As the random walk theory states that news arrives at random, security price changes, therefore, cannot be forecasted.

Event Studies Theory

Event studies theory explains how cumulative average abnormal returns (CAAR) are calculated and how a market responds to either positive or negative news. According to Ana (2002), event studies are an important tool in finance for the valuation of firms and for estimating the changes in firm value resulting from, for example, changes in capital structure. In general, the value of a firm is difficult to measure. However, if there is an efficient market for the firm's stock, the impact of a decision can be measured by the change in the stock price around the time when the decision becomes public knowledge. Although such events can be studied in many ways, the empirical finance literature has taken a particular approach based on statistical tests of the significance of abnormal stock returns around the event dates. The reaction of a stock price to news, which will also change the security price, cannot be predicted, as shown in Figure 2.
In an event study, it is crucial to test for any evidence of (1) underreaction, (2) overreaction, (3) early reaction, or (4) delayed reaction around the event. If the market is "semi-strong-form efficient", the effects of an event will be reflected immediately in the security prices. Thus, a measure of the event's economic impact can be constructed using the security prices observed over a relatively short time period (Frederic, 2001). Fridson (1994), in the book "Advances in Behavioral Finance", mentions that three versions of the Efficient Markets Hypothesis (EMH) can be distinguished depending on the level of available information: (1) weak form EMH, (2) semi-strong form EMH and (3) strong form EMH. The weak form EMH states that current asset prices already reflect past price and volume information. The information contained in the past sequence of prices of a security is fully reflected in the current market price of that security. It is named weak form EMH because security prices are the most publicly and easily accessible information. In comparison, the semi-strong form EMH states that all publicly available information is already incorporated in asset prices. All publicly available information is fully reflected in a security's current market price. The public information includes not only past prices but also the various data reported in a company's financial statements, company announcements, economic factors and others. This indicates that a company's financial statements cannot help in forecasting future price movements and in securing high investment returns. Finally, the strong form EMH stipulates that private or insider information is quickly incorporated in market prices. Therefore, such information cannot be used to generate abnormal trading profits. Thus, all information, public or private, is fully reflected in a security's current market price. This implies that even the company's management or insiders are neither able to profit nor make gains from the inside information that they have. According to Frank de Jong (2007), the main differences among the models are the chosen benchmark return model and the estimation interval. An abnormal return (AR) is defined as the return (R) minus a normal return (NR). The determination of the normal return requires the estimation of some parameters. This estimation is typically performed over an estimation period, [T1; T2], which precedes the event period, [τ1; τ2]. The event is typically defined to occur at t = 0. The time index t counts "event time", which is the number of periods (days, months) that have passed since the event, and does not represent the usual calendar time. Figure 3 shows the timeline of an event study. Frank de Jong (2007) says that in analysing abnormal returns, it is conventional to label the event date as time t = 0. Hence, AR0 denotes the abnormal return on the event date and ARt denotes the abnormal return t periods after the event. If there is more than one event relating to one firm or stock price series, they are treated as if they affect separate firms. An event period running from τ1 to τ2 is considered. In order to study stock price changes around events, each firm's return data can be analysed separately. However, this is not very informative because many stock price movements are caused by information that is unrelated to the event being studied.
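To illustrate this event-time convention, the sketch below slices a return series into an estimation period [T1; T2] and an event window [τ1; τ2] around t = 0. The specific window lengths are example values chosen to mirror common practice, not figures prescribed by de Jong (2007) or by this study.

```python
# Illustrative sketch of event time: an estimation period [T1, T2] that
# precedes an event window [tau1, tau2], with the event at t = 0. The
# specific window lengths below are example values only.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, 400)   # daily returns in calendar order
event_idx = 350                        # calendar position of day t = 0

T1, T2 = -265, -6                      # estimation period, in event time
tau1, tau2 = -30, 30                   # event period, in event time

estimation = returns[event_idx + T1: event_idx + T2 + 1]
event_window = returns[event_idx + tau1: event_idx + tau2 + 1]
print(len(estimation), len(event_window))  # 260 and 61 observations
```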
The effect of such unrelated information can be reduced by averaging the information over several firms, thus improving the accuracy of the study. Deviations of the average abnormal returns from zero indicate abnormal performance because, as the observations are all centred around one event, the average of the abnormal returns should reflect the effect of that event. The usual way to study performance over longer intervals is by means of cumulative abnormal returns, where the abnormal returns are aggregated from the start of the event period, τ1, up to time t. In event studies, the cumulative abnormal return (CAR) is aggregated over the cross-section of events to obtain the cumulative average abnormal returns (CAAR). The CAAR estimates can be obtained by aggregating the AARs over time.

Methodology

This research employs the event study methodology to analyse the reaction of stock markets to the announcement of a sukuk issuance using the cumulative average abnormal return (CAAR).

Data Collection

Sukuk issuance data in Malaysia were obtained from the Bloomberg database, the Securities Commission Malaysia, Bursa Malaysia, and Zawya Sukuk. The period of the study ran from 2004 to 2011, with three-year estimation windows, because a longer estimation period produces more accurate and robust beta estimates. The data for the stock markets were collected from the historical prices available on the DataStream database, excluding Saturdays and Sundays, giving a total of about 265 days a year. This research proceeded with the investigation of the stock market reactions to the issuance of sukuk in Malaysia in the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FTSE KLCI), the FTSE Bursa Malaysia Emas Shari'ah Index (FTSE EMAS), the FTSE Bursa Malaysia Hijrah Shari'ah Index (FTSE HIJRAH) and the Dow Jones Islamic Market Index (DJIM). The reactions of the different stock markets were compared using the domestic index, the global index, and the Islamic indices. For the domestic index, this study used the FTSE KLCI, which covered the period from 2004 to 2011. This study opted for the DJIM index as the global Islamic index, and adopted both the FTSE EMAS and the FTSE HIJRAH indices as the local Shari'ah indices. The KLCI is now known as the FTSE Bursa Malaysia KLCI after enhancements were implemented on Monday, 6 July 2009. It was enhanced to ensure that the KLCI remains robust in measuring the national economy with growing linkages to the global economy, as well as to provide global relevance, recognition and reach. The following are introductions to these four indices:

i. FTSE Bursa Malaysia KLCI

According to FTSE Group (2012), the FTSE KLCI is a capitalisation-weighted stock market index. This index, which includes the basic materials, health care, technology, consumer goods, consumer services, financial, oil and gas, telecommunications and utilities industries, was first introduced in 1986 and is now known as the FTSE Bursa Malaysia KLCI. The FTSE KLCI consists of 100 companies that cover around 81 per cent of the full market capitalisation of the FTSE Bursa Malaysia EMAS Index as of 30th April 2009. In accordance with the KLCI enhancement, the FTSE KLCI is integrated with the internationally recognised index calculation formula, which increases transparency and makes the index more tradable.

ii. FTSE Bursa Malaysia Emas Shari'ah Index

FTSE Group (2012) states that the FTSE Bursa Malaysia Emas Shari'ah Index has been designed to provide investors with a broad benchmark for Shari'ah-compliant investments. This index includes the general industries, mobile telecommunications, electricity, food producers, chemicals, fixed-line telecommunications, and oil and gas industries.
Constituents are screened according to the Malaysian Securities Commission's Shari'ah Advisory Council (SAC) screening methodology. The index is designed for the creation of Shari'ah-compliant investment products and for use as a benchmark. The Shari'ah-compliant companies must not be involved in any financial services based on riba or interest, gambling, the manufacture or sale of non-halal products or related products, conventional insurance, entertainment activities that are not permissible according to Shari'ah, the manufacture or sale of tobacco-based products or related products, stock broking or share trading in Shari'ah non-compliant securities, or other activities prohibited according to Shari'ah.

iii. FTSE Bursa Malaysia Hijrah Shari'ah Index

FTSE Group (2012) states that the FTSE Bursa Malaysia Hijrah Shari'ah Index has been designed to be used as a basis for Shari'ah-compliant investment products that meet the screening requirements of international Islamic investors. This index includes the general industries, mobile telecommunications, electricity, food producers, fixed-line telecommunications, oil and gas producers, automobiles and parts, construction and materials, health care, travel and leisure, utilities, and real estate industries. Companies on the index are screened by the Malaysian Securities Commission's Shari'ah Advisory Council (SAC) and a leading global Shari'ah consultancy, Yasaar Ltd, against a clear set of guiding principles. Constituents of the index are not permitted to be involved in any of the following core activities: banking or any other interest-related activities such as lending and brokerage (excluding Islamic financial institutions), alcohol, tobacco, gaming, arms manufacturing, life insurance, pork and non-halal production, packaging and processing, or any other activities related to pork and non-halal food.

iv. Dow Jones Islamic Market Index (DJIM)

The DJIM was established on 31 December 1995 and serves as an Islamic equity benchmark index. It is a subset of the Dow Jones Global Indices (DJGI) family, which includes stocks from 34 countries and covers 10 economic sectors, 18 market sectors, 51 industry groups and 89 subgroups defined by the Dow Jones Global Classification Standard. The DJIM excludes any stock that belongs to a company with a primary business that is impermissible according to Shari'ah law. The purpose of the DJIM is to provide a definitive standard for measuring stock market performance for Islamic investors on a global basis, in accordance with Dow Jones Index's established index methodology and the Islamic investment guidelines established by the index's Shari'ah Supervisory Board. During the component selection process, each company in the index universe is examined based on its revenue allocation. If the company has business activities in any one of the industry groups or subgroups defined by the Dow Jones Global Classification Standard that are considered inappropriate for Islamic investment purposes, it is excluded from the index.

Method: CAAR

Measuring Return (Rmt)

In this model, Rmt is the return on the market portfolio, and the model's linear specification follows from the assumed joint normality of returns. This study defines a return as the difference between the stock market daily closing price on a given day and the stock market daily closing price on the previous day, divided by the stock market daily closing price on the previous day.
The formula for measuring the return is as follows:

Rmt = (P(t) − P(t−1)) / P(t−1)    (1)

where P(t) is the stock market daily price at closing and P(t−1) is the stock market daily price at closing on the previous day. Event windows ranged from symmetric windows such as [-1,+1] up to [-30,+30], and included asymmetric windows such as [-20,+40] and [-40,+20]. The average abnormal daily return was calculated, and the cumulative average abnormal return (CAAR) was found by summing daily excess returns over the respective event windows. Sixty-one days, 30 days before the announcement day and 30 days after the announcement day, were chosen to facilitate the event window analysis in the emerging market. This time period was chosen because any period shorter than 61 days is insufficient to test the effect of the event, as the volatility of the stock is low. However, in a period of more than 61 days, the effect of the event could not be seen, as there may be other factors that trigger the effect (Ashhari et al., 2009).

Daily Return of the Stock Market

The daily return of any stock was calculated using the following formula:

Rit = ln(Pit / Pi(t−1))    (2)

where Rit is the return on security i for day t, Pit is the price of share i for day t and Pi(t−1) is the price of share i on the day before day t.

Market Model Expected Stock Return

This research also filtered the sample to include only those companies that had at least 100 days of stock return observations. The following formula was used to calculate the market model's expected stock return:

E(Rit) = αi + βi Rmt + εit    (3)

where αi and βi are market model parameters, Rmt is the return on the market index for day t, E(Rit) is the market model's expected stock return and εit is the error term. The parameters for the estimation period were estimated using the ordinary least squares (OLS) method. This study used standard OLS regressions to estimate the market model, which represents a potential improvement over the traditional constant-mean-return model because, by removing the portion of the return that is related to variation in the market's return, the variance of the AR is reduced. This can increase the ability to detect event effects. To test for the existence of abnormal returns, a benchmark for normal returns is required. Therefore, a parameter estimation period, as suggested by Brown and Warner (1985), was used to calculate a stock's β value. The β value is the slope coefficient obtained by regressing the stock's returns on the index's returns, and is also a measure of the stock's volatility compared to the market. The value of β needs to be adjusted to avoid bias. Information on the true value of β for a security is important to forecast the future β, which enables market risk for a future time period to be estimated.

Abnormal Return (ARit)

To calculate the difference between the actual returns and the expected returns predicted by the market model, the abnormal return (ARit) was obtained from the following formula:

ARit = Rit − E(Rit)    (4)

where Rit is the return on share i in period t, Rmt is the return on the market index during period t, E(Rit) is the market model's expected stock return and ARit is the abnormal return.

Average Abnormal Return (AARt)

The average abnormal return (AARt) is calculated after computing the abnormal returns for all stocks in the sample. In this study, it was calculated by taking the cross-sectional mean of the daily abnormal returns:

AARt = (1/N) Σ(i=1 to N) ARit    (5)

where AARt is the average abnormal return for day t, ARit is the abnormal return of share i for day t and N is the number of securities in the sample.
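The chain of Equations (2) to (5) can be sketched as follows: a market model is fitted by OLS over the estimation period for each firm, abnormal returns are taken over the event window, and the cross-sectional mean gives the AAR. The data below are simulated purely for illustration; in the study the inputs come from the DataStream price series.

```python
# Hedged sketch of Eqs. (2)-(5): OLS market model per firm over the
# estimation period, abnormal returns over the 61-day event window, and
# the cross-sectional average AAR. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, est_len, win_len = 50, 260, 61      # firms, estimation days, event days

AR = np.empty((N, win_len))
for i in range(N):
    # Simulated log returns; real inputs would come from Eq. (2).
    Rm = rng.normal(0.0004, 0.01, est_len + win_len)
    Ri = 0.0005 + 0.9 * Rm + rng.normal(0.0, 0.01, est_len + win_len)
    # Eq. (3): fit alpha_i and beta_i on the estimation period only.
    beta, alpha = np.polyfit(Rm[:est_len], Ri[:est_len], 1)
    # Eq. (4): abnormal returns over the event window.
    AR[i] = Ri[est_len:] - (alpha + beta * Rm[est_len:])

AAR = AR.mean(axis=0)                  # Eq. (5): cross-sectional mean per day
print(AAR.shape)                       # one AAR value for each of the 61 days
```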
Cumulative Average Abnormal Return (CAARt)

After the AARt is known, the cumulative average abnormal return (CAARt) is calculated. This research obtained the CAAR by summing the daily excess returns over the respective event windows. The CAAR was calculated using the following formula:

CAARt = Σ(j = t−k to t) AARj    (6)

where k is the number of event days before day t, CAARt is the cumulative average abnormal return and AARj is the average abnormal return. The CAARs need to be tested for statistical significance using a t-test. The CAAR is important for determining whether the Malaysian stock market and the global Islamic index reacted positively or negatively when sukuk were issued after the 2008 financial crisis.

Results and Discussion

The results of the stock market reactions to sukuk issuance in Malaysia for the period under study on the 50 selected companies are presented in Table 3. The reactions are categorised based on symmetric (six events) and asymmetric (14 events) event windows. The event windows range from 3 to 61 days in length to capture both the immediate and the long-term responses, respectively. The analysis for each index is further divided into three distinct periods, representing the pre-crisis period (2004-2006), the crisis period (2007-2008) and the post-crisis period (2009-2011). Table 4 summarises the findings of Table 3 based on the average values of the significant findings to compare the reactions of the different indices. Table 3 shows 20 event windows separated into symmetric and asymmetric events. The minimum event was 3 days, [-1,+1], and the maximum events were 61 days: [-30,+30], [-20,+40] and [-40,+20]. The announcement day (day 0) is defined as the day the sukuk offering was first made known to the public. This is supported by Ashhari et al (2009), who mentioned that the effects of events may not be visible for periods of more than 61 days, as other factors may trigger the effects. In this study, there were six symmetric events of 3, 5, 7, 15, 31 and 61 days. A symmetric event has the same number of days before and after the announcement of the sukuk issuance. There were 14 asymmetric events, from 5 days to a maximum of 61 days. The events were separated in order to study market efficiency in Malaysia. In an efficient market, the closing price of the stock market fully reflects all available information. Stock prices should approximately follow a random walk, as future changes in stock prices should be unpredictable. Table 3 shows that there are similar patterns in the results following the information of sukuk issuances across the four indices. The FTSE KLCI and DJIM indices shared the same asymmetric event, [-10,+20], with the maximum value of CAAR 10 days before and 20 days after the sukuk announcement. The maximum value of CAAR before the crisis on the FTSE KLCI was 0.0184, which was lower than the maximum value on the DJIM of 0.0349, with both showing positive but insignificant results. The positive response of these two indices before the crisis showed the sukuk investors' confidence about investing in sukuk in both local and global markets. Table 4 shows two positive and significant FTSE KLCI results and five positive and significant DJIM results before the crisis. Both indices had a larger number of positive significant results compared to negative significant results before the crisis. Thus, the hypothesis that the stock markets reacted positively before the 2008 financial crisis, based on the estimated CAAR, is accepted.
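As a hedged illustration of how the CAAR values and their significance levels reported in Tables 3 and 4 can be obtained, the sketch below cumulates abnormal returns over one event window, following Eq. (6), and applies a simple cross-sectional t-test. It continues from the AR array of the previous sketch, and the [-5,+3] window is chosen only because it is discussed in the results.

```python
# Sketch of Eq. (6) and a cross-sectional t-test for one event window.
# Continues from the AR array of the previous sketch; day index 30 is the
# announcement day (t = 0) in the 61-day window [-30, +30].
import numpy as np

t0 = 30                                # position of day 0 in the window
lo, hi = t0 - 5, t0 + 3                # e.g., the asymmetric window [-5,+3]

CAR = AR[:, lo:hi + 1].sum(axis=1)     # cumulative AR per firm over the window
CAAR = CAR.mean()                      # cumulative average abnormal return
t_stat = CAAR / (CAR.std(ddof=1) / np.sqrt(len(CAR)))
print(f"CAAR[-5,+3] = {CAAR:.4f}, t = {t_stat:.2f}")
```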
During the crisis, the four indices shared the same event for the maximum value of CAAR. All indices showed positive results, significant at the 1% level, on the asymmetric event [-40,+20]. The FTSE KLCI had the highest value among these indices, at 0.1737. The FBM HIJRAH yielded the lowest result, 0.1361. These results show that, on all four indices, an asymmetric event with more days before the announcement is the most significant. The results also indicate that short events show negative results in both symmetric and asymmetric events. However, long-term events, both symmetric and asymmetric, produce positive and significant results. This means the markets react negatively following negative information, such as during the 2008 global financial crisis. According to Table 3, all four indices show the same pattern, with negative results in short events and positive results in long events during the crisis. Overreactions or delayed reactions occur in longer events, which indicates an inefficient market or weak-form efficiency, in which prices do not react to all public information. These overreactions, which presume that investors overreact to positive and negative shocks and then correct their behavior, may suggest that the market took longer to process the crisis information. This is because previous empirical evidence shows that significant negative returns are associated with negative news. These results support Cahyadin and Milandari (2009), who found that the Dow Jones Islamic Market Index and the FBM Emas Shari'ah Index had weak-form efficiency. All four indices also shared the same event for the minimum value of CAAR, which was the asymmetric event [-5,+3]. Among the four indices, the FBM HIJRAH scored the lowest at -0.0461, with 1% significance, and the highest was the FTSE KLCI, with 5% significance. The asymmetric event [-5,+3] had the worst result on all four indices. These negative and significant results were due to negative information during the crisis. Accordingly, Table 4 also shows that all four indices have more negative than positive significant results during the crisis. The FBM EMAS and FBM HIJRAH show the highest number of negative and significant results compared to the other two indices, with 13 such results. Thus, the stock markets reacted negatively and significantly during the 2008 financial crisis based on the estimated CAAR. The period after the crisis showed that all event windows, both symmetric and asymmetric, produced negative results on all four indices. No events showed positive results after the crisis. After the crisis, markets showed negative results following negative information. All indices indicated negative and significant results, with no positive and significant results. All indices also shared the same event for the maximum value of CAAR, which was the asymmetric event [-3,+1]. This showed that short events yielded the maximum results after the crisis. Moreover, after the crisis, all four indices had their minimum value of CAAR on the same event, the asymmetric event [-40,+20]. All indices were negative, with results significant at the 1% level, on this event. The lowest value of CAAR after the crisis was on the FBM HIJRAH, at -0.0771 with 1% significance. This means the markets took a longer time to process negative news. After the crisis, sukuk investors' confidence was lower than before the crisis happened.
These results support the hypothesis that stock markets responded negatively and significantly following sukuk issuances after the 2008 financial crisis, based on the estimated CAAR. A summary of eight indicators is shown in Table 5. During the crisis, all four indices showed positive results on all indicators except the minimum indicator. Nevertheless, after the crisis, all four indices showed negative results on all indicators, hence sharing the same pattern. Before the crisis, however, the FTSE KLCI had shown negative results except on the maximum indicator, and the DJIM had shown positive results except on the minimum indicator. None of the four indices showed positive and significant results after the crisis. The results show that asymmetric events reacted better than symmetric events, considering that all the maximum values of CAAR came from the asymmetric events in all three periods: before, during and after the crisis.

Conclusion

This research has discovered that stock markets reacted positively before the 2008 financial crisis based on CAAR estimates. They then, based on CAAR estimates, reacted negatively and significantly during and after the global financial crisis. All indices showed the same pattern of results, and thus the thesis hypothesis was accepted. The researcher found that the FTSE KLCI was the best index compared to the other indices. Although the FTSE KLCI combined Islamic and conventional listed companies, this study focused on 50 listed companies that had issued sukuk in Malaysia. As the new Islamic benchmarks in Malaysia, the FBM EMAS and the FBM HIJRAH did not cover the early period of the study before the crisis happened and hence were not suitable candidates for the best index. Meanwhile, the DJIM index, as the global Islamic index, was used to show the effect of the financial crisis on the global market. Accordingly, the researcher suggests referring to the reactions of the FTSE KLCI as the indicator index following sukuk issuances in Malaysia. Considering that the FTSE KLCI covered all periods of the study, it could be used as the main index in Malaysia to investigate market reactions to sukuk issuances. All indices in this study showed weak-form efficiency. Weak-form efficiency occurs when stock prices reflect all the information found in past stock prices. Stock prices reacted so fast to past information that no investor could earn an above-average risk-adjusted return by acting on this level of information. Thus, the security market was inefficient with respect to new information, and stock prices did not accurately reflect it. The researcher finds that this might have resulted from the following: 1) investors were unable to interpret the new information correctly; 2) investors had no access to new information; 3) the transaction cost of trading securities was an obstruction to free trading; 4) investors were affected by short-sale restrictions; and finally, 5) investors might have been misled by changes in accounting principles. The results showed that during the 2008 crisis, the market reacted negatively as it was impacted by negative information. There were overreactions in the market, which took a longer time to absorb negative news because of the lack of information among sukuk investors and issuers. Nevertheless, the results showed positive reactions after the crisis, indicating that the overreactions during the crisis had recovered slowly.
In conclusion, this analysis provides valuable information and guidelines to issuers, policy makers, regulatory bodies, and investors, both Muslim and non-Muslim, and it has the potential to draw them to Islamic bonds.
Genetic Approaches Using Zebrafish to Study the Microbiota–Gut–Brain Axis in Neurological Disorders

The microbiota–gut–brain axis (MGBA) is a bidirectional signaling pathway mediating the interaction of the microbiota, the intestine, and the central nervous system. While the MGBA plays a pivotal role in the normal development and physiology of the nervous and gastrointestinal systems of the host, its dysfunction has been strongly implicated in neurological disorders, where intestinal dysbiosis and derived metabolites cause barrier permeability defects and elicit local inflammation of the gastrointestinal tract, concomitant with increased pro-inflammatory cytokines, mobilization and infiltration of immune cells into the brain, and the dysregulated activation of the vagus nerve, culminating in neuroinflammation and neuronal dysfunction of the brain and behavioral abnormalities. In this topical review, we summarize recent findings in human and animal models regarding the roles of the MGBA in physiological and neuropathological conditions, and discuss the molecular, genetic, and neurobehavioral characteristics of zebrafish as an animal model to study the MGBA. The exploitation of zebrafish as an amenable genetic model combined with in vivo imaging capabilities and gnotobiotic approaches at the whole organism level may reveal novel mechanistic insights into microbiota–gut–brain interactions, especially in the context of neurological disorders such as autism spectrum disorder and Alzheimer's disease.

Introduction

The microbiota-gut-brain axis (MGBA) is a bidirectional signaling cascade in which efferent signaling pathways originating from the central nervous system (CNS) regulate the activities of the intestine and the microbiota, while afferent signaling originating from the microbiota and the intestines affects the development and the function of the CNS [1]. The MGBA mainly consists of the gut microbiota residing in the intestinal lumen; intestinal cells including enterocytes, enteroendocrine cells (EECs), and goblet cells; and neurons and glia in the CNS. The gut microbiota has been shown to be required for normal CNS homeostasis. For example, germ-free (GF) mice have been reported to display hypermyelination in the prefrontal cortex [2] and to have defective microglial maturation and functions [3]. The actions of the MGBA are known to be mediated by metabolites and cytokines that are generated by members of the gut microbiota or released from immune cells and intestinal cells activated by them, or by the streamlined direct connections between the brainstem and intestines via the vagus nerve [4]. Imbalances of the gut microbiota, referred to as dysbiosis, and any associated malfunctions of the MGBA have been implicated in a variety of neurodevelopmental, neuropsychological, and neurodegenerative diseases. These dysbiotic malfunctions have been closely associated with aberrant systemic inflammatory responses and have been shown to culminate in the brain defects that lead to behavioral defects and neuronal dysfunctions [5]. Thus, a more detailed understanding of the underlying mechanisms and physiological roles of the MGBA in the etiopathology of these diseases will help in designing novel therapeutics based on modulating MGBA activities. For these purposes, the zebrafish has emerged as an excellent animal model system to address host-microbe interactions in both normal physiology and pathogenesis in vivo.
In this topical review, we summarize recent human and other animal model findings regarding the MGBA and discuss the characteristics and utility of using zebrafish as an animal model to study the MGBA.

MGBA Pathways

In a pioneering study, GF mice were shown to display exaggerated stress responses and enhanced stress hormone levels that were reversed by the colonization of beneficial bacteria or commensal microbiota, indicating that gut bacteria play a critical role in the regulation of brain function [6]. Subsequent studies showed that GF mice exhibited hypermotor activity and reduced anxiety, concomitant with changes in the expression profiles of genes that are important for brain development and in essential neurotransmitters, specifically in the striatum [7]. The permeability of both the gastrointestinal barrier and the blood-brain barrier (BBB), separating luminal contents from the intestines or blood vessel contents from the brain, respectively, must be tightly regulated by the host to maintain their functional integrity, as the unchecked translocation of bacterial components and metabolites could elicit detrimental inflammation in both the intestine and the brain. Pro-inflammatory cytokines, such as tumor necrosis factor (TNF) and interferon γ (IFNγ), have been shown to regulate the functions of tight junctions [8][9][10]. The MGBA pathways and important signaling mediators are summarized in Figure 1.

Regulation of Brain and Intestinal Permeability by the Microbiota

In GF mice, the intestinal barrier exhibited decreased permeability, with increased expression of several tight junction proteins and immature structural features, all of which were reversed after association with a human commensal microbiota [11]. In addition, the composition of the gut bacteria can directly affect gut permeability by determining the properties of the mucus layer [12], and pathogens such as Bacteroides fragilis and Vibrio cholerae or probiotic strains such as Lactobacillus plantarum can change intestinal permeability by regulating tight junction proteins [13], all indicating critical roles for the gut microbiota in the regulation of gut permeability. As a result, changes in intestinal permeability caused by infections or by exposure to dysbiotic products such as bacterial toxins and metabolites can result in low-level chronic and systemic inflammation, eventually developing into a variety of diseases [10,14]. The BBB in several brain areas is also regulated by the gut microbiota. However, in contrast to the intestinal barrier, it became more permeable in GF mice, with reduced expression of endothelial tight junction proteins, and these defects were rescued after association with the commensal microbiota [15]. Exposure to short chain fatty acid (SCFA)-producing bacteria or to butyrate treatment also reversed the BBB permeability defects via the upregulation of tight junction proteins, indicating the crucial involvement of bacterial SCFAs in BBB permeability [15] as well as gut permeability [16].

Figure 1. The bidirectional pathways of the microbiota-gut-brain axis (MGBA) involving the microbiota, the intestine, and the brain. Intestinal dysbiosis and derived metabolites in neuropathological conditions induce barrier permeability defects and local inflammation, with increased pro-inflammatory cytokines, MAMPs (e.g., LPS), and activation of immune cells in the gastrointestinal tract.
These signaling mediators, as well as microbial metabolites (e.g., SCFAs and Trp derivatives) and hormones (e.g., GLP-1, PYY), can distantly affect brain function via the humoral pathway. In addition, dysregulated activation of the vagus nerve can also directly modulate brain function. Together, these MGBA pathways culminate in the regulation of neuroinflammation, neuronal defects of the brain, and behavioral abnormalities. Refer to the text for details. 5-HT, 5-hydroxytryptamine; LPS, lipopolysaccharides; MAMPs, microbe-associated molecular patterns; SCFAs, short chain fatty acids; Trp, tryptophan.

Immune Cell Infiltration into the Brain and Inflammatory Cytokines

Several types of innate immune cells are found in the brain, including residential microglia, perivascular macrophages, and infiltrating macrophages. The infiltrating macrophages of the brain are derived from peripheral bone marrow pro-inflammatory monocytes that transmigrate across the BBB and differentiate into activated macrophages [19]. Neuroinflammation can provoke this type of infiltration, the hallmarks of which are activated microglia as well as the increased expression of pro-inflammatory cytokines and chemokines such as interleukin-1β (IL-1β), IL-6, IL-17, TNFα, and monocyte chemoattractant protein 1 (MCP1, also called CCL2) in the inflamed brain [20,21]. The selective depletion of infiltrating monocytes worsened the Aβ load in the Alzheimer's disease (AD) brain [22], and infiltrating macrophages were shown to have a higher phagocytic capacity against toxic molecules (i.e., Aβ) in AD compared to residential microglia [20,21], indicating that monocyte infiltration plays a critical role in Aβ clearance. Neutrophils have also been shown to infiltrate into the brain in mouse AD models and were attracted to amyloid plaques in an LFA-1 integrin-dependent manner [23,24]. Neutrophil depletion, or LFA-1 blockade, was reported to reduce the neuropathological phenotypes and behavioral deficits in AD mouse models [24]. Furthermore, adaptive immune cells (e.g., clonally expanded CD8+ T cells) have been found in the cerebrospinal fluid (CSF), perivascular regions, and the parenchyma of AD brains [25,26], although their functional significance and detailed infiltration routes remain to be elucidated. The neuroinflammation that promotes the infiltration of peripheral immune cells into the brain is likely associated with MGBA function, because systemic inflammation elicited by acute infections or by gut dysbiosis, such as in inflammatory bowel disease (IBD), has been shown to be strongly associated with neuroinflammatory responses [27]. During dysbiosis, changes in beneficial or pathogenic bacteria and their metabolites (e.g., reduction of Faecalibacterium prausnitzii and its metabolite butyrate) may affect the integrity of intestinal barriers by regulating tight junction protein expression [28,29], eventually resulting in the mobilization of activated immune cells in the intestine and the increased expression of inflammatory cytokines [14]. Infiltration of peripheral immune cells into the brain may be further facilitated by systemic inflammation, because BBB permeability is also compromised as a result [10]. Consistent with this process, gut dysbiosis was closely associated with the infiltration of pro-inflammatory T helper 1 (Th1) cells into the mouse brain in the 5XFAD model and with microglia differentiating into the pro-inflammatory M1 type [30].
The infiltration of immune cells into the brain due to intestinal inflammation has also been observed in a non-mammalian AD model. In a Drosophila model overexpressing amyloid β42 (Aβ42), a non-lethal enterobacterial infection promoted the infiltration of hemocytes (invertebrate phagocytic immune cells) and induced neurodegeneration via the TNF/JNK pathway due to enhanced oxidative stress [31].

MGBA Metabolites: SCFAs and Tryptophan Derivatives, Including Serotonin

Microbial metabolites have been proposed to be the mediators that link microbiota changes to host metabolism in both normal physiology and disease [32]. These metabolites act on the CNS via the activation of nerves innervating the intestine, the activation/mobilization of immune cells residing in the intestine, or the release of molecules (i.e., endocrine peptides and cytokines) via humoral pathways [33]. SCFAs and tryptophan/serotonin (also known as 5-hydroxytryptamine or 5-HT) are among the best characterized microbial metabolites important for MGBA function and are discussed below.

SCFAs

SCFAs mainly consist of acetate, propionate, and butyrate derived from the anaerobic fermentation of dietary fibers by the intestinal microbiota. SCFAs are primarily transported into colonocytes by monocarboxylate transporters (MCTs) or sodium-coupled monocarboxylate transporters (SMCTs), where the majority of SCFAs are metabolized as part of energy production. Non-metabolized SCFAs can be released systemically and may have distant functions in other organs, including the brain [33,34]. Beyond serving as colonocyte energy sources, SCFAs generated from microbiota metabolic activity can modulate enteric functions. For example, butyrate and acetate have been shown to protect intestinal barrier integrity via an AMPK-mediated reassembly of tight junction proteins [28,35] and to prevent bacterial internalization and translocation [36]. In addition, SCFAs have also been reported to control the maturation of intestinal immune cells: butyrate produced by F. prausnitzii regulated the balance of pro-inflammatory Th17 and anti-inflammatory Treg cells in IBD [37] by inhibiting the histone deacetylase (HDAC) activity controlling the expression of Foxp3, a critical regulator of T-cell differentiation [38,39]. A failure of intestinal homeostasis regulation by SCFAs could result in both aberrant intestinal inflammation and systemic inflammation. In addition, SCFAs may distantly influence brain function by increasing the mobilization of peripheral immune cells or the expression of inflammatory cytokines to modulate systemic inflammation [14]. Alternatively, SCFAs have been shown to activate their receptors (the free fatty-acid receptors FFAR2 and FFAR3) and the associated downstream cAMP-PKA signaling in intestinal EECs, resulting in the release of humoral factors such as GLP-1 and PYY into the circulation [40]. Via both humoral and vagal pathways, these peptides released from EECs can then regulate cognitive function and emotional responses as well as satiety in the brain [41]. Small proportions of SCFAs released into the circulation have been shown to directly enter the brain via the BBB [42], owing to MCT expression in the endothelial cells [33,43]. However, for the regulation of neuroinflammation, it is still controversial whether SCFAs are pro- or anti-inflammatory signals, as their influence may be context-dependent [44].
Tryptophan Metabolites and Serotonin

An essential amino acid, tryptophan can be synthesized by several bacteria or provided in food, and is metabolized to serotonin or kynurenine [32,45]. While serotonin serves as an essential neurotransmitter for multiple biological functions (e.g., mood and cognition) in the brain, it also has crucial roles in the transmission of motor and sensory signals in the intestine. Indeed, approximately 95% of serotonin is produced in the gastrointestinal tract (GIT), indicating its functional dominance in the GIT [46]. The serotonin released into the surrounding regions of the intestine after many types of luminal stimulation regulates GI motility and reflexes via local intrinsic primary afferent neurons, and regulates extrinsic primary afferent neurons (including vagal afferents) to affect distant CNS functions such as feeding behaviors [47]. The bioavailability of serotonin must also be tightly regulated by the serotonin reuptake transporter on both nerve endings and intestinal epithelial cells [48]. This enriched serotonin production in the GIT is based on interactions between the microbiota and intestinal cells, as demonstrated by the reduced serotonin levels seen in both plasma and colon samples from GF mice [49]. In fact, serotonin is produced predominantly by enterochromaffin cells (ECs), a major EEC subtype, in the intestine, and its production can be activated by the metabolites of spore-forming bacteria [50] and by SCFAs from the gut microbiota [51], concomitant with the increased expression of tryptophan hydroxylase 1 (Tph1), a rate-limiting enzyme for 5-HT synthesis in ECs [46,48]. Myenteric interneurons can also produce 5-HT by expressing Tph2 within the enteric nervous system, although this source is quantitatively minor compared to EC production [46,48]. Although direct evidence for 5-HT regulation of intestinal barrier permeability is rare, it has been reported that treatment with 5-hydroxytryptophan (5-HTP), the precursor of 5-HT, regulated the redistribution of tight junction proteins and sugar permeability in specimens from healthy humans, but not in specimens from IBS patients [52]. In addition, both indole and indole-3-propionic acid (tryptophan derivatives catabolized by specific gut bacteria) were shown to protect gut barrier function [53], suggesting the involvement of tryptophan metabolites in the modulation of intestinal permeability. Although serotonin can be released directly into the systemic circulation, it cannot penetrate the brain. Instead, it is the availability of BBB-crossing tryptophan and the levels of Tph enzymes in the brain that determine the biosynthesis levels of brain serotonin [54], consistent with a positive correlation between the level of tryptophan in the circulation and that in the hippocampus [55]. Serotonin synthesis can be shunted toward the kynurenine pathway to produce neuroactive products such as kynurenic acid and quinolinic acid [45,56], especially when the expression of indoleamine-2,3-dioxygenase 1 (IDO1), the rate-limiting enzyme for tryptophan catabolism, has been increased by inflammatory cytokines; the resulting kynurenine pathway metabolites can also serve as competitors regulating the serotonin level [54].
In the brain, kynurenic acid is generally considered to be neuroprotective and quinolinic acid to be excitotoxic, while in the intestine, kynurenic acid is considered to be anti-inflammatory and quinolinic acid to be pro-inflammatory via the regulation of NMDA receptors [57]. However, their exact roles are likely to be dose- and context-dependent and remain to be explored further, especially with respect to microbiota interactions. Interestingly, 5-HT-expressing neurons in anamniotes such as zebrafish are found in several areas of the brain, including the pretectal area, basal forebrain, and hindbrain, whereas in amniotes they are only found in the hindbrain, although the functional significance of this difference is unclear [54,58].

MGBA-Associated Neurological Disorders

Beyond regulating normal physiology in the brain, the gut microbiota is also implicated in pathological conditions of the nervous system via the various gut-brain interactions described above. Acute systemic inflammation due to pathogen infections of the intestine, and low-grade chronic inflammation due to intestinal dysbiosis and its derived products of bacterial toxins, metabolites, and cytokines [14], are known to be closely associated with the progression of neurological disorders [59]. Below, the diverse roles of the gut microbiota in autism spectrum disorder (ASD), representing a neurodevelopmental disorder, and AD, representing a neurodegenerative disease, are discussed in detail.

Autism Spectrum Disorder: A Neurodevelopmental Disorder

ASD represents a group of neurodevelopmental disorders often diagnosed in childhood and characterized by impaired social interactions and repetitive/restrictive behaviors that are frequently accompanied by intellectual disabilities and epilepsy [60]. The etiology of ASD is both complex and heterogeneous, as ASD encompasses a range of symptoms with genetic and environmental factors contributing to its pathogenesis. Large-scale exomic and genomic sequencing of ASD patients has identified a number of candidate genes implicated in neuronal connections, synaptic function, chromatin remodeling/transcription, and RNA splicing [61][62][63]. Notably, a gestational inflammatory environment, such as an infection during pregnancy, can dramatically affect neurodevelopmental and autistic outcomes [64]. Based on this idea, inflammatory challenges during pregnancy in animal models, such as exposure to the viral mimetic polyinosinic:polycytidylic acid (poly I:C), have been used successfully to establish ASD models for mechanistic studies, such as the maternal immune activation mouse model (e.g., [65]). In the pathobiology of ASD, neurodevelopmental defects in the brain have been the main focus, due to both the implication of ASD-susceptibility genes in neuronal function and the known morphological abnormalities of ASD brains [66,67]. However, it is also well known that ASD patients have elevated systemic and brain inflammation, indicated by increased inflammatory cytokines (e.g., IL6, MCP1, IFN1, and TNFα), activated microglia, and autoantibodies [64,68,69]. Considering that such abnormal inflammatory regulation is thought to be closely linked to the gut microbiota, dysbiosis has been suggested as a main factor responsible for ASD etiology, which is also consistent with the comorbid gastrointestinal disturbances of ASD patients and their vulnerability to infections [60,[70][71][72].
In support of this idea, GF animals have been reported to exhibit ASD-like traits (e.g., social avoidance and repetitive behaviors) which were rescued by association with the conventional microbiota [73], and both human patients and several ASD mouse models have been shown to have gut dysbiosis [65,[74][75][76]. In ASD dysbiosis, altered microbial populations and decreased diversity have both been documented, with Clostridium spp. proposed to be the main culprit by producing neurotoxins that are transported via the vagus nerve (e.g., [77]), but consistent changes in microbiota profiles have yet to be identified [75]. The most direct evidence for a causal relationship between the gut microbiota and ASD has come from recent "humanized" ASD mouse models: the offspring of GF mice harboring a gut microbiota from ASD patients exhibited ASD-like phenotypes, and further transcriptomic analyses identified aberrant alternative splicing of ASD-risk genes similar to that seen in human ASD brains [78]. In support of a role for the microbiota (and their metabolites) in such ASD-related phenotypes, ASD model mice were rescued by a variety of approaches, including the use of prebiotics/probiotics (Lactobacillus reuteri), antibiotics (vancomycin), metabolites (the GABA agonists taurine and 5-aminovaleric acid), specific diets (a ketogenic diet), and microbial transfer therapy [74,[78][79][80]. The etiology of ASD, especially as it relates to GI problems, is postulated to begin with increased gut permeability ("leaky gut") in the offspring due to gut dysbiosis under conditions of elevated inflammation, such as during maternal immune activation (MIA) or with a high-fat diet (HFD) [8,65,81]. An increased level of toxic metabolites, such as 4-ethylphenylsulfate (4-EPS) and 5-HT [65,82], or a decreased level of beneficial metabolites, such as butyrate during dysbiosis [28], may also impact gut permeability by modulating the expression of tight junction proteins in the intestine [75]. Such dysfunctional intestinal permeability may allow the unrestricted entry of dietary components, bacterial components, and metabolites that can elicit increased local intestinal inflammation coupled with increased plasma levels of pro-inflammatory cytokines (e.g., IL-6), resulting in systemic inflammation [14]; alternatively, the permeability changes may directly disturb vagus nerve-dependent signaling to the brain [17]. Notably, IL-6 has also been shown to impair tight junctions by inducing the pore-forming claudin-2 [83], further worsening gut permeability. It is well established that systemic inflammation can disrupt BBB integrity in neurological disorders [10]. As a result, the compromised BBB may be permissive to peripheral pro-inflammatory cytokines (e.g., IL-6), metabolites (e.g., p-cresol and propionate), or the infiltration of peripheral immune cells in ASD patients [84,85], with ensuing increased neuroinflammation. Dysregulated neuroinflammation in ASD is known to play a crucial role in its pathogenesis and is mediated by activated microglia and astrocytes that impact dendritic branching, spine density, and neuronal connectivity, as well as by elevated cytokine expression, finally leading to both cognitive and behavioral abnormalities [86,87]. Therefore, treating ASD by gut microbiota manipulation may offer more efficient therapeutic approaches, which include antibiotics, prebiotics, probiotics, beneficial metabolites, a gluten-free diet, or fecal microbiota transplantation.
For example, butyric acid administration is known to ameliorate ASD phenotypes in both mouse models and human patients, presumably by modulating mitochondrial function and neurotransmitter gene expression; interestingly, other SCFAs (e.g., acetate and propionate) did not show any rescue effect; rather, propionate has been used to reversibly induce ASD phenotypes and to generate ASD animal models [75]. Furthermore, in several ASD models where mice exhibited gut dysbiosis (e.g., maternal HFD-fed mice, BTBR mice, and shank3 knockout mice), supplementation with L. reuteri was able to rescue the ASD-like social interactions, although this was achieved by vagus nerve stimulation that activated oxytocin release from specific neurons in the ventral tegmental area of the brain, and not by microbial changes [74,88]. Both gluten-free and ketone-rich diets have been reported to rescue GI abnormalities and ASD symptoms, probably by regulating mitochondria-associated energy metabolism and oxidative stress [89] as well as by modulating the gut microbiota [90,91]. In addition, direct "microbial transfer therapy", using two weeks of vancomycin treatment followed by eight weeks of fecal microbiota transplantation, has been shown to improve these gut abnormalities and autistic phenotypes and to change the abundance of beneficial bacteria in ASD patients [92]. However, as these interventions may often have unwanted side effects such as constipation, and sometimes have conflicting therapeutic effects, more detailed mechanistic studies and optimization protocols are still required for personalized applications.

Alzheimer's Disease: A Neurodegenerative Disease

There are two forms of AD: early onset (EOAD, or familial) and late onset (LOAD), with the latter accounting for more than 95% of cases, which usually occur after the age of 65. AD is pathologically characterized by the accumulation of amyloid β (Aβ) species and phosphorylated tau protein, leading to the formation of neurotoxic amyloid plaques and neurofibrillary tangles in the brain, respectively, in addition to neuroinflammation, loss of synapses, cognitive impairment, and the eventual loss of neurons [93,94]. The well-known non-genetic risk factors for AD include aging, some metabolic disorders, and diet [95][96][97], and both large-scale genome-wide association studies (GWAS) and whole-genome sequencing of AD patients have identified more than 30 high-risk genetic loci for the disease [98]. Previous studies have suggested that these risk factors (aging, along with some metabolic syndromes and diets) may contribute to AD pathogenesis through interactions with the gut microbiota, as it has been well established that these factors are tightly linked to changes in the microbiome [99][100][101]. In addition, the host's APOE alleles (especially APOE4), representing the strongest AD genetic risk factor, have been correlated with both an elevated innate immune response and changes in the butyrate-producing bacterial populations and their metabolite profiles in human and mouse models [102,103]. Similarly, several genetic risk genes for AD, including TREM2, CD33 (also known as SIGLEC-3), and SHIP1, have also been implicated in intestinal inflammation [104][105][106], which can directly or indirectly interact with the gut microbiota.
The close relationship between gut dysbiosis, systemic inflammation, and AD pathogenesis has been shown in a clinical study in which increases or decreases in bacterial populations harboring either pro-inflammatory (e.g., Escherichia/Shigella) or anti-inflammatory (e.g., Eubacterium rectale) activities, respectively, were associated with systemic inflammation status and brain amyloid deposition in cognitively impaired patients [107]. Similarly, a population study of IBD patients revealed that increased intestinal inflammation (likely coinciding with gut dysbiosis) was associated with both a higher risk of early onset AD and a higher overall incidence of AD [101]. Numerous correlational studies of gut dysbiosis in AD patients and in AD mouse models (e.g., APP/PS1, 5XFAD, and P301L transgenic mice) have extensively characterized changes in microbial populations, and these results have been summarized in recent reviews [59,108,109]. The importance of the gut microbiota as a contributing factor to AD has been functionally validated using GF and antibiotic-treated transgenic AD mouse models, which exhibited reduced AD-like symptoms [110,111]. In the multifactorial etiology of LOAD, brain inflammation, proposed as one of the most crucial processes for its development [112], is known to be closely associated with microbiota and permeability changes in the gut, as described above. In neurological disorders, the GIT barrier has been found to be defective due to gut dysbiosis and altered microbial metabolites such as SCFAs and tryptophan metabolites [14], and the integrity of the BBB has also been shown to be compromised during the early stages of AD, irrespective of amyloid β and tau accumulation [113][114][115]. The activated microglia and increased expression of pro-inflammatory cytokines in the brain that characterize neuroinflammation may both originate from such permeability defects in the GITs and brains of AD patients. Such defects may also allow the direct entry of pro-inflammatory bacteria such as oral pathogens (e.g., Porphyromonas gingivalis) and their endotoxic components/metabolites into the brain [116,117]. For example, infiltrated lipopolysaccharide (LPS), a representative microbe-associated molecular pattern (MAMP) molecule potentially originating from such pathogens, has been shown to promote Aβ production and aggregation as well as neuroinflammation [118,119], or to indirectly activate cells lining these barriers and the peripheral nervous system to elicit systemic inflammation and facilitate the infiltration of immune cells and inflammatory cytokines into the brain [120]. Interestingly, specific microbiota members in humans can also produce bacterial amyloid proteins, such as Curli produced from the csgA gene in E. coli. These bacterial amyloid proteins can prime amyloidosis and brain proteinopathies in neurodegenerative diseases through cross-seeding activity, forming β-sheet structure-mediated toxic aggregates in a "prion-like" fashion via the autonomic nervous system, or by eliciting gut inflammation and releasing inflammatory mediators [121]. Although interactions between Curli and the amyloid β of AD have yet to be tested directly, it has recently been demonstrated that Curli promoted the aggregation and pathology of α-synuclein, the protein responsible for Parkinson's disease progression, in vitro and in vivo [122].
Zebrafish as a Model System for MGBA Studies

The zebrafish model became popular for genetic studies due to its large clutch size, small body size, genomic and physiological similarities to humans, its body transparency optimal for in vivo imaging, powerful forward and reverse genetics, and the possibility of in vivo chemical screening. These advantages allowed the zebrafish to become a main model organism for studying developmental, physiological, and pathological processes, and, as a model with a large collection of mutants and transgenic animals that can visualize both tissues of interest and signaling pathways, it can mimic a variety of human disease conditions for understanding their underlying mechanisms [123]. Although the molecular components and signaling pathways responsible for MGBA functionality (i.e., those of dysbiosis, dysfunctional immune cell activities, and aberrant metabolites) have been identified mainly from mouse model studies, the zebrafish has emerged as an alternative and attractive vertebrate model with the advantages of being amenable to genetic manipulation, allowing real-time in vivo imaging at the whole organism level, and offering powerful and easy GF-rearing, gnotobiotic conditions. Indeed, host gut microbiota interactions [124] and neurological disorders [125,126] that mirror human conditions have been independently investigated using zebrafish. However, the combined efforts of characterizing recently developed zebrafish disease models for ASD and AD, establishing microbial approaches including GF culture [127,128] and metagenomic analyses of the microbiota [129,130], and expanding their neurobehavioral repertoires [131][132][133][134] may allow dissection of the signaling pathways for host-bacteria interactions in neurological disorders, as discussed in the following sections. The key advantages of the zebrafish model for studying the MGBA are summarized in Figure 2.

The Zebrafish GIT and Associated Cell Types

Both the development and the organization of the zebrafish intestine have been well documented in previous studies: although the zebrafish GIT lacks an acidic stomach, crypts, Paneth cells, and classical microfold (M) cells, the anatomical, molecular, and functional features of its GIT are largely conserved between mammals and zebrafish [135][136][137]. The major cell types identified in the zebrafish GIT include absorptive enterocytes, mucin-secreting goblet cells, hormone-releasing EECs, immune cells, smooth muscle cells, and an enteric nerve system [135,138]. These cell types in the zebrafish GIT play pivotal and distinct roles in coping with diverse environmental changes and can relay this information to other organs, including the brain. For example, their EECs are capable of sensing luminal contents while producing and releasing hormones or signaling molecules that can regulate the physiological and homeostatic functions of the intestine and the brain [139].
These EECs can become dysfunctional, with morphological alterations, after HFD feeding in a larval model [140], and can secrete 5-HT in response to a pathogen infection (Edwardsiella tarda) to activate enteric neurons and promote bacterial clearance mediated by the Trpa1 receptor [141]. A zebrafish vagus nerve has also been described in both embryos and adults, forming at approximately 3-4 dpf [142], and an afferent sensory circuit of the vagus (projecting to the hindbrain via the nodose ganglion) was elegantly shown to be activated by an E. tarda infection or its tryptophan metabolites [141]. The zebrafish enteric nervous system (ENS), a neural crest-derived peripheral nervous system, consists of enteric neurons, a submucosal/myenteric plexus, associated glia, and muscle layers. It has been shown to regulate intestinal motility and to mediate connectivity between the CNS and the intestine, with neurons secreting neurotransmitters such as serotonin, dopamine, histamine, acetylcholine, and GABA, similar to mammals [143]. The developmental programs of the zebrafish ENS have been well described using forward and reverse genetic studies, revealing the intricate signaling pathways governing ENS genesis and function in both normal and pathological contexts that are relevant to human conditions [144,145]. In addition, the zebrafish immune system has shown a high level of homology to that of mammals: most immune cell lineages found in mammals (e.g., macrophages, neutrophils, and B/T lymphocytes) have also been identified in zebrafish [146], and many mammalian immune-signaling pathways, immune receptors (pattern recognition receptors such as TLRs), and inflammatory mediators such as cytokines, interleukins, and complement are conserved in zebrafish [147].

Loss-of-Function Approaches

For both efficiency and convenience, morpholino oligonucleotides have been commonly used in zebrafish to transiently knock down the endogenous expression of genes of interest in loss-of-function studies [148]. These morpholinos can be designed to disturb translation by binding to translation initiation sites, or to disturb RNA splicing by binding to splice sites, thereby affecting maternal and zygotic expression, or zygotic expression only, of the mRNA, respectively. Although a downside of morpholino administration is potential toxicity, which can give rise to unanticipated phenotypes and misinterpretations of gene function [149], the technique can still be a useful loss-of-function genetic tool for quickly testing gene function as long as one carefully follows the guidelines for proper usage [150]. Permanent loss-of-function studies require zebrafish lines with knockout mutations, and the zebrafish model represents an ideal system for applying recent genome-editing technology. Traditionally, collections of zebrafish mutant lines with diverse phenotypes were obtained by forward genetic screening using random mutagenesis, with large-scale mutagenesis screens generating approximately 1500 mutant lines [151]. The generation of gene-targeted zebrafish knockouts using reverse genetic approaches became available and popular when genome-editing technologies were adopted, beginning with zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs), and most recently clustered regularly interspaced short palindromic repeats (CRISPR) editing, which was originally identified as a bacterial defense system against phage infection and was adapted to allow rapid and accurate target gene editing in vivo [152].
Ever since the CRISPR/Cas9 system was successfully validated for producing zebrafish knockouts in vivo [153], the technique has transformed zebrafish reverse-genetic approaches to allow the generation of mutants at will, and has also been applied to the introduction of knockin mutations, target-specific transcriptional regulation, and precise base-pair editing [154,155].

Gain-of-Function Approaches

Gain-of-function studies usually require the overexpression of genes of interest. Transient overexpression can be achieved by the direct microinjection of in vitro-transcribed mRNA encoding the protein of interest into one-cell zebrafish embryos. Although this does not provide cell type-specific overexpression, mRNA injection is a fast and valuable tool for functional gene analysis and can also be used to verify a loss-of-function phenotype in rescue experiments (e.g., [156]). Long-term and tissue-specific gene expression effects both require permanent/stable expression in transgenic animals with cell type or tissue specificity. Such stable transgenic animals, with germline transmission, can be created with high efficiency by microinjecting plasmid DNA assembled from swappable promoters, genes of interest, and effectors (e.g., fluorescent proteins and recombination cassettes) using a transposon-based Tol2 kit [157], which can direct gene expression in a tissue-specific manner. To make cell type-specific expression even more flexible, the Gal4/UAS binary system has been adopted, because it permits any gene to be expressed at a desired place and time by crossing tissue-specific Gal4 transgenic lines with a UAS line carrying the gene of interest [158]. However, Gal4 cytotoxicity and UAS methylation have hampered broader application of the Gal4/UAS system in zebrafish [159]. Attempts to circumvent these Gal4/UAS problems, such as fusing the Gal4 DNA-binding domain to heterologous transactivation domains and optimizing the 5X UAS sequence [160], as well as alternative flexible binary systems such as the Cre/loxP, Flp/FRT, and QF/QUAS systems, have also been developed and are still evolving in the zebrafish research field [161][162][163].

Zebrafish Transgenic Lines and In Vivo Imaging of Host-Bacterial Interactions

Due to the optical transparency of the zebrafish embryo throughout its development, and the capability for real-time in vivo imaging at high resolution, reporter transgenic lines that can visualize immune cells and immune signaling are great assets for dissecting interactions between host immune cells and bacteria, in addition to dissecting gut-brain signaling. Several fluorescently tagged transgenic lines have been established that label myeloid cells (neutrophils, macrophages, and microglia), lymphocytes (B cells and T cells), and intestinal cells (enterocytes and EECs), as summarized in Tables 1 and 2. The same cell type-specific promoters can also drive the expression of a variety of advanced genetically encoded fluorescent proteins (e.g., photoconvertible Kaede or Dendra) and genetically encoded neurotransmitter sensors (e.g., for dopamine and norepinephrine) that have been developed by coupling neurochemical-sensing G-protein-coupled receptors (GPCRs) with a circularly permuted fluorescent protein (Table 3).
Such transgenic zebrafish lines have been suitable for in vivo cell tracking to reveal direct communications between the gut microbiota and the brain by UV photoconversion of Kaede in a non-invasive manner [164], and for the in vivo monitoring of dynamic zebrafish larval brain responses to dopamine stimulation in real time [165]. In addition, a tissue-specific ablation strategy, in which nfsB-encoded nitroreductase B (NTR) converts prodrugs (e.g., metronidazole) into cytotoxic metabolites and ablates the cells expressing NTR, can also be applied to explore the functional roles of specific gut-derived cell types during brain development [164]. Therefore, the exploitation of these powerful genetic approaches, combined with a variety of in vivo imaging techniques for monitoring both cellular behavior and signaling pathways in whole zebrafish, may reveal novel mechanistic insights into microbiota-gut-brain interactions, especially in the context of neurological disorders such as ASD and AD.

Tables 1 and 2 (fragment): macrophage and microglia reporter lines.
mpeg1: a Dendra2 reporter line (line name truncated in the source), converted by UV from green to red for cell tracking [170]; Tg(mpeg1:GFP-CAAX), expression of membrane-GFP in macrophages [166].
mfap4 (microfibril associated protein 4): Tg(mfap4:dLanYFP-CAAX), expression of membrane-YFP in macrophages [170].
csf1ra/fms (colony stimulating factor 1 receptor, a): TgBAC(csf1ra:GFP), GFP expression [171]; Tg(fms:nfsB-mCherry), mCherry expression and conditional cell ablation by metronidazole treatment [172].
irg1 (immunoresponsive gene 1): Tg(irg1:EGFP), GFP expression in activated macrophages [173].
Microglia (the resident innate immune cells in the brain): apoeb (apolipoprotein Eb): Tg(apoeb:lyn-EGFP), expression of membrane-GFP in microglia [174]; slc7a7 (solute carrier family 7 member 7): Tg(slc7a7:Kaede) (description truncated in the source).

The Gut Microbiota of Zebrafish in MGBA Studies

In zebrafish, intestinal colonization by microbes from the surrounding environment begins at approximately 3-4 dpf, as the mouth opens and the GIT matures [129]. The core microbiota indigenous to the zebrafish GIT, dominated by the phyla Proteobacteria and Fusobacteria, has been shown to be determined by selective pressure from the zebrafish host, based on comparative composition analyses of the gut microbiota in mouse-zebrafish reciprocal transplantations, domesticated strains, and wild-caught zebrafish [185,186]. Although the exact taxonomies of the microbial communities in humans, mice, and zebrafish differ, there is a trend towards interspecies conservation at the phylum level [187,188]. In addition, host responses to gut colonization by microbiota have been reported to be significantly shared between zebrafish and mouse models [189], and zebrafish, mouse, and human microbiomes are known to have similar abundances of functional pathways (i.e., DEG pathways) [130], which suggests that functional and mechanistic findings using the zebrafish microbiota may have direct translational implications for understanding human microbiota functions. Roles for the gut microbiota in GIT development and function have been investigated using several zebrafish models, and our mechanistic understanding of host gut microbiota interactions has benefited from using both GF and microbial reassociation approaches. The protocol for rearing zebrafish larvae under GF conditions for up to 8-9 dpf has been well established [127,189,190].
Although gross morphology was normal, GF zebrafish larvae exhibited defective proliferation and differentiation of intestinal epithelial cells (e.g., depletion of goblet cells and EECs, immature glycan expression, and a lack of intestinal alkaline phosphatase activity at the brush border) which could be rescued by reassociation with gut microbes [189,190]. It was also found that the residential gut microbiota decreased their own potential toxicity by inducing the expression of the intestinal alkaline phosphatase (iap) gene localized in the brush border of the intestine to detoxify LPS, and because GF zebrafish larvae failed to induce iap, they were rendered hypersensitive to LPS [191]. The gut microbiota can also contribute to the functional regulation of the immune system, as shown by the NFkB-dependent and tissue-specific immune-signaling changes seen after microbial colonization compared to GF controls that were visualized in vivo in real-time using Tg(NFkB:EGFP) transgenic zebrafish larvae [181] and by the dramatic induction of the intestinal serum amyloid A (saa) gene upon microbial colonization that contributed to innate immunity by suppressing the aberrant activation of neutrophils [178]. Novel mechanistic findings about how the microbiota can regulate host metabolism have also been made using zebrafish models. For example, metabolically, the presence of the microbiota promoted fat uptake and the formation of lipid droplets upon feeding that was concomitant with phylum Firmicutes enrichment [192] and "silenced" EECs by damaging their nutrient-sensing ability under HFD conditions, concomitant with the genus Acinetobacter blooming [140]. In addition, probiotic Lactobacillus treatment modulated the gut microbiota composition and the gene expression signature of glucose metabolism in 8 dpf zebrafish larvae, resulting in both reduced glucose levels and feeding behaviors, presumably through the increased formation of SCFAs by gut microbiota activity [193]. For modelling type 2 diabetes in adult zebrafish, changes in both microbial compositions and gene expression profiles towards more diabetes-related patterns have also been reported, consistent with mammalian models [194]. For toxicology research, the GF zebrafish model has been used to address interactions between the gut microbiota and the metabolism of xenobiotics such as chemical drugs and toxic pollutants [195], which has broadened the applicability of zebrafish microbial studies in combination with the power of both GF and reassociation experimental designs. Similar to GF mice exhibiting increased locomotion, decreased anxiety, impaired memory, and decreased social behavior in neurodevelopmental and neurobehavioral studies [7,196], GF-reared or antibiotic-treated zebrafish larvae have consistently shown hyperactivity and anxiolytic activity based on locomotion and thigmotaxis assays, respectively, which could be rescued by their exposure to commensal microbiota ("conventionalization") or to specific bacteria [197,198]. In addition, supplementing larval and adult zebrafish with Lactobacillus species demonstrated that these probiotic bacteria conferred anxiolytic effects and increased shoaling, and these behavioral effects were accompanied by changes in both microbial composition and gene expression profiles in the brain related to neurotransmitter signaling and Brain-Derived Neurotrophic Factor (BDNF) [199,200], suggesting this zebrafish model as a platform for validating the potential use of probiotics for neurobehavioral effects. 
Zebrafish models for ASD have been well established using chemical treatments or genetic manipulations followed by neurobehavioral assays to assess a range of core autism-like phenotypes, including disturbances of social behavior, inhibitory avoidance, aggression, anxiety, and repetitive behaviors [201,202]. For chemical treatments, the glutamatergic N-methyl-D-aspartate (NMDA) receptor antagonist MK-801, the dopamine D1 receptor antagonist SCH23390, and the epilepsy and bipolar disorder drug valproic acid have been used to evoke ASD-like symptoms in rodents [203] and have also been applied to zebrafish for modeling ASD [204,205]. For genetic modeling, analogous to rodent ASD models established by overexpressing ASD-risk genes such as kctd13, bbs7, and cep290, or by knocking down/knocking out dyrk1a, coro1a, fam57ba, gdpd3, hirip3, kif22, maz, ppp4ca, shank3, fmr1, and cntnap2, zebrafish ASD models have been successfully generated by targeting a similar set of ASD-susceptibility genes using genetic manipulations (reviewed in [126,206]). As an example, the zebrafish dyrk1aa knockout mutant (dyrk1aa being an ASD-causative gene also implicated in Down syndrome) exhibited a smaller brain size with increased cell death, abnormal brain activity, and social interaction deficits reminiscent of typical ASD symptoms [207]. The shank3b knockout mutant, another zebrafish ASD model, also showed impaired and repetitive locomotion behavior and social interaction defects, together with decreased expression of synaptic proteins [208], while shank3a/b double knockouts exhibited gastrointestinal motility defects with a reduction of serotonin-positive EECs [209]. However, despite extensive experimental evidence in both human and mouse models supporting the involvement of microbial dysbiosis in ASD pathogenesis (reviewed in [78,210]), zebrafish studies on the role of the gut microbiota underpinning ASD pathogenesis are surprisingly lacking, except for a very recent report that tested the requirement of early microbial colonization for acquiring social interaction behaviors: zebrafish reared under GF conditions, followed by colonization at 7 dpf, exhibited social behavior defects as a result of aberrant axonal arbor complexity in the ventral nucleus of the ventralis telencephali (Vv) and neuronal connectivity defects. These defects were also accompanied by a decreased abundance of microglia in the forebrain, highlighting a key role for microglia in the mediation of microbiota signaling and neuronal development [211].

The Zebrafish AD Model Representing a Neurodegenerative Disease

Similar to their use in rodent models of AD, the accumulation of Aβ42 peptides and the aggregation of phosphorylated tau protein have been employed to generate zebrafish AD models, either by direct Aβ42 peptide injections into the brain or by the overexpression of amyloid β42 or a phosphorylation-prone mutant form of tau protein to create stable transgenic zebrafish. Initial studies showed that direct injections of Aβ42 into the hindbrain ventricle induced cognitive deficits and tau phosphorylation, which could be rescued by lithium, a GSK3β inhibitor [212]. Since then, Aβ42 injections into the brain ventricle at both larval and adult stages have been used to validate the chaperone activity of gold nanoparticles in mitigating Aβ toxicity, using a larval locomotion assay and an avoidance test [213], and to reveal Aβ42 molecular pathways regulating the sleep/wake cycle [214].
By taking advantage of the regenerative capacity of the adult zebrafish brain after Aβ42 brain injection, IL4 was identified as an activator of neural stem cell proliferation for regeneration after Aβ-induced neuronal death in the adult brain, acting by suppressing serotonin levels to disinhibit the BDNF-NFkB signaling axis for regeneration [215,216]. As the tryptophan metabolic pathway is well conserved in teleosts, including zebrafish [54], zebrafish models may be useful tools for better understanding the contributions of tryptophan metabolism to brain tissue regeneration in neurological diseases at a systemic level and in the context of the MGBA. Stable zebrafish AD transgenic lines have been generated mostly to model tauopathy, with neuron-specific overexpression of human TAU mutations. Among these, a representative zebrafish tauopathy model overexpressed human TAU with a P301L mutation in the nervous system using the Gal4-UAS system [217]. This model recapitulated tau pathology, with aberrant phosphorylation, neurofibrillary tangle formation, increased neuronal cell death, and locomotion defects. Furthermore, candidate GSK3β inhibitors were successfully validated using this model by assessing the amelioration of tau phenotypes upon treatment [217]. Another zebrafish tauopathy model, using the overexpression of TAU with a rare mutation (A127T), exhibited defective motor axons and proteasome dysfunction concomitant with insoluble tau protein accumulation, which could be rescued by activating autophagy [218]. It is noteworthy that even with identical mutations, TAU expression does not always induce tauopathy phenotypes, even when tau protein is phosphorylated, as reported in [219]. Importantly, given the central importance of AD within the scope of neurodegenerative diseases, and the long history of using several model organisms in AD research, the wider use of zebrafish AD and tauopathy models remains to be established and further improved in order to recapitulate human AD pathology more precisely; these factors currently hamper the application of zebrafish AD models to MGBA studies.

Future Directions

Over the last few decades, our understanding of the interactions between the microbiota, the gastrointestinal tract, and the brain that affect both the physiology and pathology of neurodevelopmental and neurodegenerative conditions has been significantly broadened, with rodents predominantly being used as animal models. As a model organism, the zebrafish is beginning to be used for MGBA studies, based on its molecular, genetic, and neurobehavioral advantages and the ability of zebrafish models to recapitulate neurological disorders. One could imagine the development of a platform of "humanized" zebrafish ASD models, in which gut microbiomes from human ASD patients would be introduced into an array of GF-reared knockout and knockin zebrafish and high-throughput chemical screening would be performed based on ASD-relevant phenotypes for personalized medicine targeting.
By taking advantage of improved genetic zebrafish models that can better recapitulate the diverse neurological symptoms of human patients, with faster, more reliable, and more convenient assays, we expect that the combination of high-resolution in vivo imaging at the whole-animal level and established microbial manipulations will lead to novel mechanistic insights into the neuronal circuits and MGBA components in the near future, and thereby to a better understanding of neurological disorders and innovative therapeutic approaches. Acknowledgments: The authors thank Binna Lim for helping to prepare the illustrations of the graphical abstract. Conflicts of Interest: The authors declare that they have no competing interests.
Serum bone profile and cathepsin K expression as a prognostic factor in patients with and without breast cancer metastasis

A better understanding of the pathophysiology will highly likely lead to the discovery of an effective treatment option. This study aims to test whether the serum bone profile and the expression level of cathepsin K (CTSK) in breast carcinoma are associated with metastasis. Methods: In this study, 116 participants were included: 58 patients who had been diagnosed with breast cancer (n=22 without metastasis and n=36 with metastasis) and 58 healthy controls. The serum biochemical profile and immunostaining of CTSK in the breast carcinoma were investigated. Results: The mean values of calcium, 25-OH Vitamin D, ALP, albumin, phosphorus, magnesium, TSH, cholesterol, PTH and CRP (mg/L) were 11.5±2.03, 28.12±10.5, 93.3±7.9, 3.9±0.3, 3.7±1.64, 1.8±0.24, 2.5±1.5, 165.1±28.02, 63.4±18.9 and 7.7±4.9, respectively. Individual data revealed that 70% of patients without metastasis had PTH above normal, while 65% had calcium and 62% had ALP above normal levels, and these proportions increased further with metastasis. Low Mg levels were detected in 13/58 of the patients with breast cancer and 3/58 of the control group. 13/58 of the patients with breast cancer showed low total calcium, and 32/58 of the breast cancer group showed high calcium levels. Conclusion: The present study results suggest that CTSK expression is associated with a higher tumour stage and distant metastasis, indicating that the serum bone profile and the level of CTSK expression are significant parameters in disease diagnosis and in the monitoring of breast cancer metastasis.

disease. However, the role of the serum bone profile as a risk factor for skeletal metastasis has been under-researched. CTSK is a papain-like cysteine protease involved in bone remodeling; it is produced by cancer cells that metastasize to the bone, where it acts in proteolytic pathways that facilitate the invasion of cancer cells, and it has been widely used as an immunohistochemical marker for the in situ detection of osteoclasts [6,7]. CTSK expression has been shown to increase considerably in primary cases of breast cancer with skeletal metastasis [8]. CTSK expression has also been well associated with tumour proliferation and progression in colorectal, gastric, prostate, oral squamous and glioblastoma cancers [9][10][11][12]. There are several studies concerning the diagnostic value of bone turnover markers for bone metastases in breast cancer. However, their use in diagnosis is not yet fully validated, and much of the evidence has been derived predominantly from retrospective analyses. Bone markers for the diagnosis and management of bone metastasis are significantly hindered by biological and analytical variability, with multiple confounding factors (tumour burden, malnutrition, chemotherapy, radiotherapy, immobility) causing variations in their concentrations [13]. Therefore, this study aimed to assess the serum bone profile in patients with breast cancer in comparison with healthy controls and to determine the relationship between CTSK expression, including mild, moderate and high levels, and breast cancer metastasis. Further, we compared CTSK expression in different types of breast cancer based on histopathology and receptor status to evaluate the association with specific subtypes.

Materials and Methods

The present study included 58 female patients with clinically established breast cancer, ranging in age from 34 to 74 years, with a mean age of 58.6±12.4 years.
Ethical approval for this study was obtained from Saveetha Medical College & Hospital, Chennai, and Gleneagles Global Health City Hospital, Chennai, India. Patients were prospectively identified and registered. All samples were taken after institutional ethical committee permission and with the personal consent of the patients or their guardians. All patients had histologically confirmed breast cancer. The histopathological diagnosis of breast cancer, grade, stage of the tumour, and hormone receptor status (estrogen receptor ER, progesterone receptor PR and Her2neu) were recorded from the pathology reports of the breast cancer patients.

Blood samples were collected from the patients in heparinized tubes. The collected samples were analyzed for age, weight, body mass index (BMI) and the biochemical profile of blood. The parameters included total cholesterol, triglycerides, HDL-C, LDL-C, C-reactive protein (CRP), calcium, 25-OH vitamin D, the tumour markers CEA and CA 15-3, parathyroid hormone (PTH), serum alkaline phosphatase (ALP), albumin, phosphorus (phosphomolybdate method) and magnesium.

Inclusion criteria
Cases of breast cancer confirmed by mammography and histological examination were chosen for this study. Controls were individuals without clinical disease who were seen at the same hospital for an annual physical examination.

Exclusion criteria
Patients suffering from any other cancer, as well as diabetes mellitus, dyslipidemia, or osteoporosis under drug treatment, were excluded from this study.

Immunohistochemistry studies (IHC)
Invasive ductal carcinoma with low to moderate differentiation was diagnosed among the donors. The tissue microarray (TMA) slides were made from tissue donors and contained at least two cores per patient (1 mm in diameter). Samples were examined for classification by the vendor's pathologist regarding histopathology, class, lymph node involvement, and tumour grade. Samples were classified based on the tumour-node-metastasis (TNM) classification: the size of the primary tumour (T), the degree of regional lymph node involvement (N), and the existence of distant metastasis (M). Endogenous peroxidase was quenched by incubating the tissues in 0.3% hydrogen peroxide in PBS for 10 min. Nonspecific binding was blocked for 1 h at room temperature with serum (5% goat sera) in phosphate-buffered saline. Endogenous biotin was blocked with an avidin/biotin blocking kit. An affinity-purified goat antibody against human CTSK was applied at 40 ng/ml, and the sections were incubated in a humidified chamber at 4°C overnight. Sections were counterstained in Harris hematoxylin and blued in ammonia water before mounting. Giant cell tumour tissue was used as a positive control for CTSK staining. The criteria used for assessing the immunostaining of the breast tumours were as follows. The degree of staining was taken as the sum of the strength of staining and the percentage of stained cells: negative/mild stain (-) = 0-1; moderately positive (2+) = 2-3; strongly positive (3+) = 4. Almost all strongly positive samples had a widely stained area, with scores >4.
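The scoring rule above maps the summed intensity and proportion scores onto three categories. A minimal Python sketch of that mapping follows; the function name and signature are illustrative, and the thresholds are taken directly from the criteria stated in the text.

```python
def ctsk_ihc_category(intensity: int, proportion: int) -> str:
    """Classify CTSK immunostaining from the sum of the staining-intensity
    score and the percentage-of-stained-cells score, per the criteria above:
    0-1 -> negative/mild (-), 2-3 -> moderately positive (2+),
    >=4 -> strongly positive (3+)."""
    total = intensity + proportion
    if total <= 1:
        return "negative/mild (-)"
    if total <= 3:
        return "moderately positive (2+)"
    return "strongly positive (3+)"


# Example: intensity score 2 plus proportion score 2 gives 3+ territory.
print(ctsk_ihc_category(2, 2))  # strongly positive (3+)
```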
Statistical analysis
Data are presented as mean±standard deviation (SD). The normality of the data was checked using the Shapiro-Wilk test. Where the data did not follow a normal distribution, the median is presented. For data following a normal distribution, differences between groups were assessed by one-way ANOVA. Pearson correlation was performed to evaluate the correlation between the tumour markers and the bone mineral profile in the breast cancer group. The chi-square test was performed to evaluate the relationship between CTSK expression, clinicopathological features and metastasis. All statistical analyses were conducted with the GraphPad Prism 6.0 software package for Windows.

Biochemical profile analysis
The average ages of the breast cancer group (n=58) and of the control group (n=58) were 59.1±8.03 (median 57.5) and 58.6±12.4 (median 56.5) years, respectively. Clinical, demographic, and biochemical characteristics of the study groups are presented in Table 1. According to the baseline parameters, there was no significant difference in body mass index (BMI) (p=0.0381), fasting plasma glucose (p=0.0844), age (p=0.8409) or total cholesterol (p=0.12).

Bone profile analysis
Individual data revealed that 70% of the patients without metastasis had PTH above normal, while 65% had calcium and 62% had ALP above normal levels, proportions which increased further with metastasis. The patients with breast cancer and the control subjects showed significant differences in calcium, PTH and C-reactive protein levels, demonstrating an appropriate match in the risk factors for breast cancer. 25-OH vitamin D deficiency was defined as a serum level of less than 20 ng/ml, suboptimal 25-OH vitamin D levels as between 21 and 39 ng/ml, and optimal levels as more than 40 ng/ml [27]. 25-OH vitamin D deficiency was seen in 36.2% (21/58) of the patients with breast cancer, while 45.4% (26/58) of the control group was deficient. Phosphorus was deficient in 5/58 and high in 1/58 of the breast cancer group, and deficient in 6/58 of the normal control group. Deficient Mg levels were detected in 13/58 of the patients with breast cancer and 3/58 of the control group. Regarding calcium, 13/58 of the patients with breast cancer showed low total calcium and 32/58 of the breast cancer group showed high calcium levels (Table 1). The correlation analysis between the breast cancer tumour markers and the bone profile in the breast cancer group is presented in Table 2. Serum calcium was significantly high in the patients with breast cancer, reflecting the tight control of serum calcium by calcium-regulating hormones such as PTH and 25-OH vitamin D. Concerning serum PTH, its concentration was significantly higher in patients with breast cancer than in control subjects (Table 2). The results showed that both patients with breast cancer and control subjects were 25-OH vitamin D deficient. In this study, the albumin level was significantly reduced in patients with breast cancer as compared with controls (p<0.05).
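The statistical workflow described in the Statistical analysis subsection (Shapiro-Wilk normality check, one-way ANOVA for group comparison, Pearson correlation, chi-square for CTSK vs. metastasis) can be sketched in Python with SciPy as below. The data here are synthetic placeholders, not the study data, and the 2x2 contingency counts are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative serum calcium values (mg/dL); synthetic, not the study data.
cancer = rng.normal(11.5, 2.0, 58)
control = rng.normal(9.5, 0.5, 58)

# Shapiro-Wilk normality check, as described above.
print(stats.shapiro(cancer))

# Group comparison; one-way ANOVA reduces to a two-group test here.
print(stats.f_oneway(cancer, control))

# Pearson correlation between a tumour marker and a bone parameter.
ca15_3 = rng.normal(30.0, 10.0, 58)
print(stats.pearsonr(cancer, ca15_3))

# Chi-square test of CTSK expression level vs. metastasis status
# (rows: low vs. moderate/high CTSK; columns: no metastasis vs. metastasis).
table = np.array([[10, 12],
                  [8, 28]])  # hypothetical counts
print(stats.chi2_contingency(table))
```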
Bone profile and prognostic factors in pre- and post-menopausal women in the metastasis and non-metastasis breast cancer groups
A comparison of the studied bone profile with some prognostic factors in the breast cancer group is shown in Table 3. The serum levels of PTH, total calcium and the tumour markers CA 15-3 and CEA showed a significant difference between pre- and post-menopausal women within the non-metastatic group; however, 25-OH vitamin D and ALP did not show any significant difference between these groups. The serum levels of total calcium and ALP did show a significant difference between pre- and post-menopausal women within the metastasis group; however, PTH, the tumour markers CA 15-3 and CEA, and 25-OH vitamin D did not show any significant difference between these groups.

Correlations of serum tumour markers with the bone profile in patients with breast cancer
Correlations of serum tumour markers with biochemical markers of bone in patients with breast cancer were calculated. The results showed that (i) age significantly correlated with both breast cancer markers (r=0.1494, p=0.0030); (ii) ALP, TSH, Mg and phosphorus did not significantly correlate with any marker in the breast cancer group; and (iii) the calcium level was significantly correlated with both breast cancer markers (p<0.001) (Table 3).

Bone profile and tumour marker comparison between metastatic and non-metastatic breast cancer patients
As shown in Table 4, ALP and PTH showed a significant rise in metastatic cases compared with the non-metastatic group (p<0.01), whereas there was no significant difference in the levels of vitamin D (p>0.13), albumin (p>0.5), phosphorus (p>0.3) or Mg (p>0.4). However, there was a significant difference in the level of serum calcium between the two groups (p<0.001).

Discussion
The present study has shown that women with breast cancer have higher levels of total serum calcium and higher levels of ALP and PTH than control subjects. Hypercalcemia and high ALP and PTH activity have also been reported in other malignancies [14][15][16]. Hypercalcemia has been linked to osteolytic bone metastases, which are responsible for 20-30% of breast cancer metastases. According to previous studies, increased skeletal invasion and tumour destruction are driven by the tumour's production of various cytokines, such as transforming growth factor-β (TGF-β), tumour necrosis factor-α (TNF-α), interleukin-1 and interleukin-2; these increase bone osteolysis and modify the reabsorption, excretion and resorption of calcium and phosphate ions, causing high calcium levels [17].
ALP has many isoenzymes, located in the liver and bones, and in smaller amounts in the intestines, placenta, kidneys, and leucocytes [18]. Another ALP isoenzyme, the Regan isoenzyme, has also been identified in various malignancies [19] and may contribute to the increased ALP activity in breast cancer patients. The elevated activity of this enzyme seen in the study participants can also be linked to osteolytic bone metastases in breast cancer, which lead to increased osteoclastic activity and bone resorption. However, a rise in serum ALP is non-specific, as it is also frequently associated with many other diseases. Moreover, an elevation of ALP activity to less than three times the normal level is usually not considered significant [14]. In the present study, ALP and calcium showed a non-significant rise in non-metastatic cases and a significant rise (p<0.001) in metastatic patients, which is consistent with the findings of Uemura et al. [20] and Mulholland et al. [21], who reported no significant difference in ALP levels in non-metastatic breast cancer. Atoe et al. [22] revealed a significant rise in ALP and calcium in metastasis and no change in non-metastatic groups, suggesting the involvement of bone in cancer metastasis.

Multiple epidemiological studies have shown a link between C-reactive protein (CRP) and the risk of breast cancer [23]. Nonetheless, the findings of studies testing the association of CRP with breast cancer in different ethnic groups have been inconsistent [24]. Some studies showed an association between elevated CRP and poor prognosis, while other studies found no association [25]. Guo et al. [26] conducted the largest study, a meta-analysis involving 5,286 patients with breast cancer, which found that elevated levels of CRP are correlated with an increased risk of breast cancer [26]. Another study reported that a high level of CRP at the time of breast cancer diagnosis was associated with decreased overall survival and disease-free survival and increased breast cancer death [23]. In the present study, we found that CRP levels in patients with breast cancer were significantly elevated at the time of diagnosis (p<0.001) compared with healthy controls.

Carcinogenesis causes magnesium mobilization through blood cells and magnesium depletion in non-neoplastic tissue. At the same time, Mg deficiency itself appears to be carcinogenic, and supplementation with high levels of magnesium has been found to inhibit carcinogenesis in solid tumours [27]. Serum magnesium lower than 1.8 mg/dL is considered low. In the present study, magnesium was deficient in 42% of the patients with breast cancer and in 5% of the control group. Magnesium levels were significantly lower in the breast cancer group (p<0.001), in agreement with Sartori et al. [23] and Atoe et al. [22], who reported that serum Mg was significantly lower in patients with breast cancer compared with the control group, and in contrast with the findings of Arinola et al. [28], who reported only slight hypomagnesaemia in patients with breast cancer. Abdelgawad et al.
[29] found no significant difference when comparing Mg levels between the breast cancer and control groups. According to previous studies, magnesium deficiency is involved in both cancer risk and prognosis, including in breast cancer [26,[30][31][32]. Several studies also indicate an impact of dietary magnesium on breast cancer prognosis [24,33], suggesting that a higher dietary intake of magnesium among patients with breast cancer is inversely linked to mortality [34]. To date, the link between dietary magnesium consumption and the risk of breast cancer has been investigated in only a few epidemiological trials. A case-control study from Italy found that the serum magnesium level among patients with breast cancer was significantly lower than among control subjects [26], in line with our result.

25-OH vitamin D status is known to be inversely related to serum PTH. The higher level of PTH in patients with breast cancer in the present study does not appear to be the result of lower circulating 25-OH vitamin D, because there was no difference in serum 25-OH vitamin D between patients with breast cancer and control subjects. The mean serum 25-OH vitamin D in both groups was below 28.3 ng/mL, which is consistent with previous reports suggesting that 25-OH vitamin D deficiency or insufficiency is prevalent across the globe in almost all age groups and geographic areas [35].

The total cholesterol observed among the patients with breast cancer in the present investigation was within the normal range, but there was a significant change in the levels of HDL and VLDL and no change in the LDL level. This finding is consistent with the studies of Ramaswamy et al. [36] and Damodar et al. [37], which reported a non-significant change in the total serum cholesterol of breast cancer cases. However, it contrasts with the studies of Qi and Owiredu et al. [38], which associated elevated total serum cholesterol with increased breast cancer risk.

Several studies have consistently reported the prognostic value of serum albumin in patients with breast cancer. Low levels of albumin have been associated with increased cancer risk, and elevated levels of albumin (>3.5 g/dl) are significantly associated with improved overall survival among patients with breast cancer [39]. The present results are consistent with the previous findings of Boonpipattanapong et al. [40], Win et al. [41] and Neal et al. [42]. They provide strong evidence that a lower serum albumin level is a prognostic factor for poor survival in patients with breast cancer regardless of stage. We observed an independent association between low baseline levels of serum albumin and survival. It is likely that serum albumin is a marker for patients with severe disease; interestingly, our analysis suggests that low levels of serum albumin identify patients with the most severe disease within each tumour stage.

In the present study, the serum TSH and phosphorus levels in women with breast carcinoma were within the normal range. This finding is in agreement with reports of non-significant changes in the total serum levels of TSH and phosphorus [43]. Elevated levels of serum TSH and phosphorus are associated with advanced breast cancer [44].

The predominant expression of CTSK resulted in an acute increase in serum calcium level, and CTSK inhibition by SI-591 decreased the serum calcium level in a rodent in-vivo study [45].
The current study found that positive CTSK staining was detected in 55% of the breast tumours. There have been many studies concerning CTSK expression in breast carcinoma by immunohistochemistry [7,46].

Evaluating the association of elevated CTSK with histological grade (p<0.001) and with the presence or absence of distant metastasis (p<0.05), we found that elevated CTSK levels were significantly associated with histological grades I, II and III. This contradicts a recent study that reported a significant association of negative ER status with elevated CTSK levels [46]. However, the present study involved only 58 patients with breast cancer and therefore needs to be confirmed in a larger cohort. In addition, there was no significant difference in CTSK levels between premenopausal and postmenopausal patients with breast cancer (p<0.5). High and moderate CTSK levels were significantly associated with the presence of distant metastasis (p<0.05). Therefore, moderate and high CTSK levels were possibly associated with poor outcomes, including death, recurrence and metastasis. To our knowledge, there is no published literature on CTSK in association with histological grade and distant metastasis, making this the first such study. The levels of CTSK were assessed at the time of disease diagnosis, and the outcome measures were robust. However, a limitation of this study is that we could not collect detailed data on receptor measurements, organ-specific metastasis and lymph node status.

Conclusion
The estimation of serum bone profiles has a potential role in the early detection and monitoring of patients with breast cancer. The present study suggests that CTSK expression is not only correlated with metastasis but also related to the progression of breast carcinoma, and its overexpression could be a potential prognostic factor for human breast carcinoma.

The present study indicates CTSK as a potential molecule for diagnosis and a therapeutic target for the treatment of breast cancer metastasis. However, neither the previous studies nor this study determine whether this association has diagnostic value. If CTSK plays a critical role in breast cancer outcomes, then future researchers need to focus on understanding how interventions can reduce the concentration of this bone resorption marker. If, in accordance with our study, CTSK is a novel prognostic marker, then future studies are required to understand whether it is responsive to drug and lifestyle interventions designed to reduce the risk of skeletal metastasis in women with breast cancer.

Figure 1. (a-c) Representative immunostaining pattern of CTSK: (a) mildly positive stain, benign breast tissue; (b) moderately positive stain, grade II breast cancer tissue; (c) strongly positive stain, grade III breast cancer tissue (magnification 40x).

Table 2. Serum tumour marker and bone profile levels in patients with breast cancer (values are mean±SD). *p<0.05, #p<0.01, $p<0.001, ns = non-significant.

Table 3. Correlation analysis between the tumour markers and biochemical markers of bone in the breast cancer group. *p<0.05, #p<0.01, $p<0.001.
Path Tracking for Automated Driving: A Tutorial on Control System Formulations and Ongoing Research

Nomenclature
The superscript "*" is used to indicate the complex conjugate transpose.
a: front semi-wheelbase
a: longitudinal distance between the center of gravity and the front end of the vehicle
a_x, a_x,max: longitudinal acceleration, maximum longitudinal acceleration
a_y: lateral acceleration
A, B, C, D, E: generic state-space formulation matrices
A_r, B_r, C_r: state-space matrices for path profile modeling
A_v, B_v, C_v, D_v: state-space matrices for vehicle modeling
A_0, B_0: state-space matrices for modeling the tracking dynamics at the centers of percussion
b: rear semi-wheelbase
b: longitudinal distance between the center of gravity and the rear end of the vehicle
b_δ: multiplicative factor of the steering angle in the yaw acceleration error formulation used for backstepping control design
B_1, B_2, B_3: matrices of the state-space single-track model formulation
B: box used in the formulation of the tube-based model predictive controller
c: half of vehicle width
c_COP,f, c_COP,r: coefficients used in the definition of the sliding variables for the front and rear centers of percussion
c: constant gain for the sliding mode controller
C_f, C_r: front and rear cornering stiffnesses
C_f,μ0, C_r,μ0: front and rear cornering stiffnesses for the nominal tire-road friction coefficient μ_0 = 1
C_β, C_ψ, C_Δψ, C_κ: coefficients used for backstepping controller design
C_1, C_2: controller formulations 1 and 2
d: distance from the summit of the bend
d_min,k,t: minimum distance between the vehicle and the obstacle points, calculated at time t and associated with the time k within the tracking horizon
d_k,t,j: distance between the vehicle and the obstacle point j, calculated at time t and associated with the time k within the tracking horizon
d_1, d_2: denominators of controller formulations C_1 and C_2
e, e_k: error, discretised error
D_r: damping ratio
D(s_L): denominator of the transfer function
f_a: function expressing the system dynamics
f_dt(s_k,t, μ_k,t): system model function for the coordinate s_k,t and the tire-road friction coefficient μ_k,t, calculated at time t and associated with the time k within the tracking horizon
F_b,l, F_b,r: braking forces on the left-hand and right-hand sides of the vehicle
F_y,f, F_y,r: lateral forces at the front and rear axles
F_y,f^FB, F_y,r^FB: feedback contributions to the reference lateral forces on the front and rear axles
F_y,f^FFW, F_y,r^FFW: feedforward contributions to the lateral forces on the front and rear axles
F_y,f^TOT: sum of the feedforward and feedback contributions to the lateral forces on the front axle
g: gravity
g_x,PP, g_y,PP, g_x,S, g_y,S: longitudinal and lateral coordinates of the goal points according to the pure pursuit and Stanley path tracking methods
g(·): nonlinear term within the model formulation for the robust tube-based controller design
H: set of parameters in the parameter space approach
i: imaginary unit
I_z: yaw moment of inertia
J: cost function for optimal control
J_obs,k,t: cost function, at time t and associated with the time k, related to the predicted distance between the vehicle and the obstacle
J_1, J_2: tracking performance criteria
k: discretization step number or step time
k_c, k_int: controller gain and integrator to keep the steady-state tracking error small
k_ch: control gain of the chained-form controller, used for calculating k_1^CC, k_2^CC, and k_3^CC
k_d: multiplicative factor of the state vector in the discontinuous control law
k_D: derivative gain
k_DD: gain used in the PIDD² controller
k_I: integral gain
k_LK: control gain in the feedback force contribution on the rear axle, F_y,r^FB
k_P: proportional gain
k_PP: tuning gain of the pure pursuit algorithm
k_S: tuning gain of the Stanley method
k_w1, k_w2, k_e: gains used in the optimal preview steering control law
k_U: understeer gradient
k_ψ̇: multiplicative factor of the yaw rate ψ̇
k_Δy,CG: multiplicative factor of Δy_CG
k_Δy,ld: multiplicative factor of Δy_ld
k_Δψ, k_Δψ̇: multiplicative factors of the heading error and of the yaw rate error
k_Δψ,w, k_Δψ̇,w, k_Δy,w: multiplicative factors of the scalar errors Δψ_w, Δψ̇_w, Δy_w in the linear quadratic regulator with preview
k_Δψ,w, k_Δψ̇,w, k_Δy,w: gains of the preview controller, to be multiplied by the weighted values of the heading angle error, yaw rate error and lateral displacement error
k_1^CC, k_2^CC, k_3^CC: gains of the chained controller
k_1^LC, k_2^LC, k_3^LC, k_4^LC: gains of the limit cornering controller
k_1^LQ, k_2^LQ, k_3^LQ, k_4^LQ: gains of the linear quadratic controller
K: rate of decay of Δy_ld
K_d: gain multiplying Δy_ld
K_LC: linear quadratic matrix gain of the limit cornering controller
K_LQ: matrix gain of the linear quadratic regulator
K_LQP: matrix gain of the linear quadratic regulator with preview
K_OBS: collision weight
l: wheelbase
l_d: look-ahead distance
L: vehicle wheelbase
L_Lipschitz: Lipschitz constant
L_Lyapunov: Lyapunov function
m: vehicle mass
M: constant big enough to disregard obstacles that do not lie within the vehicle line of sight
M_COP,f, M_COP,r: gains providing robustness against the variation of cornering stiffness
M_u: tuning parameter of the sliding mode controller
M_z: yaw moment
n: counter
n_1, n_2: numerators of controller formulations C_1 and C_2
N: matrix of the Riccati equation for linear quadratic regulator design
p: characteristic polynomial
p_x,k,t,j, p_y,k,t,j: coordinates of the j-th point of the obstacle in the body frame, calculated at time t and associated with the time k within the tracking horizon
p_X,t,j, p_Y,t,j: coordinates of the j-th point of the obstacle in the inertial frame at time t
P_L: Lyapunov matrix
P_t,j: j-th point of the obstacle in the inertial frame at time t
P_1, P_2: transfer functions calculated from a linear single-track model of the system, used in the feedforward contribution δ_FFW,1
q̂: observer output
q_ay, q_Δy,ld, q_Δψ, q_i: parameters of the filters of the frequency-shaped linear quadratic controller
q_1, q_2, q_3, q_4: extreme operating points for the Γ-stability controller
Q, R: weighting matrices of the cost function formulation
Reach_f(S, W): one-step robust reachable set from a given set of states S
r: input of the model reference system, obtained by scaling the input of the desired trajectory generator
R_tr: trajectory radius
s, ṡ: trajectory coordinate and its time derivative
s_L: Laplace operator
S: matrix of the Riccati equation for linear quadratic regulator design
t: time
t_ld: time corresponding to the look-ahead distance (at the current vehicle speed)
t_r: time delay due to the driver's reaction
t_d: time delay
t_1, t_2: initial and final time values
T: matrix describing the dynamics of the system considering the center of percussion
T_b,lf, T_b,rf, T_b,lr, T_b,rr: braking torques of the left front, right front, left rear, and right rear wheels
T_i: time constant of the first order filter in the derivative term of the PID controller
T_s: sampling time
u: input vector in the state-space formulation
u: vector of the front and rear steering angles
u_k, ū_k, û_k(e_k): control law, nominal controller, and state feedback control action used in the robust tube-based model predictive controller
u_1, u_2: control outputs of the chained controller
U, Ū: polyhedra used in the tube-based model predictive control constraints
v, v_x, v_y, v_max: vehicle speed, longitudinal and lateral components of vehicle speed, maximum speed
v_k,t: speed of the vehicle at time k, predicted at time t
v̇_y,path,CG: time derivative of the lateral velocity of the path
v_0: speed at which the four-wheel-steering controller changes the sign of the steering angle of the rear axle (transition from opposite signs to the same signs of the front and rear steering angles)
V_1, V_2: vehicle dynamics transfer functions
w: system disturbance
w̃: element of W̃
W_d, W_ψ̇: weighting functions adopted in the backstepping steering control law
W: polyhedron used in the robust tube-based model predictive control constraints
W̃: Minkowski sum of the two polytopes W and B
x, h: elements of the polytopes used in the definitions of the Pontryagin difference and Minkowski sum
x_COP,f, x_COP,r: coordinates of the front and rear centers of percussion
x_P: longitudinal position of a generic point P in the vehicle reference system
ẋ_ref: reference longitudinal speed
x_v, y_v: vehicle positions in the tracking coordinates
X, Y: coordinates in the inertial reference system
X_r, Y_r, Ẋ_r, Ẏ_r: positions and velocities of the rear wheel in the inertial reference system
y, y_k: output and discrete output of the state-space formulation
y_ri: disturbance in the form of white noise in the linear quadratic regulator with preview
ÿ_ld: lateral acceleration at the look-ahead distance l_d
ÿ_ref: reference lateral acceleration
Y_ref: reference lateral position in the inertial frame
z: complex number used in the z-transform representation
z_1, z_2, z_3, z_4: augmented states for modeling the filter dynamics
z_MPC, z_MPC,ref: state vector of the model predictive controller and its reference
β_x,f, β_y,f, β_r: normalized longitudinal force on the front axle, normalized lateral force on the front axle, normalized longitudinal force on the rear axle
ρ: sensitivity parameter for LQR design
Γ, ∂Γ: desired region for locating the poles of the closed-loop system, with the corresponding boundary
δ, δ̇: steering angle and its time derivative. In the absence of any subscript, this notation refers to the front axle. In the case of a four-wheel-steering vehicle, the subscript 'f' indicates the front steering angle and the subscript 'r' the rear steering angle. The additional subscript 'ss' indicates steady-state conditions
δ_eq: equivalent steering angle in the super-twisting sliding mode formulation
δ_FB,LC,1, δ_FB,LC,2, δ_FB,LC,3: feedback steering angle contributions according to different formulations of the path tracking controllers for limit cornering
δ_FFW,LC: feedforward steering angle contribution according to the path tracking controller for limit cornering
δ_FFW,1, δ_FFW,2, δ_FFW,3: feedforward contributions to the reference steering angle according to different formulations
δ_min, δ_max: minimum steering angle, maximum steering angle
δ_ST, δ_ST,1, δ_ST,2: steering angle contribution of the super-twisting controller, consisting of the contributions δ_ST,1 and δ_ST,2
δ̇_Δy,f: steering rate contribution depending on the lateral deviation error at the front end of the vehicle
δ̇_ψ̇: steering rate contribution depending on vehicle yaw rate
Δa_y: difference between the reference and the actual lateral acceleration
Δu: control input variation
ΔU_t: optimization vector at time t
Δy, Δẏ, Δÿ: lateral position error and its first and second time derivatives. They can be calculated at the front axle (subscript "f"), at the rear axle (subscript "r"), at the vehicle center of gravity (subscript "CG"), or at any other point along the longitudinal axis of the vehicle reference system (e.g., at the center of percussion or at the look-ahead distance)
Δy_CG,SS: steady-state value of Δy_CG
ΔY_rms, ΔY_max: root mean square error on the vehicle lateral position, maximum error on the vehicle lateral position
Δδ_min, Δδ_max: minimum and maximum variation of the steering angle
Δψ, Δψ̇, Δψ̈: yaw angle (i.e., heading) error and its first and second time derivatives. They can be calculated with respect to the reference path at the front axle (subscript "f"), at the rear axle (subscript "r"), at the vehicle center of gravity (subscript "CG"), or at any other point on the longitudinal axis of the vehicle reference system (e.g., at the centers of percussion)
Δψ_CG,SS: steady-state value of the yaw angle error
Δψ_rms, Δψ_max: root mean square error on the yaw angle, maximum error on the yaw angle
Δψ_w, Δψ̇_w, Δy_w: scalar values of the weighted averages of the heading errors and lateral position error along the preview distance
ε: small imaginary part of the hyperbola in the s_L-plane
σ_L0, ω_L0: parameters of the hyperbola in the s_L-plane
⊖: Pontryagin difference of two polytopes
⊕: Minkowski sum of two polytopes

Introduction
In automated driving system architectures (see the classification according to [1]), three layers can typically be defined [2]: (i) the perception layer, aimed at detecting the conditions of the environment surrounding the vehicle, e.g., by identifying the appropriate lane and the presence of obstacles on the track; (ii) the reference generation layer, providing the reference signals, e.g., in the form of the reference trajectory to be followed by the vehicle, based on the inputs from the perception layer; and (iii) the control layer, defining the commands required for ensuring the tracking performance of the reference trajectory. These commands are usually expressed in terms of reference steering angles (usually on the front axle only) and traction/braking torques. This chapter focuses on the control layer and, in particular, on steering control for autonomous driving, also defined as path tracking control.
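As an illustration of the three-layer split described above, the following minimal Python sketch phrases the layer boundaries as interfaces. The class and method names are illustrative assumptions, not part of the chapter; the point is only that perception feeds reference generation, which feeds the tracking controller.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Trajectory:
    xs: list[float]          # longitudinal coordinates of the reference path
    ys: list[float]          # lateral coordinates of the reference path
    curvatures: list[float]  # path curvature at each sample


class Perception(Protocol):
    def sense(self) -> dict:
        """Return lane, obstacle and ego-state information."""
        ...


class ReferenceGenerator(Protocol):
    def plan(self, scene: dict) -> Trajectory:
        """Turn the perceived scene into a reference trajectory."""
        ...


class Controller(Protocol):
    def track(self, traj: Trajectory, state: dict) -> tuple[float, float]:
        """Return the steering angle and traction/braking torque commands."""
        ...
```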
The foundations of path tracking control for autonomous driving date back to well-known theoretical and experimental studies on robotic systems and driver modeling, detailed in several papers and textbooks (e.g., see the driver model descriptions in [3][4][5][6][7][8][9]). Moreover, automated driving experiments with different controllers have been conducted since the 1950s and 1960s, using inductive cables or magnetic markers embedded in roadways to indicate the reference path [10,11].

This contribution presents a survey of the main control techniques and formulations adopted to ensure that automatically driven vehicles follow the reference trajectory, including the analysis of extreme maneuvering conditions. The discussion is based on a selection of different control structures, at increasing levels of complexity and performance. The focus is on whether complex steering controllers are actually beneficial to autonomous driving. This is an important point, considering that Stanley and Sandstorm, the vehicles that obtained the first two places at the DARPA Grand Challenge (2004-2005), used very simple steering control laws based on kinematic vehicle models. In contrast, Boss, the autonomous vehicle winning the DARPA Urban Challenge (2007), was characterized by a far more advanced model predictive control strategy [12][13][14][15]. The main formulas for the different steering control structures are concisely provided as a tutorial on the control system implementations, so that the reader can appreciate the characteristics of each formulation and ultimately refer to the original papers in case of specific interest. Also, the main simulation and experimental results obtained through the implementation of each control structure are reported and critically analyzed.

The chapter is organized as follows:
• Section 5.2 presents path tracking methods based on simple geometric relationships, and a chained controller relying on a vehicle kinematic model, i.e., developed under the approximation of zero slip angles on the front and rear tires.
• The first part of Sect. 5.3 deals with conventional feedback controllers designed with a simplified dynamic model of the vehicle system, i.e., the well-known linear single-track vehicle model. The second half of Sect. 5.3 discusses relatively simple optimal control formulations, e.g., linear quadratic regulators, without and with feedforward contributions, and including the concept of preview in their most advanced form. The layout of Sects. 5.2 and 5.3 mostly follows the guideline of a very relevant previous survey [16], dating back to 2009, which critically assessed path tracking control methods through vehicle simulations with the software package CarSim.
• Section 5.4 discusses a couple of sliding mode formulations, one of them based on the important concept of center of percussion, and briefly mentions other examples of path tracking controllers, e.g., based on H∞ control and backstepping control.
• Section 5.5 presents in detail the latest developments in the subject area, through a selection of examples of advanced controllers (i.e., path tracking controllers for autonomous racing and model predictive controllers) from recently published papers, including a critical analysis of their specific benefits.
• Section 5.6 provides concluding remarks and ideas for future research on the subject.
Pure Pursuit Method
The most basic path tracking method is represented by the pure pursuit formula. It is derived by geometrically calculating the curvature of a circular arc (describing an angle 2α in the top view of the single-track model of the system, see Fig. 5.1) that connects the rear axle location to the goal point on the reference trajectory, and by applying the well-known Ackermann steering relationship δ ≈ L/R_tr. The goal point has coordinates (g_x,PP, g_y,PP) and is located at a look-ahead distance l_d on the reference trajectory, measured from the rear axle. This brings a reference steering angle δ(t) equal to:

δ(t) = arctan(2 L sin(α(t)) / l_d) = arctan(2 L sin(α(t)) / (k_PP v_x(t)))     (5.1)

It can be shown that the resulting curvature is κ = 1/R_tr = 2Δy_ld/l_d², i.e., this controller acts like a proportional controller, with gain 2/l_d², on the error Δy_ld, defined as the lateral distance between the x-axis of the vehicle reference system and the goal point (g_x,PP, g_y,PP) in Fig. 5.1. As shown in the right term of Eq. (5.1), the look-ahead distance l_d is often scaled as a function of vehicle speed, v_x(t), i.e., l_d(t) = k_PP v_x(t). In general, low values of l_d result in high precision tracking and low stability. As Eq. (5.1) is based on vehicle kinematics, it can generate significant tracking errors, caused by the absence of consideration of the vehicle sideslip.

Stanley Method
Another geometry-based path tracking method is the Stanley method, usually more suitable for medium-to-high speed driving conditions than the pure pursuit method, and adopted by Stanford University's entry (called Stanley) to the DARPA Grand Challenge. According to this method (see Fig. 5.2), the steering angle consists of: (i) a component equal to the heading error (i.e., the yaw angle error), Δψ_f = ψ − ψ_path,f, where ψ_path,f is measured at the goal point (g_x,S, g_y,S) on the reference path; and (ii) a term based on the lateral distance error at the front axle, Δy_f (or any other point in the front part of the vehicle), ensuring that the intended trajectory intersects the target path at an approximate distance v_x(t)/k_S from the front axle, with k_S being the tuning parameter of the controller:

δ(t) = Δψ_f(t) + arctan(k_S Δy_f(t) / v_x(t))

Many other variants of geometric path tracking methods can be found in the literature. For example, Wit [18] proposes a vector pursuit path tracking method (for specific details refer directly to [18]).

Chained Controller Based on Vehicle Kinematics
Similarly to the previous controllers, the chained controller formulation in [17] (see also [19,20]) is based on the single-track model of vehicle kinematics, i.e., it considers zero slip angles for the front and rear tires. In particular, the hypotheses are that (see Fig. 5.2): (i) the rear wheels move along a direction with angle ψ (i.e., the yaw angle) with respect to the X-axis of the inertial reference system; and (ii) the front wheels move along a direction with angle δ (where the signs are according to the conventions in [17]) with respect to the same X-axis. By expressing the kinematic model in path coordinates, with s being the trajectory coordinate and κ its curvature, the model formulation becomes:

ṡ = v cos(Δψ_r) / (1 − Δy_r κ(s))
Δẏ_r = v sin(Δψ_r)
Δψ̇_r = v tan(δ)/L − κ(s) ṡ

with Δψ_r = ψ − ψ_path,r. Through an appropriate transformation of the system coordinates (the general formulation of the coordinate transformation and the theory are reported in [17] and [19,20]), it is possible to express this system in the typical two-input chained form:

χ̇_1 = u_1,  χ̇_2 = u_2,  χ̇_3 = χ_2 u_1,  χ̇_4 = χ_3 u_1

where the change of coordinates maps (s, Δy_r, Δψ_r, δ) into the chained coordinates (χ_1, χ_2, χ_3, χ_4), with ξ_4 denoting the fourth transformed coordinate, and the input transformation is defined through the terms:

α_1 = ∂ξ_4/∂s + ∂ξ_4/∂Δy_r (1 − Δy_r κ(s)) tan(Δψ_r) + ∂ξ_4/∂Δψ_r [tan(δ)(1 − Δy_r κ(s)) / (L cos(Δψ_r)) − κ(s)]
α_2 = L cos³(Δψ_r) cos²(δ) / (1 − Δy_r κ(s))²

The controller design in chained systems is carried out in two phases. According to [16], "the first phase assumes that one control input is given, while the additional input is used to stabilize the remaining sub-vector of the system states. The second phase simply consists of specifying the first control input so as to guarantee convergence while maintaining stability." In practical terms, if the vehicle speed v is imposed, from Eq. (5.7) it is possible to calculate u_1, and then the steering input is a function of both u_1 and u_2, where u_2 depends on u_1. In the proposed two-input chained structure for path tracking, the two controllers are designed so that, for any piecewise-continuous, bounded, and strictly positive (or negative) u_1, u_2 is expressed as a linear feedback of the chained coordinates with gains k_1^CC, k_2^CC, k_3^CC, scaled by u_1. De Luca et al. [17] suggest imposing k_1^CC = k_ch³, k_2^CC = 3k_ch², k_3^CC = 3k_ch, so that the control system tuning consists of selecting a single gain. The main (and possibly only) benefit of this convoluted control formulation is that its straightforward extension allows the automated control of articulated vehicles, including vehicle systems with multiple trailers.

Simple Feedback Formulations
A significant body of literature adopts simple feedback control structures, such as proportional integral derivative (PID) controllers, to solve the steering control problem for autonomous vehicles. These feedback controllers are usually designed starting from simplified models of the system dynamics, accounting for the fact that the actual vehicle behavior is different from the one predicted by a geometric model, because of (i) the slip angles on the front and rear tires, generally different between the two axles, with the consequent vehicle understeer gradient in steady-state conditions [21], and (ii) vehicle inertia in transient conditions, providing second order yaw dynamics, with variable equivalent stiffness and damping characteristics as functions of vehicle speed [22,23]. On the one hand, the relatively weak link between the basic feedback formulations and the respective linearized dynamic models used for control system design, and also the significant level of approximation of these models, do not automatically guarantee improved performance for all driving conditions with respect to the algorithms based on the system geometry alone, presented in Sect. 5.2. On the other hand, despite the existence of multiple state-of-the-art contributions focused on advanced control techniques, e.g., based on model predictive control, relatively simple feedback controllers for path tracking can provide good performance for a variety of operating conditions, if carefully tuned. For example, the ARGO autonomous vehicle prototype developed by the team of Prof. Broggi of the University of Parma was characterized by a proportional (P) controller for steering control [24,25]. Even in a recent paper [26] authored by the investigators of the European Union FP7 V-Charge project, the path tracking controller is based on a simple proportional controller, with an advanced algorithm for the reference path generation, using an optimization considering the current vehicle position with respect to equivalent electric field lines. The following paragraphs provide an overview of some relatively simple control structures designed through linear single-track vehicle models.
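Before moving to the dynamics-based feedback designs, a minimal Python sketch of the two geometric laws of Sect. 5.2 (pure pursuit and Stanley) is given below for reference. The function names and signatures are illustrative assumptions; the formulas follow Eq. (5.1) and the Stanley law stated above, with angles in radians.

```python
import math


def pure_pursuit_steer(alpha: float, v_x: float, L: float, k_PP: float) -> float:
    """Pure pursuit steering (Eq. 5.1): the look-ahead distance is scaled
    with speed, l_d = k_PP * v_x, and the circular arc through the goal
    point gives delta = arctan(2 L sin(alpha) / l_d)."""
    l_d = k_PP * v_x
    return math.atan2(2.0 * L * math.sin(alpha), l_d)


def stanley_steer(dpsi_f: float, dy_f: float, v_x: float, k_S: float) -> float:
    """Stanley steering: heading error plus a term that steers the front
    axle back onto the path within roughly v_x / k_S metres."""
    return dpsi_f + math.atan2(k_S * dy_f, v_x)


# Example call with illustrative values (SI units, radians).
print(pure_pursuit_steer(alpha=0.05, v_x=15.0, L=2.7, k_PP=0.8))
print(stanley_steer(dpsi_f=0.02, dy_f=0.3, v_x=15.0, k_S=2.5))
```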
Already in the 1960s in Japan [10,27], experiments were conducted on a path tracking controller based on a proportional-derivative (PD) structure on the lateral displacement error between the reference path and the actual path of the vehicle, and a P controller on the yaw angle error. The two contributions were summed together to provide δ. The yaw angle error contribution allowed an improvement in tracking performance with respect to the path deviation feedback contribution alone. This is further confirmed in a more recent paper by Nissan [28], in which a PD controller exclusively based on the lateral position error is designed through pole placement, with clear issues in the results, caused by the effects of yaw rate and yaw angle, which are not addressed by the position error controller.

More systematically, [29] critically examines the limitations of pure output feedback on the lateral vehicle displacement error measured at the front bumper. This was the control practice (dictated by necessity) in the initial look-down automated driving system implementations, based on devices installed on the road rather than on vision systems implemented on the vehicle, which intrinsically allow look-ahead path tracking control. The analysis of [29] is based on the lateral acceleration and yaw rate frequency response characteristics for a steering wheel input, obtained from the equations of the linear single-track vehicle model, with the lateral acceleration being considered at a distance l_d in front of the vehicle. In [29], l_d is used to indicate both the installation position of the lateral displacement error sensor and a virtual look-ahead distance. The transfer functions of the vehicle response (i.e., V_1(s_L) = ψ̇(s_L)/δ(s_L) and V_2(s_L) = ÿ_ld(s_L)/δ(s_L)) are characterized by the same second order denominator:

D(s_L) = I_z m v² s_L² + v [I_z (C_f + C_r) + m (a² C_f + b² C_r)] s_L + C_f C_r l² + m v² (b C_r − a C_f)

with the corresponding damping coefficient being a decreasing function of vehicle speed. In [29], the effect of the tire-road friction coefficient is modeled by imposing C_f = C_f,μ0 μ and C_r = C_r,μ0 μ, which, according to the authors of this chapter, can be the object of discussion. The lateral acceleration transfer function, Eq. (5.10), has a second order numerator, while the yaw rate transfer function, Eq. (5.9), has a first order numerator.

Equations (5.9) and (5.10) are the essential plant transfer functions to be considered for linear path tracking control design in the frequency domain. The inclusion of the transfer function of the specific steering actuator is also recommended by the authors of this chapter for any path tracking control design activity, consistently with the practice of many sources in the literature, some of them indicating a typical actuation bandwidth of 5-10 Hz. Figure 5.4 shows the Bode plots of V_2(s_L) for different vehicle speeds and tire-road friction conditions. In particular: (i) the natural mode of the vehicle dynamics is negligible at low speed but significant at high speed; (ii) the frequency of the natural mode is almost independent of speed, but decreases as a function of the friction coefficient; (iii) the steady-state gain depends on both v and μ (the latter especially at high speed); (iv) the high-frequency gain depends on μ but not on v; and (v) the contribution of yaw motion to lateral acceleration diminishes with the inverse of vehicle speed, in favor of the sideslip contribution.
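The speed and friction dependence of the yaw mode discussed above can be checked numerically from the coefficients of the reconstructed denominator D(s_L). The following Python sketch evaluates its natural frequency and damping ratio over speed; all vehicle parameters are illustrative assumptions, not values from the chapter, and the friction scaling of the cornering stiffnesses follows the modeling choice of [29].

```python
import numpy as np

# Illustrative single-track parameters (assumed, not from the chapter).
m, I_z = 1500.0, 2500.0   # mass [kg], yaw inertia [kg m^2]
a, b = 1.2, 1.5           # semi-wheelbases [m]
l = a + b                 # wheelbase [m]


def yaw_mode(v, mu=1.0, C_f0=8.0e4, C_r0=9.0e4):
    """Natural frequency [rad/s] and damping ratio of D(s_L), with the
    cornering stiffnesses scaled by the friction coefficient mu."""
    C_f, C_r = mu * C_f0, mu * C_r0
    a2 = I_z * m * v**2
    a1 = v * (I_z * (C_f + C_r) + m * (a**2 * C_f + b**2 * C_r))
    a0 = C_f * C_r * l**2 + m * v**2 * (b * C_r - a * C_f)
    wn = np.sqrt(a0 / a2)
    zeta = a1 / (2.0 * np.sqrt(a0 * a2))
    return wn, zeta


for v in (10.0, 20.0, 40.0):
    print(f"v = {v:5.1f} m/s  mu=0.5: {yaw_mode(v, 0.5)}  mu=1.0: {yaw_mode(v, 1.0)}")
```

Running the loop reproduces the qualitative trends listed above: the damping ratio drops as speed increases, while the natural frequency is far more sensitive to the friction coefficient than to speed.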
According to [29], realistic control system specifications should be expressed in terms of the maximum lateral displacement error at any vehicle speed and robust behavior for 0.5 ≤ μ ≤ 1. The variations of μ can happen very quickly; therefore, the same controller must be capable of providing robustness for the indicated range of friction conditions. For very low friction values, i.e., for μ ≤ 0.5, it is expected that the longitudinal controller imposes much lower values of vehicle speed than in normal friction conditions. The important conclusion of the analysis in [29] is that, for relatively high vehicle speeds, look-down controllers are not sufficient to meet the expected performance requirements. As a consequence, [29] proposes: (i) a feedforward steering contribution based on the reference path curvature; (ii) feedback control of the vehicle absolute motion, i.e., control of yaw rate and lateral acceleration; and (iii) modification of the system zeros in Eq. (5.10) by increasing the look-ahead distance l_d, which can be done even in look-down systems by having two position error sensors, installed at the front and rear bumpers.

With respect to (iii), Fig. 5.5 shows the increase of the damping ratio of the numerator of the transfer function in Eq. (5.10) as a function of l_d, for different velocities and μ = 0.5. The zero pair in the numerator of Eq. (5.10) determines "the undershoot in Fig. 5.4 between 1 Hz and 2 Hz and the distribution of phase lag/lead in this frequency range. Conversely, prescribing a fixed maximum phase lag ... yields look-ahead requirements for different speeds," which is outlined in Fig. 5.6, i.e., the look-ahead distance should be an increasing function of vehicle speed. Section 5.5 will show that the most recent path tracking controllers, specifically implemented for high lateral acceleration conditions, actually include the combination of (i)-(iii).

Hsu and Tomizuka [30] extend the general analysis of [29] to include the specificities of vision-based control systems. In particular, the main conclusions are that: (i) the look-ahead distance enhances stability but increases the error and decreases the closed-loop bandwidth; (ii) higher vehicle speed increases the cross-over frequency and reduces stability; and (iii) the time delays caused by the vision system decrease stability, through a reduction of the phase margin and the equivalent effect of the right-half-plane zeros appearing in the loop transfer function. Hsu and Tomizuka [30] propose a path tracking controller consisting of a feedforward contribution of the form δ_FFW,1 = P_1⁻¹(s_L) P_2(s_L) e^(−t_d s_L) κ(s_L) (with P_1(s_L) = Δy_CG(s_L)/δ(s_L) and P_2(s_L) = Δy_CG(s_L)/κ(s_L) being calculated from a linear single-track model of the system), and a PID feedback contribution on the lateral displacement error, receiving Δy_CG as input, with k_P = 0.01, k_I = 0, k_D = 0.0074 and T_i = 0.0001.

Consistently with the conclusions of [29,30], Tachibana et al. [31] propose a PD controller on the lateral position error at the look-ahead distance l_d, i.e., the position error at a single point in front of the vehicle is monitored, assuming that the heading angle of the vehicle does not change with distance (Fig. 5.7).
The control law, in discretized form, is a PD action on Δy_ld (Eq. 5.11). Vehicle experiments showed that the control gains had to be varied as functions of vehicle speed, and that at 50 km/h the optimal look-ahead distances are in the range of 20 m for a curved trajectory and 25 m for a straight reference trajectory.

In [32], based on the Japanese Ministry of Construction Automated Highways System (AHS) project (1995-1996), the path tracking controller (Fig. 5.8) consists of a PID controller on Δy_CG, whose output is summed to that of a PI controller on Δy_ld, and to a feedforward contribution based on the reference path curvature (Eq. 5.12). The feedforward contribution accounts for the steady-state vehicle understeer and brings a reduction of the maximum value of Δy_CG from about 110 cm (with the feedback contribution only) to about 40 cm (with the feedback and feedforward contributions) during the case study tests. The comment of the authors of [32] is that the feedforward contribution allows achieving a good compromise between stability and tracking characteristics, without large gains of the feedback part. Interestingly, very recent papers by Prof. Gerdes of Stanford University reach the same conclusion, and the respective controllers significantly rely on nonlinear feedforward contributions (see Sect. 5.5). A similar combination of feedforward and feedback contributions was adopted for the anti-collision system developed during the PRORETA research project, within a collaboration between Darmstadt University of Technology and Continental [33].

Marino et al. [34] propose a PID control architecture with two nested control blocks. The outer one calculates the reference yaw rate starting from the lateral position error, while the inner loop calculates the reference steering angle in order to track the reference yaw rate. According to the authors of [34], this architecture allows "to design standard PID controls in a multi-variable context." Singular value analysis is used to assess the robustness of the controller with respect to the variation of the main vehicle parameters. The simulation results show improved performance with respect to a model predictive driver model. The experiments on a Peugeot 307 prototype vehicle confirmed the adequate performance of the controller in normal driving conditions (i.e., for relatively low values of lateral acceleration).

Simple feedback formulations with constant gains can be designed to provide the required level of robustness for normal operating conditions. For example, [35] is an important study, focused on the design of a linear controller for an automated bus, with the main specifications being: (i) |δ| ≤ 40 deg; (ii) |δ̇| ≤ 23 deg/s; (iii) a lateral displacement error not exceeding 0.15 m in transient conditions and 0.02 m in steady state; (iv) a lateral acceleration not exceeding 2 m/s² for passenger comfort, with an ultimate limit of 4 m/s² for preventing vehicle rollover; and (v) a natural frequency of the lateral motion not exceeding 1.2 Hz. The control system is designed through a linear single-track vehicle model, under the hypothesis of a lateral displacement sensor located on the front end of the vehicle, measuring the distance from the reference road path. The steering controller (Eq. 5.13) combines a contribution based on the front lateral displacement error with a yaw rate-related term k_ψ̇ ψ̇, whose benefit is discussed in [35] through root-locus analysis. The fixed controller design must be robust with respect to the variation of vehicle mass (very significant for a bus) and speed.
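The following Python sketch combines a discrete PD action on the look-ahead error with a curvature-based feedforward term, in the spirit of the schemes of [31,32] discussed above. The gain values are placeholders, and the steady-state feedforward form (L + k_U v_x²) κ is an assumed textbook expression for the understeer compensation, not the exact Eq. (5.12).

```python
def steering_command(dy_ld: float, dy_ld_prev: float, kappa_ref: float,
                     v_x: float, Ts: float,
                     k_P: float = 0.05, k_D: float = 0.3,
                     L: float = 2.7, k_U: float = 0.0015) -> float:
    """One step of PD feedback on the look-ahead lateral error plus a
    steady-state curvature feedforward; all gains are illustrative."""
    d_err = (dy_ld - dy_ld_prev) / Ts            # discrete derivative of the error
    delta_fb = k_P * dy_ld + k_D * d_err         # PD feedback (spirit of Eq. 5.11)
    delta_ffw = (L + k_U * v_x**2) * kappa_ref   # understeer-aware feedforward
    return delta_fb + delta_ffw


# Example: 0.2 m look-ahead error shrinking, on a 500 m radius curve at 25 m/s.
print(steering_command(dy_ld=0.2, dy_ld_prev=0.25, kappa_ref=1 / 500,
                       v_x=25.0, Ts=0.02))
```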
In particular, the controller must provide stability for the four design points indicated in Fig. 5.9. The contribution δ̇_Δy,f is designed from the frequency domain analysis. In the Laplace domain, δ̇_Δy,f(s_L) is defined as a compensator (Eq. 5.14) which, according to the authors of [35][36][37], is a PIDD² controller. This control structure was already recommended for high-speed path tracking by the authors of [29]. A parameter space approach is used for the design of the compensator in Eq. (5.14). This method allows "to determine the set of parameters H, for which the characteristic polynomial p(s_L, h_p), h_p ∈ H, is stable. The plant is robustly stable if the operating domain is entirely contained in the set of stable parameters." For the specific problem, Hurwitz stability is not considered sufficient. A hyperbola in the s_L-plane is used to provide the required performance characteristics, i.e., the eigenvalues of the closed-loop system should lie in the region Γ to the left of the boundary ∂Γ defined by the hyperbola (Eq. 5.15). Values of σ_L0 equal to 0.12 and 0.35 are selected for low and high velocities, respectively, with the ratio between ω_L0 and σ_L0 set to 5. The Γ-stability boundaries are computed for each of the four extreme operating points (q_1, q_2, q_3, q_4). The initial selection of the controller parameters [k_DD k_D k_P k_I] was further refined in a simulation-based optimization procedure minimizing tracking performance criteria such as J_1 and J_2.

A similar control design methodology, using the concept of Γ-stability, is presented in [37], this time based on the control of the errors of the front and tail lateral positions, instead of the front position and yaw rate. The option of a feedforward steering angle considering dynamic curvature preview was included in the controller, which was validated with experimental tests in collaboration with the Californian PATH (Partners for Advanced Transportation Technology) center.

Another very relevant study [38], including experiments, assessed the performance of an automated Fiat Brava 1600 ELX. The control system design was based on the commonly used single-track vehicle model (the detailed formulation is provided in the following lines), together with the transfer function of the steering actuator. Similarly to [35], the objective was to design fixed controllers (e.g., without any form of gain scheduling), capable of robustly stabilizing the plant for: (i) v ranging between 60 km/h and 130 km/h; (ii) m ranging between 1226 kg and 1626 kg; (iii) I_z ranging between 1900 kgm² and 2520 kgm²; and (iv) C_f ranging between 51 kN/rad and 69 kN/rad, and C_r between 81.6 kN/rad and 110.4 kN/rad. The performance specifications included |Δy_CG| ≤ 0.2 m, |v_y(t)| ≤ 1.5 m/s, |a_y| ≤ 3.3 m/s², and consideration of the steering actuator saturation. Based on previous experimental analyses of human drivers [39], showing that the steering wheel action is applied "on the basis of the distance between the lane and the longitudinal axis of the car at a look-ahead point," the controller in [38] uses output feedback on Δy_ld = Δy_CG + Δy_path,ld, with l_d = 11.5 m. The controller is designed through classical loop-shaping techniques. In particular, two controllers, C_1(z) = n_1(z)/d_1(z) and C_2(z) = n_2(z)/d_2(z), were experimentally assessed; the numerical values of the controller parameters, in descending powers of z, are reported in [38] for a quick implementation of the controller and reproduction of the results.
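The Γ-stability test described above can be sketched numerically. The exact hyperbola parametrization of Eq. (5.15) is not recoverable here, so the code below assumes one common reading: poles must satisfy both a minimum decay rate σ_L0 and the hyperbolic constraint (σ/σ_L0)² − (ω/ω_L0)² ≥ 1. Treat it as an illustrative sketch rather than the design rule of [35].

```python
import numpy as np


def gamma_stable(poles, sigma_L0: float, omega_L0: float) -> bool:
    """Check whether all closed-loop poles lie left of a hyperbolic
    boundary in the s_L-plane (assumed parametrization, see lead-in)."""
    poles = np.atleast_1d(poles)
    sig, om = poles.real, poles.imag
    inside = (sig <= -sigma_L0) & ((sig / sigma_L0)**2 - (om / omega_L0)**2 >= 1.0)
    return bool(np.all(inside))


# Example: a complex pole pair tested against sigma_L0 = 0.35, omega_L0 = 5*sigma_L0.
poles = np.array([-1.0 + 0.5j, -1.0 - 0.5j])
print(gamma_stable(poles, sigma_L0=0.35, omega_L0=5 * 0.35))  # True
```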
C_1(z) was designed with the purpose of providing good tracking performance, while C_2(z) was specifically aimed at ride comfort. C_1(z) and C_2(z) were assessed along a curve with a 1000 m radius followed by a straight section, at v = 100 km/h. The tracking performance results are in Figs. 5.11 and 5.12, which show that C_2(z) provokes higher amplitude and lower frequency oscillations of the lateral offset between the path and the vehicle center of gravity. Overall, the test drivers provided a better assessment of C_2(z), which demonstrates the extreme importance of the human factor in the evaluation of path tracking algorithms.

Tan et al. [40] include a wide set of experimental results obtained during public demonstrations at various sites and in very different operating conditions, ranging from docking to high speeds and relatively high lateral accelerations. The control system design is based on a simple but reliable controller, in which the measured inputs are Δy and Δψ. The tuning of the controller is carried out through a linear vehicle model including roll dynamics. This peculiar (and interesting) choice is justified by the fact that a good match between the model and the experimental results is achievable, in the case of a comfort-oriented tuning of the vehicle suspension system, only through the inclusion of the roll dynamics and actuator dynamics in the model for control system design. The formal design specifications are: (i) the maximization of the gain margin (with the requirement of being > 2) and the phase margin (with the requirement of being > 50 deg) of the open-loop transfer function, through the optimal selection of the gain k_c(v) and the preview distance l_d(v); and (ii) the guarantee that the lateral displacement deviation of the closed-loop system shall not exceed a given threshold (reported in Fig. 5.13) for a 1 m/s² step input of the reference road acceleration. In the control structure, G_c(s_L) mainly compensates for the actuator dynamics and consists of a low-frequency integrator and a high-frequency roll-off, "to reduce the effects of the steady-state tracking bias and the unwanted excitation of the high-frequency unmodeled actuator dynamics. G_ld(s_L) is made of a high-frequency roll-off portion and a mid-frequency lead-lag filter to limit the look-ahead amplification and to provide extra look-ahead between 0.5 and 2 Hz."

Based on these experimental proofs, and also on the other references mentioned so far, the conclusion of this section is that, in the absence of uncertainty and significant disturbances in the road scenario, simple, conventional, reliable, and easily tunable control structures with appropriate gain scheduling are sufficient for providing good performance in all operating conditions, and are actually recommended by the authors of this chapter for vehicle implementation.

Basic Linear Quadratic Formulation
This subsection focuses on the basic mathematical formulation of linear quadratic regulator (LQR) control structures for path tracking. LQRs for path tracking are based on the state-space formulation of the system dynamics. The main building block is represented by the well-known single-track model of the vehicle [4,21,22], already used in the previous subsection, with a linear model of the tires, parameterized by their cornering stiffness, and adopting the lateral slip velocity of the vehicle center of gravity, v_y, and the yaw rate, ψ̇, as system states.
This model is suitable for describing vehicle dynamics at moderate lateral acceleration levels with respect to the available tire-road friction conditions, and at small longitudinal accelerations/decelerations (its equations are actually derived for the condition of constant speed); its lateral force and yaw moment balance equations are reported in Eq. (5.21).

The single-track vehicle model has to be expressed in the states relevant to the implementation of the path tracking controller. The following control variables are usually defined (minor variations are present in the different papers and reports): (i) the lateral position error, $y_{CG}$, i.e., the length of the segment perpendicular to the symmetry plane of the vehicle and connecting the center of gravity with the corresponding point on the reference path (see Fig. 5.2, according to [16]); in some of the papers, the distance is measured along a segment perpendicular to the reference path, rather than perpendicular to the vehicle symmetry plane (i.e., a reference system aligned with the road is adopted); and (ii) the heading angle error, $\Delta\psi_{CG} = \psi - \psi_{path,CG}$ (indicated in Fig. 5.2). The controller formulation would not significantly change if the errors were measured from any other point (different from the center of gravity) located on the longitudinal axis of the vehicle reference system or the road reference system (e.g., in front of the vehicle, in order to generate a basic preview effect). By considering the approximated system kinematics, it is $\ddot y_{CG} = \dot v_y + v_x\,(\dot\psi - \dot\psi_{path,CG}(s)) = \dot v_y + v_x\,\Delta\dot\psi_{CG}$ and $\dot y_{CG} = v_y + v_x\,\Delta\psi_{CG}$. Equation (5.21) can be transformed into the path coordinates, thus obtaining the state-space formulation (Eq. (5.22)) directly applicable to the path tracking LQR control system, which corresponds to the canonical form $\dot\xi = A\,\xi + B\,\delta + B_2\,\dot\psi_{path,CG}$, where $\xi = [y_{CG}\ \dot y_{CG}\ \Delta\psi_{CG}\ \Delta\dot\psi_{CG}]^T$, and the steering angle $\delta$ is the control output. The term in Eqs. (5.22) and (5.23) including the reference yaw acceleration, $\ddot\psi_{path,CG}(s)$, is usually neglected in the literature. The term $B_2\,\dot\psi_{path,CG}$ represents a disturbance within the control system design. The system is controllable, as the controllability matrix $[B\ AB\ A^2B\ A^3B]$ has full rank.

The LQR formulation for path tracking is normally based on state feedback regulation, i.e., on the control of the lateral position and velocity errors at the center of gravity, and the control of the heading angle and heading rate errors, with all references set to 0. This means that $\delta = -K_{LQ}\,\xi = -(k^1_{LQ}\,y_{CG} + k^2_{LQ}\,\dot y_{CG} + k^3_{LQ}\,\Delta\psi_{CG} + k^4_{LQ}\,\Delta\dot\psi_{CG})$. The LQR controller minimizes the following quadratic cost function:

$J = \int_0^\infty \left(\xi^T Q\,\xi + R\,\delta^2\right) dt$

where the diagonal 4x4 weighting matrix $Q$ is selected to define the relative importance of the tracking performance for the different states of the system, while the weighting factor $R$ defines the relative importance of control effort and tracking performance. The gain $K_{LQ}$ can be designed through the well-known algebraic Riccati equation [41], i.e., $K_{LQ} = R^{-1}B^T P$, with $P$ the stabilizing solution of $A^T P + P A - P B R^{-1} B^T P + Q = 0$, thus bringing the following closed-loop formulation of the controlled system dynamics: $\dot\xi = (A - B\,K_{LQ})\,\xi + B_2\,\dot\psi_{path,CG}$. The continuous form of the LQR path tracking controller was presented here; the continuous system in Eq. (5.27) can be easily subject to discretization for the design of a discrete LQR controller.

An LQR implementation was experimentally assessed by Nissan in [28] and compared with the performance provided by the PD controller on the lateral position error mentioned in Sect. 5.3.1. The tests (Fig. 5.14) were conducted at 80 km/h and consisted of a step change of 20 cm in the lateral coordinate of the target path.
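A numerical sketch of this design step is reported below, assuming the standard textbook error dynamics in the form of Eq. (5.23) (see, e.g., [4]); the weighting values are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def path_lqr_gain(m, Iz, a, b, Cf, Cr, vx, Q, R):
    """LQR gain for the error states [y, y_dot, dpsi, dpsi_dot],
    so that delta = -K @ xi."""
    A = np.array([
        [0, 1, 0, 0],
        [0, -(Cf + Cr) / (m * vx), (Cf + Cr) / m,
         (b * Cr - a * Cf) / (m * vx)],
        [0, 0, 0, 1],
        [0, (b * Cr - a * Cf) / (Iz * vx), (a * Cf - b * Cr) / Iz,
         -(a**2 * Cf + b**2 * Cr) / (Iz * vx)],
    ])
    B = np.array([[0.0], [Cf / m], [0.0], [a * Cf / Iz]])
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)     # K = R^-1 B^T P

K = path_lqr_gain(1426.0, 2210.0, 1.1, 1.6, 60e3, 96e3, 25.0,
                  Q=np.diag([1.0, 0.1, 1.0, 0.1]), R=np.array([[50.0]]))
```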
The study included a sensitivity analysis as a function of the ratio between the lateral deviation weighting and the control effort weighting, $Q(1,1)/R$, in Eq. (5.24). When commenting their experimental and simulation results, the authors of [28] state that "with PD control, large overshoot occurred when response was improved and the system became more susceptible to noise. This made it impossible to set the control constants at larger values." In any case, the same authors observe that the model for LQR design exhibits "large modeling errors on the curved segments of the test road, making it unable to track the path accurately on curves." A possible solution to this limitation is to adopt a feedforward contribution based on road curvature, which will be discussed in the next subsection. The alternative proposed in [28] is a Kalman filter based on the vehicle motion equations, combined with a curvature approximation model. The output of the Kalman filter is the curvature estimate, $\hat\kappa$, which is used as a state variable in the augmented LQR scheme.

Linear Quadratic Regulator with Feedforward Contribution

The feedback LQR formulation of the controller can be enhanced through a feedforward contribution, aimed at canceling the steady-state lateral position error of the center of gravity, which is a problem especially in the case of curved paths. Hence, the control output and closed-loop system dynamics assume the shape $\delta = -K_{LQ}\,\xi + \delta_{FFW}$ and $\dot\xi = (A - B\,K_{LQ})\,\xi + B\,\delta_{FFW} + B_2\,\dot\psi_{path,CG}$ (Eq. (5.29)). By manipulating Eq. (5.29), it is possible to obtain the analytical expression of the steady-state errors of the controlled system. By imposing $y_{CG,SS} = 0$ and under the hypothesis of a constant radius trajectory, the required feedforward steering angle is obtained (Eq. (5.30)), together with the resulting steady-state value of the heading error, $\Delta\psi_{CG,SS}$ (Eq. (5.31)). The important conclusion is that $\Delta\psi_{CG,SS}$ is not controllable through the feedforward contribution of the steering angle, if this is aimed at achieving $y_{CG,SS} = 0$. From a practical viewpoint, as the yaw dynamics of the vehicle are strongly dependent on vehicle speed, i.e., yaw damping is a decreasing function of vehicle speed, a careful scheduling of the feedback controller gains is also required as a function of $v_x$, in order to provide consistent tracking performance at different vehicle speeds, as already discussed in Sect. 5.3.1.

Linear Quadratic Regulator with Preview

The LQR formulations in the previous sections can be significantly enhanced through a preview scheme, i.e., by augmenting the system to include the future profile of the reference path. To this purpose, the reference path is discretized, and a vector with its lateral coordinates is progressively updated at each time step. The vehicle system model in the tracking coordinates (Eq. (5.22)) can be discretized as in Eq. (5.32). The future reference path profile, represented by the vector $y_{r,k+1}$, is modeled as a shift register (Eq. (5.33)): the reference lateral positions of the vehicle are located in a register that is progressively updated at each time step, and the newest sample, $y_{ri}$, is considered as a disturbance in the form of a white noise. The discretized model of the vehicle system, including the reference position coordinates, can be expressed as the combination of the models in Eqs. (5.32) and (5.33) (Eq. (5.34)), with augmented state vector $\xi_k = [\xi_{v,k}\ y_{r,k}]^T$ (see [16] for the details regarding the formulation of the different matrices). The state feedback control law becomes $u_k = -K_{LQP}\,\xi_k$, where the states related to the future values of the lateral coordinates of the road have an effect on the control action.
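The following is a minimal sketch of this preview augmentation, assuming a discrete plant (Ad, Bd) from Eq. (5.32), a row vector C selecting the lateral position, and an N-sample preview register; the register model and the cost structure are illustrative simplifications, not the exact matrices of [16]:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def preview_lqr_gain(Ad, Bd, C, N, q=1.0, r=0.1):
    """Discrete LQR with an N-step shift register of future road lateral
    offsets; the newest preview sample enters as a white-noise input.
    The cost penalizes the tracking error C@xi - y_r[0] and the effort."""
    n = Ad.shape[0]
    S = np.eye(N, k=1)                        # register shift matrix
    A = np.block([[Ad, np.zeros((n, N))],
                  [np.zeros((N, n)), S]])
    B = np.vstack([Bd, np.zeros((N, 1))])
    M = np.hstack([C, -np.eye(1, N)])         # tracking-error selector
    Q = q * (M.T @ M) + 1e-9 * np.eye(n + N)  # mild regularization
    R = np.array([[r]])
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u = -K @ [xi; y_r]
```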
The control gain matrix $K_{LQP}$ can be calculated by using the well-known LQR discrete formulation and the respective Riccati equation. It is interesting to observe that the path tracking control formulation deriving from Eq. (5.34), presented in [16], is very similar to the driver model with preview in [5] (originally not conceived for automated driving), based on the reference steering angle $\delta = k_{\Delta\psi}\,\Delta\psi + k_{y_{CG}}\,y_{CG} + \sum_{k=1}^{n} g_k\,\Delta y_k$, where the index $k$ refers to points located in front of the vehicle. Actually, the formulation in Eq. (5.34) is identical to the motorbike driver model in [7]. An extension of the linear quadratic formulation with preview could be based on the driver model in [8], which is an enhancement of the driver model in [5], with consideration of the future heading angle errors and their time derivatives, to give $\delta = k_{\Delta\psi,w}\,\Delta\psi_w + k_{\Delta\dot\psi,w}\,\Delta\dot\psi_w + k_{y,w}\,y_w$. In this case, the errors $\Delta\psi_w$, $\Delta\dot\psi_w$, and $y_w$ are scalar variables, calculated as the linear combination of the errors at different points $k$.

Frequency-Shaped Linear Quadratic Control

A relevant contribution to the science of LQR path tracking control was provided by Peng and Tomizuka in the PATH framework in the 1990s [42][43][44]. Their frequency-shaped linear quadratic controller is based on the linear equations of the system, according to the formulation in Eqs. (5.21) and (5.22). The output is represented by the lateral deviation measured by a sensor located in the front part of the vehicle. The main advantages of the frequency-shaped LQR formulation with respect to conventional LQR control are: (i) the robustness to measurement noise at high frequencies and (ii) the possibility of including ride quality explicitly in the performance index (which is very important, as pointed out in some of the references including actual experimental tests and not only computer simulations). The adopted performance indicator (Eq. (5.35)) penalizes, through frequency-dependent weights, the tracking errors and the term $\Delta a_y$, i.e., the difference between the reference and the actual lateral acceleration. The coefficients weighting the lateral acceleration term are chosen to provide the expected ride quality, while the coefficients of the other three terms are selected to provide responsiveness to the road curvature and robustness with respect to the measurement noise. The disturbance term in the vehicle model, $B_2\,\dot\psi_{path}$ (i.e., the effect of the curvature, see Eq. (5.27)), is previewed with a preview time $t_{l_d}$. The problem is solved as a conventional linear quadratic controller after augmenting the system state variables with the states $z_1, \dots, z_4$ corresponding to the four filters in Eq. (5.35), so that the augmented state vector is $\xi_{augm}^T = [y_{CG}\ \dot y_{CG}\ \Delta\psi\ \Delta\dot\psi\ z_1\ z_2\ z_3\ z_4]$. The minimization of $J$ requires the knowledge of the disturbance, i.e., $\dot\psi_{path}$, from the current time to infinity. As this is not practically possible, an exponential decay of the curvature-related disturbance $w(t)$ (a function of the path curvature $\kappa(t)$) beyond the preview region is assumed. The resulting optimal preview steering control law (Eq. (5.36)) is the sum of a state feedback term and two preview control terms.

Other Control Structures for Path Tracking and Remarks

The controllers discussed in Sects. 5.2 and 5.3 are only an arbitrary selection of the very wide literature on the subject. This section presents an overview of the variety of less conventional control structures used for path tracking control.

Sliding Mode Controllers

Many papers (e.g., [35] and [45, 46]) present sliding mode controllers for path tracking.
Utkin in [35] proposes a relatively simple first order sliding mode control structure (Fig. 5.15), resulting in a yaw control law based on a discontinuous (relay) action on the sliding variable $\sigma = c\,\Delta\dot\psi + \Delta\ddot\psi$, with $\Delta\dot\psi = \dot\psi - \dot\psi_{ref}$. As a consequence, the sliding variable is a function of $\dot\psi$, $\ddot\psi$, $\dot\psi_{ref}$, and $\ddot\psi_{ref}$. The reference yaw rate is based on the lateral position error, having a time derivative $\dot y_{l_d} = v\,(\beta + \Delta\psi) + l_d\,\dot\psi$. According to traditional feedback linearization, $\dot\psi_{ref}$ can be expressed as $\dot\psi_{ref} = -\left(K\,y_{l_d} + v\,(\beta + \Delta\psi)\right)/l_d$, with $K$ determining the rate of decay of $y_{l_d}$. In order to estimate the term $\hat q = v\,(\beta + \Delta\psi)$, a dynamic observer (Observer 1 in Fig. 5.15) is implemented. As $\ddot\psi$ is not directly measured but is necessary for the computation of the sliding variable, the time derivative of $\dot\psi$ is computed through a robust observer (Observer 2 in Fig. 5.15), as a simple differentiator would imply a significant risk of chattering. The comparison of the performance of this sliding mode formulation with the continuous controller discussed in Eqs. (5.13)-(5.16), carried out in [35], does not allow clear conclusions.

Particularly relevant, also considering the recent experimental developments at Stanford University to be discussed in Sect. 5.5, is the sliding mode path tracking controller developed in [46], for a four-wheel-steering vehicle, according to the strong tradition of Japanese vehicle engineering in four-wheel-steering systems. The controller is based on the important concept of centers of percussion of the front and rear axles. The centers of percussion (COPs) are located on the symmetry plane of the vehicle; their longitudinal positions with respect to the center of gravity (see also Fig. 5.16) are such that the lateral force of the opposite axle produces no lateral acceleration at the respective COP, i.e., $x_{COP,f} = I_z/(m\,b)$ and $x_{COP,r} = -I_z/(m\,a)$ (Eq. (5.39)). The four-wheel-steering vehicle layout of [46] allows the independent control of two variables, i.e., in the specific case the lateral position errors at the front and rear centers of percussion, respectively, $y_{COP,f}$ and $y_{COP,r}$. The relationship between the vehicle state variables and the time derivatives of the lateral position errors at the centers of percussion (Eq. (5.41)) is substituted into the single-track vehicle model equations, leading to the tracking position dynamics at the centers of percussion (Eq. (5.42)), where $\dot\psi_{path,COP} = [\dot\psi_{path,COP,f}\ \dot\psi_{path,COP,r}]^T$ is the vector formulation of the time derivatives of the target path heading angles at the front and rear centers of percussion (Fig. 5.16 reports the vehicle model for four-wheel-steering path tracking based on the centers of percussion, adapted from [46], whose sign convention is adopted here). With a proper selection of the feedback controller configuration and by imposing that the two control points are the centers of percussion of the vehicle, Eq. (5.42) simplifies into the form of Eq. (5.43), with a diagonal input matrix $B_0$. The diagonal matrix $B_0$ very importantly indicates that the position tracking problems at the front and rear centers of percussion are decoupled. Hence, each center of percussion path deviation can be independently controlled by the front and rear steering angles. As a consequence, the control laws for the front and rear axles can be separately designed, as single-input single-output controllers. This means that the lateral displacement dynamics of the front center of percussion are independent from the lateral force of the rear tires, while the lateral displacement dynamics of the rear center of percussion are independent from the lateral force of the front tires.
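A compact sketch of these two ingredients follows: the COP locations under the definition given above, and a per-axle first-order sliding mode law with the usual smooth approximation of the sign function; the gains, the smoothing width eps, and the function names are illustrative assumptions:

```python
def cop_positions(m, Iz, a, b):
    """Front/rear center-of-percussion positions w.r.t. the CG (positive
    forward): the opposite axle force causes no lateral acceleration there."""
    return Iz / (m * b), -Iz / (m * a)

def smc_axle_command(y_err, y_err_dot, c, M, eps=0.05):
    """First-order sliding mode law for one axle: sigma = c*e + e_dot,
    with sign(sigma) ~ sigma/(|sigma| + eps) to mitigate chattering."""
    sigma = c * y_err + y_err_dot
    return -M * sigma / (abs(sigma) + eps)
```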
A different selection of the longitudinal position of the control points would imply the design of a multivariable controller. In the specific case of [46], the resulting sliding mode control laws (Eq. (5.45)) use the sliding variables $\sigma_i = c_{COP,i}\,y_{COP,i} + \dot y_{COP,i}$, with $i = f, r$. The formulation in Eq. (5.45) requires an estimation of the sideslip angle, while $\dot\psi$ and $\dot y_{COP,i}$ can be easily measured or estimated. The sliding mode formulation of the specific paper is designed, through the appropriate definition of the gains, $M_{COP,f}$ and $M_{COP,r}$, to provide robustness against the variations of cornering stiffness and target path radius, and cross-wind disturbances. In order to prevent chattering, a continuous approximation of the discontinuous part of the control law is used (Eq. (5.46)), e.g., of the form $\mathrm{sign}(\sigma_i) \approx \sigma_i/(|\sigma_i| + \varepsilon)$. The resulting steady-state values of the front and rear steering angles, $\delta_{f,SS}$ and $\delta_{r,SS}$, are given in Eq. (5.47). In practice, the sign of $\delta_{r,SS}$ changes at the vehicle speed $v_0$ (about 55 km/h for the case study vehicle of [46]), i.e., at low speeds the rear wheels steer in counterphase with respect to the front wheels, while the opposite happens at higher speeds. The four-wheel-steering controller based on the COP concept was validated through CarSim simulations of a cross-wind disturbance situation. Figure 5.17 reports a sample of the simulation results, showing that the maximum deviations at the front and rear control points of the four-wheel-steering vehicle are not influenced by vehicle velocity, while the rear path deviation of the two-wheel-steering vehicle used as a term of comparison increases at an exponential rate.

Overall, the main benefit of sliding mode control is the low complexity of the resulting control law (see Eqs. (5.37) and (5.45)); however: (1) "it needs knowledge about the bounds of the disturbances and uncertainties in advance" [47]; (2) "it is not robust outside the sliding surface" [47]; and (3) it can present chattering.

A super-twisting (second order) sliding mode formulation was also applied to path tracking (Eqs. (5.49) and (5.50)), with sliding variable $\sigma = \dot y_{CG} + \lambda\,y_{CG}$, $\lambda$ being a positive design parameter. The equivalent control term, corresponding to $\dot\sigma = 0$, is calculated starting from the equations of the single-track model of the system. [49, 50] are other useful references in the area of sliding mode control applied to automated driving. The recent paper [47] presents and experimentally validates, including comparison with the super-twisting algorithm of Eqs. (5.49) and (5.50), a path tracking controller based on the theory of immersion and invariance, where the target dynamics for the system are selected during the control design phase, similarly to sliding mode control. The main benefits with respect to sliding mode control are that: (i) "the manifold does not necessarily have to be reached"; (ii) it allows more flexibility in the selection of the target dynamics; and (iii) "it avoids the use of a discontinuous term in the control law."

In the area of control structures with discontinuous control action, [51] experimentally demonstrates a nonlinear controller (in practice a sliding mode controller including saturation functions, even if it is not explicitly called in this way in the original paper), based on the lateral offset at a look-ahead point (estimated by a Kalman filter, together with its time derivative) and the yaw angle error. [52] presents a discontinuous control law of the form $\delta = -(|k_d| + |r|)\,\mathrm{sign}\!\left(b^T P_L\,(\xi - \xi_{ref})\right)$, generated through model-reference control and a Lyapunov approach, with simulation results showing significant chattering, unacceptable for a real vehicle implementation.
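For reference, a minimal discrete-time sketch of the super-twisting algorithm mentioned above is reported below; the gains and the sampling time are illustrative, and the exact formulation of Eqs. (5.49) and (5.50) is in the cited references:

```python
def make_super_twisting(k1, k2, Ts):
    """Super-twisting (second order sliding mode) law on sigma:
    u = -k1*|sigma|^0.5*sign(sigma) + v,  v' = -k2*sign(sigma).
    The gains must dominate the disturbance derivative bound for
    finite-time convergence; returns a stateful step function."""
    state = {"v": 0.0}

    def step(sigma):
        sgn = (sigma > 0) - (sigma < 0)
        state["v"] -= k2 * sgn * Ts        # integral of the discontinuity
        return -k1 * abs(sigma) ** 0.5 * sgn + state["v"]

    return step
```

Because the discontinuity acts only inside an integrator, the resulting control action is continuous, which is the practical reason for the reduced chattering of this family of algorithms.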
Other Control Structures

Many other path tracking controllers were implemented in the literature, covering most of the control structure options. For example, O'Brien et al. [53] discuss an $H_\infty$ controller based on the theory of loop-shaping [54, 55], with the purpose of providing robustness with respect to the variation of the plant parameters. Given a plant with a co-prime factorization of its transfer function, the goal of the $H_\infty$ optimization is to find a controller stabilizing the system and maximizing the value of the stability margin $\varepsilon$. The value of $\varepsilon_{max}$ represents a measure of the stability margin (i.e., the so-called robust stability margin) of the nominal system to perturbations in the co-prime factorization of the plant. The paper assesses the performance of the $H_\infty$ controller through simulations with a basic 3-degree-of-freedom vehicle model, including robustness analysis with respect to varying speeds, icy roads, and wind gusts. As the controller is based on two inputs (i.e., the lateral displacement error and the yaw angle error) and produces one output, a singular value decomposition analysis [55] would allow discussing its functional controllability. A more recent second example of $H_\infty$ controller implementation and experimental demonstration on a prototype vehicle is included in [56].

Shin and Joo [57] present a backstepping controller design (see [58] for the theory of backstepping), based on a linear single-track vehicle model, in this case with the following states: (i) the sideslip angle; (ii) the yaw rate; (iii) the difference between the actual and reference yaw angles, $\Delta\psi = \psi - \psi_{ref}$; and (iv) the vehicle offset from the center of the lane, $y_{l_d}$. In particular, in the model for control system design, the equation of $\dot y_{l_d}$ is $\dot y_{l_d} = v\,(\beta + \Delta\psi) + l_d\,\dot\psi$. Hence, the reference yaw rate is defined as $\dot\psi_{ref} = -\left(K_d\,y_{l_d} + v\,(\beta + \Delta\psi)\right)/l_d$. By defining $\Delta\dot\psi = \dot\psi - \dot\psi_{ref}$ and substituting into the equation of $\dot y_{l_d}$, the lateral displacement error dynamics are described by $\dot y_{l_d} = -K_d\,y_{l_d} + l_d\,\Delta\dot\psi$. After differentiation of $\Delta\dot\psi$ with respect to time and re-substitution into the single-track model equations, the yaw rate error dynamics are given by Eq. (5.52), based on the expressions of the coefficients $C_\beta$, $C_{\dot\psi}$, $C_{\Delta\psi}$, $C_\kappa$, and $b_\delta$. The steering control law is then chosen according to Eq. (5.53), which brings the yaw rate error dynamics of Eq. (5.54). The term $k_{\Delta\dot\psi}\,\Delta\dot\psi$ in Eq. (5.53) is used to decouple the lateral displacement and yaw rate error dynamics. The stability of the controller can be demonstrated through the Lyapunov function $L_{Lyapunov} = \frac{1}{2}\left(W_d\,y_{l_d}^2 + W_{\dot\psi}\,\Delta\dot\psi^2\right)$. The controller was assessed in relatively low lateral acceleration conditions, through experimental tests with a vehicle prototype (a Hyundai sedan) on a proving ground consisting of a straight section with a length of about 1.2 km and a curved section with a radius of 260 m. Interestingly, the experimental analysis included the comparison of the lateral offset of the vehicle trajectory with respect to the reference one for the cases of human driving and autonomous driving. In general, the proposed controller "has the properties of deviating outwards in the lane during curve entry and inwards during curve exit, similar to the human driver. This can be reduced by adjusting the look-ahead distance to vary proportionally to the vehicle speed and by tuning the feedforward gain related to the curvature" [57].

Many papers (e.g., [59][60][61][62]) present fuzzy logic-based controllers for vehicle path tracking control.
Given the general caution with respect to fuzzy control still present in industry (e.g., because of the lack of formal proof of stability), this chapter will not report the detailed descriptions of the available fuzzy implementations. The only note is that Naranjo et al. [62] include experimental results (on a Citroen Berlingo) with a fuzzy controller based on a real-time kinematic differential global positioning system, used as the main sensor for vehicle positioning.

Remarks

Unfortunately, apart from [16], assessing geometric controllers and different LQR formulations on a CarSim vehicle model (see Table 5.1, summarizing the main conclusions of that study), there is limited available literature objectively comparing the performance of the different path tracking controllers discussed so far in this chapter. [63] is an exception, presenting a simulation-based comparison of four controllers, i.e., a self-tuning regulator (for the details, see [64]), an $H_\infty$ controller, a fuzzy logic controller, and a P controller (of the form $\delta = k_{\Delta\psi}\,\Delta\psi + k_{y_{l_d}}\,y_{l_d}$). The assessment includes consideration of the effects of curvature, wind, and variations of vehicle speed and tire-road friction coefficients, along the simulated test track circuit at Satory (Versailles, France). The model used for the comparison is a simple linear single-track vehicle model, with a limited level of realism. The comparison shows that the self-tuning controller provides the best performance, followed by the $H_\infty$ controller and the fuzzy logic controller, which are approximately at the same level (even if the authors of [63] mention that fuzzy control is generally less reliable than a conventional controller), and finally by the proportional controller. However, this important analysis would require further development and level of detail.

Recent Advances in Path Tracking Control

The conclusion of the comparative study of different path tracking controllers in [16] (see Table 5.1), dating back to 2009, was that the expected evolution of the science of path tracking control would be in the directions of (a) controllers combining different structures and formulations depending on the operating condition of the vehicle, in order to provide consistently reliable automated driving, and (b) model predictive controllers, for autonomous driving even in extreme conditions, for example, at high lateral accelerations. Based on the literature discussed so far, it is evident that there are already extensive experimental demonstrations of gain scheduled controllers capable of simultaneously providing the required vehicle tracking response for a wide range of speeds and lateral accelerations, and very precise maneuverability in docking conditions. In particular, [40] explicitly mentions that with the specific experimentally validated path tracking controller, based on linear control theory and implemented with realistic vehicle actuators, there is no need for multiple control structures in order to achieve consistently reliable and comfortable path tracking behavior. On the other hand, the very recent paper [65] still includes kinematic model-based controllers in the analysis and considers them useful in low-speed conditions. Nevertheless, given results such as those in [40], the authors of this chapter do not consider the development of heterogeneous control structures to be a priority or major obstacle for the development of the automated driving agenda.
Two main trends can be observed in the recent research in the subject area of path tracking control:

(i) Development of path tracking controllers characterized by the capability of controlling the vehicle at its cornering limit, for example, even at lateral accelerations of 9.5 m/s² (e.g., for automated car racing).

(ii) Progressive increase of the level of sophistication of the implemented control structures, with particular focus on model predictive control, now extensively implemented in simulation and preliminarily demonstrated at the experimental level (which confirms the conclusion of [16]).

The following subsections describe examples of recently published path tracking controllers, with concise critical analyses and discussions.

Advanced Feedforward and Feedback Controllers for Limit Cornering

Important recent contributions in the area of path tracking control, mainly developed at Stanford University, are aimed at achieving high path tracking performance at the cornering limit of the vehicle, e.g., at lateral acceleration levels up to 9.5 m/s² in high friction conditions (which represents the cornering limit for a typical passenger car), or for extreme combined cornering and braking/traction [66][67][68][69][70]. Particular focus is on the development of feedforward steering formulations, allowing a relaxation of the specifications of the feedback part of the controller and a reduction of the issues related to the effect of measurement disturbances. For example, Kritayakirana and Gerdes [67] present and experimentally validate a feedforward/feedback steering controller for a two-wheel-steering vehicle, based on the path tracking control of the front center of percussion (COP), already defined in Eq. (5.39). By considering $\Delta\dot\psi_{CG} = \dot\psi - \dot\psi_{path,CG} = \dot\psi - \kappa\,\dot s$ and the yaw moment balance equation of a single-track vehicle model, the time derivative of $\Delta\dot\psi_{CG}$ (i.e., the yaw acceleration error) is $\Delta\ddot\psi_{CG} = \ddot\psi - (\kappa\,\ddot s + \dot\kappa\,\dot s)$. The lateral position error at a generic point $P$ along the x-axis of the vehicle reference system is given by $y_P = y_{CG} + x_P\,\sin\Delta\psi_{CG}$, and its acceleration $\ddot y_P$ is approximated with $\ddot y_P \approx \ddot y_{CG} + x_P\,\Delta\ddot\psi_{CG}$ (Eq. (5.55)). The issue is that the steering actuator can command the front tire force, but it does not have any direct control over the rear tire force, which, thus, represents a disturbance in Eq. (5.55). However, by imposing $x_P = x_{COP,f}$, the rear tire force is eliminated from the equation of the lateral acceleration error at the control point (Eq. (5.56)), which now depends only on $F_{y,f}$, directly controllable through the steering input. The conclusion, similar to the outcome of the analysis in [46] referred to a four-wheel-steering vehicle, is that at the front COP the effect of the rear tire forces on the lateral position error dynamics can be neglected.

The feedforward contribution of the steering controller in [67] has the purpose of eliminating the dynamics of the lateral acceleration error, i.e., the feedforward lateral force on the front axle, $F^{FFW}_{y,f}$, is obtained by imposing $\ddot y_{COP,f} = 0$ in Eq. (5.56) (Eq. (5.57)). Eq. (5.57) allows ideal tracking performance, independently from the rear lateral tire force contribution, provided that the terms related to the reference path can be accurately estimated. By substituting Eq. (5.57) into the yaw moment error equation and introducing the feedback part of the control force on the front axle, $F^{FB}_{y,f}$, such that the total control force is $F^{TOT}_{y,f} = F^{FFW}_{y,f} + F^{FB}_{y,f}$, the system dynamics become Eq. (5.58). Through the terms $\frac{v_x}{L}\,\kappa\,\dot s$ and $\frac{b}{L}\,(\kappa\,\ddot s + \dot\kappa\,\dot s)$, Eq.
(5.58) shows that the disturbance caused by the curvature cannot be eliminated from the yaw tracking equation, unless an independent actuator controlling the rear tire force is adopted (as already proposed in [46]). The objective of the feedback part of the controller in [67, 68] is to provide path tracking and yaw stability even when the rear tires are saturated, while the scenario in which the front tires are saturated is not considered. Note that passenger cars are usually characterized by an understeering behavior for vehicle safety, i.e., the absolute value of the front slip angles is normally larger than the absolute value of the rear slip angles, and the front lateral forces saturate first. During the design of the feedback controller, the nonlinear behavior of the rear tires is considered through the model proposed in [69], according to which $F_{y,r} = 2\,\eta_r\,C_r\,\alpha_r$, with $\eta_r$ ($0 \le \eta_r \le 1$) being a monotonically decreasing function of the absolute value of the rear slip angle. The control variables are substituted into the expression of $\alpha_r$; by including this formulation into the expression for $F_{y,r}$, and then into the equations of the single-track vehicle model in path coordinates (see also Eq. (5.22)), the state-space formulation of the system is obtained, including the effect of the feedforward controller.

The feedback contribution of the front lateral force is then expressed as a full-state feedback controller, $F^{FB}_{y,f} = -K_{LC}\,\xi$ (Eq. (5.59)). By substituting Eq. (5.59) into the state-space formulation, the closed-loop system equations (Eq. (5.60)) can be used for control system design. In the specific controller of [67], it is $k^2_{LC} = 0$. In particular, $F^{FB}_{y,f}$ of Eq. (5.59) is manipulated to become $F^{FB}_{y,f} = -k_{LK}\left[y_{COP,f} + (l_d - a)\,\Delta\psi_{CG}\right] - k_{\dot\psi}\,\Delta\dot\psi_{CG}$, where clearly the control point of the feedback part of the controller is not located at the front center of percussion any more. During the control system implementation, the following values are adopted: $k_{LK} = 4000$ N/m, $l_d = 20$ m, and $k_{\dot\psi} = 9500$ N s/rad. The stability of the control system at the cornering limit is demonstrated through the Lyapunov method applied to Eq. (5.60), without considering the disturbance from the curvature (which does not affect stability). Detailed tuning criteria of the feedback control gains are reported in [71]. Starting from the previous formulations of the reference front lateral force, $F^{TOT}_{y,f}$, the Fiala tire model for pure cornering conditions is used to obtain the reference slip angle on the front axle, $\alpha_{ref,f}$. Then, based on the measured vehicle yaw rate and the estimated sideslip angle, the reference steering angle $\delta$ for the front axle is calculated, based on the kinematic relationship $\delta \approx \beta + a\,\dot\psi/v_x - \alpha_{ref,f}$. In practice, these steps can introduce significant errors in the process, in the absence of very accurate state estimation.

A more recent paper of the same research group [70] presents a further development of the controller discussed so far, as the experimental tests show a suboptimal tracking performance of the controller based on Eqs. (5.57) and (5.59) above 7 m/s² of lateral acceleration. The authors suggest considering a simplification of the feedforward contribution in Eq. (5.57), by imposing $x_{COP,f}\,(\kappa\,\ddot s + \dot\kappa\,\dot s) = 0$ and $\dot s = v_x$, in order to reduce vehicle response oscillations and increase damping. As a consequence, the feedforward values of the lateral force at the front and rear axles simply become $F^{FFW}_{y,f} = m\,\frac{b}{L}\,v_x^2\,\kappa$ and $F^{FFW}_{y,r} = m\,\frac{a}{L}\,v_x^2\,\kappa$ (Eqs. (5.61) and (5.62)), which are the steady-state values of the axle lateral forces in cornering at the reference lateral acceleration $v_x^2\,\kappa$.
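A minimal sketch of this feedforward path follows, assuming the simplified steady-state force of Eq. (5.61), a linear tire inverse in place of the Fiala inverse (reasonable away from the limit), and the sign convention $F_y = -C\,\alpha$; all names are illustrative:

```python
def ffw_steering(kappa, vx, m, a, b, Cf, beta, yaw_rate):
    """Feedforward steering from F_yf = m*(b/L)*vx^2*kappa, inverted
    through a linear tire model (F_y = -C*alpha) and mapped with the
    kinematic relation delta ~ beta + a*yaw_rate/vx - alpha_f."""
    L = a + b
    Fyf_ffw = m * (b / L) * vx**2 * kappa   # steady-state front axle force
    alpha_ref_f = -Fyf_ffw / Cf             # linear inverse of the tire model
    return beta + a * yaw_rate / vx - alpha_ref_f
```

At steady state (yaw rate equal to $v_x\,\kappa$), this expression reduces to the classical Ackermann term plus the understeer-gradient contribution.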
In [70], the feedback control law of Eq. (5.59) is simplified into a proportional look-ahead controller (Eq. (5.63)), acting on the lateral error at a look-ahead point. The overall control law is, thus, given by $\delta = \delta_{FFW,LC} + \delta_{FB,LC,1}$. The steady-state response of the system as a function of $v$ is reported in Fig. 5.18, at lateral accelerations of 3 m/s² and 7 m/s², where for the first case the calculation is based on a linear vehicle model and for the latter it is based on a nonlinear model. Since the controller is aimed at eliminating a weighted sum of $y_{CG}$ and $\Delta\psi_{CG}$, the feedback part of the controller actually tries to reduce $y_{l_d}$, while the steady-state values of $y_{CG}$ and $\Delta\psi_{CG}$ are nonzero. From a physical viewpoint, this is the situation corresponding to Fig. 5.19a. An important observation from Fig. 5.18 is that the steady-state value of the path deviation is close to zero for vehicle speeds of 17 m/s and 20 m/s, depending on the considered vehicle model. This means that at these speeds the velocity vector at the center of gravity is tangent to the path and $\Delta\psi_{CG} = -\beta$ (see Fig. 5.19b).

A new form of look-ahead feedback control law (Eq. (5.64)) is suggested, with the aim of constantly keeping the vehicle in the operating condition of Fig. 5.19b, i.e., with the estimated sideslip angle added to the heading error within the look-ahead term. With this control law, the steady-state error $y_{CG}$ is significantly reduced (i.e., it is ideally zero when including the feedforward contribution according to Eq. (5.62)); however, the system is now characterized by reduced stability margins with respect to the feedback control law of Eq. (5.63), as evident from the stability analysis in [70]. In order to prevent the stability issues of path tracking enforced through feedback, Kapania and Gerdes [70] finally propose to eliminate the sideslip-related feedback contribution from Eq. (5.64). Nevertheless, the zero displacement error condition (corresponding to $\Delta\psi_{CG} = -\beta$) is incorporated into the feedback contribution by using $\beta_{SS}$ (i.e., the expected steady-state value of the sideslip angle) instead of the estimated $\beta$; hence, Eq. (5.64) becomes Eq. (5.65). Actually, the sideslip contribution of Eq. (5.65), i.e., $k_P\,l_d\,(\alpha_r^{FFW} + b\,\kappa)$, is now a feedforward term. Vehicle experiments with an autonomous Audi TTS were executed at the Thunderhill Raceway Park, in order to compare the two look-ahead feedback formulations; the results show that the controller including the steady-state sideslip contribution implies significantly improved path tracking. In any case, the whole approach needs further developments, as the feedforward contribution is sensitive to system uncertainty (e.g., tire-road friction conditions), which is much more important in the case of a vehicle operating on a real road rather than on a race track.

Model Predictive Control

With respect to the path tracking formulations discussed in the previous sections, model predictive control [72][73][74] brings the following benefits:

• Inclusion of constraints on inputs and states;
• Systematic approach to the control problem, with the possibility of considering multiple actuators and models at different levels of complexity within the same control design framework;
• Enhanced tracking performance at medium-high lateral accelerations and during emergency conditions, depending on the complexity level of the selected model for control system design.

Extensive literature provides simulation and experimental results of model predictive control applications for path tracking. For example, one of the first attempts is presented in [75], with a path tracking model predictive controller based on a single-track vehicle model.
This adopts a nonlinear tire model with constant cornering stiffness and lateral force saturation at the value corresponding to the estimated tire-road friction coefficient.

Falcone et al. [76] discuss and compare three model predictive control formulations. The first one (here called Controller A) is based on a nonlinear single-track vehicle model. This considers constant vertical loads on the front and rear axles and uses the Pacejka magic formula [77], under the hypothesis of zero longitudinal slip ratio (i.e., pure cornering conditions). The model is expressed in a compact nonlinear state-space form (Eq. (5.66a)), and the system output is $z^{MPC}_k = [\psi_k\ y_k]^T$. The cost function to be minimized is:

$J = \sum_{n=1}^{H_P} \left\| z^{MPC}_{k+n} - z_{MPCref,k+n} \right\|^2_Q + \sum_{n=0}^{H_C-1} \left\| \Delta u_{k+n} \right\|^2_R$

The first contribution relates to the tracking performance of the system ($z_{MPCref}$ is the vector of the reference signals), while the second contribution considers the control effort. Similarly to the case of the linear quadratic regulator, the parameters of $Q$ and $R$ can be tuned to define the performance of the model predictive controller, i.e., the variables that need to be tracked with higher precision, and the relative weight between tracking performance and control effort. At each time step, a finite horizon optimal control problem (5.68) is solved, subject to: the model equations; the condition that the time-varying model quantities are held at their values at time $t$ over the prediction horizon, $k = t, \dots, t+H_P$ (5.68d); the input bounds $\delta_{min} \le u_{k,t} \le \delta_{max}$ (5.68f); the input-rate formulation $u_{k,t} = u_{k-1,t} + \Delta u_{k,t}$, $k = t, \dots, t+H_C-1$ (5.68g); and $\Delta u_{k,t} = 0$, $k = t+H_C, \dots, t+H_P$ (5.68h). The optimization vector at time $t$ is $U_t = [u_{t,t}\ \dots\ u_{t+H_C-1,t}]^T$; $H_P$ and $H_C$ denote the prediction and control horizons, respectively. The solution of problem (5.68) implies a nonlinear optimization, with a very significant computational burden. The optimization of Controller A is solved through the commercial NPSOL software package [78].

Falcone et al. [76] present only experiments at low vehicle speed with the controller based on the nonlinear model in the form of Eq. (5.66a) and the optimization problem (5.68), i.e., Controller A. In fact, as speed increases, larger prediction and control horizons are required "in order to stabilize the vehicle along the path." This implies more evaluations of the objective function and an increased size of the optimization problem, which becomes unpractical. As a consequence, Falcone et al. [76] also discuss an alternative formulation, Controller B, based on the linearization of the system at each time step, around the current operating point. This procedure significantly decreases the computational complexity of the optimization problem, even if additional calculations are required for the system linearization at each time step. In the case of Controller B, the model output vector is $z^{MPC}_k = [\psi_k\ \dot\psi_k\ y_k]^T$. In Controller B, the tire slip angle variation is an additional output that is constrained (through a soft constraint and a slack variable) but not tracked. Controllers A and B were assessed in double lane change tests through simulations and experiments. Controller parameters have a significant effect on the control system performance; the main parameter values used are reported in [76] for completeness. Finally, Falcone et al. [76] include a simplified version of Controller B, here called Controller C, with $H_C = 1$, allowing a further reduction of the computational load for implementation on actual automotive control hardware (in this case, the set of required calculations at each time step can be predicted a priori).
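A minimal sketch of the linearized formulation (in the spirit of Controllers B and C) as a quadratic program follows, using the cvxpy modeling package; the function signature, the input-rate handling, and the omission of the slip-angle soft constraint are simplifications for illustration:

```python
import numpy as np
import cvxpy as cp

def mpc_step(Ad, Bd, C, x0, z_ref, Hp, Hc, Q, R, du_max, u_prev):
    """One step of a linear MPC: tracking + input-rate cost, with the
    input frozen beyond the control horizon Hc (Hc = 1 mimics Controller C).
    Ad, Bd: discretized model; C maps states to the tracked outputs z."""
    n, m = Ad.shape[0], Bd.shape[1]
    x = cp.Variable((n, Hp + 1))
    u = cp.Variable((m, Hp))
    du = cp.Variable((m, Hp))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(Hp):
        cost += cp.quad_form(C @ x[:, k + 1] - z_ref[:, k], Q)
        cost += cp.quad_form(du[:, k], R)
        cons += [x[:, k + 1] == Ad @ x[:, k] + Bd @ u[:, k]]
        prev = u_prev if k == 0 else u[:, k - 1]
        cons += [u[:, k] == prev + du[:, k], cp.abs(du[:, k]) <= du_max]
        if k >= Hc:
            cons += [du[:, k] == 0]        # frozen input beyond Hc
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]
```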
The authors of [76] state that the resulting oscillations of the control action are not critical, as the vehicle cornering dynamics act as a filter, and, therefore, the vehicle passengers do not perceive the oscillations. Tables 5.3 and 5.4 report the main tracking performance indicators during the tests, for different vehicle speeds, respectively, for Controller B and Controller C. According to [76], "Controller C performs slightly worse than Controller B;" nevertheless, "it is able to stabilize the vehicle at high speed."

In [79], the same research group presents a model predictive controller (here called Controller D) based on a four-wheel vehicle model including wheel and tire dynamics; as a consequence, the state vector of the model for control system design is significantly larger than in the single-track formulations (see [79] for its detailed expression). The tires are modeled through the magic formula, this time including consideration of the interaction between longitudinal and lateral forces. However, the vehicle model considers a constant vertical load on each tire, which is a substantial limitation, as the load transfers induced by longitudinal and lateral accelerations are an important cause of nonlinearity and variation of the understeer characteristic. The main benefit is that Controller D allows systematic and concurrent control of the steering angle and the individual friction brake torques (and potentially the drivetrain torque as well). This feature is an important point for actual vehicles including stability control systems based on independent caliper pressure control. Therefore, the control output vector for Controller D is $u = [\delta\ T_{b,lf}\ T_{b,rf}\ T_{b,lr}\ T_{b,rr}]^T$. As the controller was tested for a lane change maneuver only, the traction torque is not considered in $u$ within [79]. Controller D provides better performance in extreme conditions than a controller (here called Controller E) based on a nonlinear single-track vehicle model, in which the control output is represented by $u = [\delta\ M_z]^T$ [79]. In Controller E, heuristics are adopted (within a low-level controller) to calculate the individual friction brake torques required to generate the reference yaw moment, $M_z$, output by the model predictive controller. The simulation results show that lane change tests can be executed at a higher initial vehicle speed with Controller D than with Controller E. However, the authors admit that the duration of the simulation runs with Controller D was about 15 min, and, therefore, they did not have the time to fine-tune the parameters of Controller D. This justifies the simulation results for Controller D (e.g., see Fig. 5.25), showing significant vibrations of the control action, which require further investigations in the opinion of the authors of this chapter. Falcone et al. [79] also include experimental results with Controller F, which is based on the linearized model used for Controller D (i.e., the model with four vehicle corners), where the linearizations are carried out online, around the current operating point of the vehicle. The resulting controller can, thus, run online with a fixed step size of 50 ms on the control hardware available in [79].

Yin et al. [80] propose a model predictive controller, Controller G, for an autonomous electric vehicle with individually controlled drivetrains, with a formulation very similar to that of Controller F of [79]. The controller is based on a linearized vehicle model including the four vehicle corners, where the reference steering angle and the four slip ratios are the control outputs.
This means that a low-level controller is used to calculate the individual wheel torques required to achieve the reference longitudinal slips. In practice, this is very difficult to implement because of the approximations in the slip ratio estimation during normal driving. Slip ratio estimation is much easier in extreme conditions, i.e., when the absolute values of the slip ratios are larger and the conventional traction control and antilock braking systems are usually activated. The benefit of Controller G is that it allows the control of the traction torque during autonomous driving, without the requirement of two separate controllers for steering and longitudinal tracking.

Similarly to [79], Attia et al. [81] present a nonlinear model predictive controller, Controller H, based on a model coupling the longitudinal and lateral vehicle dynamics. The model for control synthesis is a discretized single-track vehicle model, including the degrees of freedom corresponding to the longitudinal and lateral displacements of the center of gravity, the vehicle yaw motion, and the equivalent front and rear wheel dynamics; the state vector collects the corresponding velocity components, the yaw angle and yaw rate, and the front and rear wheel angular speeds. The main model nonlinearity is represented by the Burckhardt tire model. The model predictive controller is responsible for the steering angle demand only. The longitudinal vehicle dynamics are exclusively used for considering the interaction between longitudinal and lateral tire forces; in fact, the wheel torque demand is controlled by an independent controller based on a Lyapunov approach. The paper does not report the details of the numerical aspects related to the algorithm implementation and online optimization, apart from the considered sample time, $T_S = 10$ ms.

Controller H includes some consideration of the interactions between longitudinal and lateral control. In fact, an excessive level of the vehicle speed reference can originate problems in terms of lateral dynamics, as "no active lateral stabilization is considered in the control design." In this respect, the reference vehicle speed of the longitudinal controller can be saturated based on the expected road curvature and the estimated tire-road friction coefficient, according to the formulations proposed in [82, 83] (Eqs. (5.69)-(5.70)), which essentially limit the speed to the value compatible with the maximum lateral acceleration achievable at the expected curvature (see the sketch below). Also, according to the US National Highway Traffic Safety Administration (NHTSA) [83], the longitudinal acceleration used to bring the vehicle speed to the maximum value specified in Eq. (5.70) should be limited as well (Eq. (5.71)). Criteria (5.69)-(5.71) are easily applicable for reference speed generation; however, safety-critical conditions could happen, for example, caused by an erroneous friction coefficient estimation, thus determining a higher reference speed profile than the one compatible with the actual friction limits. In these situations, the stability control system of the vehicle is expected to intervene and overrule the inputs of the autonomous driving controller, as happens in normal humanly driven passenger cars. Nevertheless, the authors of [81] report a couple of sideslip-related stability criteria (from [84, 85]), mentioned as relevant to automated driving, without clearly specifying how to organically include them within their automated steering controller.

The previous contributions discuss model predictive controllers without any specific feature aimed at providing system robustness.
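As referenced above, a hedged sketch of the curvature-based reference-speed saturation follows; the exact expressions of Eqs. (5.69)-(5.71) are in [82, 83], so the limit $v_{max} = \sqrt{a_{y,max}/|\kappa|}$ and the safety factor used here are assumptions for illustration:

```python
import numpy as np

def reference_speed_limit(kappa, mu, g=9.81, safety=0.85):
    """Saturate the reference speed so that the implied steady-state
    lateral acceleration vx^2*|kappa| stays below a fraction of mu*g."""
    ay_max = safety * mu * g
    return np.sqrt(ay_max / max(abs(kappa), 1e-6))
```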
Nevertheless, the presented model predictive controllers guarantee enhanced tracking performance in nominal conditions, i.e., when the model used by the controller provides a good fit with the actual plant. For the purpose of combining excellent tracking performance in nominal conditions and robustness, recent contributions are focused on robust model predictive control [65]. For example, Gao et al. [86] discuss a robust tube-based model predictive controller (Controller I; for the theory, refer to [87][88][89]), conceived with the specific purpose of a relatively low computational load. The controller is based on a system model formulation of the form:

$\xi_{k+1} = A\,\xi_k + B\,u_k + g(\xi_k) + w_k$

with $\xi_k \in \Xi$, $u_k \in U$, $w_k \in W$ (Eq. (5.74)). Equation (5.74) is derived starting from a single-track vehicle model, including the longitudinal force balance equation of the system. The path tracking controller is based on the lateral displacement error at the front center of percussion, to eliminate the rear lateral tire forces from the lateral displacement error equation. The state vector is $\xi = [\dot y_{COP,f}\ \dot x_{COP,f}\ \dot\psi\ \psi\ y_{CG}\ s]^T$, and the input vector (i.e., the output of the controller) is $u = [\beta_{x,f}\ \beta_{y,f}\ \beta_r]^T$. $\beta_{x,f}$ and $\beta_{y,f}$ are the normalized longitudinal and lateral forces on the front wheels, which are controlled through the drivetrain/friction brakes and the steering system, respectively. $\beta_r$ is the normalized longitudinal force on the rear axle. A linear model is used for the lateral force of the rear axle. Essentially, the model in Eq. (5.74) includes a linear term, $A\,\xi_k + B\,u_k$, a small (under the hypotheses of the discussion in [86]) nonlinear term, $g(\xi_k)$, and a disturbance term, $w_k$.

The control action consists of two contributions: (i) a nominal control input for the nominal system, i.e., the system in Eq. (5.74) under the hypothesis of zero disturbance, and (ii) a state feedback controller acting on the error, $e_k = \xi_k - \bar\xi_k$, between the actual state of the system and the predicted state of the nominal system. The nominal system is defined as the system with the nominal control input and zero disturbance sequence. As a consequence, the control law has the shape $u_k = \bar u_k + u_{fb}(e_k)$ (Eq. (5.75)), where $\bar u_k$ is the nominal control input and $u_{fb}(e_k)$ is the state feedback control action, which in the specific case is based on a linear quadratic regulator, $u_{fb}(e_k) = K_{LQ}\,e_k$. In practice, in [86], the linear part of the dynamics is separated into two contributions, the first one including the longitudinal dynamics and the second one including the lateral dynamics. In general, the error dynamics are given by Eq. (5.76). Yu et al. [87] demonstrated that if $Z$ is a robust positively invariant set of the error system in Eq. (5.76), the actual state remains within the set $Z$ centered at the predicted nominal state. This means that if the system states start close to the nominal state, then the control law in Eq. (5.75) will keep the system trajectory within the robust positively invariant set $Z$ centered at the predicted nominal states. This statement also suggests that "if a feasible solution can be found for the nominal system subject to the tightened constraints" $\bar\Xi = \Xi \ominus Z$ and $\bar U = U \ominus u_{fb}(Z)$, "then the control law" in Eq. (5.75) "will ensure constraint satisfaction for the controlled uncertain system" (see the Appendix for the definition of the Pontryagin difference, $\ominus$). In general, the controller and invariant set pair are very difficult to calculate, unless the nonlinear term in Eq. (5.76) is small, i.e., $\|g(\xi_1) - g(\xi_2)\|_2 \le L_{Lipschitz}\,\|\xi_1 - \xi_2\|_2$, $\forall \xi_1, \xi_2 \in \Xi$, where $L_{Lipschitz}$ is the Lipschitz constant of the nonlinear term. Under this hypothesis, the system in Eq.
(5.76) can be rewritten by embedding the Lipschitz-bounded nonlinearity into an enlarged disturbance set. For this case, [86] provides an algorithm for the computation of the minimal robust positively invariant set $Z$ (see the Appendix for the definition) associated with the linear quadratic regulator gain $K_{LQ}$ applied to the system defined by $A$ and $B$. In the actual calculations of [86], as the system was split into its longitudinal and lateral dynamics contributions, the respective invariant sets were calculated separately. From a practical viewpoint, the robust model predictive control system design procedure reduces to a few relatively simple steps: the design of the ancillary linear quadratic regulator, the computation of $Z$, the tightening of the constraints, and the design of the nominal model predictive controller. The robust formulation in [86] is simply a linear quadratic regulator coupled with an implicit model predictive controller, with an additional algorithm to calculate the invariant set (a sketch of the resulting two-term control law is reported below). The calculation of $Z$ is beneficial to know and consider the expected boundaries of the states, i.e., the projection of the bounds of the robust invariant set, $\mathrm{Proj}(Z)$, while the controller is running (hence the concept of tube-based model predictive control). Gao et al. [86] report simulation results of obstacle avoidance maneuvers, including the introduction of random bounded disturbances with uniform distribution into the simulation model, and the comparison of the performance of the robust controller, $u_k = \bar u_k + u_{fb}(e_k)$, with the performance of the nominal controller, $\bar u_k$ (the difference is evident in Figs. 5.27 and 5.28). Also, Gao et al. [86] include experimental results, such as multiple obstacle avoidance tests carried out on a surface with a tire-road friction coefficient $\mu = 0.1$, while the controller is set for $\mu = 0.3$ (Figs. 5.29 and 5.30). In addition to the tube-based model predictive controller, Carvalho et al. [65] also suggest time-varying stochastic model predictive control for dealing with system uncertainty; the theory and an example of application to the automated driving problem are provided in [90, 91].

Another interesting example of a comparison of control structures for path tracking based on model predictive control in uncertain conditions is included in [92], dealing with the problem of obstacle avoidance on a slippery road. The paper supposes that the reference generation layer (see the introduction of this chapter) outputs the reference trajectory, but does not correct it to avoid an obstacle located on the desired path. Therefore, the reference trajectory has to be modified by the control layer, i.e., it has to be replanned to take the obstacle into account. Two architectures are compared: (i) a single-level model predictive controller (Controller J), based on a four-wheel vehicle model, which directly handles the obstacle; and (ii) a two-level architecture (Controller K), in which a high-level replanner, based on a simple point-mass model, modifies the reference trajectory, and a low-level model predictive controller, based on the same four-wheel vehicle model as the one used in controller architecture (i), receives the replanned reference trajectory and calculates the same outputs as the single-layer controller. Controller J and the top layer of Controller K use cost functions with a similar structure to the one in Eq. (5.67), with a term referring to the tracking performance and a term referring to the control effort, but they also include an additional obstacle-related term, $J^{obs}_{k,t}$ (Eq. (5.79)), i.e., the cost at time $k$ associated with the predicted distance between the vehicle and the obstacle, under the assumption that the obstacle position is known for a collection of discretized points $P_{t,j}$; in particular, $d^{min}_{k,t} = \min_j d_{k,t,j}$, where $d_{k,t,j}$ is the predicted distance between the vehicle and the point $P_{t,j}$ (Eq. (5.80)). The simulation and experimental results demonstrate that the case study obstacle avoidance maneuver can be executed at higher values of vehicle speed with the two-level model predictive controller (Controller K), while the single-level architecture (Controller J) tends to cause stability problems.
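As referenced above, the two-term tube-based control law of Eq. (5.75) reduces, in code, to the composition below; the names are illustrative, and the nominal input would come from a model predictive controller (such as the one sketched earlier) run on the disturbance-free model with the tightened constraints:

```python
import numpy as np

def tube_mpc_control(u_nom, x, x_nom, K_lq):
    """Tube-based MPC action u = u_nom + K*(x - x_nom): the ancillary LQR
    gain keeps the real trajectory inside the invariant tube Z around the
    nominal one, while the constraints were tightened offline by Z."""
    return u_nom + K_lq @ (x - x_nom)
```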
In general, when the vehicle deviates too far from the reference trajectory, the system becomes uncontrollable; in these conditions, "the vehicle state is outside the region of attraction of the equilibrium trajectory associated to the desired reference." In the single-layer architecture, this phenomenon happens quite often, as it is induced by tire saturation. This situation does not occur with the simple point-mass model of Controller K, "since the path replanner always replans a path starting from the current state of the vehicle, and therefore ensures that the low-level reference is close to the current state. This explains why the performance of the two-level approach is better than the one-level approach." Also, the computational performance of the two-level approach is much better than that of the single-level approach. Hence, the important conclusion is that, for effective automated driving through model predictive control, the separation between the reference generation layer, which should replan the trajectory considering the presence of obstacles, and the control layer (see the introduction of this chapter) is not only convenient for control system simplification but also beneficial to control system performance.

Table 5.5 is included as a summary of the main characteristics of the model predictive controllers discussed in this section, with an overview of the adopted models, the possible control action (lateral control or combined longitudinal and lateral control), the involved complexity, and the form of validation presented in the literature, i.e., through simulations and/or experiments. All the algorithms discussed in this section are based on implicit model predictive control formulations, i.e., the optimizations are run online, which implies a significant computational load for the vehicle control unit. Also, implicit model predictive control does not permit formal analysis of performance, suboptimality, and stability [93]. An option that should be assessed in future research is explicit model predictive control, which has already been successfully implemented, including experiments, for concurrent yaw moment and active steering control in [94]. The validation of robust and explicit model predictive control formulations could represent the next step of path tracking control research.

Concluding Remarks

This chapter provided an overview of control structures for path tracking in autonomous vehicles, ranging from basic kinematic controllers to robust model predictive controllers. The presented analysis and results bring the following conclusions:

• Without disturbances (e.g., caused by side wind and the banking of the road) and uncertainties (e.g., caused by the vision systems), the path tracking performance of simple control structures is adequate and was already experimentally demonstrated in research programs in the 1990s [95]. For an assigned vehicle speed, fixed parameter controllers can deal with the range of axle cornering stiffnesses, vehicle masses, and tire-road friction coefficients typical of real vehicle operation. Apparently, fixed parameter controllers can be effective even in the case of buses and heavy goods vehicles, characterized by significant mass variations during their operation.

• Look-down controllers show significant performance limitations at medium-high speeds.
Effective feedback control design for path tracking must be based on the combination of lateral displacement and heading angle control, or on the lateral displacement evaluated at a look-ahead distance.

• Gain scheduling of the controller parameters, including the preview distance, is recommended as a function of vehicle speed.

• The separation between the reference trajectory generation layer and the control layer is not only convenient for simplifying the automated driving control system design but also allows a better system response from the viewpoint of vehicle stability, combined with reduced computational effort.

• In order to reduce the effect of disturbances and relax the tuning of the feedback part of the path tracking controller, properly designed feedforward contributions are essential. The main limitation of existing feedforward controllers for path tracking is their reliance on very advanced state estimators, which have to provide smooth outputs. Machine learning techniques, adaptive control, and sensor fusion could significantly help with this challenging task.

• An interesting concept is represented by the center of percussion. A path tracking controller based on the position error at the front center of percussion eliminates the effect of the rear axle cornering forces on the lateral position error dynamics, which significantly facilitates the design of a feedback controller for a two-wheel-steering vehicle. A four-wheel-steering path tracking controller based on the lateral position errors at the front and rear centers of percussion simplifies into the design of two decoupled single-input single-output controllers.

• Some of the available experimental studies show that, from the vehicle passengers' perspective, vehicle comfort (determined by the frequency and amplitude of the oscillations induced by the control action) is more important than the excellence of the tracking performance. Unfortunately, only very limited data are available with respect to the comfort behavior of the most recent and best performing path tracking controllers, including analysis of the subjective feedback of the occupants.

• Although many path tracking formulations have been assessed through experimental tests, an objective comparative assessment of the performance of different control structures for the same vehicle and set of state estimators is missing in the literature. On the one hand, the authors presenting the most advanced control formulations tend to show their benefits without going through a comparison with fine-tuned simple controllers. On the other hand, the authors presenting relatively simple control formulations tend to highlight their performance even at medium-high lateral acceleration levels. Also, the subjective performance of the different path tracking formulations, i.e., in terms of the oscillation of the control action and the subsequent vehicle response, should be carefully assessed through experimental tests, in order to draw clear conclusions on the required level of control system sophistication.

• Current research developments are in the area of automated driving for limit conditions, i.e., at extreme lateral and longitudinal accelerations, and in the area of robust model predictive control, with the main aim of systematically dealing with system uncertainty.
• Future research activities should also systematically cover the interaction between automated steering and direct yaw moment control, which currently represents the main actuation technique for stabilizing the vehicle in extreme transient conditions. Direct yaw moment control can be sporadically actuated through the friction brakes within stability control systems [96] or, in the case of vehicles with torque-vectoring differentials or multiple electric drivetrains, can be continuously actuated during normal vehicle operation [97][98][99][100]. Especially in the latter case, further investigations of automated driving during extreme cornering are required, including more detailed and comprehensive experimental demonstrations.

Appendix: Definitions of Invariant Sets, Minkowski Sum ⊕, and Pontryagin Difference ⊖

The following definitions are provided:
(a) Reachable set for systems with external inputs. Consider a system $x_{k+1} = f(x_k, u_k) + w_k$, with $x_k \in \mathcal{X}$, $u_k \in \mathcal{U}$, $w_k \in \mathcal{W}$. The one-step robust reachable set from a given set of states $\mathcal{S}$ is
$$\mathrm{Reach}^f(\mathcal{S}, \mathcal{W}) \triangleq \left\{ x \in \mathbb{R}^n \mid \exists x_0 \in \mathcal{S},\ \exists u \in \mathcal{U},\ \exists w \in \mathcal{W} : x = f(x_0, u) + w \right\};$$
(b) Robust positively invariant set. A set $\mathcal{Z} \subseteq \mathcal{X}$ is said to be a robust positively invariant set for the autonomous system $x_{k+1} = f_a(x_k) + w_k$, with $x_k \in \mathcal{X}$ and $w_k \in \mathcal{W}$, if $x_0 \in \mathcal{Z} \Rightarrow x_k \in \mathcal{Z}$, $\forall w_k \in \mathcal{W}$, $\forall k \in \mathbb{N}_+$;
(c) Minimal robust positively invariant set. The set $\mathcal{Z}_\infty \subseteq \mathcal{X}$ is the minimal robust positively invariant set for the defined autonomous system if $\mathcal{Z}_\infty$ is a robust positively invariant set and $\mathcal{Z}_\infty$ is contained in every closed robust positively invariant set in $\mathcal{X}$ (see [89] for the details);
(d) Minkowski sum. The Minkowski sum of two polytopes, $\mathcal{P}$ and $\mathcal{H}$, is the polytope $\mathcal{P} \oplus \mathcal{H} := \{x + h \in \mathbb{R}^n \mid x \in \mathcal{P},\ h \in \mathcal{H}\}$;
(e) Pontryagin difference. The Pontryagin difference of two polytopes, $\mathcal{P}$ and $\mathcal{H}$, is the polytope $\mathcal{P} \ominus \mathcal{H} := \{x \in \mathbb{R}^n \mid x + h \in \mathcal{P},\ \forall h \in \mathcal{H}\}$.
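To make the appendix definitions concrete, the following is a minimal numeric sketch (not from the chapter) for the special case of axis-aligned boxes, where the Minkowski sum, the Pontryagin difference, and the one-step robust reachable set all reduce to interval arithmetic; the dynamics, the bounds, and the restriction to elementwise-nonnegative A and B are illustrative assumptions. The Pontryagin difference is what tightens the state constraints in tube-based robust model predictive control.

```python
import numpy as np

# Axis-aligned boxes represented as (lower, upper) corner pairs; for boxes,
# the set operations reduce to coordinate-wise interval arithmetic.

def minkowski_sum(p_lo, p_hi, h_lo, h_hi):
    """P (+) H = {x + h | x in P, h in H}: the bounds simply add."""
    return p_lo + h_lo, p_hi + h_hi

def pontryagin_diff(p_lo, p_hi, h_lo, h_hi):
    """P (-) H = {x | x + h in P for all h in H}: P is eroded by H."""
    lo, hi = p_lo - h_lo, p_hi - h_hi
    if np.any(lo > hi):
        raise ValueError("Empty set: H is too large relative to P.")
    return lo, hi

def reach_one_step(A, B, s_lo, s_hi, u_lo, u_hi, w_lo, w_hi):
    """One-step robust reachable set of x+ = A x + B u + w from the box S.
    For elementwise-nonnegative A and B the interval bounds propagate
    directly; the general case needs vertex enumeration (assumption)."""
    x_lo = A @ s_lo + B @ u_lo + w_lo
    x_hi = A @ s_hi + B @ u_hi + w_hi
    return x_lo, x_hi

if __name__ == "__main__":
    # State constraint box X and disturbance box W (illustrative values).
    x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    w_lo, w_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

    # Tightened constraints X (-) W, as used in tube-based robust MPC.
    print("X minus W:", pontryagin_diff(x_lo, x_hi, w_lo, w_hi))
    print("X plus  W:", minkowski_sum(x_lo, x_hi, w_lo, w_hi))

    # One-step robust reachable set for stable diagonal toy dynamics.
    A = np.diag([0.8, 0.5])
    B = np.eye(2) * 0.1
    u_lo, u_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    print("Reach:", reach_one_step(A, B, x_lo, x_hi, u_lo, u_hi, w_lo, w_hi))
```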
2019-04-15T13:06:47.013Z
2017-01-01T00:00:00.000
{ "year": 2016, "sha1": "83c6b590d7be945f7d56c60cdf080ba6b2bc760d", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1007/978-3-319-31895-0_5", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "07a66cb7747ec3ed95d135aab0d1dcd7c271bf3f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
247608007
pes2o/s2orc
v3-fos-license
Vitamin D Metabolites in Nonmetastatic High-Risk Prostate Cancer Patients with and without Zoledronic Acid Treatment after Prostatectomy

Simple Summary

Recent research on prostate cancer and vitamin D is controversial. We measured three vitamin D3 metabolites in 32 selected prostate cancer patients after surgery at four time points over four years. Within a large European study, half of the patients were prophylactically treated with zoledronic acid (ZA); the others received a placebo. After the study start, all the patients took calcium and vitamin D3 daily. The development of metastasis was not affected by ZA treatment. While two vitamin D metabolites had higher values after the study's start, with constant follow-up values, the 1,25(OH)2-vitamin D3 concentrations remained unchanged. The latter form was the only metabolite that was higher in the patients with metastasis as compared to those without bone metastasis. This result is surprising. However, it is premature to discuss a possible prognostic value yet. Our results should be confirmed in larger cohorts.

Abstract

There are limited and discrepant data on prostate cancer (PCa) and vitamin D. We investigated changes in three vitamin D3 metabolites in PCa patients after prostatectomy with zoledronic acid (ZA) treatment regarding their metastasis statuses over four years. In 32 patients from the ZEUS trial, 25(OH)D3, 24,25(OH)2D3, and 1,25(OH)2D3 were measured with liquid chromatography coupled with tandem mass spectrometry at four time points. All the patients received daily calcium and vitamin D3. Bone metastases were detected in 7 of the 17 ZA-treated patients and in 5 of the 15 controls (without ZA), without differences between the groups (p = 0.725). While 25(OH)D3 and 24,25(OH)2D3 increased significantly after the study's start, with constant values thereafter, the 1,25(OH)2D3 concentrations remained unchanged. ZA treatment did not change the levels of the three metabolites. 25(OH)D3 and 24,25(OH)2D3 were not associated with the development of bone metastases. In contrast, 1,25(OH)2D3 was already higher in patients with bone metastasis before the study's start. Thus, in high-risk PCa patients after prostatectomy, 25(OH)D3, 24,25(OH)2D3, and 1,25(OH)2D3 were not affected by supportive ZA treatment or by the development of metastasis over four years, with the exception of 1,25(OH)2D3, which was constantly higher in metastatic patients. There might be potential prognostic value if the results can be confirmed.

Introduction

Several international conferences in recent years have discussed, in detail, the current evidence and the ongoing controversies in vitamin D research [1][2][3].

The basis for the study on vitamin D metabolites presented here was the availability of serum samples from a randomized, open-label study to evaluate the efficacy of ZA treatment for bone-metastasis prevention in high-risk PCa patients [34]. Thus, we were able to initiate this study in a small subset of 32 patients to largely meet the above-described requirements for a valid vitamin D study measuring the three metabolites 25(OH)D3, 24,25(OH)2D3, and 1,25(OH)2D3. With this study, we intended to obtain better insights into the following open issues: (a) the changes in vitamin D metabolites in PCa patients after prostatectomy over four years, (b) possible ZA-treatment effects on the profile of vitamin D metabolites, and (c) abnormalities in the metabolite profile with regard to metastasis during the study.
Patients and Samples

The study was based on vitamin D measurements performed on blood samples available from PCa patients after radical prostatectomy in the ZEUS trial (https://www.isrctn.com/ISRCTN66626762, accessed on 6 February 2022; https://doi.org/10.1186/ISRCTN66626762). This trial was a randomized, open-label study to evaluate the efficacy of ZA treatment for bone-metastasis prevention in high-risk PCa patients [34]. Ethical approval was obtained from local medical ethics committees for all the participating hospitals of this multicenter study, and the patients signed an informed consent form. The details and results of this trial were previously reported [34]. Briefly, the subgroup investigated here consisted of nonmetastatic PCa patients with at least one of three high-risk factors: a Gleason score of 8-10, node-positive disease, or prostate-specific antigen (PSA) at diagnosis ≥20 ng/mL. No other prior PCa treatment (antiandrogen monotherapy, chemotherapy, or treatment with bisphosphonates) was allowed. All the patients were included in this study within 6 months after radical prostatectomy. The patients either received an intravenous infusion of 4 mg of ZA every three months or went without ZA treatment and served as controls. All the patients were prescribed concomitant therapy with a daily 500 mg dose of calcium and 400-500 IU of vitamin D3. Blood samples were collected under standard conditions in BD Vacutainer tubes before the study began and at every three-month visit. Serum samples were prepared and frozen at −80 °C until analysis. We analyzed samples from 32 patients at four time points, as further explained in the Results.

Analytics for Vitamin D Metabolites

The 25(OH)D3 and 24,25(OH)2D3 concentrations were determined with the KM1320 assay, and the concentration of 1,25(OH)2D3 was determined with a development version of the KM1400 assay, both from Immundiagnostik AG, Bensheim, Germany. The vitamin D metabolites were purified by immunoaffinity enrichment, with 1,25(OH)2D3 additionally derivatized for improved detection, and subsequently analyzed by liquid chromatography-tandem mass spectrometry on a QTrap 5500 system coupled to an Exion LC (AB Sciex, Darmstadt, Germany). All the samples were analyzed in two replicates using individual 3-point linear calibration curves. All the calibrants and controls were prepared from certified reference material (Cerilliant Corp., Round Rock, TX, USA) and validated with NIST® SRM® 972a samples, if available. For 1,25(OH)2D3, reference samples are not available, but the calibrants were tested with samples from the Vitamin D External Quality Assessment Scheme (DEQAS). The reproducibility of the measurements was calculated as the within-run precision from the duplicate measurements using the root-mean-square method [35]. The coefficients of variation (and their 95% confidence intervals) were 3.28% (2.92 to 3.76%) for 25(OH)D3, 4.73% (3.30 to 5.82%) for 24,25(OH)2D3, and 8.96% (6.90 to 10.6%) for 1,25(OH)2D3.

Statistical Analysis

MedCalc 20.027 (MedCalc Software, Ostend, Belgium) and GraphPad Prism 9.3.1 (GraphPad Software, La Jolla, CA, USA) were used as statistical programs, as previously described [36]. One-way and two-way analyses of variance (ANOVAs) were performed. Repeated-measures ANOVAs were used for a single-factor study without a grouping variable or for a two-factor study with a specified grouping variable.
Holm and Sidak's multiple-comparison test was applied to account for multiple testing. Pearson correlation analysis was used to determine the strength of the associations between vitamin D3 metabolites. Two-sided p-values < 0.05 were considered statistically significant. The values in the figures are presented as the means ± 95% confidence intervals (95% CIs).

Patient Characteristics and Study Design

The study included a total of 32 patients after radical prostatectomy characterized by at least one of three high-risk factors: a Gleason score of 8-10, node-positive disease, and PSA of ≥20 ng/mL at diagnosis. The individual data of all the patients are summarized in Table S1. Nineteen patients exhibited one high-risk factor (2 × positive nodes, 5 × PSA, and 12 × Gleason score), twelve patients had two factors (2 × PSA plus Gleason, 3 × PSA plus positive nodes, and 7 × Gleason plus positive nodes), and one patient had all three factors. The study started for 16 patients each in winter/spring and summer/autumn (Figure 1). In every patient, repeated measurements of vitamin D metabolites were performed in serum samples taken at four time points: before the study entry as baseline, after 3 and 9 months, and between 27 and 47 months when the study ended or bone metastasis was diagnosed. ZA was administered to 17 patients; 15 patients were controls and did not receive ZA treatment. Bone metastases were detected in 7 of the 17 ZA-treated patients during the study and in 5 of the 15 controls, indicating no significant differences between the two patient groups (Fisher's exact test, p = 0.725). This result corresponded with that of the ZEUS trial [34].

The three vitamin D metabolites 25(OH)D3, 24,25(OH)2D3, and 1,25(OH)2D3 were analyzed in detail. 25(OH)D2 and 1,25(OH)2D2 were also measured, but in all 128 samples, the 25(OH)D2 concentrations were found to be under the lower limit of quantitation of 3.6 nmol/L, and 1,25(OH)2D2 was not detectable. Thus, only the results for the three vitamin D3 metabolites are reported here. The effects of the two abovementioned potential influencing factors "ZA treatment (yes/no)" and "bone metastasis during the study (yes/no)" as well as the seasonal dependency of the vitamin D3 status were evaluated.
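As a quick check, the group comparison reported above (7 of 17 ZA-treated patients versus 5 of 15 controls; p = 0.725) can be rerun with a standard Fisher's exact test; the following minimal sketch is not part of the paper, only an illustration of the test on the stated counts:

```python
from scipy.stats import fisher_exact

# Rows: ZA-treated / controls; columns: bone metastasis yes / no.
table = [[7, 10],   # 7 of 17 ZA-treated patients developed metastases
         [5, 10]]   # 5 of 15 controls developed metastases
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
# A p-value this large (the paper reports p = 0.725) indicates no
# significant difference between the two groups.
```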
Vitamin D3 Metabolites in the Total Study Cohort and Dependency on the Season of the Start of the Study

Figure 2a,c,e provide an overview of the concentration changes for the three metabolites in the total study cohort of the 32 patients during the study at the four measuring points. Statistically significantly increased levels of 25(OH)D3 and 24,25(OH)2D3 were observed within three months after the study's start, with approximately constant values at the two subsequent measuring points. In contrast, the 1,25(OH)2D3 concentrations remained statistically unchanged over the entire study period.

As the seasonal dependency of the vitamin D3 status, with lower concentrations in winter and spring in comparison to summer and autumn, is also well known for PCa patients [22,37,38], we subdivided the patients into two groups with respect to the season of their study entry (Figure 2b,d,f). Lower levels of 25(OH)D3 and 24,25(OH)2D3 were detected in the patients who started the study in winter/spring in comparison to the patients with a study start in summer/autumn. While the patients with a study start in winter/spring showed distinctly increased concentrations of the two metabolites after three months of treatment, only moderately increased levels were found in the patients with summer/autumn study entry (Figure 2b,d). For 1,25(OH)2D3, subdividing the patients had no effect on its concentration behavior over the entire study period (Figure 2f). When evaluating the data, it must be taken into account that all the patients received vitamin D3 supplementation. Thus, the data presented here demonstrate that, even after the first treatment interval of 3 months, an equalization of the 25(OH)D3 and 24,25(OH)2D3 levels for the entire patient cohort over the study period was achieved, regardless of the season of the start of the study. Out of the 32 patients before the study entry, 17 (53%) had 25(OH)D3 concentrations below 50 nmol/L, which is the recommended threshold indicator of vitamin D deficiency in humans [3,8]. Fourteen of the sixteen patients who began the study in winter/spring had values below this threshold. After the first treatment interval of 3 months, only three (9.4%) patients of the total study group remained with values below that limit (Fisher's exact test, p = 0.0003).

Vitamin D3 Metabolites in Relation to the ZA Treatment

The repeated measurements in ZA-treated and ZA-untreated patients resulted in different curves for the respective individual vitamin D3 metabolites during the study (Figure 3). The 25(OH)D3 and 24,25(OH)2D3 levels were significantly lower at study entry in comparison with the levels at the three subsequent study time points, but not significantly different between ZA-treated and ZA-untreated patients at any of the time points (Figure 3a,b). Thus, repeated-measures ANOVA for these two-factor studies showed ZA treatment to be a non-significant source of variation (p-values of 0.219 and 0.240; Figure 3a,b) and the time interval to be a significant source of variation (p-values of 0.0001 and 0.0007; Figure 3a,b). In contrast, the 1,25(OH)2D3 levels did not statistically differ between the ZA-treated and ZA-untreated patients at any of the measuring points (Figure 3c). These data prove that ZA treatment did not alter the levels of the three metabolites during the study.
It can be concluded that the differences in the levels of 25(OH)D3 and 24,25(OH)2D3 observed between the study's start and the subsequent measuring points were due to the concomitant supplementation of vitamin D3 to all the study patients.

Figure 2. Subfigures (a,c,e) present the results of the repeated-measures ANOVA for a single-factor study after treatment in the total cohort (n = 32). The corresponding subfigures (b,d,f) show the results of the two-factor study with repeated-measures ANOVA on the factor "study start" (winter/spring, n = 16, and summer/autumn, n = 16). Repeated measures were performed before the treatment (time point = 0) and 3 and 9 months after the treatment start. The last time point was 39 months (mean value) after the treatment start. Data at the time points are mean values with their 95% confidence intervals. At the error bars, the letters a, b, c, and d indicate statistically significant differences in the vitamin D3 levels between the different measuring points (at least p < 0.05; corrected values according to the Holm-Sidak test): a, compared to "before study"; b, compared to 3 months; c, compared to 9 months; d, compared to ~39 months. Statistically significant differences between the metabolite levels of the two study subgroups at the respective time points are characterized by asterisks: **, p < 0.01; ***, p < 0.001. Abbreviations: ANOVA = analysis of variance; MP factor = related to the time intervals of the measuring points.

Figure 3. At the error bars, the letters a, b, c, and d indicate statistically significant differences in the vitamin D3 levels between the different measuring points (at least p < 0.05; corrected values according to the Holm-Sidak test): a, compared to "before study"; b, compared to 3 months; c, compared to 9 months; d, compared to ~36-42 months. No statistically significant differences for all three metabolite levels were found between the two ZA groups at the respective measuring points. Abbreviations: ANOVA = analysis of variance; ZA = zoledronic acid; MP factor = related to the time intervals of the measuring points.

Vitamin D3 Metabolites in Relation to the Development of Bone Metastasis during the Study

The analysis of the concentrations of 25(OH)D3 and 24,25(OH)2D3 regarding metastasis showed that they corresponded with those observed under the aspect of the ZA treatment (Figure 4). Neither metabolite was associated with the development of bone metastases in patients during the study, as the factor "metastasis" was not a significant source of variation (Figure 4a,b). The time-dependent changes can also be attributed to the concomitant vitamin D3 supplementation. This is in contrast to the very striking 1,25(OH)2D3 profile of the patients who did or did not suffer from bone metastasis during the study (Figure 4c). The patients who developed bone metastasis already had higher 1,25(OH)2D3 values before the study's start compared to those without bone metastasis. This pattern remained throughout the study period. This observation suggests that 1,25(OH)2D3 could be a possible factor associated with the metastatic process in PCa.
Since our study was by no means designed to make prognostic statements, we have only compiled these indicative data in the supplement for interested readers (Figure S1).

Figure 4. Vitamin D3 metabolite levels in patients with and without developed bone metastases during the study. Repeated measures were performed before the treatment (time point = 0) and 3 and 9 months after the treatment start. The last measuring points were 27 and 42 months (mean values) for the patients with (n = 12) and without (n = 20) bone metastasis, respectively. Results of the repeated-measures two-factor ANOVA classified according to the factor metastasis are shown as mean values with their 95% confidence intervals. At the error bars, the letters a, b, c, and d indicate statistically significant differences in the vitamin D3 levels between the different measuring points (at least p < 0.05; corrected values according to the Holm-Sidak test): a, compared to "before study"; b, compared to 3 months; c, compared to 9 months; d, compared to ~27-42 months. Statistically significant differences between the metabolite levels for the two study subgroups at the respective time points are characterized by asterisks: **, p < 0.01; ***, p < 0.001. Abbreviations: ANOVA = analysis of variance; Meta factor = related to the developed bone metastasis; MP factor = related to the time intervals of the measuring points.

Correlations between Vitamin D3 Metabolites

Strong correlations between the 25(OH)D3 and 24,25(OH)2D3 levels were observed at the four measuring points, with correlation coefficients between 0.696 and 0.883 (mean ± SD; 0.776 ± 0.087) and p-values of <0.0001 in all cases. In this respect, the so-called vitamin D metabolite ratio (VMR), calculated as the ratio of 24,25(OH)2D3/25(OH)D3 × 100, is of interest, as this ratio was suggested as an improved indicator of the vitamin D3 status [39]. The close correlation between the two metabolites explains that a similar pattern was observed as for the two individual metabolites (Figure 5).
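To make the VMR definition above concrete, here is a minimal sketch (with invented example values, not study data) that computes 24,25(OH)2D3/25(OH)D3 × 100 and the Pearson correlation between the two metabolites:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired serum values for a handful of patients.
d25   = np.array([45.0, 62.0, 55.0, 78.0, 38.0, 70.0])  # 25(OH)D3, nmol/L
d2425 = np.array([3.1, 5.0, 4.2, 6.8, 2.5, 5.9])        # 24,25(OH)2D3, nmol/L

# Vitamin D metabolite ratio (VMR), as defined in the text.
vmr = 100.0 * d2425 / d25
print("VMR per patient:", np.round(vmr, 1))

# Strength of the association between the two metabolites.
r, p = pearsonr(d25, d2425)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```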
Discussion

Our study showed that the main vitamin D metabolites 25(OH)D3, 24,25(OH)2D3, and 1,25(OH)2D3 were not affected in high-risk PCa patients who received ZA as supportive-care treatment over about 4 years. The ZA-treated patients and controls without ZA, who had 25(OH)D3 concentrations below the deficiency threshold of 50 nmol/L [1,40] due to a study start in winter/spring, achieved stable levels above this limit after 3 months with a daily concomitant supplementation of 400-500 IU of vitamin D cholecalciferol. These data additionally indicate good patient compliance with the supplement administration, in contrast to other reports [41]. Simultaneously, stable 24,25(OH)2D3 levels were also observed afterwards, during the three subsequent time intervals. The occurrence of bone metastases also did not result in altered profiles for these two metabolites. The metabolite 1,25(OH)2D3 likewise did not show profile changes during the entire observation time, but it was completely unaffected by the cholecalciferol supplementation, in contrast to 25(OH)D3 and 24,25(OH)2D3. In addition, it was remarkable that patients with metastasis already had higher concentrations of 1,25(OH)2D3 in comparison to those patients without metastasis at the study's beginning and during the entire study. Thus, the results partly differed for 25(OH)D3 and 24,25(OH)2D3, on the one hand, and for 1,25(OH)2D3, on the other hand. Therefore, it is advisable to discuss the data for the metabolites separately.

It is currently generally accepted that the circulating 25(OH)D3 is the best indicator characterizing the vitamin D status [1]. However, a final consensus about the definition of vitamin D deficiency based on a cutoff level of 25(OH)D3 was not reached at the last International Conferences on Controversies in Vitamin D [1][2][3][8]. The Endocrine Task Force on Vitamin D defined a 25(OH)D3 level of 50 nmol/L as a deficiency cutoff [42]. This cutoff was also recommended by the Institute of Medicine, USA [40]. A higher threshold of 75 nmol/L was suggested by other expert groups [8]. This absence of consensus results mainly from the lack of traceability and harmonization/standardization of the various 25(OH)D3 assays that were applied in the different studies [3,43,44]. Our study revealed that, after treatment with vitamin D cholecalciferol, the high percentage of patients with deficient levels of 25(OH)D3 below 50 nmol/L at the start of the study could be reduced from 53% to 9.4%. In guidelines and comments, a daily supplement dosage of 10 to 50 µg (400-2000 IU) of vitamin D has been recommended to achieve at least this threshold of 50 nmol/L [8,40,[45][46][47][48][49]. As the half-life of circulating 25(OH)D3 is estimated to be approximately 15 days [50], this daily supplementation results in a steady state of 25(OH)D3 after three to four months [48,51]. The increase in circulating 25(OH)D3 depends on the baseline level and the dose of the supplemented vitamin D. For an initial level of 25 nmol/L, an increase to more than 60 nmol/L was reported with a daily supplement of 400 IU for three months [52]. This corresponds with the observation in our study (Figure 2b).
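The steady-state timing quoted above can be illustrated with a simple one-compartment accumulation model (an illustrative assumption, not the paper's pharmacokinetic analysis): with a half-life of about 15 days, a constant daily intake brings the circulating level to within a few percent of its plateau after roughly four to six half-lives, i.e., two to three months.

```python
import numpy as np

# One-compartment accumulation toward a steady state S under constant
# daily dosing: level(t) = S * (1 - exp(-ln(2) * t / t_half)).
t_half = 15.0   # assumed 25(OH)D3 half-life in days, cf. [50]
S = 1.0         # steady-state level, normalized

for t in (30, 60, 90, 120):   # days of continuous supplementation
    frac = 1.0 - np.exp(-np.log(2) * t / t_half)
    print(f"day {t:3d}: {100 * frac:5.1f} % of steady state")
# Day 90 already reaches about 98 %, consistent with a plateau being
# observed after roughly three months of supplementation.
```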
A similar pattern was visible for the metabolite 24,25(OH)2D3. The first and second hydroxylation steps converting 25(OH)D3 to 24,25(OH)2D3 and 1,25(OH)2D3, respectively, are likely to be inversely regulated by the same effectors (parathormone, 1,25(OH)2D3, and fibroblast growth factor 23/klotho) [53][54][55], but there remains a close relationship between the two metabolites 25(OH)D3 and 24,25(OH)2D3 in the bloodstream. This is reflected in the strong mean Pearson correlation coefficient of 0.776 over the entire study period. The VMR, calculated as the ratio between 24,25(OH)2D3 and 25(OH)D3, also confirms the increased levels of both metabolites after three months of the study compared with the baseline values at the study's beginning (Figure 5). The VMR has been proposed as a more sensitive indicator for monitoring vitamin D intake [56][57][58], but some recent studies failed to confirm this advantage over the assessment based on 25(OH)D3 only [39,59,60].

Several studies in osteoporosis patients treated with ZA or other bisphosphonates have shown that there is a close relationship between the observed increased bone-mineral density and the circulating 25(OH) vitamin D concentration [33,[61][62][63][64]. To achieve this treatment effect, different threshold levels of 25(OH) vitamin D have been reported. This is certainly because the assays used in the various studies did not show clear traceability [3,33]. However, despite these conflicting data, studies concerning the usefulness of ZA in PCa patients have generally been performed with a concomitant supply of vitamin D in both the trial and placebo arms [27,34]. Follow-up data for vitamin D metabolites, however, are lacking. The less-satisfactory evidence for vitamin D in combination with bisphosphonates has been summarized in a meta-analysis [65]. Of the 27 randomized studies reviewed there, only three used ZA monotherapy without the administration of vitamin D, and the authors of one of them advised the prophylactic administration of vitamin D and the monitoring of vitamin D levels in these patients [23]. In this respect, our follow-up data for the vitamin D metabolites support this recommendation. The data show that, with a daily medication of 10 to 12.5 µg (400-500 IU) of cholecalciferol, a long-term level of 25(OH)D3 of >50 nmol/L can be achieved. In recent PCa guidelines, ZA has been recommended as a bone-protective agent and for pain relief in castration-resistant PCa patients and those with bone metastases [29,30]. Thus, ZA continues to be an important component of PCa management, even though the primary expectations of preventing bone metastases were not met [30,34].

For 1,25(OH)2D3, the reference range (95% confidence interval) in the serum/plasma of healthy adults (between 20 and 70 years) has been determined to be 59 to 159 pmol/L [66]. The circulating 1,25(OH)2D3 accounts for only approximately 0.1% of 25(OH)D3. A comparable proportion of the two metabolites was detected in prostate tissue, and their concentrations were correlated with serum levels [67,68]. The baseline concentrations of 1,25(OH)2D3 in our PCa cohort were within this reference range, except for three patients with lower values. Moreover, the repeated measures in our study showed that the concentrations did not significantly change over the entire period.
There were no increased 1,25(OH)2D3 values due to the supplementation of cholecalciferol, in contrast to 25(OH)D3 and 24,25(OH)2D3, particularly during the first treatment interval (Figure 2e,f) and for the subclassification with/without metastasis (Figure 4c). The circulating 1,25(OH)2D3 is strictly controlled by a multiregulatory feedback system consisting of parathormone, fibroblast growth factor, calcium, phosphate, and 1,25(OH)2D3 itself [69]. In consequence, normal circulating levels of 1,25(OH)2D3 are largely ensured by the adequate synthesis of 1,25(OH)2D3 from its precursor 25(OH)D3, even at moderately decreased concentrations of 25(OH)D3 [66,70]. This was also evident in our study and, likewise, explains the missing correlations between 1,25(OH)2D3 and 25(OH)D3 or 24,25(OH)2D3. Other studies with and without additional vitamin D intake also reported missing correlations or low coefficients for the correlation between 1,25(OH)2D3 and 25(OH)D3 [38,[70][71][72][73][74][75]. Because 1,25(OH)2D3 synthesis functions sufficiently even with a limited 25(OH)D3 substrate supply, as long as a severe vitamin D deficiency is not present, it is understandable that 1,25(OH)2D3 is not considered a valid marker of global vitamin D deficiency [70,76].

Our finding of higher 1,25(OH)2D3 levels in patients with subsequent metastasis during the study, compared with distinctly lower levels in patients without progression, was therefore surprising. However, it should be pointed out that the higher values in the metastasized PCa group were always within the reference range of circulating 1,25(OH)2D3 [66]. Significantly, the elevated baseline values were confirmed during the study. This is in contrast to other PCa and cancer studies in which increased levels of circulating 1,25(OH)2D3 were associated with improved outcome data [20,67,77,78]. Numerous preclinical studies based on cell-culture experiments and animal studies showed that 1,25(OH)2D3 inhibits the proliferation, migration, and invasion of cancer cells; suppresses angiogenesis; activates the apoptosis and differentiation of cells; or synergistically potentiates the antitumor activity of chemotherapeutic agents [5,[79][80][81][82][83][84][85][86]. Since 1,25(OH)2D3 is the actual active vitamin D metabolite, these experimental data are also used as arguments to confirm the hypothesis of an anticancer effect of vitamin D [5,87]. However, it is noticeable that the 1,25(OH)2D3 concentrations used in these experiments are often 100-1000-fold higher than those detected in the bloodstream and target tissues [67,[88][89][90]. This obvious contradiction has largely been ignored in the literature to date [91]. Furthermore, other experiments with a transgenic prostate mouse model showed enhanced distant metastasis upon prolonged treatment with 1,25(OH)2D3 [92]. Increased metastasis in treatment experiments with 1,25(OH)2D3 was also observed in a model of mammary-gland cancer in mice, depending on the age of the mice [93,94]. We interpret the higher 1,25(OH)2D3 in the subgroup of PCa patients with metastasis after prostatectomy as a possible reflection of the interrelated complex action of this vitamin metabolite.
1,25(OH)2D3 not only directly influences tumor development via the vitamin D receptor, as mentioned above, but also indirectly modulates this process through crosstalk with the tumor microenvironment, different immunological pathways, and the functional interplay between the vitamin D and androgen receptors [6,7,9,95]. It is also conceivable that the higher serum levels of 1,25(OH)2D3 in the case of the subsequently metastasized PCa subcohort led, through the C23 and C24 metabolic pathways for 1,25(OH)2D3, to higher levels of their intermediates in cells [96]. These intermediates, about which very little is yet known [69], could favor direct or indirect carcinogenesis-promoting effects. Obviously, studies on their possible molecular mechanisms require experiments with biologically relevant concentrations, as already critically discussed above. On the other hand, this association between higher 1,25(OH)2D3 levels and subsequent metastasis does not necessarily imply a causal relationship between the two observations. Due to the lack of corresponding follow-up data for the vitamin D metabolites in other studies, these particular results have likely not been captured to date. However, we think it is important to point out these findings so that they can be verified in other studies and provide potential prognostic decision support.

Some limitations of our study should be mentioned when interpreting the results. First, it was a retrospective study with a limited sample size of patients and without external validation. Second, only the three essential vitamin D metabolites could be measured due to the limited availability of the sample material. Third, all the patients in both the study and control arms received vitamin D and calcium. Despite these limitations, we consider the results of this study to provide interesting information for understanding open questions in the ongoing vitamin D debate in practice. The strength of our study is based on the use of sophisticated analytical methods with traceability and good analytical performance as well as the strict adherence to the requirements for valid vitamin D studies.

Conclusions

The two vitamin D metabolites 25(OH)D3 and 24,25(OH)2D3 were not affected by supportive ZA treatment or the development of metastasis over four years in our selected cohort of high-risk PCa patients after prostatectomy. Surprisingly, the low-abundance metabolite 1,25(OH)2D3 was already higher before the study's start in patients who developed bone metastasis compared to those without bone metastasis. Before potential prognostic decision support can be provided, verification in other studies is necessary.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers14061560/s1. Figure S1: 1,25(OH)2D3 level as prognostic indicator of subsequent bone metastasis after radical prostatectomy; Table S1: Clinicopathological risk factors of the study patients with and without zoledronic acid treatment and subsequent metastasis.

Institutional Review Board Statement: The present study was part of the registered and approved trial "Effectiveness of Zometa® treatment for the prevention of bone metastases in high risk prostate cancer patients: a randomized, open-label, multicenter study of the European Association of Urology in Cooperation with the Scandinavian Prostate Cancer Group and the Arbeitsgemeinschaft Urologische Onkologie" (ISRCTN66626762, https://doi.org/10.1186/ISRCTN66626762).
This substudy focused on secondary outcome measures of bone health, as indicated in the trial protocol. The study was conducted in accordance with the Declaration of Helsinki and received ethics approval from the local medical ethics committees of the participating study centers, as indicated in [34]. Informed Consent Statement: Informed consent was obtained from all the subjects involved in the study. Data Availability Statement: The data presented in this study are available upon reasonable request from the corresponding author.
2022-03-23T15:28:50.021Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "fba42b3d282b58fa849c5d470060f6851c31c01f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/14/6/1560/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0c5e0d7731f1cf30df55d328d45ef396a3333b3d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52174940
pes2o/s2orc
v3-fos-license
Disseminated Strongyloidiasis in Association with Nephrotic Syndrome

Strongyloidiasis is a well-known parasitic infection endemic in tropical and subtropical areas of the world. While most infected individuals are asymptomatic, strongyloidiasis-related glomerulopathy has not been well documented. We present a case of disseminated strongyloidiasis in a patient with minimal change nephrotic syndrome treated with high-dose corticosteroids. The remission of nephrotic syndrome after treatment of strongyloidiasis suggests a possible causal relationship between Strongyloides and nephrotic syndrome.

Introduction

Strongyloidiasis is a common parasitic infection caused by Strongyloides stercoralis. It is endemic in tropical and subtropical countries, including Southeast Asia, sub-Saharan Africa, South America, and Eastern Europe, with prevalence rates reported as high as 30%. Under normal immunological conditions, intestinal infection with this nematode is usually associated with asymptomatic carrier status. However, disseminated strongyloidiasis occurs with immune suppression and has been described in patients with organ transplantation, leukaemia, and human immunodeficiency virus infection. While parasite-related glomerular diseases have been studied, strongyloidiasis-related glomerulopathy has not been well documented. Several case reports have suggested a causal relationship between strongyloidiasis and nephrotic syndrome, but the relationship remains controversial. We review the literature and report a case of disseminated strongyloidiasis in a patient with minimal change nephrotic syndrome treated with high-dose corticosteroids. The remission of nephrotic syndrome after treatment of strongyloidiasis suggests that the nephrotic syndrome could have been related to Strongyloides immune complex deposition.

Case Report

A 67-year-old Hispanic male with a medical history of hypertension, who was originally from Puerto Rico but had moved to the USA in adulthood, presented with nausea, bloating, and generalized oedema for 1 month in December 2014. He did not have fever, chills, cough, or diarrhoea. He was a non-smoker and did not drink alcohol or use illicit drugs. On initial physical examination, he was afebrile, with a blood pressure of 168/100 mm Hg and a heart rate of 80 beats/min. He had periorbital swelling and bilateral lower extremity oedema. His cardiovascular, respiratory, abdominal, and neurological examinations were otherwise unremarkable. Initial laboratory tests showed a leucocyte count of 12,800/µL with 15.5% eosinophilia, serum creatinine of 2.2 mg/dL, serum albumin of 2.3 g/dL, and total proteinuria of 11.1 g/day. HBs antigen and anti-HCV antibody were negative, and abdominal ultrasound showed normal-sized kidneys. Electrocardiogram and chest radiograph were normal. A percutaneous renal biopsy was performed. Light microscopy revealed 13 glomeruli. Electron microscopy revealed complete podocyte foot process effacement, and immunofluorescence revealed linear, segmental to global glomerular basement membrane (GBM) staining for IgG and kappa of unclear significance. At this point, idiopathic minimal change disease was suspected, as no clear aetiology or inciting factor was identified. The patient was started on prednisone 80 mg daily, and treatment resulted in improvement of proteinuria to 0.47 g/day by day 33. A month following initiation of corticosteroids, the patient was evaluated for loss of appetite, fatigue, minimally productive cough, vomiting, and diarrhoea. The diarrhoea was nonbloody and watery, occurring after meals.
Chest radiograph revealed a new interstitial reticulonodular pattern of unclear significance. Abdominal radiograph revealed an abnormal gas collection thought to be due to small bowel obstruction, which was managed conservatively. Due to the development of melena, he underwent diagnostic endoscopy, which revealed oedematous gastric and duodenal mucosa (Fig. 1). Biopsy of the lesions showed chronic active gastritis with the presence of larvae/nematodes within the mucosa, consistent with a diagnosis of strongyloidiasis (Fig. 2, 3). The patient was treated with 200 µg/kg ivermectin for 5 days, and his diarrhoea promptly improved. No ova or parasites were noted in the stool 2 weeks later. Based on a literature review, it was hypothesized that his nephrotic syndrome and minimal change disease may have in fact been caused by Strongyloides and only unmasked during steroid therapy. It was therefore decided that his proteinuria would be closely followed off steroids, and it did indeed resolve 3 months after infection eradication.

Discussion

S. stercoralis is usually associated with an asymptomatic carrier state. However, under certain conditions, most notably a compromise of host immunity, fulminant disease may develop, with greater than 50% mortality despite adequate treatment [1]. In symptomatic patients, the most common presentations are fever, nausea, vomiting, and diarrhoea. In disseminated disease, the nematode can affect virtually every organ, including the brain, lungs, and kidneys [2]. Cases of strongyloidiasis-associated glomerulopathy have rarely been reported. An extensive review of the medical literature identified 7 other cases of strongyloidiasis associated with nephrotic syndrome (Table 1). In 7 out of 8 cases, the patient's initial presentation was with features of nephrotic syndrome, and all patients were found to have minimal change disease on renal biopsy. In the majority of cases, nephrotic syndrome was diagnosed first, and only after the initiation of steroids was Strongyloides unmasked. Corticosteroid therapy could alter immunological mechanisms and play an important role in the abrupt exacerbation of Strongyloides infection. Interestingly, Hsieh et al. [3] reported a patient with initial manifestations strongly related to strongyloidiasis who later developed nephrotic syndrome. Heavy proteinuria resolved following treatment with anthelmintic therapy without the use of steroids, suggestive of a possible causal relationship between S. stercoralis infection and minimal change disease. The pathogenesis of Strongyloides-associated glomerulopathy is postulated to be due to immune complex deposition, as supported by the fact that the Strongyloides antigen has been demonstrated in renal biopsy tissue [4]. A common finding among the reported cases is the presence of peripheral eosinophilia. Our patient had eosinophilia, but this was not a persistent finding. Studies have shown that eosinophilia is not always present; it usually fluctuates and is even absent in more than 50% of patients with severe infection [5]. Hence, a high index of suspicion is required for prompt diagnosis. The diagnosis of Strongyloides can be confirmed by the presence of larvae in stool or duodenal aspirate. Duodenal biopsy usually shows blunting of the villi and invasion by the parasite and ova. The standard first-line therapy for Strongyloides infection includes thiabendazole, ivermectin, and albendazole, with comparable eradication rates [6][7][8].
Conclusions

When a patient who resides in an endemic part of the world presents with nephrotic syndrome, a thorough review of systems should be done to rule out active Strongyloides infection. Our literature review suggests that active symptoms are typically unmasked only after steroids are initiated, so ongoing monitoring of any new or chronic symptoms is vital, especially as the usual treatment of nephrotic syndrome can cause lethal disseminated Strongyloides infection. To date, there is 1 published case of symptomatic Strongyloides with subsequent development of nephrotic syndrome that remitted with anthelmintic treatment, and multiple other cases where infectious symptoms were unmasked by nephrotic syndrome treatment. Due to this increasingly reported association, more research is warranted to establish whether there is a clear causal relationship.
2018-09-16T05:47:13.008Z
2018-05-01T00:00:00.000
{ "year": 2018, "sha1": "950b58caaa27d7cd4f24cdc2074a60cf6c56f743", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/491632", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "950b58caaa27d7cd4f24cdc2074a60cf6c56f743", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118433592
pes2o/s2orc
v3-fos-license
Inclusive $b$-jet and $b\bar{b}$-dijet production at the LHC via Reggeized gluons

We study inclusive $b$-jet and $b\bar b$-dijet production at the CERN LHC invoking the hypothesis of gluon Reggeization in $t$-channel exchanges at high energy. The $b$-jet cross section includes contributions from open $b$-quark production and from $b$-quark production via gluon-to-bottom-pair fragmentation. The transverse-momentum distributions of inclusive $b$-jet production measured with the ATLAS detector at the CERN LHC in different rapidity ranges are calculated both within multi-Regge kinematics and quasi-multi-Regge kinematics. The $b\bar b$-dijet cross section is calculated within quasi-multi-Regge kinematics as a function of the dijet invariant mass $M_{jj}$, the azimuthal angle between the two jets $\Delta\phi$ and the angular variable $\chi$. In the numerical calculation, we adopt the Kimber-Martin-Ryskin and Blümlein prescriptions to derive the unintegrated gluon distribution function of the proton from its collinear counterpart, for which we use the Martin-Roberts-Stirling-Thorne set. We find good agreement with measurements by the ATLAS and CMS Collaborations at the LHC at the hadronic c.m. energy of $\sqrt{S}=7$ TeV.

I. INTRODUCTION

The study of b-jet hadroproduction provides an important test of perturbative quantum chromodynamics (QCD) at high energies. The total collision energies, √S = 1.8 TeV and 1.96 TeV in Tevatron runs I and II, respectively, and √S = 7 TeV or 14 TeV at the LHC, sufficiently exceed the characteristic scale µ of the relevant hard processes, which is of the order of the b-jet transverse momentum p_T, i.e., we have Λ_QCD ≪ µ ≪ √S. In this high-energy regime, the so-called "Regge limit", the contribution of partonic subprocesses involving t-channel parton (gluon or quark) exchanges to the production cross section can become dominant. Then the transverse momenta of the incoming partons and their off-shell properties can no longer be neglected, and we deal with "Reggeized" t-channel partons. These t-channel exchanges obey multi-Regge kinematics (MRK) when the particles produced in the collision are strongly separated in rapidity. If the same situation is realized with groups of particles, then quasi-multi-Regge kinematics (QMRK) is at work. In the case of inclusive b-jet and bb-dijet production, this means the following: a b-jet (MRK) or a bb-dijet (QMRK) is produced in the central region of rapidity, while the other particles are produced with large moduli of rapidity.

The parton Reggeization approach [1] is based on the hypothesis of parton Reggeization in t-channel exchanges at high energy [2]. It has been used for the description of a large number of hard processes at the modern hadron colliders, and the obtained results confirm the assumption of a dominant role of the MRK or QMRK production mechanisms at high energy. This approach was successfully applied to interpret the production of isolated jets [3], prompt photons [4], diphotons [5], charmed mesons [6], and heavy quarkonia [7][8][9][10] measured at the Fermilab Tevatron, at DESY HERA and at the CERN LHC. The theoretical background of the parton Reggeization approach is the effective quantum field theory implemented with the non-Abelian gauge-invariant action including fields of Reggeized gluons [2] and Reggeized quarks [11], which was proposed by L. N. Lipatov in 1995 [12]. In this effective theory, Reggeized partons interact with quarks and Yang-Mills gluons in a specific way. Recently, in Ref.
[13], the Feynman rules for the effective theory of Reggeized gluons were derived for the induced and some important effective vertices. Usually it is suggested that the MRK or QMRK production mechanism is dominant only at small p_T values. Our recent study of isolated jet production at the Tevatron and LHC colliders, see Ref. [3], demonstrated that the parton Reggeization approach can be successfully used already in the range of x_T = 2p_T/√S ≲ 0.1, i.e., up to p_T ≲ 300-400 GeV for the energy √S = 7 TeV at the LHC. This result motivates us to apply the parton Reggeization approach to the study of b-jet and bb-dijet production in the kinematical range of transverse momentum 20 < p_T < 400 GeV and rapidity |y| < 2.1, as measured by the ATLAS Collaboration at the CERN LHC [14]. The high-energy factorization scheme with the effective vertices for Reggeized gluons has been used earlier in Refs. [15,16] for the description of inclusive open b-quark [17], b-jet [18] and bb-dijet [19] production at the Tevatron collider.

In this paper, we study in the same manner the inclusive b-jet and bb-dijet production at the CERN LHC, invoking the hypothesis of gluon Reggeization in t-channel exchanges at high energy. We take into account two mechanisms of b-jet production: open b-quark production and "jet-like" b-quark production via gluon-to-bottom-pair fragmentation [20]. We consider a b-quark jet as an isolated hadronic jet, defined by the jet-cone condition [21], containing one b(b) quark or a bb pair. Thus, the b-jet production cross section can be written as a sum of two terms. The first one represents so-called "open b-quark" production, when the b-jet contains a b(b) quark which is produced directly in the hard partonic subprocess. The second term corresponds to the case of "jet-like" production, where a b-jet contains a bb pair which is produced via gluon or light-quark fragmentation. The transverse-momentum distributions of inclusive b-jet production measured with the ATLAS detector at the CERN LHC [14] in the different rapidity ranges are calculated both within multi-Regge kinematics and quasi-multi-Regge kinematics. The bb-dijet cross sections are calculated within quasi-multi-Regge kinematics as functions of the bb-dijet invariant mass M_jj, the azimuthal angle between the two jets ∆φ and the angular variable χ.

This paper is organized as follows. In Sec. II the parton Reggeization approach is briefly reviewed, and we write down the analytical formulas for the squared matrix elements and differential cross sections relevant for our analysis. In Sec. III, we describe our calculations and present the results obtained. In Sec. IV, the conclusions are summarized.

II. MODEL

We study b-jet production in the region of large b-quark transverse momentum, p_T ≫ m_b, where m_b is the b-quark mass. At present, the conventional approach for the calculation of b-quark production cross sections is based on the next-to-leading-order (NLO) approximation in perturbative QCD and the collinear parton model [22]. It is well known that fixed-order perturbative QCD calculations are applicable when the transverse momentum p_T of the produced heavy b-quark is not much larger than its mass m_b. When the transverse momentum significantly exceeds the mass, large logarithms of the type log(p_T/m_b) arise at all orders of α_s(µ), so that a fixed-order approach breaks down [23].
It is possible to resum all these logarithms in the fragmentation approach using the factorization theorem, which states that the cross section for the production of a high-p_T b-quark can be written in factorized form as a convolution of the short-distance partonic cross section for the production of a parton f with the fragmentation function $D_f^b(z, \mu^2)$ for the formation of a b-quark from the parton f:

$$d\sigma(b) = \sum_f \int_0^1 dz\, d\hat\sigma_f(\mu^2)\, D_f^b(z, \mu^2). \qquad (1)$$

The fragmentation functions for heavy quarks in perturbative QCD have been studied at next-to-leading order (NLO) in Ref. [24]. The experimentally measured transverse energy E_T (or transverse momentum p_T) of a b-jet includes the transverse energies (transverse momenta) of all partons inside a jet cone in the rapidity-azimuthal-angle plane, whose radius is defined as $R = \sqrt{\Delta y^2 + \Delta\phi^2}$ [21]. In such a way, it does not matter which fraction of the initial parton four-momentum is transferred to the b-quark, and we can simplify formula (1) to the form

$$d\sigma(b\text{-jet}) = \sum_f d\hat\sigma_f(\mu^2)\, n_f(\mu), \qquad (2)$$

where $n_f(\mu) = \int_0^1 D_f^b(z, \mu)\, dz$ is the b-quark multiplicity in the f-parton jet. It is obvious that the b-quark multiplicity in a gluon-initiated jet greatly exceeds the b-quark multiplicity in any light-quark-initiated jet, n_g(µ) ≫ n_q(µ) with q = u, d, s, c. We take into account only the main contribution, from the gluon-to-bottom-pair fragmentation g → bb. Let us note that in this case the bb pair is considered as a single b-quark jet.

To describe the inclusive b-jet and bb-dijet cross sections in terms of the parton Reggeization approach, at LO we need to consider only the gluon fusion subprocesses of open b-quark and gluon production, which are dominant at high energy:

$$R(q_1) + R(q_2) \to g(p), \qquad (3)$$
$$R(q_1) + R(q_2) \to b(p_1) + \bar b(p_2), \qquad (4)$$
$$R(q_1) + R(q_2) \to g(p_1) + g(p_2), \qquad (5)$$

where R is a Reggeized gluon and g is a Yang-Mills gluon, with four-momenta indicated in parentheses. The contribution of the partonic subprocess (5) can be neglected in comparison with the contribution of the subprocess (4) because of the strong suppression by the g → bb fragmentation (n_g ≈ 10⁻³) for both produced gluons. In Ref. [16] it was shown that in the Tevatron energy range, the contribution of the subprocesses Q + Q̄ → g and Q + Q̄ → b + b̄ with initial Reggeized quarks is significantly smaller than the dominant contribution of subprocesses (3) and (4), and the former becomes sizeable only at very large b-jet transverse momentum p_T. As the LHC energy exceeds that of the Tevatron collider by a factor of 3.5, we estimate the quark-antiquark annihilation contribution to be even smaller and therefore do not consider it in the present analysis.

In a study of high-transverse-momentum b-quark production (p_T ≫ m_b) in the collinear parton model, one has an additional b-quark production mechanism, namely production via b-flavor excitation, where b(b) quarks are considered as partons in the colliding protons. For example, this mechanism has been used successfully to describe B-meson p_T spectra at the Tevatron and LHC in NLO calculations of the parton model [27]. We used a similar idea in our previous study of inclusive b-jet production at the Tevatron within the parton Reggeization approach [16]. In that work we took into account the LO in α_s contribution from the 2 → 1 partonic subprocess

$$B(q_1) + R(q_2) \to b(p), \qquad (6)$$

where B is a Reggeized b-quark. As shown in Fig. 1 of Ref. [16], the sum of this contribution and the contribution from subprocess (4) strongly overestimates the experimental data. In the present analysis we ignore the contribution from subprocess (6).
First, we avoid any chance of double-counting between subprocesses (4) and (6). Second, the concept of quark Reggeization for a b-quark inside a proton seems to be wrong. b-quarks are produced preferentially at the last steps of the QCD evolution, at the large scale µ ∼ p_T, and their PDF is proportional to the large logarithm log(p_T/m_b). However, the QCD evolution of a Reggeized parton should be valence-like. This means that the Reggeized parton must be a t-channel parton throughout all steps of the QCD evolution in the parton ladder. But the conventional collinear b-quark PDF, which we would take as input for the KMR (Blümlein) prescription to obtain the b-quark unintegrated PDF, satisfies a sea-like QCD evolution. For this reason we would strongly overestimate the value of the b-quark unintegrated PDF. A more adequate way would be to consider the subprocess bR → b with a collinear b-quark in the initial state, instead of subprocess (6). But even in this case the problem of double-counting would still exist. That is why in the present study we consider only Reggeized-gluon-induced contributions, like (3) and (4).

The squared amplitude of subprocess (3) reads [7,16]

$$\overline{|\mathcal{M}(R + R \to g)|^2} = \frac{3}{2}\,\pi\alpha_s\,\mathbf{p}_T^2, \qquad (7)$$

where $\mathbf{p}_T = \mathbf{q}_{1T} + \mathbf{q}_{2T}$, so that $\mathbf{p}_T^2 = t_1 + t_2 + 2\sqrt{t_1 t_2}\cos\phi_{12}$ with $t_{1,2} = \mathbf{q}_{1,2T}^2$, with q_1T and q_2T representing the transverse momenta of the initial Reggeized gluons, and φ_12 being the azimuthal angle enclosed between them.

Exploiting the hypothesis of high-energy factorization, we express the hadronic cross sections dσ as convolutions of partonic cross sections dσ̂ with the unintegrated PDFs Φ_g^h of the Reggeized gluon in the hadrons h. For the processes under consideration here, we have

$$d\sigma = \int\frac{dx_1}{x_1}\int\frac{d^2 q_{1T}}{\pi}\int\frac{dx_2}{x_2}\int\frac{d^2 q_{2T}}{\pi}\,\Phi_g^{h_1}(x_1,t_1,\mu^2)\,\Phi_g^{h_2}(x_2,t_2,\mu^2)\,d\hat\sigma. \qquad (8)$$

The unintegrated PDFs Φ_g^h(x, t, µ²) are related to their collinear counterparts F_g^h(x, µ²) by the normalization condition

$$\int_0^{\mu^2}\Phi_g^h(x,t,\mu^2)\,dt = F_g^h(x,\mu^2), \qquad (9)$$

which yields the correct transition from formulas in the parton Reggeization approach to those in the collinear parton model, where the transverse momenta of the partons are neglected.

In our numerical analysis, we adopt as our default the prescription proposed by Kimber, Martin, and Ryskin (KMR) [31] to obtain the unintegrated gluon PDF of the proton from the conventional integrated one, as implemented in Watt's code [32]. A precise analysis of the KMR unintegrated gluon PDF was performed in Ref. [33], including an accurate study of the dependence on the choice of collinear input. As is well known [34], other popular prescriptions, such as those by Blümlein [35] or by Jung and Salam [36], produce unintegrated PDFs with distinctly different t dependences. In our analysis we do not evaluate the unintegrated gluon PDF after Jung and Salam [36], because this PDF has been tabulated only in the range t, µ² ≤ 10⁴ GeV². This is not sufficient to calculate b-jet production cross sections up to p_T = 400 GeV, in accordance with the measurements of the relevant experiments; in fact, we had to use the unintegrated gluon PDF up to t, µ² ≤ 10⁶ GeV². In order to assess the resulting theoretical uncertainty, we also evaluate the unintegrated gluon PDF using the Blümlein approach, which resums small-x effects according to the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation [2]. As input for these procedures, we use the LO set of the Martin-Roberts-Stirling-Thorne (MRST) [37] proton PDF as our default. The relevant theoretical studies of particle production in the high-energy factorization scheme using the KMR and Blümlein unintegrated gluon PDFs [4]-[9] demonstrate that both unintegrated PDFs lead to a similar behavior of the production spectra at moderate particle transverse momenta (p_T ≤ 20 GeV).
In our numerical analysis, we adopt as our default the prescription proposed by Kimber, Martin, and Ryskin (KMR) [31] to obtain the unintegrated gluon PDF of the proton from the conventional integrated one, as implemented in Watt's code [32]. A detailed analysis of the KMR unintegrated gluon PDF, including a careful study of the dependence on the choice of collinear input, was performed in Ref. [33]. As is well known [34], other popular prescriptions, such as those by Blümlein [35] or by Jung and Salam [36], produce unintegrated PDFs with distinctly different t dependences. In our analysis we do not evaluate the unintegrated gluon PDF of Jung and Salam [36], because that PDF has been tabulated only in the range t, µ² ≤ 10⁴ GeV², which is not sufficient to calculate b-jet production cross sections up to p_T = 400 GeV, as required by the measurements of the relevant experiments; in fact, we need the unintegrated gluon PDF up to t, µ² ≤ 10⁶ GeV². In order to assess the resulting theoretical uncertainty, we also evaluate the unintegrated gluon PDF using the Blümlein approach, which resums small-x effects according to the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation [2]. As input for these procedures, we use the LO set of the Martin-Roberts-Stirling-Thorne (MRST) [37] proton PDF as our default. The relevant theoretical studies of particle production in the high-energy factorization scheme using the KMR and Blümlein unintegrated gluon PDFs [4-9] demonstrate that both unintegrated PDFs lead to similar production spectra at moderate particle transverse momenta (p_T ≤ 20 GeV).

In the case of high-transverse-momentum production of isolated jets and prompt photons [3], however, the theoretical predictions obtained with these PDFs differ. Although we take identical collinear inputs for both the KMR and Blümlein approaches, the kernels of the integral transformation between collinear and unintegrated PDFs differ: the KMR approach is based on the DGLAP evolution equation, while the Blümlein approach is based on the BFKL evolution equation. As the BFKL approach seems preferable in the region of very small x ≪ 1, which corresponds to moderate p_T at fixed √S, the KMR unintegrated gluon PDF should be more suitable for describing the experimental data at large transverse momenta.

The master formula for the doubly differential cross section of inclusive b-jet production via gluon-to-bottom-pair fragmentation at p_T ≫ m_b reads

dσ/(dp_T dy) = n_g(µ) (1/p_T³) ∫₀^{2π} dφ₁ ∫₀^∞ dt₁ Φ_g^p(x₁, t₁, µ²) Φ_g^p(x₂, t₂, µ²) ⟨|M(R + R → g)|²⟩,

where y is the rapidity of the b-quark jet, φ₁ is the azimuthal angle enclosed between the vectors q_1T and p_T, t₂ = t₁ + p_T² − 2p_T√t₁ cos φ₁, and x₁,₂ = (p_T/√S) e^{±y}.

In the case of bb̄-dijet production via the partonic subprocess (4), we obtain the differential cross section in the form

dσ/(dy₁ dy₂ dp_1T dp_2T dΔφ) = (p_1T p_2T)/(16π³ (x₁x₂S)²) ∫₀^{2π} dφ₁ ∫₀^∞ dt₁ Φ_g^p(x₁, t₁, µ²) Φ_g^p(x₂, t₂, µ²) ⟨|M(R + R → b + b̄)|²⟩,

where p_{1,2T} and y_{1,2} are the b-quark and b̄-antiquark transverse momenta and rapidities, respectively, Δφ is the azimuthal angle enclosed between the vectors p_1T and p_2T, q_2T = p_1T + p_2T − q_1T, and x₁,₂ = (m_1T e^{±y₁} + m_2T e^{±y₂})/√S with m_iT = √(m_b² + p_iT²).

The inclusive b-jet transverse-momentum spectrum can then be presented in the form

dσ^{b-jet}/dp_T = dσ^{(3)}/dp_T + ∫ dσ^{(4)} [θ(R_bb̄ − R)(δ(p_T − p_1T) + δ(p_T − p_2T)) + θ(R − R_bb̄) δ(p_T − |p_1T + p_2T|)], (15)

where dσ^{(3)}/dp_T is the fragmentation contribution of subprocess (3) given by the master formula above, R_bb̄ = √((y_b − y_b̄)² + (φ_b − φ_b̄)²), R is the experimentally fixed jet-radius parameter, and θ(x) is the unit step function. In this way, the subprocess (4) of open b-quark production contributes two separate b-quark jets when R_bb̄ > R, and only one b-quark jet when R_bb̄ < R.
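The jet-counting logic behind Eq. (15) — two b-jets when the bb̄ separation exceeds the cone radius R, one merged jet otherwise — can be sketched as follows (Python; the event values are hypothetical, and taking the merged-jet p_T as the magnitude of the summed transverse momenta is a simplifying assumption).

```python
import numpy as np

R_CONE = 0.4  # the ATLAS jet-radius parameter used in the measurement

def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [0, pi]."""
    d = abs(phi1 - phi2) % (2.0 * np.pi)
    return min(d, 2.0 * np.pi - d)

def b_jets(y_b, phi_b, pt_b, y_bbar, phi_bbar, pt_bbar):
    """Return the list of b-jet transverse momenta for one b-bbar pair.

    R_bb = sqrt((y_b - y_bbar)^2 + (phi_b - phi_bbar)^2): if R_bb > R the pair
    yields two separate b-jets; if R_bb < R the pair is counted as one jet,
    here assigned the pT of the vector-summed transverse momenta (assumption).
    """
    r_bb = np.hypot(y_b - y_bbar, delta_phi(phi_b, phi_bbar))
    if r_bb > R_CONE:
        return [pt_b, pt_bbar]
    merged = np.hypot(pt_b * np.cos(phi_b) + pt_bbar * np.cos(phi_bbar),
                      pt_b * np.sin(phi_b) + pt_bbar * np.sin(phi_bbar))
    return [merged]

# Hypothetical pair: well separated, so it contributes two b-jets.
print(b_jets(0.3, 1.0, 55.0, -0.5, 2.4, 48.0))
```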
III. RESULTS

Recently, the ATLAS Collaboration presented data on inclusive and dijet production cross sections measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a center-of-mass energy of √S = 7 TeV [14]. The inclusive b-jet cross section was measured as a function of transverse momentum in the range 20 < p_T < 400 GeV and rapidity in the range |y| < 2.1. The bb̄-dijet cross section was measured as a function of the dijet invariant mass in the range 110 < M_jj < 760 GeV, the azimuthal angle difference Δφ between the two jets, and the angular variable χ in two dijet mass regions. Jets were reconstructed with jet-radius parameter R = 0.4. The angular variable χ is defined as χ = exp|y₁ − y₂|. To measure the cross section as a function of χ, an additional acceptance requirement was used that restricts the boost of the dijet system to |y_boost| = 0.5|y₁ + y₂| < 1.1.

The bb̄-dijet cross section as a function of the dijet invariant mass M_jj, for b-jets with p_T > 40 GeV and |y| < 2.1, is shown in Fig. 1. The data are compared to LO parton Reggeization approach predictions; the solid polyline corresponds to the KMR unintegrated PDF [31], the dashed one to the Blümlein PDF [35]. We observe nice agreement between the data and the theoretical prediction obtained with the KMR unintegrated PDF. In the case of the Blümlein PDF, the theoretical histogram lies about a factor of 2 below the experimental data, and this difference increases towards high values of the dijet invariant mass. In Fig. 2 the bb̄-dijet cross section as a function of the azimuthal angle difference Δφ between the two jets is presented, for b-jets with p_T > 40 GeV, |y| < 2.1, and a dijet invariant mass of M_jj > 110 GeV. The data, normalized to the total cross section, are compared to LO parton Reggeization approach predictions; the solid polyline corresponds to the KMR unintegrated PDF, the dashed one to the Blümlein PDF.

For both unintegrated PDFs, our predictions lie within the experimental uncertainties of the data, with the exception of a single point at Δφ ≈ 2. We note that, in the case of the CDF measurements at the Tevatron [19], the azimuthal-separation distribution of inclusive bb̄-dijet production is well described by the parton Reggeization approach at all values of the azimuthal angle difference 0 < Δφ < π (see Fig. 4 of Ref. [16]).

The bb̄-dijet cross sections as functions of the angular variable χ, for b-jets with p_T > 40 GeV, |y| < 2.1, and |y_boost| = (1/2)|y₁ + y₂| < 1.1, are shown for the dijet invariant-mass ranges 110 < M_jj < 370 GeV and 370 < M_jj < 850 GeV in Fig. 3 and Fig. 4, respectively. The data, normalized to the total cross section, are compared to our LO parton Reggeization approach predictions. In the range 110 < M_jj < 370 GeV the polylines corresponding to the KMR and Blümlein unintegrated PDFs coincide. In the region of large invariant masses, 370 < M_jj < 850 GeV, the prediction obtained with the Blümlein unintegrated PDF lies about a factor of 2 below the data. By contrast, the calculations with the KMR unintegrated gluon PDF are found to be in good agreement with the data.

To calculate the inclusive b-jet transverse-momentum spectra, we need to take into account the gluon-to-bottom-pair production mechanism and to use the bb̄-pair multiplicity n_g(µ) in a gluon jet. Because the existing theoretical predictions (see, for example, Ref. [25]) contain large uncertainties, we treat n_g(µ) as a free phenomenological parameter, extracted from the ATLAS Collaboration data [14] for the inclusive b-jet cross sections. In Fig. 5, the inclusive differential b-jet cross section as a function of p_T, for b-jets with |y| < 2.1, is compared with our LO predictions of the parton Reggeization approach; the contribution of the QMRK subprocess (4) and the contribution of the MRK subprocess (3) are shown separately. We see that the open b-quark production mechanism alone does not describe the data, especially at large p_T, and some contribution from the gluon-to-bottom-pair fragmentation mechanism is needed. We have obtained a good description of the data using n_g(µ) as a free parameter.

In Fig. 6, the bb̄-pair multiplicity n_g(µ) in a gluon jet as a function of p_T, extracted from the ATLAS data for the inclusive b-jet production spectra [14], is shown. The open circles and dashed fitting line correspond to the Blümlein unintegrated PDF; the black circles and solid fitting line correspond to the KMR unintegrated PDF. The general theoretical consideration of Ref. [25] leads to an analytical approximation (16) for the bb̄-pair multiplicity in a gluon jet with a single overall normalization parameter A; fixing m_b = 4.75 GeV and µ = p_T/4 [26], we found A_KMR = 0.0012 in the case of the KMR unintegrated PDF and A_B = 0.0027 in the case of the Blümlein unintegrated PDF.
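The extraction of n_g(µ) as a free phenomenological parameter amounts to a one-parameter fit; a minimal sketch is shown below (Python with SciPy). The data points are invented, and the logarithmic template is an illustrative assumption standing in for the actual approximation (16) of Ref. [25].

```python
import numpy as np
from scipy.optimize import curve_fit

M_B = 4.75  # GeV, as fixed in the text

def n_g_template(pt, A):
    """One-parameter template A * log(mu^2 / m_b^2) with mu = pt / 4.
    The logarithmic shape is an illustrative assumption, not Eq. (16)."""
    mu = pt / 4.0
    return A * np.log(mu**2 / M_B**2)

# Hypothetical extracted multiplicities at several jet pT values (GeV).
pt_data = np.array([60.0, 100.0, 180.0, 300.0])
ng_data = np.array([0.0030, 0.0042, 0.0055, 0.0068])  # invented numbers

(A_fit,), cov = curve_fit(n_g_template, pt_data, ng_data, p0=[1e-3])
print(f"fitted A = {A_fit:.2e} +- {np.sqrt(cov[0, 0]):.2e}")
```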
At the scale µ ≃ m_Z/4, which corresponds to gluon-to-bottom-pair fragmentation of a secondary gluon in Z-boson decay (Z → qq̄ → qq̄bb̄), our approximation (16) yields n_g ≃ 0.002-0.004, in agreement with the measurements in e⁺e⁻ collisions: n_g = (3.3 ± 1.8) × 10⁻³ from the DELPHI Collaboration [38] and n_g = (2.44 ± 0.93) × 10⁻³ from the SLD Collaboration [39]. The difference between the bb̄-pair multiplicities n_g(µ) obtained with the KMR and Blümlein unintegrated PDFs can be used to discriminate between these PDFs. We conclude that the KMR unintegrated PDF is preferable for describing b-jet production cross sections.

In contrast to this conclusion, we recently found [3] that the Blümlein unintegrated PDF is better suited to describing inclusive all-flavor jet production spectra [28]. To test the universality of the approach, as well as the universality of the extracted function n_g(µ), we compare our predictions with experimental data for transverse-momentum b-jet spectra from the CMS Collaboration at the CERN LHC [40] (Fig. 8) and from the CDF Collaboration at the Fermilab Tevatron [18] (Fig. 9). In both cases we find good agreement between the theoretical predictions and the experimental data. Looking at Figs. 5-9, we find that the contribution of gluon-to-bottom-pair fragmentation to the inclusive b-jet p_T spectra increases from 10-15% at p_T ≃ 50 GeV up to 30-40% at p_T ≃ 350 GeV. This conclusion contradicts the NLO calculations in the collinear parton model, in which the gluon-to-bottom-pair fragmentation mechanism would be dominant in the large-p_T region at the LHC and would amount to about 50% at the Tevatron [26,41]. Our results as a whole may also be compared with theoretical predictions obtained at NLO in the collinear parton model, which likewise describe the ATLAS data for b-jet production [14].

IV. CONCLUSIONS

The CERN LHC is currently probing particle physics at terascale c.m. energies √S, so that the hierarchy Λ_QCD ≪ µ ≪ √S, which defines the MRK and QMRK regimes, is satisfied for processes of heavy-quark (c or b) production in the central region of rapidity, where µ is of the order of their transverse momentum. In this paper, we studied QCD processes of particular interest, namely inclusive b-jet and bb̄-dijet hadroproduction, at LO in the parton Reggeization approach, in which they are mediated by 2 → 1 and 2 → 2 partonic subprocesses initiated by Reggeized gluon collisions. We describe well the recent LHC data measured by the ATLAS Collaboration [14] over the whole presented range of b-jet transverse momentum, b-jet rapidity, bb̄-dijet invariant mass M_jj, azimuthal angle between the two jets Δφ, and angular variable χ. We show that the gluon-to-bottom-pair fragmentation component [24], which takes into account the effects of the large logarithms log(p_T/m_b), grows in inclusive b-jet production at high transverse momenta p_T, reaching 30-40% of the sum of all contributions. The bb̄-pair multiplicity extracted by the fit to the data is in agreement with previous measurements in e⁺e⁻ collisions [38,39]. Comparing different unintegrated gluon PDFs, we have found that agreement with the data is obtained with the KMR PDF [31], whereas the calculations with the Blümlein PDF [35] systematically underestimate the data, by approximately a factor of 2, in the region of large b-jet p_T and large bb̄-dijet invariant mass.

V. ACKNOWLEDGEMENTS

We are grateful to B. A. Kniehl.

[Figure caption fragment: panel (4), 1.2 < |y| < 2.1. The data are from the ATLAS Collaboration [14]. The solid polylines correspond to the sum of all contributions (15) with the KMR unintegrated PDF.]
2012-07-16T06:53:26.000Z
2012-01-23T00:00:00.000
{ "year": 2012, "sha1": "9a1931bcb559df5374c4312793873a35b73bdc64", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1201.4640", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9a1931bcb559df5374c4312793873a35b73bdc64", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
218933452
pes2o/s2orc
v3-fos-license
SP1-Induced Upregulation of lncRNA LINC00514 Promotes Tumor Proliferation and Metastasis in Osteosarcoma by Regulating miR-708

Background: A growing number of studies have suggested the dysregulation of long non-coding RNAs (lncRNAs) in several tumors, including osteosarcoma (OS). However, few studies report metastasis-associated lncRNAs in OS. Our present study aimed to explore the roles of lncRNA LINC00514 (LINC00514) in OS.

Materials and Methods: LINC00514 expression was measured using qPCR assays in OS tissues and cell lines. The clinical significance of LINC00514 expression in OS patients was analyzed using the chi-square test, Kaplan-Meier analysis, and multivariate analysis. The possible effects of LINC00514 on tumor cell progression were determined using a series of functional assays. The mechanisms of LINC00514 action were explored through bioinformatics, luciferase reporter assays, and RT-PCR assays. The mechanisms underlying the upregulation of LINC00514 expression were determined using luciferase reporter and chromatin immunoprecipitation (ChIP) assays.

Results: We showed that LINC00514 expression was distinctly upregulated in both OS tissues and cell lines, especially in advanced cases. High levels of LINC00514 were positively correlated with advanced tumor stage, distant metastasis, and reduced survival of patients with OS. Functional experiments indicated that silencing of LINC00514 suppressed cell growth, colony formation, and metastasis, while promoting cell apoptosis in vitro. Mechanistic investigation revealed that LINC00514 could directly bind to miR-708 and effectively serve as a ceRNA for miR-708. In addition, LINC00514 was upregulated by the transcription factor SP1.

Conclusion: Our findings revealed SP1-induced upregulation of LINC00514 as an oncogene in OS that acts through competitively binding miR-708, suggesting potential diagnostic and therapeutic value of LINC00514 in OS.

Introduction

Osteosarcoma (OS) is one of the most common primary malignant tumors in children and young adults, and it usually originates in the metaphysis of the long bones. 1,2 The approximate worldwide incidence is 4.5 per million per year, with a peak at the age of 14-20. 3 Although treatments and perioperative management have evolved over the past ten years, with recent advances in diagnosis, surgical procedures, and adjuvant chemotherapy, OS still carries extremely high morbidity and mortality. 4,5 Although more and more tumor-related regulators have been identified, almost no commonly accepted markers have been established for clinical application. [6][7][8] Thus, it is urgent to clarify the key molecules involved in the growth and metastasis of OS to improve its early detection and targeted treatment.

With the advancement of next-generation sequencing methods, more and more dysregulated noncoding RNAs have been identified in human tissues, and they are confirmed to be frequently transcribed from the genome. 9 Long noncoding RNAs (lncRNAs) are RNAs of more than 200 nucleotides in length that lack protein-coding ability owing to the absence of a functional open reading frame. 10 In recent years, emerging evidence has revealed that lncRNAs act as novel modulators of gene expression via a series of mechanisms involved in epigenetic modification.
11,12 Of note, increasing numbers of lncRNAs have been demonstrated to be abnormally expressed in various tumor tissues, and some lncRNAs functioning as oncogenes or tumor suppressors under particular conditions have been functionally characterized. [13][14][15] In addition, the critical roles of lncRNAs in tumor progression highlight the potential clinical application of lncRNAs as novel cancer biomarkers. 16,17 Recently, more and more abnormally expressed lncRNAs have been identified by bioinformatics analysis and cell experiments using RT-PCR. However, the molecular mechanisms underlying the abnormal expression of lncRNAs remain largely unclear. Emerging evidence indicates that transcription factors play a central role in the transcription of genes, which highlights their potential function as novel modulators of lncRNA expression. 18,19 In addition, several transcription factors, such as SP1, STAT1, and STAT3, have been demonstrated to play functional roles in the regulation of lncRNAs. [20][21][22]

In this study, we identified a new cancer-related lncRNA, LINC00514, which was first functionally characterized in papillary thyroid cancer. 23 We provide the first evidence that LINC00514 levels are upregulated in OS and predict a poor clinical outcome. Further experiments indicated that the overexpression of LINC00514 is induced by SP1. Moreover, we performed functional assays and mechanistic experiments to explore the potential functions of LINC00514 in OS cells and the related mechanisms. Overall, our findings provide a novel clue for the discovery of cancer biomarkers and therapeutic targets for OS patients.

Clinical Samples

OS specimens and adjacent normal tissues were acquired from 107 patients with OS enrolled at the Zuanshiwan Hospital District of The Second Hospital of Dalian Medical University from March 2010 to September 2013. The samples were collected with the written informed consent of the patients, and this study was approved by the ethics committee of the Zuanshiwan Hospital District of The Second Hospital of Dalian Medical University. Each specimen was frozen in liquid nitrogen immediately after collection.

Real-Time PCR

Trizol reagents (BoYuan, Ningbo, Zhejiang, China) were employed for extracting total RNA from OS samples or cells. An ND-1000 UV apparatus (Thermo Scientific, Waltham, MA, USA) was used to determine the concentration of total RNA. The extracted RNA was then subjected to reverse transcription using cDNA synthesis kits (TAKARA, Dalian, Liaoning, China). Then, qPCR analyses for LINC00514 and SP1 were carried out using SYBR Green qPCR kits (Kaijie, Suzhou, Jiangsu, China), with reactions run according to the kits' protocols. For miR-708 detection, Transgen two-step miRNA qPCR kits (Longjun, Chengdu, Sichuan, China) were employed in accordance with the kits' protocols. GAPDH was used as the internal control for LINC00514 and SP1 detection, while U6 was used as the internal control for miR-708 detection. Fold change was calculated using the 2^−ΔΔCt method, as sketched below. The primers are described in Table 1.
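A minimal 2^−ΔΔCt fold-change calculation might look like this (Python; the Ct values are invented for illustration):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene, e.g. GAPDH or U6)
    ddCt = dCt(case) - dCt(control); fold change = 2 ** (-ddCt).
    """
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** (-(d_ct_case - d_ct_ctrl))

# Invented Ct values: LINC00514 vs GAPDH in a tumor sample and a matched normal.
print(fold_change(24.1, 17.9, 27.0, 18.2))  # > 1 indicates upregulation in tumor
```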
Western Blot

Total protein of OS cells after treatment was lysed with RIPA buffer (Hongjun, Changsha, Hunan). Protein concentrations were then determined using Pierce BCA kits (Dongkun, Chengdu, Sichuan, China). Thereafter, proteins were separated by 10% SDS-PAGE and transferred onto PVDF membranes. After blocking with 5% BSA, the membranes were probed with primary antibodies for 12 h at 4°C. After washing with TBST buffer, the membranes were incubated with the corresponding secondary antibodies. The protein blots were visualized using ECL kits (Tengrui, Changsha, Hunan, China). GAPDH was used as the loading control. The primary antibodies against caspase 3 and caspase 9 were bought from PTG Technology (Wuhan, Hubei, China).

Colony Formation Assay

The transfected 143B and MG63 cells were placed into 6-well plates (800 cells per well) and cultured for 2-3 weeks in complete media. Once colonies were visible, they were stained with 0.2% crystal violet for 15 min. After washing with PBS buffer, the colonies were photographed under a microscope.

TUNEL Assay

Promega TUNEL assay kits (Shengda, Hangzhou, Zhejiang, China) were used for cell apoptosis detection according to the kits' protocols. In brief, OS cells treated with the corresponding siRNAs were placed into 48-well plates. After the cells had attached to the plates (more than 12 h), they were washed and fixed with 4% paraformaldehyde. Subsequently, the cells were treated with proteinase K reagents and TdT buffer for 1 h at 37°C. After treatment with TUNEL detection buffer, the cells were washed twice and the fluorescence was imaged with a fluorescence microscope.

Wound-Healing Assay

Cell migratory abilities were evaluated by wound-healing assays. In short, OS cells were treated with the corresponding siRNAs, and 8-10 hours later the cells were collected and re-plated in 12-well plates at high density. On the second day, once cell monolayers (near 100% confluence) had formed, 200 μL tips were used to scratch the cells. Wound closure was imaged using a microscope at 0 h and 48 h after scratching.

Transwell Assay

To assess the invasive capacities of OS cells, 1.5 × 10⁵ treated cells were added into Corning transwell chambers (Xunfeng, Hefei, Anhui, China) coated with Matrigel. Twenty-four hours later, cells that had invaded to the bottom side of the membranes were fixed with 4% formaldehyde and stained with 0.3% crystal violet for 15 min. After washing with PBS buffer, the cells were photographed under a microscope.

Subcellular Fractionation Location Assay

The nuclei and cytoplasm of 143B cells were separated by employing Thermo Scientific nuclei-cytoplasm extraction kits (Hening, Ningbo, Zhejiang, China) in accordance with the kits' protocols. RNA from the nuclei or cytoplasm was isolated and subjected to qPCR detection as described above. U6 and GAPDH acted as markers of the nuclei and cytoplasm, respectively.

ChIP Assay

ChIP assays were carried out to evaluate the binding of SP1 to the LINC00514 promoter. In brief, treated cells were cross-linked with 1% formaldehyde, quenched with 0.125 M glycine, washed twice, and collected in ChIP lysis buffer. Afterwards, sonication was used to shear the DNA into 200-400 bp fragments. The chromatin was immunoprecipitated with anti-SP1 antibodies (4°C, 4 h), with IgG as a negative control, followed by the addition of Invitrogen Protein G Sepharose (JunLong, Xiamen, Fujian, China); the mixture was incubated for 2 h at 4°C. The precipitated complex was rinsed twice, followed by the addition of elution buffer. Finally, qPCR was employed to analyze the immunoprecipitated chromatin DNA; a minimal enrichment calculation is sketched below.
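For the qPCR readout of a ChIP assay, a standard percent-input enrichment calculation could look as follows (Python; the input fraction and Ct values are invented, and percent-input is one common readout rather than necessarily the one used here):

```python
import math

INPUT_FRACTION = 0.01  # fraction of chromatin reserved as input; invented value

def percent_input(ct_ip, ct_input, input_fraction=INPUT_FRACTION):
    """Percent-input enrichment for ChIP-qPCR: adjust the input Ct for its
    dilution, then enrichment = 100 * 2^(adjusted input Ct - IP Ct)."""
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Invented Ct values: anti-SP1 pulldown vs IgG control at a promoter site.
print(percent_input(ct_ip=26.5, ct_input=24.0))   # SP1 enrichment
print(percent_input(ct_ip=31.8, ct_input=24.0))   # IgG background
```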
Luciferase Reporter Assay

The binding site between LINC00514 and miR-708 was predicted using the "miRDB" algorithm. The region containing the predicted binding site was cloned into the pGL3 luciferase reporter vector (LINC00514 wild-type). In addition, a mutated version of the corresponding region was also constructed (LINC00514 mutant-type). The binding sites between LINC00514 and SP1 were predicted with the "Jaspar" algorithm. Correspondingly, the sequence containing the predicted binding site 2 (PS2) was also cloned into the pGL3 vector, and the plasmid was named PS2 WT (wild-type); a PS2 mutant-type (PS2 MUT) reporter was also constructed. After transfection with the corresponding reporter plasmids, luciferase activities were evaluated using Promega dual-luciferase reporter assay kits in accordance with the kits' protocols (Shengda, Hangzhou, Zhejiang, China).

Statistical Analyses

SPSS 20.0 (SPSS, Chicago, IL, USA) software was used for statistical analyses. Student's t-test was used for pairwise comparisons, and one-way ANOVA was used for comparisons among more than two groups. Overall survival rates were analyzed using Kaplan-Meier methods and log-rank tests. Univariate and multivariate models were used to examine the influence of related factors on patient survival. Differences were considered significant at p < 0.05.

Aberrant Upregulation of LINC00514 Was Observed in OS Tissues and Cells

To determine whether LINC00514 was dysregulated in OS, we first examined LINC00514 expression in OS tissues and cells using qRT-PCR. Our results indicated that LINC00514 expression was distinctly upregulated in OS specimens compared with matched normal specimens (Figure 1A, p < 0.01). In addition, patients with advanced stages displayed higher levels than other patients (Figure 1B), suggesting that higher levels of LINC00514 contribute to tumor progression. Then, we performed RT-PCR to detect the expression of LINC00514 in OS cells, finding that LINC00514 expression was distinctly higher in five OS cell lines than in hFOB1.19 (p < 0.01, Figure 1C). These results revealed that LINC00514 might play potential roles in the progression of OS.

Increased Expression of LINC00514 Was Associated with Poor Prognosis in OS

OS tissue samples were classified into a low-expressing group (n = 55) and a high-expressing group (n = 52) according to the median expression level of all OS samples. Table 2 shows the associations between several clinicopathological factors and LINC00514 levels. Our data indicated that high LINC00514 levels were positively correlated with tumor stage (p = 0.017) and distant metastasis (p = 0.031), suggesting that LINC00514 may contribute to the clinical progression of this tumor. Thus, we examined the possible correlation between LINC00514 expression and long-term overall survival. As shown in Figure 1D, we found that overall survival was lower in patients with high LINC00514 expression than in those with low LINC00514 expression (p = 0.0062). To further determine the prognostic value of LINC00514 in OS patients, univariate and multivariate analyses were performed, and the results revealed that LINC00514 (HR = 2.896, 95% CI: 1.217-4.285, p = 0.022) was an independent unfavorable predictor of overall survival in OS patients (Table 3). Overall, our findings suggested LINC00514 as a novel biomarker for this tumor. However, larger OS cohorts are needed to further confirm these results.
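As a sketch of the survival comparison and multivariate analysis described above, assuming the Python lifelines package and a hypothetical data frame with invented `time`, `event`, and `high_expr` columns:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months, death indicator, expression group.
df = pd.DataFrame({
    "time":      [12, 30, 45, 60, 22, 50, 18, 40],
    "event":     [1,  0,  1,  0,  1,  0,  0,  1],
    "high_expr": [1,  0,  0,  0,  1,  0,  1,  1],
})

hi, lo = df[df.high_expr == 1], df[df.high_expr == 0]

# Kaplan-Meier estimate for the high-expression group and a log-rank comparison.
km = KaplanMeierFitter()
km.fit(hi["time"], hi["event"], label="high LINC00514")
print(km.median_survival_time_)
print(logrank_test(hi["time"], lo["time"], hi["event"], lo["event"]).p_value)

# A multivariate analysis would add the other clinicopathological covariates.
cox = CoxPHFitter()
cox.fit(df, duration_col="time", event_col="event")
print(cox.summary[["exp(coef)", "p"]])  # exp(coef) is the hazard ratio
```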
LINC00514 Knockdown Suppressed OS Development in vitro

To explore the functional roles of LINC00514 in OS cells, we transfected siRNAs targeting LINC00514 into 143B and MG63 cells. Real-time PCR conducted at 48 h post-transfection indicated that the LINC00514 siRNAs had a high interference efficiency (Figure 2A). Thereafter, we sought to assess the effects of LINC00514 knockdown on cell proliferation. CCK-8 assays revealed that cellular growth was remarkably impaired in LINC00514 siRNA-transfected OS cells (Figure 2B). Similarly, the data from colony formation assays demonstrated that LINC00514 deficiency reduced the clonogenic survival of OS cells (Figure 2C and D). Subsequently, to determine whether LINC00514 depletion influenced apoptosis, TUNEL assays were performed. The results showed that the proportion of apoptotic cells following treatment with LINC00514 siRNAs was markedly increased compared with the controls (Figure 2E). Mechanistically, Western blot data validated that repressing LINC00514 levels remarkably increased the expression of caspase 3/9 in OS cells (Figure 2F and G). Overall, these data suggested that LINC00514 depletion is capable of suppressing OS development.

LINC00514 Knockdown Inhibited the Metastatic Potential of OS Cells

In addition to proliferation, metastasis is an important feature of cancer cells. Therefore, we next investigated the influence of LINC00514 suppression on OS cell migration and invasion. First, we conducted wound-healing assays to evaluate the effects of LINC00514 downregulation on cell migration. As presented in Figure 3A and B, depression of LINC00514 notably reduced the velocity of cell movement. Afterwards, transwell invasion assays demonstrated that the invasion of OS cells was also suppressed by repressing LINC00514 expression (Figure 3C). Therefore, these data proved that depression of LINC00514 inhibited the metastasis of OS cells.

LINC00514 Directly Targeted miR-708 and Acted as a miR-708 Sponge in OS Cells

The previous data proved that LINC00514 deficiency depressed the malignant behaviors of OS cells. Since numerous studies have reported that lncRNAs function as miRNA sponges to modulate neoplastic processes, we hypothesized that LINC00514 might act as a miRNA sponge in OS oncogenesis. Subcellular fractionation assays clarified that LINC00514 was mainly located in the cytoplasm (Figure 4A). With the aid of the "miRDB" program, we found that miR-708, which has been verified to be a tumor suppressor in several cancer types, was a potential target of LINC00514. The complementary binding site between LINC00514 and miR-708 is presented in Figure 4B. In addition, the levels of miR-708 were determined by qPCR, indicating lower expression of miR-708 in the 107 OS tumor specimens (Figure 4C). Besides, qPCR analyses revealed that ectopic expression of LINC00514 remarkably decreased miR-708 levels, while LINC00514 deficiency notably elevated the levels of miR-708 in OS cells (Figure 4D). Vice versa, miR-708 overexpression significantly depressed LINC00514 levels, while miR-708 knockdown markedly increased LINC00514 levels (Figure 4E). Finally, we performed luciferase reporter assays.
The results showed that, compared with the control group, luciferase activity was diminished in cells co-transfected with the wild-type LINC00514 reporter and miR-708 mimics, whereas no significant difference in luciferase activity was observed in cells co-transfected with the mutant-type LINC00514 reporter and miR-708 mimics (Figure 4F). Taken together, these data show that miR-708 is a direct target of LINC00514 in OS cells.

SP1 Induced LINC00514 Expression by Acting as a Transcription Activator in OS Cells

Transcription factors (TFs) play crucial roles in modulating lncRNA expression. To further investigate the regulation of LINC00514 in OS, we predicted potential TFs that could bind to the promoter of LINC00514 using the "Jaspar" program. The results showed that SP1, which has been reported to drive the aberrant expression of diverse lncRNAs, was a potential TF that could promote LINC00514 expression (Figure 5A). Interestingly, SP1 was also upregulated in OS tumor samples (Figure 5B). To determine the regulatory effects of SP1 on LINC00514 expression, we silenced or overexpressed SP1 in 143B cells and found that LINC00514 was downregulated or upregulated in response to the knockdown or overexpression of SP1, respectively (Figure 5C and D). In addition, we performed ChIP analyses to identify the exact binding site of SP1 in the LINC00514 promoter, and the result demonstrated that SP1 could bind to the PS1 site of the LINC00514 promoter (Figure 5E). Besides, luciferase reporter assays were conducted to further confirm the binding relationship between SP1 and the LINC00514 promoter, and we found that the effect of SP1 on luciferase activity was remarkably enhanced when the PS1 site was wild-type, indicating that the PS1 site of the LINC00514 promoter is responsible for the binding of SP1 in OS cells (Figure 5F). Overall, SP1 transcriptionally activated LINC00514 and induced its expression in OS cells.

Discussion

The discovery of novel biomarkers for tumor screening is of great significance for improving the long-term survival of patients. 24 Recently, increasing studies have revealed lncRNAs as potential novel biomarkers owing to their critical functions in epigenetic regulation as well as their frequent dysregulation in the blood and tumor tissues of patients. 25 In this study, a novel OS-related lncRNA, LINC00514, was identified by RT-PCR assays. Overexpression of LINC00514 was distinctly observed in both OS specimens and cell lines. A clinical study with 107 patients showed that patients with higher levels of LINC00514 had more advanced stages and positive metastasis, and exhibited shorter overall survival. Moreover, LINC00514 was further demonstrated to be an independent poor prognostic factor for OS patients, which highlights the clinical value of LINC00514 as a novel prognostic biomarker. Previously, several lncRNAs, such as lncRNA SNHG12 and lncRNA UCA1, were also reported to be frequently associated with advanced clinical stages and unfavorable prognosis in OS patients. 26,27 Our findings, together with these previous results, indicate that the association between elevated levels of certain functional lncRNAs and poor patient prognosis may be a frequent event.

In cancer, the functional effects of lncRNAs acting as tumor promoters or oncogenes through the transcriptional regulation of target genes have been frequently demonstrated in tumor cell proliferation and metastasis.
28,29 Our results above indicated that LINC00514 is highly expressed in OS, suggesting that it may act as a positive regulator of tumor progression. Thus, we performed a series of cellular experiments by silencing LINC00514 expression in 143B and MG63 cells using si-LINC00514. As expected, knockdown of LINC00514 distinctly inhibited tumor cell growth. Moreover, TUNEL assays demonstrated that knockdown of LINC00514 promoted apoptosis of OS cells by increasing the activity of caspase 3/9, which was confirmed by Western blot assays. In addition, we also provided evidence that LINC00514 silencing decreased OS cell migration and invasion. Overall, our findings suggest LINC00514 as an onco-lncRNA in OS.

Recent biological studies have indicated that competing endogenous RNAs (ceRNAs) have emerged as novel modulators of post-transcriptional regulation, providing a novel mechanism for the control of gene expression. 30,31 Several lncRNAs have been demonstrated to influence the tumorigenesis of various tumors via sponging tumor-related miRNAs. 32,33 Thus, our group speculated that LINC00514 may exert its carcinogenic roles by serving as a ceRNA. First, LINC00514 was found to localize preferentially to the cytoplasm, consistent with LINC00514 being a mainly cytoplasmic lncRNA that can act as a ceRNA and sequester miRNAs away from their target mRNAs. Then, based on the data from bioinformatic and luciferase reporter assays, miR-708, an important tumor-related miRNA, was confirmed to be a direct target of LINC00514. Previously, the dysregulation of miR-708 in several tumors and its potential roles in tumor progression have been frequently reported. 34,35 In OS, miR-708 was found to be lowly expressed and to suppress tumor cell proliferation and invasion by targeting URGCP. 36 Consistently, we also showed that the levels of miR-708 were downregulated in OS tissues. In addition, overexpression of LINC00514 could result in the suppression of miR-708 expression. Thus, our findings, together with previous results, suggest that LINC00514 may promote the proliferation and metastasis of OS cells via sponging miR-708.

Transcriptional activation is an important mechanism leading to the overexpression of lncRNAs. Recently, several transcription factors have been reported to act as positive regulators of lncRNA expression in several tumors. For instance, lncRNA LINC00174 was shown to be upregulated in colorectal carcinoma, and this upregulation was induced by the transcription factor STAT1. 21 Su et al 37 reported that the upregulation of lncRNA MIR100HG, which acts as a tumor promoter in OS, was induced by the transcription factor ELK1. In addition, SP1, a common transcription factor, has frequently been reported to be involved in the regulation of lncRNA expression and to act as a promoter of progression in many tumors. 38,39 In this study, bioinformatics analysis predicted that SP1 could regulate LINC00514 transcription. Moreover, ChIP assays and luciferase assays confirmed that SP1 could directly bind to the LINC00514 promoter region. These data suggest that SP1 transcriptionally activates LINC00514 to increase its expression in OS.

In conclusion, we provide the first evidence that highly expressed LINC00514 acts as an oncogenic lncRNA that promotes the progression of OS through targeting miR-708. The upregulation of LINC00514 was associated with poor clinical prognosis and was induced by SP1.
Our findings indicate that LINC00514 could be of interest for developing biomarkers and therapeutic targets for OS patients.

Ethics Approval

This study was conducted with the permission of the Ethics Review Committee of the Zuanshiwan Hospital District of The Second Hospital of Dalian Medical University.
2020-05-21T00:13:18.575Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "2630f3cf11fa16d67423f7bfe440bdf02a30140f", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=58000", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "292d52942c0b0ae3a9529819c54186a49860dcf4", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
6146868
pes2o/s2orc
v3-fos-license
Mass spectrometric analysis of prefrontal cortex proteins in schizophrenia and bipolar disorder

Background: Schizophrenia and bipolar disorder are the two most serious and debilitating neuropsychiatric disorders, and they share many characteristics, both symptomatic and epidemiological. No single diagnostic biomarker has yet been discovered for schizophrenia or bipolar disorder. Proteomics holds promise for distinguishing the pathophysiology of these neuropsychiatric disorders from each other and from healthy individuals.

Findings: Postmortem prefrontal cortex tissue from schizophrenia, bipolar disorder, and psychiatric-free controls (n = 35 in each group) was subjected to SELDI-TOF-MS protein profiling. There were 13 protein peaks distinguishing schizophrenia versus control and 15 in bipolar versus control. Using a predictor set of 10 peaks for each comparison, 73% prediction accuracy (p = 2.3×10⁻⁴) was achieved. Three peaks were in common between schizophrenia and bipolar disorder.

Conclusions: This pilot study found protein profiles that distinguished schizophrenia and bipolar patients from controls and, notably, from each other. Identifying and characterizing the proteins in this study may elucidate neuropsychiatric phenotypes and uncover therapeutic targets. Further, applying class prediction bioinformatics may allow the clinician to differentiate the two phenotypes by profiling CSF or even serum.

Findings

Background

Schizophrenia and bipolar disorder are the two most serious and debilitating neuropsychiatric disorders, sharing many characteristics, both symptomatic and epidemiological. Each disorder affects roughly 1% of the population, has equal risk across gender, persists through a patient's lifespan, and typically manifests after puberty and before 25 years of age. In addition, the course of illness is episodic in both disorders and places the sufferer at an increased risk of suicide. Genetic studies have mapped many susceptibility loci common to both diseases, as well as large chromosomal aberrations (Gogos and Gerber 2006; Pearlson and Folley 2008). The prefrontal cortex (PFC) has been identified as a prominent site of dysfunction based on substantial neuroimaging, clinical, postmortem, and microarray studies (Liemburg et al. 2012; Teffer and Semendeferi 2012; Volpe et al. 2012). However, gene expression data do not consistently correlate with protein expression and cannot identify post-transcriptional and post-translational modifications, major modulators of protein function (Ideker et al. 2001; Lakhan and Vieira 2009; Gygi et al. 1999). Proteomic approaches are able to characterize post-translational modifications, by which the cell dynamically and rapidly modifies protein function and regulates both synthesis and degradation in response to cellular perturbations (e.g., disease provocation) (see Lakhan 2006 for a review).

No single diagnostic biomarker has yet been discovered for schizophrenia and bipolar disorder. While clinical biomarkers have tremendous diagnostic benefits, analyzing proteins from the brain tissues of disease phenotypes is a superior initial approach to reveal differentially expressed proteins that may elucidate schizophrenia and bipolar disorder etiology and contribute to the understanding of neuropathogenesis and molecular psychiatry. This pilot study demonstrates the possibility of accurately and sensitively distinguishing schizophrenia from bipolar disorder based on proteomic-level data.
Rather than using CSF or serum, in which potentially pathology-revealing biomarkers are masked by common plasma proteins (e.g., platelet factors), human PFC tissue was subjected to proteomic profiling to yield protein biomarkers.

Methods

Postmortem prefrontal cortex (BA10) was dissected from patients with schizophrenia, patients with bipolar disorder, and non-psychiatric control subjects. Each group consisted of 35 subjects. Diagnosis was made according to the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV). Brain tissue samples were cut into 50 mg pieces. The cut samples were incubated with lysis buffer (urea, CHAPS, and DTT). The samples underwent tissue homogenization via mechanical disruption and centrifugation, and the supernatant containing the tissue lysate protein content was collected. All samples were stored at -80°C until mass spectrometry. Tissue lysates were normalized to total protein concentration using lysis buffer and were left unfractionated.

All samples were automatically and simultaneously processed in duplicate on Ciphergen ProteinChip arrays with immobilized metal affinity capture (IMAC30) chromatographic surfaces. Briefly, the arrays were activated with the appropriate solution, treated with washing/binding buffer, incubated with sample, and mixed for several cycles to allow protein binding to the surface. Subsequently, unbound proteins were washed away with wash/binding buffer. The arrays were then treated with a saturated sinapinic acid solution (the energy-absorbing molecule) and allowed to dry. Arrays were analyzed with the Ciphergen ProteinChip Reader for surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) protein profiling. Spectra were normalized to the total ion current and the baseline was subtracted. Peak labeling and clustering were performed using the Ciphergen Biomarker Wizard, exported into a worksheet, and the intensity values for each peak were averaged over duplicate samples.

Pattern discovery was performed using structural pattern localization analysis by sequential histograms (SPLASH) (Califano 2000), a supervised method designed to discover patterns of multivariate associations in gene and protein expression data (Lepre et al. 2004). Class prediction was done using weighted voting, in which the informative peaks in the training set are used to perform leave-one-out cross-validation (Golub et al. 1999). The process started with two groups (e.g., schizophrenia and control) and a set of features (i.e., informative protein peaks). A sample was left out, and a predictor set of peaks that differentiate between the two groups was built. The sample that was left out was then classified as one of the two groups using the predictive peaks. This was cycled through all samples individually. The accuracy of the predictor was assessed by the total number of correct predictions, and the p-value for the predictor accuracy was calculated using Fisher's test; a simplified sketch of this procedure is given below. This biostatistical approach was carried out on two comparisons: 1) schizophrenia vs. control and 2) bipolar vs. control. The identified independent patterns were supported by 60-70% of the samples in a given phenotype.
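A simplified stand-in for this procedure — leave-one-out cross-validation with a nearest-centroid vote over pre-selected peaks, plus a Fisher's exact p-value on the resulting prediction counts — might look like this (Python with scikit-learn and SciPy; the peak matrix is random placeholder data, and nearest-centroid is used in place of the Golub weighted-voting classifier):

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Placeholder peak-intensity matrix: 70 subjects x 10 informative peaks,
# 35 patients (label 1) vs 35 controls (label 0), with a small group shift.
X = rng.normal(size=(70, 10))
y = np.array([1] * 35 + [0] * 35)
X[y == 1] += 0.6

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = NearestCentroid()  # stand-in for the weighted-voting predictor
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

wrong = len(y) - correct
# Fisher's exact test of the prediction counts against a chance 50/50 split.
_, p = fisher_exact([[correct, wrong], [len(y) // 2, len(y) // 2]])
print(f"LOOCV accuracy = {correct / len(y):.2f}, Fisher p = {p:.3g}")
```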
Results

There were 13 protein peaks representing consistent patterns in schizophrenia vs. control and 15 in bipolar vs. control (see Figure 1). The discovered patterns were then used to perform leave-one-out cross-validation in the relevant phenotypes. Namely, out of the 13 peaks that differentiate schizophrenia from control, a predictor set of 10 peaks was used, achieving 73% prediction accuracy. The results for bipolar vs. control were the same, with 73% prediction accuracy using 10 peaks selected out of 15 informative peaks. In both cases, the significance of the prediction accuracy assessed by Fisher's test was at a p-value of 2.3 × 10⁻⁴. The overlap between the 13 and 15 informative protein peaks that differentiate schizophrenia and bipolar disorder, respectively, from controls was also investigated. The results are shown in Figure 2, where the overlap between these sets is three peaks.

Discussion

The experimental design presented in this study demonstrates a preliminary neuropathological investigation revealing protein profiles in schizophrenia and bipolar disorder, both shared and exclusive to each diagnosis. Three protein peaks were found to be differentially expressed in both schizophrenia and bipolar disorder relative to controls and, notably, 10 and 12 protein biomarker peaks, respectively, were exclusive. While it is essential to identify similarities between schizophrenia and bipolar disorder, it is equally if not more important to classify the changes specific to a given neuropsychiatric condition. Whereas overlapping expression changes between disorders suggest a common pathology, perhaps underlying the similar symptomatic profiles, the exclusive changes may elucidate and distinguish the phenotypes. Diagnostically, applying class prediction bioinformatics may allow the clinician to differentiate the two phenotypes by profiling serum or CSF. In addition, as drug discovery continues to evolve, identified and characterized biomarker proteins may serve as viable drug targets. Further investigations to sequence and further characterize the biomarker proteins are planned.

The vast majority of pharmaceutical agents target proteins. Inherently, the direct study of diseased proteomes is an essential utility for drug discovery and clinical proteomics (Mikami et al. 2011; Trist 2011). In fact, it is believed that proteomics-based tests are likely to be considerably more predictive than genetic tests, which tend to be more non-specific (Wilson 2004). The robust and high-throughput nature of mass spectrometry allows streamlined identification of new disease-specific targets. In addition, schizophrenia- and bipolar-specific protein targets may enable customized pharmacotherapy, thereby augmenting the efficacy and decreasing the toxicity potential of drugs. Proteomics in the post-genomic era has the capability to characterize macromolecules and their interactions, complexes, and networks. Ultimately, biomarker discovery, class prediction, and the elucidation of schizophrenia and bipolar disorder neuropathogenesis are worthy goals with large potential benefits and minimized risks.
2017-12-31T02:40:59.631Z
2012-04-11T00:00:00.000
{ "year": 2012, "sha1": "fde17125e488003f4e2148cbcd5145a29c28c513", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/2193-1801-1-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fde17125e488003f4e2148cbcd5145a29c28c513", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55995825
pes2o/s2orc
v3-fos-license
Is the Weekend Effect Really a Weekend Effect?

The weekend effect, as documented in the earlier finance literature, refers to the tendency for stock returns to be lower on Mondays relative to the other days of the week. This paper confirms and extends the results of these earlier studies by showing that the weekend effect exists in the early years and begins to diminish in recent years. This paper also finds that the weekend effect is related to the industries to which firms belong. In addition, this paper demonstrates that the weekend effect as defined in the earlier literature is in fact a Monday effect rather than a weekend effect.

Introduction

Are security returns predictable or not? Do security prices reflect all available information or not? These are common questions investigated in the market efficiency literature. The efficient market hypothesis states that security prices reflect a certain type of information, and the particular type of information depends on the form of the hypothesis. In particular, the weak-form efficient market hypothesis states that past information is reflected in security prices, the semi-strong-form efficient market hypothesis says that security prices reflect all publicly available information, and the strong-form efficient market hypothesis asserts that all public and private information is reflected in security prices. The implication is that security prices cannot be predicted from the relevant information indicated above if the market is efficient.

The early finance literature focuses on whether security returns are predictable from past information, such as past returns. If the market is efficient, then security returns should not be predictable from past information; that is, security prices should follow a random walk. Nevertheless, some early studies suggest that security returns are predictable from past returns. Fama (1965) investigates whether the returns of the 30 stocks in the Dow Jones Industrial Average Index are predictable from past returns, and the results demonstrate the existence of positive autocorrelation in the daily returns of the stocks. In addition to daily returns, there is also evidence suggesting that security returns are predictable at weekly and monthly horizons. Lo and MacKinlay (1988) observe positive autocorrelation in the weekly returns of New York Stock Exchange (NYSE) stocks, and Fisher (1966) finds evidence of autocorrelation in the monthly returns of diversified portfolios. All these studies suggest that security prices are predictable from past security returns, and the implication is that the efficient market hypothesis may not hold. Nevertheless, some studies disagree, arguing that either the power of the tests is low or the autocorrelation is not significant in magnitude.
During the period from the 1970s to the 1990s, more of the literature began to focus on the issue of whether security returns are predictable. DeBondt and Thaler (1985, 1987) test whether security returns are predictable by grouping NYSE stocks based on their returns in the past three to five years relative to the market benchmark. In particular, the groups of NYSE stocks that perform the worst and the best over the past three to five years are referred to as "extreme losers" and "extreme winners", respectively. DeBondt and Thaler find that the extreme losers tend to outperform the market in subsequent years, and vice versa for the extreme winners. Instead of grouping securities based on their long-term past returns, Jegadeesh and Titman (1993) categorize securities based on their short-term past returns. They find that past winners (securities with the best performance in the past 12 months) have higher returns than past losers (securities with the worst performance in the past 12 months) in the subsequent year. Baker et al. (2011) find that stocks with low volatility have higher future returns relative to those with high volatility in the United States. Dutt and Humphery-Jenner (2013) extend the analysis and confirm that the same results apply to stocks in emerging markets and in developed markets outside of North America. The above empirical evidence suggests that security returns can be explained by past returns and volatilities, and it seems that the market is not as efficient as it may appear.

In addition to the time-series predictability discussed above, there is also evidence suggesting the existence of cross-sectional predictability. Basu (1977) shows that expected returns can be explained by earnings-to-price ratios; in particular, securities with high earnings-to-price ratios are expected to have higher returns. Banz (1981) finds that the size, or market capitalization, of a firm has an impact on security returns: smaller firms tend to have higher expected returns. Fama and French (1992) claim that beta in the capital asset pricing model (CAPM) does not have explanatory power; instead, two other factors, size and the book-to-market ratio, are sufficient to explain security returns.

Finally, a number of studies suggest that there are seasonal effects in security returns. French (1980) examines whether the abnormal returns of individual securities are correlated with the days of the week. Using daily returns of the Standard and Poor's (S&P) composite portfolio, French shows that average returns tend to be lower on Mondays, while the returns on the other days of the week do not show any specific patterns. The lower returns observed on Mondays are referred to as the weekend effect. Instead of lower Monday returns, Chan et al.
(2005) find higher Monday returns for real estate investment trusts (REITs) with higher institutional holdings. In addition to the weekend effect, Keim (1983) investigates the relationship between abnormal monthly returns and the market value of NYSE and American Stock Exchange (AMEX) common stocks. The individual securities are grouped into ten deciles based on their market values. Keim confirms the empirical result in Banz's study that small firms tend to have higher expected returns than large firms. Keim further shows that the size effect tends to be related to another effect, the January effect: the high abnormal returns of small firms are concentrated in the first few days of January. All of the above results show that there is predictability in security prices. Since these empirical results are not consistent with the efficient market hypothesis and the CAPM, they are referred to as anomalies. However, whether they imply an inefficient market remains an open question, as the anomalies can be caused by problems such as the use of incorrect models, data snooping, and so on.

Among the anomalies discussed above, this paper investigates the weekend effect further. The weekend effect serves as potential evidence of a violation of the efficient market hypothesis, as returns are predictable from the days of the week. What is more interesting and puzzling about the weekend effect, however, is that it is not consistent with traditional theory. Traditional finance theory states that returns should be higher when risk is higher. Thus, according to traditional theory, Monday returns should be higher than the returns on any other day of the week. The reason is that Monday returns are calculated over the period from the closing time on Friday to the closing time on Monday; as a result, Monday returns span a three-day period. The longer the time period, the higher the uncertainty involved, which is equivalent to a higher level of risk. However, the weekend effect seems to contradict traditional theory, because the weekend effect states that Monday returns are lower rather than higher, as implied by the risk level.
Many studies have investigated the existing anomalies. Among those, Schwert (2003) shows that the weekend effect exists until 1977 and seems to have disappeared over the sample period from 1978 to 2002. The purposes of this paper are threefold. First, this paper confirms and extends Schwert's results by using data from 1928 to 2010. Second, this paper investigates whether the weekend effect is related to the industry to which a firm belongs, using Standard Industrial Classification (SIC) codes. Third, French (1980) and Schwert (2003) calculate Monday returns based on the closing price on Friday and the closing price on Monday. This paper extends the analysis of the weekend effect by separating the Monday returns as defined above into two parts: the weekend returns and the Monday returns. The weekend returns are defined as the returns based on the closing price on Friday and the opening price on Monday, while the Monday returns are the returns based on the opening price on Monday and the closing price on Monday. As a result, this paper examines whether the weekend effect is actually a weekend effect or a Monday effect. This paper is organized as follows: Section 2 presents the data used in this paper. Section 3 shows the empirical results obtained by confirming and extending Schwert's paper. Section 4 discusses the methodology used to obtain the main empirical results of this paper, such as whether the weekend effect depends on the industry, and whether the weekend effect is in fact a weekend effect or a Monday effect. Section 5 analyzes these main empirical results, and Section 6 concludes.

Data

Daily returns of the NYSE/AMEX/NASDAQ portfolio for the period from January 2, 1928 to December 31, 2010, obtained from the Center for Research in Security Prices (CRSP), are used to confirm and extend Schwert's results in Section 3 of this paper. The data used to generate the main empirical results in Sections 4 and 5 are obtained from the daily returns of the individual stocks in the Daily Stock file of CRSP for the period from January 2, 1928 to December 31, 2010.

Empirical Results by Confirming and Extending Schwert's (2003) Paper

Schwert (2003) uses portfolio data based on the Dow Jones Index and the S&P composite portfolio from 1885 to 2002 and shows that the weekend effect seems to be disappearing in recent years by running the following regression:

R_t = α_0 + α_W Weekend_t + ε_t (1)

R_t refers to the daily return of the above portfolio at time t, and Weekend_t is a dummy variable that takes the value 1 when the daily return at time t spans a weekend and 0 otherwise.
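Regression (1) is a standard dummy-variable OLS regression; a minimal sketch, assuming a pandas series of daily returns indexed by date (Python with statsmodels; the return series here is simulated, not CRSP data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated daily return series over business days (placeholder for CRSP data).
rng = np.random.default_rng(1)
dates = pd.bdate_range("1928-01-02", periods=2_000)
ret = pd.Series(rng.normal(0.0005, 0.01, len(dates)), index=dates)

# Weekend_t = 1 if the return at t spans a weekend, i.e. t is a Monday.
weekend = (ret.index.dayofweek == 0).astype(float)

X = sm.add_constant(weekend)          # columns: [alpha_0, alpha_W]
res = sm.OLS(ret.values, X).fit()
print(res.params, res.tvalues)        # alpha_W < 0 would indicate a weekend effect
```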
The Monday returns in Schwert's paper are defined as the returns of a security calculated based on the closing prices on Fridays and the closing prices on Mondays. As demonstrated in Table 1, Schwert shows that the coefficient α_W takes a negative value for the period from 1885 to 1977. This suggests that security returns are lower on Mondays, which confirms the weekend effect. However, Schwert also shows that α_W is no longer statistically significant for the sample period between 1978 and 2002. As a result, he concludes that the weekend effect does exist, but only in the earlier periods. Using daily returns of the NYSE/AMEX/NASDAQ portfolio from January 2, 1928 to December 31, 2010, this paper attempts to confirm Schwert's results up to the year 2002, as well as to extend the results to determine whether the weekend effect exists during the period from January 1, 2003 to December 31, 2010. Table 1 shows that, based on the entire sample period from 1928 to 2010, the weekend effect exists and is statistically significant. For further analysis, the data are broken down into four sub-periods: i) 1928-1952; ii) 1953-1977; iii) 1978-2002; and iv) 2003-2010. Table 1 shows results similar to those of Schwert and suggests that the weekend effect is significant for the period up to 1977. The results also suggest that the weekend effect is statistically significant for the period between 1978 and 2002, which is inconsistent with Schwert's conclusion. Nevertheless, it is clear that the trend is consistent with his: the value of the coefficient α_W gets closer to 0 in recent years, and the results become less statistically significant as well. In particular, the weekend effect is no longer statistically significant in the period between 2003 and 2010. Table 1 also shows that the results are similar regardless of whether the value-weighted or equal-weighted market portfolio is used. Nevertheless, when the results are obtained using the equal-weighted market portfolio, the weekend effect is more significant. One potential explanation is that the weekend effect is concentrated in small firms. When the market portfolio is equal-weighted, small firms have the same impact as large firms. On the other hand, when the market portfolio is value-weighted, the impact of small firms is lower than that of large firms. If the weekend effect is mostly the result of small firms, then the equal-weighted results would show a more significant weekend effect. Overall, the results in Table 1 confirm the conclusion of French and Schwert in the sense that the weekend effect exists; that is, the returns on Mondays are lower than the returns on any other day of the week. However, the results also show that the weekend effect is most significant only in the early and mid-1900s. The weekend effect seems to be diminishing in the late 1900s, and it is no longer significant in the early 2000s.
Methodology for Main Empirical Results

As discussed above, the weekend effect mostly exists in the early periods only, and it begins to diminish in recent years. This paper extends the analysis of the weekend effect beyond those of French and Schwert in two ways. First, this paper determines whether or not the weekend effect is related to the industry in which a firm operates. Second, French and Schwert define the Monday returns based on the closing price on Fridays and the closing price on Mondays, and such results do not reveal whether the effect is a genuine weekend effect or purely a Monday effect. That is, the anomaly can be caused by a lower return during the period between the closing of the market on Friday and the opening of the market on Monday (a weekend effect), or it can be generated by a lower return from the opening of the market on Monday to the closing of the market on Monday (a Monday effect).

To determine whether there is a relationship between the weekend effect and the industry to which a firm belongs, daily return data for all individual stocks in the Daily Stock file of CRSP for the period between January 2, 1928 and December 31, 2010 are used. As in Table 1, the sample period is broken down into four sub-periods: i) 1928-1952; ii) 1953-1977; iii) 1978-2002; and iv) 2003-2010. The individual securities are grouped by the industries to which they belong, and this classification is done using the two-digit SIC codes listed in Table 2. After the individual securities are grouped into industry groups, regression (1) is run for each security based on its daily returns. The average values of the coefficients and test statistics for each sub-period are calculated to determine whether the weekend effect exists in all industries.

The same data set is used to determine whether the weekend effect as defined in French and Schwert is actually a weekend effect or a Monday effect. To do so, the weekend returns and the Monday returns are calculated for each individual stock in the Daily Stock file of CRSP. The weekend return is defined as the return based on the closing price on Fridays and the opening price on Mondays. The Monday return is calculated based on the opening price on Mondays and the closing price on Mondays, as sketched below.
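A minimal sketch of the return decomposition just described, assuming a price panel `px` with `open` and `close` columns indexed by trading day (illustrative names, not the CRSP field names); like the sketch for regression (1), it ignores Monday holidays, which would make the prior trading day something other than Friday.

```python
import pandas as pd

def decompose_monday_returns(px: pd.DataFrame) -> pd.DataFrame:
    mon = px.index.dayofweek == 0                 # Mondays
    prev_close = px["close"].shift(1)             # prior trading day's close
    out = pd.DataFrame(index=px.index[mon])
    # Weekend return: Friday close -> Monday open.
    out["weekend_ret"] = (px.loc[mon, "open"] / prev_close[mon]) - 1
    # Monday return: Monday open -> Monday close.
    out["monday_ret"] = (px.loc[mon, "close"] / px.loc[mon, "open"]) - 1
    return out
```

The French/Schwert Monday return is (approximately) the compound of these two legs, so separating them shows which leg carries the negative sign.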
Analysis of Main Empirical Results

Daily returns of the market portfolio are used in Section 3 of this paper. Instead of the market portfolio, daily returns of individual stocks are used in this section. Table 3 shows the results of regression (1) when individual security data are used. The average coefficients and test statistics of the regressions for individual stocks are presented for each of the four sub-periods. As the results in Table 3 illustrate, the weekend effect clearly exists in the early periods. Nevertheless, the results in Table 3 differ from those in Table 1 in that the weekend effect is still statistically significant in recent years, from 2003 to 2010. As a result, the existence of the weekend effect seems to depend on whether data on the market portfolio or on individual securities are used. However, the trend presented in Table 3 is quite similar to that of Table 1: even though the weekend effect is still present in recent periods, it is becoming less significant. The value of the coefficient α_W is closer to 0 in the 2003-2010 period relative to the earlier periods. Thus, it can be concluded that the weekend effect is more apparent when daily returns of individual stocks rather than those of the market portfolio are used; however, the trend of the weekend effect becoming less significant over time is consistent regardless of the data used.

Using the same data set and methodology, the weekend effect is analyzed further by first grouping the individual securities based on the industries to which they belong. Table 4 shows the regression results for each sub-period grouped by SIC code. Consistent with the results in Table 3, the weekend effect is statistically significant in the early periods for almost all of the industries. Nevertheless, the results differ in recent years: the weekend effect is statistically significant for fewer than half of the industries from 2003 to 2010. On the other hand, the coefficient α_W is positive for some industries in the recent period, and five of those are statistically significant. This suggests that security returns are actually higher on Mondays for those five industries. Table 4 also shows that a few industries do not experience a weekend effect for most or all of the sample periods. Industries with SIC code 41 (Local and Interurban Passenger Transit) and 57 (Furniture and Home Furnishings Stores) do not experience any weekend effect throughout the entire sample period, while there does not seem to be a weekend effect for industries with SIC code 1 (Agricultural Production-Crops), 15 (General Building Contractors), 46 (Pipelines, Except Natural Gas), and 99 (Nonclassifiable Establishments) for most of the sample period. All of the above provide evidence of a relationship between the weekend effect and the industry in which a firm operates.
The final analysis of this paper determines whether the weekend effect as defined in French and Schwert is actually a weekend effect or a Monday effect as defined earlier in this paper. The weekend returns and the Monday returns are calculated for each individual security in the Daily Stock file of CRSP from January 2, 1928 to December 31, 2010. Table 5 presents the average Monday returns as defined in French and Schwert (Friday closing to Monday closing returns), the average weekend returns as defined in this paper (Friday closing to Monday opening returns), the average Monday returns as defined in this paper (Monday opening to Monday closing returns), and the average trading volume.

Based on the definition of French and Schwert, the Monday returns are negative in the early periods, as illustrated in Table 5. The returns are also negative for Tuesdays, but their magnitude is less negative than that of Mondays. Thus, in addition to the weekend effect, there seems to be a Tuesday effect in the early periods as well. However, further research is needed to confirm the Tuesday effect, as it could be the result of holidays falling on Mondays or of nonsynchronous trading. Nevertheless, consistent with previous results, the negative returns on Mondays and Tuesdays disappear in recent years.

Table 5 also shows whether the weekend effect as defined in French and Schwert is actually a weekend effect or a Monday effect as defined earlier in this paper. As illustrated in Table 5, the weekend returns based on Friday closing prices and Monday opening prices are positive for all sub-periods. On the other hand, the Monday returns based on Monday opening prices and Monday closing prices are negative for all sub-periods, even in the most recent years. This result suggests that it is very important to separate the weekend returns from the Monday returns: it is not the weekend returns that produce the so-called weekend effect; rather, it is the Monday returns that cause the anomaly. As a result, it is in fact a Monday effect rather than a weekend effect. This finding is consistent with traditional theory, namely that weekend returns should be higher because more uncertainty is involved over the weekend.

Finally, it is interesting to investigate whether the anomaly is due to abnormal levels of trading volume on a particular day of the week. The average trading volume, defined as the average number of shares traded in one day, is presented for all days of the week in Table 5, and the results demonstrate that trading volume remains quite consistent across all days of the week. As a result, there does not seem to be any evidence that trading volume is a cause of the anomaly.
Conclusion

The market efficiency literature focuses on whether stock prices are predictable. Many studies argue against market efficiency through different forms of anomalies. Among those, French (1980) finds that security returns on Mondays are lower than those on any other day of the week, and this anomaly is referred to as the weekend effect. This is a puzzling effect because, if the market is efficient, security returns should follow a random walk and so should not be predictable. In addition, weekend returns should be higher than those on any other day of the week because more days are spanned over the weekend. This longer time period should mean a higher level of uncertainty, and so a higher return should be expected, as opposed to the lower returns observed on Mondays. Although Schwert (2003) confirms the existence of the weekend effect in the early periods, he finds that the weekend effect begins to diminish in recent years.

This paper first confirms and extends the results of Schwert using data from 1928 to 2010. In addition, by using daily returns of individual securities rather than those of market portfolios, further analysis of the weekend effect is carried out. This paper demonstrates that the existence of the weekend effect depends on the industries to which firms belong. The weekend effect does exist for most industries, especially in the early periods. However, some industries do not exhibit the weekend effect for all or most of the sample periods. In addition, some industries show a result completely opposite to the weekend effect; that is, returns are higher on Mondays for firms in those industries. Thus, further research should be done to investigate such an industry effect.

Finally, this paper shows that the weekend effect is in fact a Monday effect. The weekend returns are positive, and it is the Monday returns that cause the so-called weekend effect. This finding is important because it is more consistent with traditional theory: since a longer period is spanned over the weekend, a higher return should be expected given the higher level of uncertainty, and this is confirmed by the empirical results of this paper. More importantly, this paper demonstrates that the Monday effect is still present even in recent years. This is a very different result from the one demonstrated by Schwert. As a result, it may not be the case that the weekend effect is diminishing in recent years; rather, such findings may simply be the result of an incorrectly defined weekend effect. If the weekend effect is defined differently, as in this paper, it is clear that the anomaly still exists.

Table 1. Results by confirming and extending Schwert's paper

Note. This table presents the regression results in Schwert's (2003) paper: R_t = α_0 + α_W Weekend_t + ε_t. In addition, this table shows the results obtained by confirming and extending Schwert's paper through the use of daily returns of the NYSE/AMEX/NASDAQ portfolio from January 2, 1928 to December 31, 2010. These results are obtained using value-weighted data and equal-weighted data. T-statistics marked with an * are significant at the 95% confidence level.

Table 2. Two-digit Standard Industrial Classification codes

Note. This table presents the list of two-digit Standard Industrial Classification (SIC) codes.

Table 3.
Regression results of individual stocks

Note. This table presents the average coefficients and test statistics of the regression R_t = α_0 + α_W Weekend_t + ε_t in four sub-periods. This regression uses daily returns of all individual stocks in the Daily Stock file of CRSP from January 2, 1928 to December 31, 2010. T-statistics marked with an * are significant at the 95% confidence level.

Table 4. Regression results of individual securities grouped by SIC codes

Note. This table presents the average coefficients and test statistics of the regression R_t = α_0 + α_W Weekend_t + ε_t in four sub-periods. This regression uses daily returns of all individual stocks in the Daily Stock file of CRSP from January 2, 1928 to December 31, 2010. The results are presented by first grouping the individual stocks by their SIC codes. T-statistics marked with an * are significant at the 95% confidence level.

Table 5. Average weekend returns and Monday returns of individual securities

Note. This table presents the average Monday returns as defined in French (1980) and Schwert (2003), as well as the weekend returns and the Monday returns as defined in this paper, for individual securities from the Daily Stock file of CRSP. The average trading volume is also presented in this table.
2018-12-12T06:32:20.203Z
2015-08-25T00:00:00.000
{ "year": 2015, "sha1": "1e38d8451571716d2dc951da193e81e2dc91aba1", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/ijef/article/download/52606/28158", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "1e38d8451571716d2dc951da193e81e2dc91aba1", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
233783746
pes2o/s2orc
v3-fos-license
Productivity of rabbits when using the drug "KED+LBA" in their diets

The rabbit is an important agricultural species owing to its high productivity, precocity, and relative unpretentiousness in care, as well as the possibility of its use in the fur industry and in the production of high-value dietary meat. Public and private organizations are concerned about the lack of industrial production of rabbit meat in Russia, as demand for this product is growing. The quantity and quality of meat depend largely on a complete diet balanced in nutrients and metabolic energy. In the Primorsky Territory, plant raw materials of the Far Eastern flora are increasingly used in animal husbandry practice: to balance rations lacking certain elements, to improve the palatability of basic feed, to improve digestibility, to purposefully modify metabolism, and to prevent stress conditions in animals. The article discusses the effect of the drug "KED+LBA" on the productivity of young rabbits under the conditions of Primorye.

Introduction

The rabbit is an important agricultural species owing to its high productivity, precocity, relative unpretentiousness in care, and the possibility of its use in the fur industry and in the production of high-value dietary meat. Public and private organizations are concerned about the lack of industrial production of rabbit meat in Russia, as demand for this product is growing. The high fecundity and precocity of rabbits make it possible to obtain meat from them in a short time, with a high protein content, low cholesterol, and good digestibility. The quantity and quality of meat depend largely on a complete diet balanced in nutrients and metabolic energy [1-3]. It is known that the search for local sources of feed and biologically active additives that can improve production efficiency has always been of great importance for the development of rabbit breeding [4-9]. Currently, new preparations from plants are available that are absorbed by the animal body better than synthetically obtained vitamins, hormones, and macro- and microelements [6]. Such biologically active substances include preparations obtained from Amur velvet bast, Chinese lemongrass, and a vitamin concentrate from aspen bark [10-12]. Among these herbal products are KED and LBA (LBA being the inner bark of the Amur velvet). The drug KED is obtained from the husk of pine cones: biologically active substances were isolated from waste pine cones of the Ussuri taiga after the nuts were separated from them. These biologically active substances are conventionally called KED (patent RUS 2138160, 08.06.1998). The KED preparation is a brown powder with the smell of pine cones; its taste is salty-sour with a slight bitterness. The KED preparation contains a wide range of biologically active substances: protein - 16 [13]. Our research in 1998 showed that the inclusion of the drug KED in the diets of minks had a positive effect on increasing their live weight and body length, increasing the yield of young animals, and improving the quality of pelt products. The introduction of the drug KED into the diet of rabbits increased absolute growth by 12%, livestock safety (survival) by 13.3%, slaughter weight yield by 1.9%, and the level of profitability by 20.9% [14].
The use of the inner bark of the Amur cork tree in the feeding of young mink as a biologically active additive to the basic diet, under the conditions of Primorsky Krai, made it possible to increase the relative weight gain of experimental animals by 0.7% in males and 0.3% in females (P>0.001), to increase the level of hemoglobin and the number of red blood cells, and to improve fur quality. A commodity evaluation of pelts showed a positive trend in favor of introducing LBA into the mink diet: the score for pelt quality was 6.1% higher than in the control. The level of profitability in the experimental group was 10.6%, against 4.5% in the control group [15]. In the literature available to us, we did not find data on the use of biologically active substances isolated from the husk of cedar cones (Korean pine) and Amur velvet bast in feeding rabbits, so we decided to conduct research on the joint effect of the KED and LBA preparations on the productivity of rabbits and to identify the optimal dose.

Methods and materials

A scientific and economic experiment was carried out in the Primorsky Territory on fattening young rabbits of the California breed from 45 to 120 days of age. For the research, three groups of rabbits of 15 head each were formed by the method of paired analogs, taking into account origin, live weight, age, and sex. The animals were kept under the same conditions. The first group served as the control, and the second and third were experimental groups. Throughout the experiment, the drug "KED+LBA" was given in 10-day periods separated by intervals of the same length; control rabbits did not receive the drug "KED+LBA". The control group received the basic diet, while in the second group the drug "KED+LBA" was added to the basic diet at the rate of 5 mg per 1 kg of live weight, and in the third group at 10 mg per 1 kg of live weight. Throughout the experiment, the safety (survival) of the young rabbits was recorded. A pathoanatomical autopsy was performed on rabbits that died, and the cause of death was determined. After the rabbits reached 120 days of age, a control slaughter was carried out, with 3 rabbits selected from each group. The slaughter yield was determined by calculation as the ratio of the mass of the carcass together with the internal fat to the pre-slaughter mass, expressed as a percentage (see the sketch after this section). When determining meat qualities, the mass of the carcass with kidneys and internal fat, without the head, skin, and internal organs, was weighed. After slaughter, the internal organs were examined and weighed. The appearance and color of the carcass surface, the integumentary and internal adipose tissue, and the abdominal serous membrane were assessed by examination. After maturation of the meat, varietal cutting of the carcasses was carried out, dividing each carcass into four anatomical parts: scapular-shoulder, cervical-thoracic, lumbosacral, and hip. To determine the amount of muscle, fat, and bone in the carcasses, they were boned, and the meat content ratio was then calculated. The quality of the rabbit meat was evaluated according to GOST 20235.0-74: the appearance and color, the condition of the muscles on incision, consistency, and smell were determined according to the method developed by A. T. Mysik (1986). Based on the research results, the economic efficiency of using the drug "KED+LBA" was calculated.

The purpose of the research is to study the effectiveness of different doses of the drug "KED+LBA" in feeding young rabbits.
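The yield and dosing arithmetic described in the methods reduces to two simple formulas; the sketch below illustrates them with placeholder inputs rather than measured values from the experiment.

```python
# Placeholder arithmetic for the methods above; inputs are illustrative,
# not measured values from the study.

def slaughter_yield_pct(carcass_with_fat_g: float, pre_slaughter_g: float) -> float:
    """Slaughter yield: carcass mass (with internal fat) as % of pre-slaughter mass."""
    return 100.0 * carcass_with_fat_g / pre_slaughter_g

def daily_dose_mg(live_weight_kg: float, dose_mg_per_kg: float) -> float:
    """Dose of the supplement per rabbit, e.g. 5 or 10 mg per kg of live weight."""
    return live_weight_kg * dose_mg_per_kg

# Example with made-up numbers: a 3.5 kg rabbit yielding a 1.88 kg carcass.
print(f"{slaughter_yield_pct(1880, 3500):.1f}%")   # ~53.7%
print(f"{daily_dose_mg(3.5, 5):.1f} mg")           # 17.5 mg at the 5 mg/kg dose
```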
In accordance with this goal, the following tasks were defined:
• to study the effect of the drug "KED+LBA" on changes in body weight;
• to analyze the effect of the drug "KED+LBA" on the safety (survival) of young animals and to conduct pathoanatomical studies;
• to determine the effect of the drug "KED+LBA" on the meat qualities of rabbits;
• to establish economic efficiency.

Results

The studied drug had a positive effect on the growth of live weight of the rabbits. Throughout the experiment, the highest rates were observed in animals of the second experimental group, which received mixed feed with the addition of the drug "KED+LBA" at 5 mg/kg of live weight. By the end of the study, rabbits of the second experimental group had a greater absolute gain in live weight than the control, by 6.1%. We also identified a positive effect of the drug "KED+LBA" on the safety of the rabbits: livestock safety in the control group was 80%, while in the second and third experimental groups it was 93.3% and 86.6%, respectively. At autopsy, the rabbits that died in the control group were found to have lesions of the small intestine: the mucous membrane of the small intestine was reddened, sometimes with desquamation, and the intestinal contents were liquid, sometimes with gas bubbles. In the second experimental group, one animal had to be culled because of a traumatic injury. In the third experimental group, rabbits died from sunstroke. Changes in the live weight of the rabbits are shown in Table 1.

In order to determine the specific action of the plant supplement "KED+LBA" on the meat productivity of rabbits, a control slaughter was carried out at the end of the studies, at 120 days of age; the results are presented in Table 2. Feeding rabbits rations with the addition of the drug "KED+LBA" contributed to an increase in carcass mass in the experimental groups. The largest carcass weight was recorded in rabbits of the second experimental group, 1881.1 g, which is significantly higher than in the control group, by 10.5% (p ≤ 0.001). According to the results of the examination of rabbit carcasses in the control and experimental groups, there were no differences in organoleptic parameters: all carcasses were identical in appearance, color, serous membrane of the abdominal cavity, muscles on incision, consistency, smell, and the transparency and aroma of the broth. The ratio of edible parts of the rabbit carcass is shown in Table 3. The results showed that the highest content of muscle tissue was found in the second experimental group, 79%, i.e., 1.1% higher than in the control group. The meat content coefficient in the second and third experimental groups was higher than in the control group by 14.7% and 3.2%, respectively. Feeding rabbits the drug "KED+LBA" did not have a negative effect on the quality of the rabbit meat. The studies showed that the use of the drug "KED+LBA" in feeding young rabbits is economically profitable: the level of profitability over the period of the scientific and economic experiment was higher in the second and third experimental groups than in the control group, by 18.6% and 10.2%, respectively.
2021-05-07T00:03:58.003Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "81e9dc6f9a9b35f5a2f510b31213e9c894242cfa", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/677/4/042001", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "38f3bd6dcefb94e84d766f2815b857803b38046a", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Physics", "Biology" ] }
269657132
pes2o/s2orc
v3-fos-license
Evaluation of Vertical Discrepancies in Crown Seating Using Different Glass Ionomer Cement Volumes: An In Vitro Study

Background

This in vitro study aimed to assess the vertical disparities in the positioning of complete crown castings when different quantities of cement were used and to determine the optimal amount of cement for cementation while minimizing marginal discrepancies.

Methodology

A total of 60 ideal nickel-chromium (Ni-Cr) crown castings were divided into three groups of experimental glass ionomer cement volumes, with 20 castings in each group. In group I the crown was completely filled with cement, in group II it was half-filled, and in group III cement was brushed onto the internal surface. The crowns were cemented by applying a static load of 5 kg with the cementation apparatus for 10 minutes. The marginal discrepancy between the die and the castings was measured pre-cementation and post-cementation using image analysis software in combination with a stereomicroscope (Motic, USA) at predetermined points marked on the die. Statistical analysis was performed using Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, IBM Corp., Version 16, Armonk, USA) software. A one-way analysis of variance (ANOVA) was used for the intergroup analysis, and a paired sample t-test was used for the intragroup analysis.

Results

Brushing cement onto the internal surface produced the lowest mean post- minus pre-cementation vertical discrepancy (14.92±10.77 μm; P<0.05), compared with the half-filled group (28.42±12.45 μm) and the fully filled group (58.50±20.91 μm).

Conclusion

Cement volume appears to be a key factor in the vertical marginal discrepancy of the crown. Cement brush-applied to the internal surfaces of the crown produced smaller post-cementation vertical discrepancies.

Introduction

An important factor that ensures the long-term success of fixed prosthodontic restorations is marginal integrity. Accurate marginal adaptation with minimal discrepancy in the crown is an important goal in prosthodontics: it improves longevity and reduces the risk of the periodontal disease and caries associated with ill-fitting restorations [1,2]. At the heart of clinical success for fixed partial dentures lies the luting procedure, where the luting agent serves as a barrier against microbial leakage, sealing the interface between the tooth and restoration and holding them together through some form of surface attachment. This attachment bond may be mechanical, chemical, or a combination of both [3]. The cementation procedure, which influences both the occlusal relationship and the marginal fit, underscores the importance of factors that influence crown seating. Methods to facilitate complete seating of crowns involve mainly the preparation or the casting, and modification of the luting procedure by altering the choice of cement, the composition of the cement, the mixing procedure, or the cementation load [4-6].
Factors such as the viscosity of the cement, the morphology of the restoration, venting, seating force, and volume of cement may influence complete seating [7,8]. Some attention has also been given to the amount of cement and how it is applied to achieve better seating. Most researchers agree that a smaller volume of cement results in more complete seating [8]. While some have advocated placing cement only on the preparation margins, others have suggested brushing it over the entire internal surface of the crown [9]. Although glass ionomer cement is a commonly used luting agent, there is a scarcity of studies focusing on the seating accuracy of a full-coverage metal crown luted with glass ionomer cement using different cement volumes. This study aims to examine the relationship between the vertical seating discrepancy of complete crown castings and the volume of glass ionomer cement employed as a luting agent.

Materials and Methods

The study utilized an ivorine (Nissan Dental Products Inc., Japan) right mandibular first molar prepared with diamond rotary cutting instruments for a full-coverage restoration following the standard recommended procedure (Figure 1).

FIGURE 1: Tooth preparation for full coverage crown

Light-viscosity and putty polyvinyl siloxane (Neopure, Orikam) were used to create an impression mold of the dentoform tooth. This impression mold was poured in Type IV stone (Asian Chemicals, India) to obtain sixty dies. Duplicate dies were poured in Type IV stone for working die fabrication. Wax patterns were fabricated on working dies coated with three coats of Pico-Fit die spacer (Renfert, Hilzingen, Germany) of 40 μm thickness, ending 0.5 mm short of the margin. Crowns were cast using the standard procedure, and the internal fit was verified with a fit-checker. Crown volume was calculated by filling the crown with wax of known density and calculating the volume of the wax from its weight and density. All nickel-chromium (Ni-Cr) complete crown castings (Figure 2) were sequentially seated on their respective dies and loaded with a 5 kg weight centered on the crown.

FIGURE 2: Crowns used in the study

The marginal discrepancy between the die and the castings was measured using image analysis software (Motic Images Plus) in combination with a stereomicroscope (Motic, US) at predetermined points marked on the die close to the margins. Type I glass ionomer cement (Gold Label Fuji I, GC Corp.) with a 1.8:1.0 powder-to-liquid ratio was employed for cementation. A standardized cement volume, as per Table 1, was dispensed for each group, and crowns were cemented under a static load of 5 kg with the cementation apparatus for 10 minutes. The post-cementation marginal discrepancies of the crowns were measured using image analysis software in combination with a stereomicroscope at predetermined points marked on the die (Figure 3). Intergroup analysis employed a one-way analysis of variance (ANOVA), while intragroup analysis used a paired sample t-test, contributing to a comprehensive assessment of the study's outcomes; a sketch of this pipeline appears below. A P (probability) value of less than 0.05 was considered significant in the present study.

Results

The statistical analysis (one-way ANOVA) demonstrated significant differences (P<0.0001) between the mean post-cementation marginal discrepancies of the groups. Brushing cement internally (group III) showed smaller discrepancies than the other groups. No significant differences (P<0.6) were found in the mean pre-cementation discrepancies (Table 2).
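The following Python sketch mirrors the analysis pipeline described above (one-way ANOVA across the three cement-volume groups, followed by Tukey's HSD). The group data are simulated from the means and SDs reported in the Results, not the study's raw measurements, and the original analysis was run in SPSS rather than Python.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Simulated post-minus-pre discrepancies (μm), drawn from normal distributions
# parameterized by the reported group means/SDs; 20 crowns per group.
filled = rng.normal(58.50, 20.91, 20)
half = rng.normal(28.42, 12.45, 20)
brushed = rng.normal(14.92, 10.77, 20)

f_stat, p_val = stats.f_oneway(filled, half, brushed)   # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4f}")

values = np.concatenate([filled, half, brushed])
groups = ["filled"] * 20 + ["half-filled"] * 20 + ["brushed"] * 20
print(pairwise_tukeyhsd(values, groups, alpha=0.05))     # pairwise Tukey's HSD
```

The paired pre/post comparison within a group would use `stats.ttest_rel(pre, post)` on matched measurements, which is the intragroup t-test the study describes.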
Multiple comparisons (Tukey's HSD (honestly significant difference)) indicated no significant differences in pre-cementation discrepancies between groups. However, the post-cementation discrepancy of the fully filled group differed significantly from those of the half-filled and brushed-up groups. The half-filled group differed significantly from the fully filled group but not from the brushed-up group, and the brushed-up group differed significantly from the fully filled group but not from the half-filled group. The post- minus pre-cementation vertical discrepancies were statistically significant (P<0.05), with brushing cement onto the internal surface showing the lowest mean value (14.92±10.77 μm) compared with the half-filled (28.42±12.45 μm) and fully filled (58.50±20.91 μm) groups. The comparison of mean marginal discrepancies for the buccal, lingual, mesial, and distal surfaces before and after cementation in each group was statistically significant.

Marginal discrepancies by surface

Tukey's HSD revealed no significant differences among the pre-cementation surfaces in multiple group comparisons, except for the distal surface comparison of groups I and II, which showed a significant difference. Tukey's HSD revealed significant differences among post-cementation buccal surfaces in multiple group comparisons, except for the buccal surface comparison of groups II and III. Tukey's HSD revealed no significant differences among post-cementation lingual surfaces in multiple group comparisons, except for the lingual surface comparison of groups I and III, which showed a significant difference. Tukey's HSD revealed no significant differences among post-cementation mesial surfaces across the groups. Tukey's HSD revealed highly significant differences among post-cementation distal surfaces in multiple comparisons, except for the distal surface comparison of groups II and III, which showed no significant difference (Tables 3-4).

Discussion

Significant differences were observed among the groups after cementation, suggesting that cement volume plays a crucial role in post-cementation marginal discrepancies. However, it is noteworthy that no significant differences were found in the pre-cementation marginal discrepancy values, highlighting the consistency of the baseline conventional fabrication processes across the groups.

The study highlights the impact of cement volume on the marginal discrepancies of the evaluated groups and correlates well with findings from other in vitro studies on crown discrepancies after cementation [8-11]. Notably, the literature lacks studies specifically comparing different glass ionomer cement volumes for the cementation of metal crowns, making this study a valuable addition to the existing knowledge. The standardized tooth preparation with a chamfer finish line and 6° total taper aimed to resemble the clinical situation. However, the 6° taper angle used in this study may not fully replicate clinical conditions, as clinical taper typically varies between 12° and 20°. Despite this limitation, the results are consistent with previous studies using similar taper angles.

The application of three layers of die spacer (40 μm) aimed to provide internal relief while accommodating the cement layer and the irregularities on the tooth and inner crown surfaces [12-16].
Pre-cementation vertical marginal discrepancies fell within the clinically acceptable limits of 100 to 120 μm, with mean values aligning with previous studies [11,17-20]. The choice of glass ionomer cement as the luting agent was based on its favorable clinical performance, with attributes such as compressive strength, fluoride ion release, and a low coefficient of thermal expansion. It has a low film thickness and maintains a relatively constant viscosity for a short time after mixing, which results in improved seating of cast restorations compared with zinc phosphate cement [21].

This study used a standardized load of 5 kg applied from above with the cementation apparatus for 10 minutes. To ensure complete setting of the luting agent and prevent rebound, the glass ionomer cement setting time was exceeded by five minutes and 30 seconds. Rebound refers to the possibility of the luting agent, such as glass ionomer cement, partially regaining its original shape or position after compression during the setting process; this can occur if the applied load or pressure is removed before setting is complete. By extending the setting time by five minutes and 30 seconds, the study aimed to ensure complete setting of the luting agent, thereby reducing the risk of rebound and achieving optimal cementation results. The selected load aligns with prior research by Jorgensen and Petersen (1963), which indicated improved crown seating with increasing force up to 5 kg (49 N), beyond which the gains were marginal. The post-cementation marginal discrepancies increased after cementation in all groups, with significant differences observed. The study emphasizes the complexity of seating crowns, considering factors such as intra-coronal pressures and the hydrodynamic situation during cementation. The research aligns with the consensus that a marginal gap between 100 and 120 μm is clinically acceptable, and the post-cementation discrepancies within this range for the brush-up and half-filled groups are noteworthy [11,17-20].

In this in vitro evaluation, differences were observed between the buccal, lingual, mesial, and distal surfaces; although this could be due to the more complex geometric form of the restoration resulting in fit problems, previous studies [22,23] have reported similar results for individual ceramic crowns. Some studies, such as those by Kokubo et al. [24] and Holden et al. [25], evaluated marginal discrepancies without taking the cementation process into consideration. Evaluating discrepancies without luting is not reflective of clinical reality, because the cement and the cementation process play a relevant role in the final discrepancy achieved. In the current study, although space was created to allow the cement to flow between the tooth and the internal surface, the metal crown and cement combinations still showed an increased discrepancy after cementation.
Our study revealed that crowns with larger cement volumes exhibited higher post- minus pre-cementation marginal discrepancies. This could be attributed to several factors: entrapment of cement on the occlusal surface of the abutment, sealing the natural escape route during crown seating; hydraulic pressure during cementation leading to filtration along the narrow spaces between crown and abutment, hindering complete seating by slowing cement escape; and the increasing viscosity of the cement over time, as greater volumes took longer to escape, impeding complete seating. The initial rapid seating of crowns may be due to the initially low viscosity of the luting agent, which increased over time along with hydraulic pressure buildup, limiting cement escape. Completely filled crowns (group I) experienced the greatest seating discrepancy (58.50±20.91 μm), as the large initial cement volume hindered further flow before viscosity and pressure increased. Tilt may also have contributed to seating discrepancies, particularly in group I, where crowns were difficult to align on the abutment. In group III, where a brush-up cement volume was used, better initial alignment on the abutment was observed, resulting in minimal seating discrepancies (14.92±10.77 μm). Despite a slight increase in marginal discrepancy post-cementation, the predetermined internal space of 40 μm for cement appeared sufficient to accommodate the film thickness, leading to no or minimal deterioration in marginal fit. Compared with a previous study by Tan and Ibbetson [8], our study showed lower post- and pre-cementation discrepancies (14.9 µm-58.5 µm), likely due to differences in luting agents and measurement methods.

The limitations of the research, particularly its in vitro nature, are acknowledged. Simulating the complex clinical oral environment, with variations in tooth preparations, finish line designs, and intraoral conditions, remains a challenge. The focus on vertical marginal gaps and the absence of quantification of horizontal relationships are identified as limitations, emphasizing the potential influence of over- or under-contouring on plaque accumulation and gingival irritation.

Conclusions

In conclusion, this study underscores the critical role of glass ionomer cement volume in minimizing vertical discrepancies during crown cementation. The results emphasize the need for careful consideration of cement application techniques to achieve optimal marginal adaptation, thereby enhancing the longevity and stability of dental restorations. Additionally, the findings highlight the intricacies of the cementation process and the importance of meticulous attention to detail in clinical practice. Further research in this area could explore additional factors influencing crown cementation outcomes, ultimately contributing to advancements in dental materials and techniques.

TABLE 2: Vertical marginal discrepancy mean values and SDs for each experimental condition (μm). * indicates statistical significance.
2024-05-11T15:10:55.755Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "95865f20de2aa576514e5bf04268694534d6fe0e", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/247657/20240509-1106-1fscfno.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2fd4d113aa4fa5b569843fe52100754f75a4307f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261661287
pes2o/s2orc
v3-fos-license
Ctrl-C: a cross-sectional study of the electronic health record usage patterns of US oncology clinicians

Abstract

Despite some positive impact, the use of electronic health records (EHRs) has been associated with negative effects, such as emotional exhaustion. We sought to compare EHR use patterns for oncology vs nononcology medical specialists. In this cross-sectional study, we employed EHR usage data for 349 ambulatory health-care systems nationwide collected from the vendor Epic from January to August 2019. We compared note composition, message volume, and time in the EHR system for oncology vs nononcology clinicians. Compared with nononcology medical specialists, oncologists had a statistically significantly greater percentage of notes derived from Copy and Paste functions but less SmartPhrase use. They received more total EHR messages per day than other medical specialists, with a higher proportion of results and system-generated messages. Our results point to priorities for enhancing EHR systems to meet the needs of oncology clinicians, particularly as related to facilitating the complex documentation, results, and therapy involved in oncology care.

Despite evidence of positive effects on quality and safety (1), use of electronic health records (EHRs) is associated with negative effects, such as emotional exhaustion, a component of burnout (2). Previous studies have suggested opportunities for EHR design to better meet the needs of oncologists (3). Although evidence exists of differences in EHR use across specialties (4) and among primary care specialties (5), patterns of EHR use among oncologists vs other medical specialists are not well understood. We sought to characterize these differences using EHR use data from ambulatory health systems across the United States in order to improve design and reduce EHR burden.

We conducted a cross-sectional study of EHR usage data for 349 ambulatory health-care systems collected from the vendor Epic from January to August 2019. Data were aggregated at the specialty level within each health-care system. Our sample included all physicians and advanced practice professionals with scheduled appointments. Clinicians subcategorized as oncologists were compared with nononcology medical specialists, as previously defined (4). Nononcology medical specialties included cardiology, general endocrinology, allergy/immunology, gastroenterology/hepatology, geriatrics, hematology, infectious disease, nephrology, orthopedics, pulmonology, rheumatology, and reproductive endocrinology. Health-care systems that did not have oncology clinicians were excluded. EHR use was measured with the Epic Signal metadata extraction tool, which tracks all active EHR interactions. Nonactivity periods longer than 5 seconds, nonclinical tasks (administration or research), and nonambulatory patient visits were excluded (4).

Descriptive statistics with 2-tailed t tests and unequal variances were used to compare note composition (measured as the proportion of note characters across note-writing modalities), message volume, and EHR time breakdown between oncology and other medical specialty clinicians. We used ordinary least squares regression to examine the relationship between being an oncology clinician and EHR time, adjusting for organizational characteristics and mean daily patient volume, as sketched below. Analyses were conducted using Stata statistical software, Version 17.0 (StataCorp, College Station, TX), with a 2-sided α = .05.
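As a rough illustration of the comparisons just described, the following Python sketch performs Welch's unequal-variance t test and the adjusted OLS model; the column names (`is_oncology`, `ehr_min_per_appt`, `daily_patients`, `org_type`) are hypothetical stand-ins for the study's variables, and the original analysis was run in Stata 17, not Python.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def welch_compare(df: pd.DataFrame, metric: str):
    """Two-tailed t test with unequal variances (Welch) for oncology vs others."""
    onc = df.loc[df["is_oncology"] == 1, metric]
    other = df.loc[df["is_oncology"] == 0, metric]
    return stats.ttest_ind(onc, other, equal_var=False)

def adjusted_ehr_time(df: pd.DataFrame):
    """OLS of EHR minutes per appointment on an oncology indicator,
    adjusting for organizational characteristics and daily patient volume."""
    return smf.ols(
        "ehr_min_per_appt ~ is_oncology + daily_patients + C(org_type)",
        data=df,
    ).fit()

# Usage: adjusted_ehr_time(df).params["is_oncology"] plays the role of the
# adjusted coefficient reported below (e.g., about -3.10 minutes).
```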
Overall, 318 of 349 health-care systems in the sample included oncology clinicians and were included in the study. Health-care systems in our sample had a mean (SD) of 1501 (1604) physicians and a mean (SD) of 1,195,466 (1,309,959) annual outpatient visits, demonstrating a skew toward large health-care systems. Mean daily patient volume was 9.4 vs 9.6 patient encounters per day for oncologists vs other medical specialists.

In aggregate, oncologists spent less time per appointment in the EHR system across Clinical Review, Notes, Orders, and InBasket activities than other medical specialists (Figure 2). In adjusted analyses, these differences translated to oncologists spending less time in the EHR system per appointment than other medical specialists (β = -3.10 minutes, 95% confidence interval = -3.69 to -2.52, P < .001).

In this cross-sectional, nationwide study, we demonstrate differences in EHR use patterns for oncologists compared with other medical specialists. The findings suggest priorities for enhancing EHR design to meet the needs of oncologists.

Note composition represents a significant portion of clinical burden, and our work highlights oncologists' reduced use of efficiency tools such as SmartPhrases and greater use of Copy/Paste functions for writing notes. Copy/Paste has, in some studies, been associated with note bloat and inaccurate documentation (6,7). Alternatively, it is possible that because oncologists see patients whose assessment and plan stay relatively stable over time, they have less need for SmartPhrases to facilitate the generation of new documentation, and structured oncology data elements may better meet their needs.

In addition, oncologists appear to carry a greater burden of messages received per day, with higher volumes of results-related and system-generated messages (Figure 1, B). The need for multidisciplinary, complex care for oncology patients may drive such differences, with oncologists necessarily receiving a variety of alerts for their patients. Some system-generated messages can be clinically meaningful (e.g., reminders that completion of labs or imaging orders is overdue), but system-generated messages have also been associated with a higher probability of burnout and with physicians' intention to reduce clinical work time (2,8). Given that the number of patient-initiated messages, which take significant clinician time and effort, has increased dramatically following the COVID-19 pandemic (9), reducing the burden associated with results and system-generated messages may be particularly beneficial.
Our findings have important implications for practicing oncologists and health-system leaders seeking to address the clinician burnout crisis. Although recent efforts by policymakers and professional societies such as the American Medical Association have focused primarily on reducing EHR documentation burden (10), our findings suggest that oncology-specific efforts may focus on the EHR inbox. Specific interventions, such as enhancing team-based workflows, engaging members of the care team in triaging and responding to messages, streamlining system-generated messages, and providing dedicated time during clinic hours for inbox work, may be effective methods of reducing oncologist EHR burden (11,12). Improving data capture in structured locations may also enable oncologists' use of efficiency tools to improve the accuracy of documentation and physician well-being. Future research in oncology and elsewhere should carefully consider how best to target EHR burden reduction interventions to the specific clinician population under study.

Our study is strengthened by the availability of EHR use data from oncologists across the United States. Given the data available through Epic Signal at a national level, however, we were unable to segment oncologists as surgical, medical, or radiation oncologists.

Ultimately, differences in EHR use by oncologists compared with other medical specialists point to key differences in documentation and messaging that reflect the complex, multidisciplinary care in oncology. These differences suggest potential for further EHR design and workflow optimization for specialty care and highlight the need for further investigation into how documentation and messaging can be optimized to meet the needs of oncology clinicians, potentially by blending observational and qualitative analyses with EHR use data to inform oncology-focused EHR system optimization.

Figure 1. Documentation and InBasket messaging between oncologists and other medical specialists. A) Note composition, by source; B) messages received per day, by source.

Figure 2. Time distribution in the EHR. A) Time in the EHR system per appointment; B) total EHR time per appointment, by system characteristic. AMC = academic medical center; EHR = electronic health record.
2023-09-11T06:17:43.039Z
2023-09-09T00:00:00.000
{ "year": 2023, "sha1": "aeeab91226b5e5cb0151349c8c743636864f4601", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "270f22e2f1a85bc0bb8d974690a47550ea63cf3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232136303
pes2o/s2orc
v3-fos-license
Regional and socioeconomic predictors of perceived ability to access coronavirus testing in the United States: results from a nationwide online COVID-19 survey

Purpose

Access to COVID-19 testing remained a salient issue during the early months of the pandemic; therefore, this study aimed to identify 1) regional and 2) socioeconomic predictors of perceived ability to access Coronavirus testing.

Methods

An online survey using social media-based advertising was conducted among U.S. adults in April 2020. Participants were asked whether they thought they could acquire a COVID-19 test, along with basic demographic, socioeconomic, and geographic information.

Results

A total of 6,378 participants provided data on perceived access to COVID-19 testing. In adjusted analyses, we found higher income and possession of health insurance to be associated with perceived ability to access Coronavirus testing. Geographically, perceived access was highest (68%) in the East South Central division and lowest (39%) in the West North Central division. Disparities in health insurance coverage did not directly correspond to disparities in perceived access to COVID-19 testing.

Conclusions

Sex, geographic location, income, and insurance status were associated with perceived access to COVID-19 testing; interventions aimed at improving either access or awareness of measures taken to improve access are warranted. These findings from the pandemic's early months shed light on the importance of disaggregating perceived and true access to screening during such crises.

Introduction

The COVID-19 pandemic has evolved into one of the most challenging public health crises in modern history. As of February 2021, the United States (U.S.) reported the highest number of confirmed cases and deaths worldwide [1]. The U.S. has made significant efforts to enhance testing capacity to promptly detect, treat, and isolate cases, initiate contact tracing protocols to test contacts for infection, and track the spread of the virus to determine the scale of the pandemic [2,3]. As of February 7, 2021, over 305 million COVID-19 tests were performed in the U.S. (9% overall positive rate), with the states of Rhode Island, Massachusetts, and Vermont having the highest number of daily tests per million nationwide [4,5]. Given socioeconomic disparities in the risk and outcome of COVID-19, particularly the disproportionate toll of the disease in communities of color in the U.S. [6,7], lack of equitable and universal access to COVID-19 testing has emerged as an area of concern for public health authorities and activists following the initial shortage of diagnostic tests [8]. Although several diagnostic and antibody tests have emerged throughout the pandemic, and faster and less costly diagnostic tests are continuously developed and deployed [9], resource-intensive RT-PCR molecular tests contributed to the delays in the early phase of the pandemic [10,11]. Likewise, during the early months of the pandemic there were significant quality assessment and control issues with new diagnostic tests as they were released by the Centers for Disease Control and Prevention (CDC) [12]. Efforts have since been made to address these gaps in test availability [13]. The U.S. government and health insurance companies have attempted to expand access to COVID-19 testing through emergency measures such as making testing free of charge [14], and some state governments have enacted action plans to increase testing capacity [15,16].
The federal government also intervened in April 2020 by enacting the Families First Coronavirus Response Act (FFCRA) and the Coronavirus Aid, Relief, and Economic Security (CARES) Act to require that COVID-19 testing be covered by private health insurers [17]. However, concerns remain that testing may still impose a significant financial burden on uninsured populations or those with certain insurance plans because of gaps in protection [18]. A recent analysis of COVID-19 testing locations across the U.S. revealed access inequities among minorities, rural communities, and those with no insurance or low income [19]. Furthermore, geographic differences in the extent and timing of the spread of COVID-19 may have affected populations' awareness of the disease and played a role in local governments' prioritization of efforts to enhance testing capacity and accessibility [1]. Past research suggests that those with low socioeconomic status, no or limited health insurance, and rural residence face greater access barriers to timely, comprehensive, and quality health-care services [20]. To promote COVID-19 testing, interventions have been implemented to improve access (e.g., government policies to enhance testing capacity, specifically in low-resource communities) [15] and perceived access (e.g., insurer communication campaigns on new policies and protections regarding testing) [14]. Several factors may influence an individual's decision to get tested, including the perceived need for being tested, perceived safety of getting tested, perceived severity of COVID-19, and possible consequences of a positive test result. To that end, this study aims to assess individuals' perceptions of their ability to access COVID-19 testing during the first peak of the pandemic in the U.S. The importance of the perceived ability to access COVID-19 testing, which makes it distinct from the actual availability of testing infrastructure and opportunities, lies in the fact that individuals' socioeconomic circumstances and geographic location influence not only their actual capacity to access testing, but also their perceived ability to proactively seek testing services (Fig. 1) [21]. For example, while policies to improve access to testing for low-income or uninsured individuals are implemented (such as through free testing), gaps in awareness of these policies may result in the cost of testing remaining a perceived barrier rather than an actual barrier in these target populations. Indeed, past research on HIV prevention has shown perceptions and awareness of testing services to be an area of concern in addressing disparities [22,23]. Using data from a nationwide survey of U.S. adults conducted in April 2020, we examine individual-level factors that may be associated with perceived access to COVID-19 testing. While both perceived and actual ability to access COVID-19 testing have no doubt changed in more recent months, these data from the early months of the pandemic correspond to a time period when various nationwide and state-level policies aimed at improving access to COVID-19 testing were being formulated and implemented [14-17]. In doing so, this study sheds light on whether disparities in perceptions of access to testing services during an infectious disease pandemic were aligned with efforts seeking to improve accessibility of these services, while highlighting the importance of other supportive measures, such as those aimed at improving awareness of such policies or efforts.
Participant recruitment

The full study methodology and recruitment strategy have been described elsewhere [24]. Briefly, social media users (primarily Facebook, Instagram, and Messenger) aged ≥18 years and residing in the U.S. (eligibility conditions) were recruited using an advertisement campaign on the aforementioned social media platforms with a link to an online Qualtrics (Provo, UT) survey; eligibility was assessed through a set of screening questions at the start of the survey. Facebook (and affiliated platforms) was chosen due to its extensive past use in health research as a low-cost and efficient recruitment tool (particularly in the context of data collection in rapidly evolving health crises, such as COVID-19) [24,25]. Although not a nationally representative sample, recruited participants were a demographically and regionally diverse national sample across multiple key indicators [24]. Recruitment occurred from April 16-21, 2020. The advertisement campaign was designed to target adults of any sex residing in the U.S. Eligibility was assessed using two screening questions. Those who were ineligible or who completed the survey were provided a list of COVID-19 resources from the World Health Organization (WHO) and the CDC.

Measurement of variables

The development of the survey questions was informed by the WHO tool for behavioral insights on COVID-19 [26] and previous health belief model-based questionnaires on infectious disease outbreaks [27-30]. Perceived access to COVID-19 testing was captured by a single binary (Yes or No) question: "Do you think you would be able to get a test for Coronavirus if you thought you needed one?" Health insurance status was ascertained using a single binary question, and those who reported having health insurance were asked to specify the primary source of their insurance [31] (including plans through an employer, spouse, or parent, Medicare, self-purchased or other, and Medicaid or state-Medicaid). Demographic and socioeconomic variables included sex, age, race, educational attainment, employment status, marital status, living with children <18 years of age, U.S. state of residence, urban/suburban/rural residence, and annual household income. Lost income status was assessed by a single question (Yes, No, or Not Applicable): "Have you lost income from a job or business because of the Coronavirus?" Geographic region and division of residence were based on the U.S. Census region (groups of states based on their geographic location, including the Northeast, South, Midwest, and West) and division (a smaller grouping of states within each region based on their disaggregated geographic locations, with each region having 2-3 divisions) definitions, using the U.S. state of residence information provided by participants. All variables were ascertained by self-report. The analysis was conducted by geographic division due to the small sample sizes attained from individual states [32].

Statistical analysis

Participants who answered the question on perceived access to COVID-19 testing were included in the final sample; those who responded "prefer not to say" to any of the demographic questions were excluded from the analysis. Descriptive statistics of participant characteristics, stratified by perceived access to COVID-19 testing, were calculated.
Initially, bivariate contingency table analyses assessed socioeconomic and geographic variables that were statistically different between those who answered "yes" and "no" to the question on perceived access to COVID-19 testing. Second, multivariable logistic regression analysis estimated the odds of perceived access to COVID-19 testing, adjusted for the significant socioeconomic and geographic variables identified in the bivariate analysis. Although the initial model focused on self-reported health insurance status, a separate multivariable logistic regression analysis was also conducted to further differentiate health insurance coverage and determine the odds of perceived access by source of primary health insurance, adjusted for significant socioeconomic and geographic variables. Selection of the socioeconomic and geographic variables adjusted for in the multivariable models was informed by the bivariate analyses and past literature [18, 20]. Bivariate analyses of socio-demographic and regional differences by source of health insurance guided the selection of covariates in the more granular regression model. Participants with missing data for variables included in the models were excluded from the analysis. All analyses were conducted using R (version 4.0.0). Finally, the geographic differences in perceived access to COVID-19 testing and health insurance status across U.S. regional divisions [32] were displayed using Tableau (version 2020.2.0). Participant characteristics A total of 6676 responses were received, of which 6518 were eligible to complete the survey. Of those, 6378 (97.9%) provided data on perceived access to COVID-19 testing (final sample). Due to the small sample size of participants who identified as "Other" for sex (n = 14), this category was not included in the analysis. Respondents' race was re-categorized and converted into a binary variable ("White, Non-Hispanic"/"Non-White") due to the small number of participants not identifying as White, Non-Hispanic, who were collectively 7.8% of the sample (including 167 (2.6%) Hispanic/Latinx; 50 (0.8%) Black, Non-Hispanic; 48 (0.8%) Asian/Pacific Islander; 44 (0.7%) Native American or American Indian; and 187 (2.9%) interracial, mixed race, or other race participants). Participants were mostly female (57.6%), Non-Hispanic white (92.2%), married or cohabiting (70.8%), employed (56.2%), lived in suburban residences (53.3%), were not living with children (74.5%), and held a bachelor's degree or higher (55.5%) (Supplemental File 1). Almost all participants (94.4%) reported having health insurance. Sociodemographic variables observed to be significant in the bivariate analysis by perceived ability to access COVID-19 testing included sex, age, employment status, marital status, income, health insurance status, and lost income due to COVID-19. Although past evidence has shown significant associations between race and ethnicity and COVID-19 testing [33-35], the lack of association observed for the binary race variable constructed in bivariate analyses (P = .075) and our subsequent inability to appropriately adjust for the variable in multivariate models were likely due to the small sample size of disaggregated racial and ethnic sub-populations (which constrained our ability to identify any disparities across this diverse group of Non-White participants). Although differences in both the U.S.
census region and division were found to be significant in the bivariate analysis, the division was used for subsequent analyses since it provided more specific data on geographic location. Socioeconomic differences in perceived access to COVID-19 testing Slightly over half of the participants (51.7%) believed that they could access COVID-19 testing if they needed to. The proportion of those believing they could access COVID-19 testing varied across socioeconomic status and was notably low among those aged 18-39 years (47.8%), students and unpaid workers (42.5%), those with an annual household income of less than $30,000 (39.8%), and those without health insurance (39.8%) (Table 1). The adjusted odds of perceived access to COVID-19 testing differed across multiple socioeconomic indicators (Table 1). Compared to females, males were more likely to perceive that they could access COVID-19 testing (adjusted odds ratio [AOR]: 1.51, 95% confidence interval [CI]: 1.32-1.73). We observed an income gradient, with higher income being associated with higher odds of perceived access to COVID-19 testing. Geographic differences in perceived access to COVID-19 testing The median number of responses per state was 100 (interquartile range [IQR]: 32.2-121.3), with the greatest number of responses from New York (n = 495) and the lowest number from the District of Columbia (n = 3). Perceived access to COVID-19 testing varied markedly across U.S. census regions and divisions, with the highest perceived access in the East South Central division (68.0%) and the lowest in the West North Central division (38.7%); adjusted odds of perceived access to COVID-19 testing were significantly lower across all divisions compared to East South Central, except for West South Central (Table 1). Figure 2 displays the geographic variation in perceived access to COVID-19 testing and health insurance status across U.S. census divisions (see Supplemental Table 2 for tabulated data). Although health insurance coverage in the study population was relatively high, it varied by region, with the highest coverage in the Middle Atlantic division (97.2%) and the lowest in the Mountain division (89.7%). However, it must be noted that while geographic variation at the regional and divisional levels was observed for both perceived ability to access COVID-19 testing and health insurance, notable state-by-state heterogeneity was also observed within each region (Supplemental Table 2), albeit based on much smaller sample sizes. Health insurance and perceived access to COVID-19 testing Overall, those with health insurance, relative to those with no health insurance, had higher odds of perceived access to COVID-19 testing (AOR: 1.73, 95% CI: 1.29-2.35), when adjusted for covariates. Among insured participants, the most common source of insurance was through an employer (41.9%), followed by Medicare (22.3%) and through a spouse's employer (18.0%). Table 2 presents the adjusted odds of perceived access by health insurance type. Except for Medicaid, respondents with any source of insurance had higher odds of perceived access to COVID-19 testing than those without insurance. Compared to those without insurance, the adjusted odds of perceived access were highest among those with coverage through a parent's insurance plan (AOR: 1.68, 95% CI: 1.26-2.24).
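As a minimal sketch of how adjusted odds ratios like those reported above can be obtained, the following assumes a statsmodels logistic regression with hypothetical column names; it is an illustration under stated assumptions, not the authors' actual R code.

```python
# Minimal sketch: multivariable logistic regression yielding adjusted
# odds ratios (AOR) and 95% CIs. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Outcome coded 1 = perceives access to testing, 0 = does not.
model = smf.logit(
    "perceived_access ~ C(sex) + C(age_group) + C(income) + "
    "C(insurance_status) + C(census_division)",
    data=df,
).fit()

# Exponentiating coefficients and CI bounds gives adjusted odds ratios
# with 95% confidence intervals.
aor = np.exp(model.params).rename("AOR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1))
```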
Discussion Overall, disparities were observed in perceived access to COVID-19 testing according to health insurance status (including different types of health insurance), income, and geographic region. Specifically, participants with any source of insurance, except for Medicaid, were more likely than uninsured participants to perceive that they would be able to access COVID-19 testing. Although insurance coverage was high in the study population, there were considerable geographic differences in perceived access to COVID-19 testing across U.S. geographic regions and divisions. These findings highlight that, even as efforts are ramped up to promote COVID-19 testing, there is a need to carefully consider and appropriately address both socioeconomic and geographic disparities in perceived access to testing. Health insurance and COVID-19 testing Despite efforts on the part of the federal government and health insurance providers to expand COVID-19 testing to insured and uninsured individuals alike [14, 17], the findings show that perceived access to COVID-19 testing was still determined by health insurance coverage. This suggests a need for stronger, population-wide communication of expanded coverage for COVID-19 testing, particularly targeting uninsured populations. Furthermore, people may still be concerned about incurring costs for some types of testing [36], may not be aware of where to get a test, may still be subject to appointment and insurance requirements, or may not be willing to wait in long lines to get tested due to safety concerns. This perceived inability to get a COVID-19 test may, in part, be explained by concerns about the protection gaps remaining after expansion efforts, or by the emerging evidence of other sociodemographic and geographic barriers to COVID-19 testing [19]. Taken together, further research is needed to qualitatively assess the potential reasons why COVID-19 testing is perceived as inaccessible by many Americans. A key finding was the lack of association observed between Medicaid, the government-sponsored health insurance for low-income, vulnerable populations [37], and perceived access to COVID-19 testing. While this may suggest that Medicaid is not providing the same increase in perceived access as other forms of health insurance, it must be noted that the study sample was composed of largely high-income individuals, with those relying on Medicaid comprising only 4.5% of the study population (n = 289). While the percentage of participants with any form of health insurance (94.4%) was slightly higher than the 2019 U.S. average of 92.0% [38], this high health insurance coverage in the study sample precluded an analysis of socioeconomic disparities by insurance status. Therefore, further large-scale observational research among low-income or socio-economically vulnerable populations is needed to corroborate our overall findings, and the specific finding that Medicaid insurance is not associated with actual and perceived access to COVID-19 testing.
Socioeconomic factors and COVID-19 testing The study did not find an association between employment status and perceived access to COVID-19 testing; however, the results indicated that income status was significantly associated with perceived access, corroborating reports of low-income neighborhoods experiencing a greater perceived inability to access COVID-19 testing [39]. Indeed, these findings support efforts currently underway to improve access to testing in low-income and underserved communities, as access to health care services is one of the important drivers of health inequalities [40]. One unexpected finding was that men were significantly more likely than women to express a perceived ability to access COVID-19 testing, despite gender-based bivariate analyses showing men to also be significantly less likely to have health insurance, a disparity noted in previous studies [41]. However, what may explain these findings is that men had significantly (P < .001) higher income than women; 36.7% of men had an annual household income of more than $100,000, versus only 29.9% of women. Although income was controlled for in the analysis, given that a substantial proportion of income data was also missing (31.8%), further large-scale analyses among diverse populations may shed further light on whether these sex disparities (or income disparities) in perceived access to COVID-19 testing are meaningful for public health policy considerations. Geographic disparities and COVID-19 testing Disparities in perceived access to COVID-19 testing were observed across the country. Importantly, disparities in health insurance coverage did not directly correspond to disparities in perceived access to COVID-19 testing. For instance, while the West North Central division had the lowest level of perceived access to COVID-19 testing among the nine U.S. divisions, it had the fourth-highest proportion of insured individuals in the study sample. Likewise, while the Mountain division had the lowest proportion of insured individuals, it had the fourth-highest level of perceived access to COVID-19 testing. These findings emphasize that regional disparities in health insurance coverage alone may not explain disparities in perceived access to COVID-19 testing, and that factors related to regional- and division-level testing disparities, such as the availability and accessibility of testing sites, and other socioeconomic or geographic disparities [19], should be considered in efforts to enhance access to testing. However, these preliminary findings at the aggregate geographic levels of regions and divisions may inform large-scale and systematic surveillance initiatives to understand state-level disparities in COVID-19 testing (both during the early months of the pandemic and now) and provide guidance to state-level policy initiatives. Likewise, while populations living in rural areas have low access to health services in general [42], no significant differences in perceived access to COVID-19 testing were observed by urban/rural status. Future studies using nationally representative data are needed to provide more detailed insights into the relationship between type of residence and perceived access to COVID-19 testing.
Strengths and limitations There were several strengths of this study, including 1) reaching a large, geographically diverse sample in a short time frame during the first peak of the COVID-19 pandemic through social-media advertisement-based recruitment methods [24]; 2) obtaining a large sample size among some sub-populations particularly vulnerable to COVID-19, such as older adults; and 3) obtaining a diverse sample of the types of health insurance possessed by participants, allowing for disaggregated analysis of the effect of insurance type on the outcome variable. However, the study was limited by the non-probability convenience sampling from Facebook and affiliated platform users. Although 70% of Americans use Facebook, certain demographic groups may be underrepresented (e.g., racial and ethnic minorities), which limits the generalizability of the findings [43]. While efforts were made to enhance the sampling of racial and ethnic minorities during recruitment [24] through supplemental social media advertisements specifically targeting African Americans, Hispanics, and Asian Americans, the racial and ethnic diversity of the sample did not improve in either round of survey implementation. In fact, studies that have used Facebook and other social media platforms for recruitment have reported similar problems [25]. Given the significant structural barriers experienced by such minority populations in the U.S. in access to COVID-19 testing [33-35], there is a clear need for further in-depth research to build upon these preliminary findings and identify key modifiable drivers of perceived access to COVID-19 testing to reduce disparities. Moreover, although the associations between geographic divisional differences and perceived access to COVID-19 testing and health insurance coverage were analyzed, in reality, any geographic differences in policy actions relevant to COVID-19 testing occur at a state level (rather than at a divisional level). We were unable to conduct a comprehensive state-based geographic analysis due to some small state-level sample sizes. Future scaled-up, nationally representative survey research is needed to build on these preliminary findings on geographic disparities. Finally, given that there have been continued efforts to enhance testing in the weeks and months since the survey data were collected in late April 2020 [15], changes in actual and perceived access to COVID-19 testing are likely to have occurred. To address this, the survey used in this study will be adapted and administered periodically throughout the COVID-19 crisis. Nonetheless, these findings have shed light on socioeconomic and geographic disparities in access to testing during the early phase of a major health crisis and can inform areas of early policy action for future public health crises. Conclusions The ongoing COVID-19 pandemic is one of the most significant health crises faced by the U.S. in modern history. The need to expand access to COVID-19 testing is key to assessing the extent and scale of the pandemic and to developing interventions to contain and prevent onward spread. Although some efforts had been made to enhance access to tests during the early months of the pandemic, our findings highlight that many Americans perceived difficulty in accessing COVID-19 testing.
Likewise, it is important to consider that the observed perceived inability may also be attributable to a lingering sense of the test shortages observed during the early months of the pandemic in the U.S.; indeed, there may be a notable delay between actions taken to enhance COVID-19 testing access and the awareness or perception of enhanced access, as reflected in the linkages in Figure 1. These findings also highlight the need for mixed-method research approaches to qualitatively assess the reasons behind the perceived inability to access COVID-19 testing as the pandemic has progressed, and the specific concerns individuals may have regarding access to testing (e.g., awareness of testing locations, costs of tests). Although COVID-19 testing capacity and access have markedly improved since the early days of the pandemic, our data provide a snapshot of disparities in perceived access at a time of greater uncertainty, as the virus was beginning to spread across the U.S., and highlight some of the key socioeconomic and geographic factors that may need to be considered concerning access in future infectious disease crises.
Impact of Bisphenol A (BPA) and Free Fatty Acids (FFA) on Th2 Cytokine Secretion from INS-1 Cells Bisphenol A (BPA) is used in huge amounts for many plastic products and is a hormone (estrogen) disrupting agent. BPA as well as FFAs may be deleterious for the immune system. The aim was to identify Th2 cytokines and some of their signal transduction mechanisms in INS-1 cells, an insulin-secreting cell line. Screening using a proteome profile indicated an increase of IL-1, IL-2, IL-4, IL-6, IL-10, IL-13 and IL-17 by BPA. FFAs (in combination with LPS) were also positive. In detailed quantitative measurements, these results were confirmed, indicating a complex array of pro- and anti-inflammatory potential. The interaction of BPA with 17β-estradiol was non-additive with respect to IL-4 and IL-6 release and additive with respect to FFA interaction, indicating the same and different mechanisms of action, respectively. As signal transduction, PI3K (Wortmannin-sensitive) and STAT-3/6 (Tofacitinib-sensitive) are involved in various effects. INS-1 cells release several cytokines under BPA and FFA attack, which may be involved in disturbance of glucose homoeostasis and type 1 diabetes. Introduction Bisphenol A (BPA) is the building block of polycarbonate plastic used to make numerous consumer products including baby bottles, beverage containers, food cans (inner layer), Tetra-Pak™, dental materials, thermal paper and impact-resistant safety equipment. BPA is also a part of epoxy resins. It even shows up in cell culture media during cell research when plastic materials are used. 800,000 tons are produced worldwide. BPA is an endocrine disrupting chemical (EDC) with a toxic influence on reproduction [1]. BPA interacts with G-protein coupled receptors, e.g. GPR-30, and binds to both types of estrogen receptors and various cytokine receptors with low affinity (genomic effect). BPA imitates 17β-estradiol effects [2], which is not surprising because of the similarity of the phenol groups of the two compounds, resulting e.g. in effects on blood glucose homeostasis. BPA disrupts pancreatic beta-cell function [3]. Even low levels can detrimentally affect glucose metabolism, mediated by the glucose-regulated protein (grp) [4]. BPA is linked to several other diseases such as allergy, cardiovascular diseases [5], chronic inflammation and e.g. asthma [6]. It modulates immunological processes with respect to the differentiation of Th0 cells to Th1 or Th2 cells [7]. BPA increases body weight [8], and since obesity is linked to inflammation [9] (more than 100 hormones and cytokines are secreted by fat cells), this process may be linked to T cells. BPA is completely (95%-100%) absorbed [10] and was found in the urine of nearly 100% of investigated children [1, 11-13]. The average urine concentration is 3 µg/L. Plasma concentrations are higher in children than in adults [14], which may partly be due to slower glucuronidation [10, 15, 16], making children highly susceptible to damage. In the European Union, BPA has not been allowed in baby bottles since 2011.
Plasma FFAs play important physiological roles in skeletal muscle, heart, liver and pancreas. FFAs (e.g. palmitate, as used in this study) supply energy, build cell membranes and are precursors for intracellular signal molecules such as prostaglandins and leukotrienes. Normal concentrations of 200-600 µM are elevated more than three-fold in pathophysiological situations such as obesity and diabetes mellitus [17]. Increased FFAs have an immunological potential [18], e.g. inhibiting T-cell activity. The induction of various proinflammatory effects has been described for β-cells [19]. The pancreatic β-cell function and the insulin receptor signaling cascade are inhibited (insulin resistance) [20]. FFA-induced hepatic insulin resistance is associated with increased hepatic diacylglycerol content, increased activities of protein kinase C (PKC) and of the pro-inflammatory NFκB pathway, and increased inflammatory cytokine expression [21]. FFAs are a major link between obesity and insulin resistance (inhibition of insulin-stimulated glucose uptake), activate PI3K [22] and lead to a chronic inflammatory status [23]. Whereas the influence of BPA on Th1 cytokines is well investigated, not so much is known about Th2, especially not with respect to INS-1 cells, an insulin-secreting cell line. The aim of this study is to investigate the impact of pathophysiologically important factors such as BPA and FFA, and their complex interplay with 17β-estradiol, on various Th2 cytokines using INS-1 cells. Cytokines released by Th2 cells are IL-4, IL-6, IL-10 and IL-13. The signal transduction system of these cytokines, STAT-3, STAT-6 and Akt, is also of interest. The basic question is which pro- and anti-inflammatory cytokines are released from INS-1 cells and which secondary mechanisms are involved. INS-1 cells are a valuable tool since they show physiological and immunological similarities to human pancreas cells [24]. We altogether focus on pathophysiological effects of BPA and FFA on Th2 cytokine secretion from INS-1 cells and some signal transduction mechanisms, including interactions with 17β-estradiol. Cell Culture of INS-1 Cells Asfari et al. established in 1992 an X-ray-induced rat transplantable insulin-secreting cell line (INS-1). The continuous growth of the INS-1 cells, kindly supplied by Dr. C. Wollheim, Geneva, Switzerland, was found to be dependent on the reducing agent 2-mercaptoethanol to increase total cellular glutathione levels (Asfari et al., 1992). INS-1 cells with passage numbers 39-70 were used. They were subcultivated once weekly and the growth medium was changed four days after subcultivation. A potential problem, namely antibiotics added to culture media releasing endotoxins from any bacteria present, was addressed by occasionally running experiments without antibiotics. In fact, when endotoxin production was decreased by omitting penicillin and streptomycin from the culture media, IL-1 secretion, for example, was reduced. This has to be kept in mind when the influence of cytokines on the initiation of type 1 diabetes is discussed in relation to these experiments.
Proteome Profiler (ARY 008) Selected capture antibodies have been spotted in duplicate on nitrocellulose membranes. Cell culture supernatant is mixed with a cocktail of biotinylated detection antibodies, and the sample/antibody mixture is then incubated. Any cytokine/detection antibody complex present is bound by its cognate immobilized capture antibody on the membrane. Streptavidin-horseradish peroxidase and chemiluminescent detection reagents are added, and a signal is produced in proportion to the amount of cytokine bound. The darkening of the photo was scanned and quantified in relative terms (semiquantitative measurement). ELISA for IL The basis is a quantitative sandwich enzyme immunoassay technique (ELISA Quantikine®). A monoclonal antibody specific for the respective rat interleukin has been precoated onto a microplate. Standards, control (DMSO 1%) and samples are added, and any specific cytokine present is bound by the immobilized antibody. After washing away unbound substances, an enzyme-linked polyclonal antibody specific for the interleukin of interest is added to the wells. The enzyme reaction yields a coloured product. Its colour intensity is in proportion to the amount of interleukin bound in the initial step to the first antibody. The sample values were then read off the standard curve, which was linear between 31.25 and 2000 pg/mL for either cytokine. Blanks were between 19.1% and 24.4% of the maximum effect. Dot Blot of IL-17 Receptor Cells were washed (5 mL cold PBS), lysed in the presence of a protease inhibitor, and pelleted (12,000 rpm). The lysates were spotted on a nitrocellulose membrane and incubated for 12 hours. After washing, the membrane was incubated for 10 min with a primary antibody directed against the IL-17 receptor, and thereafter with a secondary antibody linked to horseradish peroxidase, an enzyme which induces the degradation of luminol in the presence of H2O2. The luminescence was detected using Kodak® BioMax™ photo paper. Statistics Results are shown as means + SEM of n independent results. Each of these n results consists of 7-9 replicates. Statistical significance was determined using a nonparametric Mann-Whitney test (BBN Software Products Corp.) followed by a post-hoc test (t-test for individual results). P < 0.05 was considered significant.
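As a minimal sketch of this testing scheme, the following uses SciPy rather than the BBN software named above, with illustrative replicate values rather than measured data:

```python
# Minimal sketch: Mann-Whitney test followed by a post-hoc t-test on
# hypothetical replicate cytokine measurements (pg/mL).
from scipy import stats

control = [112.0, 98.5, 105.3, 120.1, 101.7, 95.2, 108.8]   # DMSO 1% control
treated = [161.4, 178.9, 149.0, 185.2, 170.3, 155.6, 166.1]  # e.g. BPA-treated

u_stat, p_mw = stats.mannwhitneyu(control, treated, alternative="two-sided")
t_stat, p_t = stats.ttest_ind(control, treated)

# P < 0.05 is considered significant, as in the Statistics section above.
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_mw:.4f}")
print(f"Post-hoc t-test: t = {t_stat:.2f}, P = {p_t:.4f}")
```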
Results In an a priori screening using a Proteome Profiler®, IL-1, IL-2, IL-4, IL-6, IL-10, IL-13 and IL-17 were highly increased in INS-1 cells by 40 µg/mL BPA (data for 5 µg/mL are not shown, but were very similar), whereas the effects of FFA [40 µg/mL] and LPS [100 ng/mL; control] were smaller when given alone, albeit higher when both were added in combination (Table 1). IL-17 was not increased by FFA and LPS alone (Table 1). Many other compounds not indicated here also tested positive, but only effects on distinct cytokines are shown, and only these were evaluated later in more detail. BPA increased IL-4 secretion in a concentration-dependent manner and LPS had the same tendency (Figures 1(a) and (b)). The effect was time-dependent when data from 6 h (Figure 1(a)), 12 h (data not shown) and 24 h (Figure 1(b)) incubations were compared; for all further experiments, 24 h incubations were preferred. The IL-4 secretion induced by BPA at three different concentrations was superimposed by 17β-estradiol except at a high, possibly maximally effective BPA concentration (Figure 2(a)), indicating a rightward shift of the BPA curve by 17β-estradiol. The IL-4 secretion induced by FFA at three different concentrations was superimposed by 17β-estradiol (Figure 2(b)). BPA increased the release of IL-6 (Figure 3(a)), and 17β-estradiol [100 ng/mL] did not increase this effect at a high BPA concentration (Figure 3(a)). 17β-Estradiol [100 ng/mL] increased the release of IL-6 (Figure 3(b)), which is superimposed by increasing concentrations of BPA up to a maximum BPA concentration of 1.0 µg/mL, indicating the same type of mechanism. Table 2 shows the effect of various concentrations of FFA and LPS on IL-6 secretion. Both compounds increased IL-6 release in a concentration-dependent manner. 17β-Estradiol [100 ng/mL] was not able to increase the effect of FFA at 1 µg/mL. BPA, LPS and FFA increased IL-10 secretion (Figure 4(a)). This effect of BPA at 1 µg/mL was further increased by IL-4, IL-13 and IL-10 (Figure 4(b)) (note: the added IL-10 has been subtracted from the data). BPA increased IL-13 secretion, which was further increased by IL-4 (Figure 5). FFA and LPS increased IL-13 secretion (Table 3). The effect of FFA, but not of LPS, was inhibited by 1 ng/mL IL-4. The IL-17 receptor is expressed by INS-1 cells, which was verified by the dot-blot experiment shown in Figure 6. BPA and FFA increased p-Akt in a concentration-dependent manner (Figure 7); the effect of FFA was inhibited by Wortmannin (PI3K inhibitor) (Figure 7). Figure 8 shows the effect of BPA on both STAT-3 and phosphorylated STAT-3. Both factors are increased by BPA. Tofacitinib (JAK3 inhibition, interfering with the JAK-STAT signaling pathway) and Wortmannin (PI3K inhibitor) inhibited this BPA effect. In Table 4, the effects of various concentrations of BPA, FFA and LPS on STAT-6 expression (phosphorylated and non-phosphorylated) and their inhibition by Tofacitinib (JAK3 inhibitor) and Wortmannin (PI3K inhibitor) are shown. General Comments The immunological answer to compounds of pathophysiological impact such as the chemical BPA or the metabolic intermediates FFAs is strongly influenced by various cytokines, such as IL-4 promoting T helper type 2 (Th2) responses and IL-2 promoting T helper type 1 (Th1) responses. Diabetes and β-cell death originating from various cytokines and transcription factors have been well described [25, 26]. The Th2 cytokines are not very well investigated with respect to BPA and FFA. Cytokines released by Th2 cells are IL-4, IL-6, IL-10 and IL-13. A polarization in the direction of Th2 is partly the answer to processes mediated by IL-4. BPA increases insulin levels in the long term, though not acutely [27]; but the focus here is on proinflammatory cytokines and their balance with the anti-inflammatory cytokines IL-4, IL-10 and IL-13, especially under the influence of BPA and FFA. 17β-Estradiol is of interest because it is known to possess an IL-6 agonistic activity. Interestingly, BPA possesses only a low affinity to either estrogen receptor subtype, but nevertheless has a potent estrogen-like effect. Estrogen receptors are present on β-cells and promote insulin release [28]. Screening by a Profiler When the effects of BPA and FFA were tested using screening by a Profiler, FFA and BPA induced increases of IL-2, IL-4, IL-6, IL-10 and IL-13 (Table 1). The LPS effects as used for control are those described in the literature, except that IL-6 and IL-17 were not secreted. We mainly concentrated on the screened ILs (Table 1) except IL-17.
IL-4 IL-4 is known to influence the differentiation process of Th0 and Th1, which is connected to the pathogenesis of type 1 diabetes [29], possibly having a protective effect [30]. BPA increased IL-4 secretion in a concentration- and time-dependent manner (Figure 1). Thus IL-4, a regulatory cytokine for the adaptive immune response, is regulated by BPA. 17β-Estradiol had an effect additional to that of BPA (Figure 2), which results in a rightward shift of the BPA curve, including no effect of 17β-estradiol at a high BPA concentration. This shift indicates an estradiol-like effect of BPA with probably no divergent mechanism of action. There was no superadditive effect between both compounds, which would have indicated a different mode of action of the two compounds. This is not surprising since BPA is an endocrine disrupting chemical possibly interacting with estrogen receptors, as already mentioned. There was a superadditive effect of FFA and 17β-estradiol. IL-6 IL-6 was released by BPA (Figure 3(a)) as well as by LPS and FFA (Table 2). It is a pro-inflammatory Th2 cytokine involved in acute-phase responses to infection and, more importantly, in immune responses. Dysregulation of IL-6-type cytokine signalling is part of inflammation and viral infections and contributes to the onset and maintenance of several diseases, including type 1 diabetes. People suffering from type 1 diabetes show elevated IL-6 levels [31, 32]. On the other hand, IL-6 protects β-cells from pro-inflammatory effects and functional impairment. 17β-Estradiol is already known to promote IL-6 secretion and to modulate the IL-6 receptor [33]. 17β-Estradiol increases IL-6 secretion (Figure 3(b)); it also increases the effect of BPA, at least at low BPA concentrations (Figures 3(a) and (b)), indicating the same type of mechanism. This interaction was already obvious for IL-4 (Figure 2). Our data corroborate data on the release of IL-6 by FFA from other tissues [34, 35]. Interestingly, 17β-estradiol did not modify the FFA effect on IL-6 (Table 2). IL-10 IL-10 was secreted in response to BPA, FFA and LPS (Figure 4). IL-10 is expressed by many cells of the adaptive immune system, including Th2 and Treg cells, and it is well characterized as an anti-inflammatory cytokine with potent suppressive effects in autoimmune diseases [36]. IL-10 function is ambivalent: the absence of IL-10 leads to a better clearance of pathogens with non-enhanced immunopathology, while during other infections the absence of IL-10 can be accompanied by a detrimental immunopathology for the host. Permanently increased IL-10 can induce an early type 1 diabetes manifestation [37] but may also produce anti-inflammatory effects in this situation. Elevated IL-10 (e.g. when stimulated by LPS) is correlated with diabetes. IL-10 is known to inhibit the progression of Th1 cells. Production of IL-10 is regulated in a complex manner: e.g. by the activation of inhibitory pathways and secretion of cytokines from macrophages, by blocking Toll-like receptor mediated MAPK activation, or by IFN-γ inducing the release of glycogen synthase kinase 3 (GSK3) via antagonizing phosphoinositide 3-kinase (PI3K)-Akt activation, which leads to inhibition of IL-10 production [38]. IL-10 also negatively regulates p38 phosphorylation and thus limits IL-10 secretion (negative feedback). In INS-1 cells, IL-10 is triggered by IL-4, IL-10 or IL-13 in the presence of BPA (Figure 4(b)), which hints at a positive feedback.
IL-13 BPA, FFA and LPS induce IL-13 secretion from INS-1 cells (Figure 5, Table 3). IL-13 is a critical mediator of (allergic) inflammation and shares many functional properties with IL-4, which is obvious from their sharing a common receptor unit and common signaling pathways. During the development of type 1 diabetes, IL-13 has an inhibitory effect on IFN-γ secretion [39], which is known to be important for diabetes [37]. The situation is complex because IL-13 is a modulator: it inhibits IL-1, IL-6 or IL-12. IL-17 IL-17 and its receptor (IL-17R) are basic members of a newly described family of cytokines and receptors, which are different from other cytokine families. IL-17, the hallmark cytokine of the newly defined T helper 17 cell subset, has an ambivalent role: on the one hand it protects the host against extracellular pathogens, and on the other hand it promotes inflammatory pathology in autoimmune diseases such as type 1 diabetes. IL-17 levels are low in diabetes, but are increased by exercise. The impact of IL-17 on the development of type 1 diabetes is under discussion and not clear at present [40]. The presence of the IL-17R was demonstrated for INS-1 cells (Figure 6). A quantitative measurement of IL-17 was not possible at present, but the profiler experiment showed a BPA-induced release (Table 1). Akt/STAT Signal Transduction Akt (protein kinase B) is known to modify insulin activity on many enzymes and is activated by PI3K, which is involved e.g. in IL-10 action (see above). p-Akt stimulated by BPA and FFA may be positively regulated via PI3K, as demonstrated by the inhibitory effect of Wortmannin (Figure 7). The important cytokines involved in the Th1 and Th2 cell response often trigger Janus kinase (Jak)-STAT signalling pathways, whereas IL-17 family cytokines, being different (see above), mediate signalling through the pro-inflammatory NFκB. Phosphorylated STATs are found in rat pancreatic cells as an answer to released cytokines [41]. STAT-3 is known to be induced by IL-6 and IL-10 (and insulin), STAT-6 by IL-4 and IL-13. Gene knockout studies gave information on STATs being involved in the development and function of the immune system. Studies using STAT-6-deficient mice revealed that IL-13 signalling uses the Jak/STAT-6 pathway [42]. Phosphorylated and non-phosphorylated STAT-3 can be activated by BPA and FFA incubation (Figure 8). STAT-6 signal transduction is likewise activated by BPA, FFA and LPS (Table 4). The inhibitory effect of Wortmannin (Figure 7) shows that PI3K may be involved in the BPA effect on Akt. Altogether, these signaling pathways are known to be involved in the development of type 1 diabetes. STAT-6 is an important regulator of inflammation induced by Th2 cells. It is known to be activated by IL-4 and IL-13. STAT-6 is activated e.g. by IL-4, being phosphorylated as a result. IL-4 also acts on PI3K; STATs and PI3K are used by IL-6; PI3K acts on Akt. Interestingly, STAT-3 inhibition by Tofacitinib, a Jak/STAT inhibitor, is more pronounced than STAT-6 inhibition (Figure 8 and Table 4).
With respect to INS-1 cells, activation releases a set of primary inflammatory mediators known for the acute-phase response, such as IL-1, IL-2 and IL-6. We only present here a two-dimensional view of what is in reality much more complex: IL-1, for example, is known to inhibit IL-6-induced acute-phase protein synthesis in hepatocytes [43]. NFκB was identified as a mediator of IL-1-dependent suppression of IL-6 in liver cells [44]. The impact of IL-1 on the secretion of IL-6 in INS-1 cells could not be proven in this work (data not shown). Data obtained from rodents may be relevant for humans, although it has to be admitted that rats are less sensitive to e.g. estrogens. The metabolism of BPA is similar, albeit rats eliminate the glucuronidated products via faeces, in contrast to humans (via urine) [45]. Summary and Conclusion Altogether, BPA, LPS and FFA can increase the release of Th2 cytokines from INS-1 cells (IL-4, IL-6, IL-10 and IL-13) and mediate pro- and anti-inflammatory effects. The data underline the major pathophysiological relevance of BPA and FFA. BPA induces insulin-stimulated Akt phosphorylation. FFA and BPA activate, probably via one of the released cytokines investigated, STAT-3 and STAT-6 as signal transduction pathways, and also the PI3K/Akt signal transduction pathway. Thus it cannot be excluded that these signals induced by BPA and FFAs may influence the development of type 1 diabetes or at least disturb glucose homoeostasis. This involvement should be a matter of further detailed investigation.
Figure 1. (a, b) Effect of BPA and LPS on IL-4 secretion from INS-1 cells. Various concentrations of LPS and BPA were used for 6 h (a) and 24 h (b) incubations. Blanks (no addition) were subtracted. DMSO (solvent; 1%) and a high concentration of IL-4 [3000 pg/mL] were used as negative and positive controls. Mean + SEM, n = 3; *p < 0.05 vs. DMSO control.
Figure 4. (a, b) Effect of BPA, LPS and FFA on IL-10 secretion from INS-1 cells. Two concentrations of LPS [50 and 100 ng/mL] and BPA [500 and 1000 ng/mL] and FFA [1000 ng/mL] were used. Blanks (no addition) were subtracted. DMSO (solvent; 1%) and a high concentration of IL-10 [1250 pg/mL] were used as negative and positive controls. When IL-10 was added, this concentration was subtracted from the data shown in column 6 of Figure 4(b). Mean + SEM, n = 3; *p < 0.05 vs. DMSO control, +p < 0.05 vs. the BPA effect alone.
Biology of the Endemic Endangered Swallowtail Butterfly, Papilio desmondi teita (Lepidoptera: Papilionidae), on Wild Citrus Species in Taita Hills, Kenya Introduction Butterflies are among the most significant components of biodiversity within ecosystems [1]. Some butterflies such as papilionids and pierids and some groups of nymphalids and hesperiids are pollinators of wild flowering plants and cultivated crops [2]. Approximately 87.5% of flowering plant species depend on animal pollinators for their production [3]. In the agricultural sector, 87 of the leading world food crops and 35% of total crop production volumes are dependent on animal pollination [4]. Most butterflies are typically herbivores in their larval stages, and the majority are host specific, with a close relationship with their host plants [5]. Also, most adults have a specialized mouthpart, the proboscis, which is long and tubular for the extraction of nectar, a primary source of nutrition for adult butterflies [6, 7]. While feeding on plants' nectar, butterflies play a critical role as pollinators through the transfer of pollen from one flower to another [6]. However, some butterfly species, particularly in the family Nymphalidae, feed on other liquid substances including fruit juice, rotting fruit, tree sap, and even animal droppings [6-8]. Swallowtail butterfly species are endangered and have been experiencing a reduction in diversity, occurrence, and abundance across the globe [9-11]. The loss of pollinator swallowtail butterflies is linked with the reduction in the plants that they depend on through habitat change and destruction activities such as deforestation, agricultural conversion and intensification, alteration of pastures, and urbanization [12]. These alterations within their natural habitat have a negative impact on fundamental life functions such as mating, breeding, and foraging [13]. This has raised concerns about the stability of ecosystem function and food security [14]. These factors may also cause local extinction or migration of endemic swallowtail butterflies due to the loss of their preferred host plants within their habitats [15, 16]. Among the tropical swallowtail butterfly species, Papilio desmondi teita (Van Someren) (Lepidoptera: Papilionidae) is endemic and endangered in the Taita Hills in Kenya [9, 17], which form part of the Eastern Arc Mountains (EAM) [7, 12, 18]. The knowledge of the relationship between swallowtail butterflies and their host plants is very scanty. According to a recent study, compared to farmlands, forest edges are home to a greater diversity of butterfly species [19]. A crucial step in achieving the conservation goal of swallowtail butterflies is understanding their biology within their natural habitats. Once the biology is understood, a more sustainable management and conservation strategy could be designed and adopted for use. Currently, there is no known information on the biology of P. desmondi teita, with very little information documented on its larval host plants by Larsen [20] and Congdon et al. [21]. Therefore, the present study was conducted to document the biology of the swallowtail butterfly P. desmondi teita on wild species of citrus (Rutaceae), horsewood Clausena anisata (Willd.) and orange climber Toddalia asiatica (Lam.), which were previously reported as the species' larval host plants according to Larsen [20] and Congdon et al. [21], as well as to understand the larval host preference between the selected host plants.
Clausena anisata, a wild citrus species, is a small, deciduous shrub with imparipinnate compound leaves that are heavily gland-dotted and have a strong, anise-like scent when crushed [22]. According to Mukandiwa et al. [23], the shrub is the only member of the Clausena genus found in tropical Africa [24]. Its geographical range includes bushveld, riverine thickets, and forests throughout sub-Saharan Africa, with the exception of the driest regions. Toddalia asiatica, a woody liana that is also a wild citrus species, can grow up to 10 meters high in the forest [25]. According to Gopal et al. [26], it is extensively distributed throughout Madagascar, Asia, and Africa's tropical regions. In the Maasai and Kipsigis communities of East Africa, the plant is traditionally used as a hedge and as browse for goats [27]. These host plants were specifically chosen as the larval host plants for this study due to their frequent occurrence in the Taita Hills and their utility to the local butterfly farmers noted during the reconnaissance survey. This study will yield valuable information for the broader conservation of swallowtail butterflies and the plants that serve as their hosts in Kenya and throughout sub-Saharan Africa. The Study Site. The Taita Hills, which form part of the Eastern Afromontane Biodiversity Hotspot [7, 28, 29] with an elevation range of 1,200 to 2,200 meters above sea level, are home to an exceptionally high number of endemic plant and animal species (Figure 1). The study was conducted at the Dawida Biodiversity Center, which is situated at the Ngangao Forest fragment of the Taita Hills in Wundanyi Subcounty, Taita Taveta County, in the coastal region of Kenya. The study was carried out at two locations within the study site, a garden (a) and a laboratory (b), between March and September of 2021. In a garden under shade, oviposition was observed in buckets holding the mated females and the two host plants, C. anisata and T. asiatica. The garden recorded an ambient temperature and relative humidity of 23.87 ± 0.93 °C and 66.83 ± 3.22%, respectively, during the first season (March-May 2021) and an ambient temperature and relative humidity of 18.25 ± 0.57 °C and 74.83 ± 1.51%, respectively, in the second season (June-September 2021). Subsequent studies on development from egg (collected from the oviposition experiment) to adult were conducted inside the laboratory at an ambient room temperature and relative humidity of 22.19 ± 0.09 °C and 73.86 ± 0.33%, respectively, during the first season (March-May 2021) and an ambient room temperature and relative humidity of 18.26 ± 0.13 °C and 74.71 ± 0.50%, respectively, during the second season (June-September 2021). All the experiments were carried out at 12L:12D in both seasons. Two wild citrus species, Clausena anisata (Willd.) and Toddalia asiatica (L.) Lam. (Rutaceae), were chosen as the host plants for the experiments on the developmental attributes of the swallowtail butterfly P. desmondi teita. From the forest edges, young branches with new leaves of each host plant were gathered and brought to the laboratory. After washing the branches to remove dust and any potential predators, they were submerged in water to prevent desiccation. The branches were placed in plastic buckets (325 mm diameter × 375 mm height) labeled with the host plant species for use in colony maintenance.
Swallowtail Butterfly, Papilio desmondi teita, Culture. A colony of P. desmondi teita covering eggs (collected from an oviposition experiment), larval instars, pupae, and adults was maintained inside the laboratory (situated in Ngangao Forest) in plastic buckets (Figure 2(b)) on C. anisata and T. asiatica. The stock colony was established from adults collected at the forest edges in November 2020. The adults were collected using a sweep net while foraging on the flowers of various plant species, and some females were collected on the selected host plants while laying eggs. To avoid rubbing off the scales on the upper surface of the wings, the butterflies' wings were folded above their backs before the butterflies were placed in waxed paper envelopes. After being brought to the laboratory, they were placed in rearing flight cages that were 70 cm by 70 cm by 70 cm in size. A mosquito net was placed over the cages to provide ventilation; the nets also kept the butterflies from escaping. Twice a day, they were given 10% sugar solution soaked in cotton wool as food. To ensure successful mating, the males and females were artificially mated after 24 hours using the hand pairing method [18, 30]. Mated females were individually placed into laying buckets containing young branches of either host plant, C. anisata or T. asiatica, for oviposition. From the oviposition experiment, all eggs laid on the same day and of the same age on each host plant were collected and placed into clearly labeled transparent plastic buckets that measured 325 mm in diameter by 375 mm in height. These buckets were then used for incubation and further development in the laboratory. Throughout both seasons, the P. desmondi teita rearing cultures were kept at ambient room temperature and relative humidity. The mesh-covered, detachable lids on the buckets improved air circulation. After hatching, the young larvae were placed in clean plastic rearing buckets (325 mm diameter × 375 mm height) containing young, fresh host plants; they were separated from the unhatched eggs using a soft camel-hair brush. The young, fresh leaves of the selected plants were fed to the larvae. The pupae were removed after pupation by gently pulling them off and scratching the corner of the silk girdle that the cremaster was attached to. The pupae were moved to new buckets, and twigs and white serviette paper were added to the base to support the emerging adults. The pupae were kept under the same laboratory conditions until adult emergence. After a day, the newly emerged adults were fed with a 10% sugar solution and separated from the pupae. Using the hand pairing method, six pairs were chosen from the population reared from each host plant, and they were mated after two days [30]. Within the rearing bucket, the mated females were exposed to their host plants for oviposition. The second season experiment followed the same procedure used during the first season.
Effects of Host Plants on Egg Incubation Period and Mortality Rate. After being separated from the other adults, six mated females were put in buckets with young, fresh branches of each host plant for oviposition. Eggs of the same age (laid on the same day) were kept in the buckets as mentioned in the previous section, with their lids turned upside down for ventilation, to determine the egg incubation period on each host plant. The eggs were observed every day until they hatched. The incubation period was determined by counting the days until the eggs hatched. To determine the egg mortality rate, the number of unhatched eggs on each host plant was counted. Mortality was calculated as a percentage:

percentage egg mortality = (number of unhatched eggs / total number of eggs laid) × 100.   (1)

Effect of Host Plants on Larval Development Period, Mortality Rate, and Larval Weight. In plastic rearing buckets with adequate ventilation and a branch of young, fresh leaves from the host plants, a cohort of 165 larvae (in the first season) and 192 larvae (in the second season) was placed. The duration of each larval instar was determined by recording the number of days taken to complete the instar stage, identified by differences in size, markings and pattern, and head capsule coloration. The larval weight was determined by weighing 60 individual larvae using a JA-SERIES (JA203) electronic analytical balance (0.001 mg accuracy). By monitoring larval cohorts at each instar and keeping track of the number of survivors from the start of each instar to the finish, the mortality rate was determined. The mortality rate (by instar) was calculated with the formula

larval mortality rate = (number of dead larvae at the end of the instar / total number of larvae at the beginning of the instar) × 100.   (2)

Effect of Host Plants on Pupal Period, Weight, and Mortality Rate. During the experiment, the pupal mortality rate was also recorded. The time taken from pupal formation to adult butterfly emergence was recorded in days as the pupal period. An electronic analytical balance (Type JA203H; JA-SERIES) with 0.001 mg accuracy was used to weigh each pupa. To calculate the pupal mortality rate, the number of dead pupae (those that did not fully transform into adults) was counted and recorded. Abbott's [31] formula was used to calculate the mortality rate:

pupal mortality rate = (number of emerged adults / total number of pupae formed) × 100.   (3)
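Equations (1)-(3) are simple percentage calculations; the following is a minimal sketch with illustrative counts (not the study's raw data). Note that equation (3), as written in the text, uses emerged adults in the numerator.

```python
# Minimal sketch of the percentage arithmetic in equations (1)-(3),
# using illustrative counts only.
def pct(numerator: int, total: int) -> float:
    """Percentage of the total represented by the numerator."""
    return numerator / total * 100

# Equation (1): egg mortality = unhatched eggs / eggs laid x 100
print(pct(12, 150))   # e.g. 12 unhatched of 150 laid -> 8.0%

# Equation (2): larval mortality for one instar
print(pct(14, 165))   # e.g. 14 dead of 165 at instar start -> ~8.5%

# Equation (3), as written: emerged adults / pupae formed x 100
print(pct(45, 52))    # e.g. 45 adults from 52 pupae -> ~86.5%
```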
Immediately upon hatching, the first instar larvae fed on their empty eggshells before moving toward the host plant provided. The larvae fed on T. asiatica successfully completed the five instar stages of development. Those reared on C. anisata died within 2 to 3 days in both seasons. The larval development period on T. asiatica varied from the 1st to the 5th instar in both seasons. The longest period was recorded in the fifth instar stage in both seasons, and the shortest period was recorded in the 1st and 2nd instars in the first and second seasons, respectively (Table 3). The total larval period for the larvae reared on T. asiatica was 44.55 ± 0.27 days in the first season and 66.05 ± 0.66 days in the second season. The larval instar mortality rate observed on T. asiatica during the first season varied from 8.51% for the first instar to no mortality for the fifth, with 24.34% mortality for all larvae reared on T. asiatica. Similarly, the mortality rate observed on T. asiatica in the second season varied across the instar stages, with the highest rate recorded in the first instar and the lowest rate in the fifth instar. The overall mortality rate for the larvae reared on T. asiatica during the second season was 32.61%. However, the larvae reared on C. anisata recorded 100% mortality in the first instar, leading to the end of the experiment in both seasons. Therefore, the overall larval mortality recorded in both seasons for the larvae reared on C. anisata was 100% (Table 3). The host plants significantly influenced the weight of the first instar larvae (t = 3.637; df = 188; P = 0.0004) during the first season. The first instar larvae reared on C. anisata were heavier (1.42 ± 0.03 mg) than those reared on T. asiatica (1.29 ± 0.02 mg). However, the first instar larvae reared on ... (Table 4). The female to male sex ratio was 1:0.97 and 1:0.95 during the first and second seasons, respectively. During the first season, females of P. desmondi teita reared on T. asiatica had a mean weight of 481.47 ± 11.85 mg, which was significantly higher than the 300.80 ± 8.51 mg recorded for males (Table 4), and the same trend was recorded for the second season. The number of eggs laid by the females of P. desmondi teita on T. asiatica and C. anisata did not differ significantly within the first two days (t = 0.65; df = 4; P = 0.550) during the first season. However, during the second season, the host plant species significantly influenced the number of eggs laid by the butterflies (t = −2.94; df = 4; P = 0.042). The mean number of eggs laid was 42.00 on T. asiatica and 35.00 on C. anisata. Discussion The host plants selected in this study were found to influence the development and oviposition of P. desmondi teita. The study showed that the females laid eggs singly on the upper and lower sides of fresh leaves, on the bucket surface, and on branch surfaces. The highest number of eggs was laid on the lower leaf surface of T. asiatica, while on C. anisata it was observed that most eggs were laid on the bucket surface. The higher number of eggs laid on the lower side of T. asiatica leaves serves to protect the eggs from being washed away by rainfall, from direct sunlight, and from natural enemies including predators and parasitoids [33]. This observation was consistent with what Stamp [34] reported for the majority of papilionid butterfly species. However, the observation made when P. desmondi teita was exposed to C. anisata for oviposition could be attributed to the contact chemical stimuli on the host plants, which play a great role in the female's final decision to lay eggs [35]; in the present case, the females were trying to avoid this host plant, which was unsuitable for their larval survival, as exhibited in these findings. The same observation about egg-laying habits was also made in a study on P. demoleus and P. polytes in Bangladesh by Islam et al. [36, 37]. The habit of laying a single egg prevents the possibility of larval feeding resources being depleted, allowing for the efficient use of host plant resources [38]. Higher nutrient accumulation and smooth textures lead to the oviposition observed on the host plants' young, fresh leaves [33]. The egg incubation period for P. desmondi teita eggs varied on each host plant. Generally, eggs laid on T. asiatica had a shorter incubation period compared to those on C. anisata. The variation in incubation period might be accounted for by the different host plants due to the variation in accumulated secondary plant semiochemicals [39].
[41], and Kahuthia-Gathu [42] confirmed that plant semiochemicals play a major role in female oviposition preference and host plant choice. The incubation period might also vary with fluctuations in temperature during the period of study, as observed here: the first season, in March 2021, recorded a higher mean temperature than the second season, in June 2021. These results are similar to the findings of Al-Mehmmady [43], who reported incubation periods of Earias vittella (Fabricius) of 2.42 days during September at 31.3 °C and 2.15 days during August at 32.6 °C. Similarly, Syed et al. [44] reported the shortest incubation period of E. vittella during September, when the average laboratory temperature was 32.6 °C, and the longest incubation period during October, with an average temperature of 30.6 °C.

In this study, no significant differences were registered in percentage egg mortality between the host plants for P. desmondi teita in either season. These results could probably be attributed to female oviposition behaviour as a determinant of successful reproduction [45]. Similarly, females can invest in the survival of their progeny through mechanisms such as secreting sticky substances that protect the eggs against parasitism, endowing the eggs with substances that deter predation, and increasing egg size [46,47]. However, the percentage mortality recorded during the study could be associated with host toxins and infection, environmental factors in the two seasons, and host contact with xenobiotic factors beyond control during the experiment [48]. We could not establish whether the egg mortality recorded resulted from host plant response factors or from natural causes within the environment, creating the need for further investigation of the actual causes of death.

The developmental periods and larval mortality varied considerably between the two seasons. The observed variation could be associated with biotic and abiotic factors such as differences in nutritional components and semiochemicals released from the host plants and changes in environmental conditions experienced during development [49]. This study revealed that P. desmondi teita larvae could complete all successive larval instar developmental stages only on T. asiatica and were unsuccessful on C. anisata. This observation could be related to the unsuitability of C. anisata as a host plant for P. desmondi teita [50], which led to the death of first instar larvae through starvation. Congdon et al. [21] reported a similar observation, namely that T. asiatica was a host plant for P. desmondi. The present study, however, did not confirm the report by Larsen [20] that Clausena species were host plants for this butterfly. Mortality of 100% was recorded in the first instar larvae reared on C. anisata in both seasons in the present study. In addition, the first instar larvae of P. desmondi teita were observed avoiding the C. anisata host plant. The 100% mortality on C. anisata could have resulted from larval starvation due to avoidance of the host plant. This could be attributed to biochemical compounds present in C. anisata that might be repellent to the first larval instar of P.
desmondi teita. This conforms to the findings of Visser [51], who reported that herbivorous insects have developed adaptive mechanisms to identify suitable hosts and evade unsuitable hosts using their scents. The study also showed that young larvae had a greater mortality rate than older stages. Mbahin et al. [52] reported similar findings for Anaphe panda (Boisduval) (Lepidoptera: Thaumetopoeidae) at Kakamega Forest in Kenya. In the present study, P. desmondi teita registered higher mortality in the first and fifth instar stages than in the other stages of larval development on T. asiatica. The factors that likely contributed to mortality of the first instar larvae were unknown; the mortality registered in fifth instar larvae, however, was likely caused by the sharp internode spines present on the stem and lower leaf surface of T. asiatica [53]. These findings are consistent with those reported on the survival of the silkworm A. panda in Kakamega Forest of western Kenya by Mbahin et al. [52]. The duration of development can be prolonged when the amount of food consumed is reduced, resulting in reduced insect length and weight [54], as observed for P. desmondi teita larvae reared on T. asiatica during the two seasons. When P. desmondi teita was reared on T. asiatica, the fifth instar's duration was longer than that of the first, second, third, and fourth instars. The fifth instar also recorded the highest weight, a result of the longer feeding duration registered during the study while compensating for the low feed intake observed in the previous instars. Carvalho and Vasconcello-Neto [55] made a similar observation on larval performance and host plant selection in Mechanitis polymnia casabranca (L.), a species of neotropical butterfly. Similar findings on P. polytes, a distinct species within the same family reared on citrus plants, were documented by Suwarno et al. [56]. The results on P. desmondi teita showed that fifth instar larvae had the highest weight, which differed greatly from the weight of fourth instar larvae in both seasons. This could be attributed to increased feeding at the fifth instar stage and a prolonged feeding duration, as reported by Hochuli [57] and Tithi et al. [58].

The pupal period of P. desmondi teita reared on T. asiatica varied between the seasons, ranging from 27.31 days in the 1st season to 31.73 days in the 2nd season. The variable pupal period and total development period could be a result of differences in climatic conditions between the seasons. Furthermore, the nutritional stage attained during larval development typically determines pupation after the larval phase [42]. Pupation takes place once the fifth instar larva has accumulated sufficient reserves for successful pupation and adult emergence; this reserve accumulation is greatly influenced by the host plants [59]. Findings reported by Al-Mehmmady [43] showed that the pupal periods of E. vittella during August and October were 6.45 and 7.78 days, at average temperatures of 32.6 °C and 30.5 °C, respectively. Furthermore, the duration of the development period increases with a decrease in temperature, as reported by Syed et al. [44] for Earias vittella reared on okra, Abelmoschus esculentus L., China rose, Hibiscus rosa-sinensis L., cotton, Gossypium hirsutum L., and Indian mallow, Abutilon indicum L. They conducted their study over three life cycles of E.
vittella, keeping the temperature regime at a difference of 2 °C between the different life cycles in the laboratory, and recorded an increase in the duration of the life cycle when the temperature decreased in July.

Up to 80% of the food provided to the larva is devoured during its fifth instar; hence, its growth increases in direct proportion to the amount of food it consumes. If the amount of available food is limited, this can result in a small pupa with low weight even if the diet is nutritionally sufficient [60]. This could be the case for P. desmondi teita reared on T. asiatica, which showed a variation in pupal weight between the first and second seasons. The lowest mortality rate in P. desmondi teita was observed at the pupal stage of development, compared with the larval stages. These results are similar to the findings reported by Kahuthia-Gathu et al. [42] on Plutella xylostella (L.) (Lepidoptera: Plutellidae) reared on cultivated and wild crucifer species. Islam et al. [37] reported a mortality rate of 7.69% for the same genus, P. polytes, reared on citrus species in the laboratory, which was within the range reported in the present study. Halloran and Wason [61] reported the highest mortality during the egg stage, and Suwarno [62] found the highest mortality at the fifth instar. The variations in mortality rates between previous findings and the current study may be associated with weather changes, human error, and host plant effects. These findings indicate that the adoption of bucket rearing technology has great potential to increase the survival of the species to adulthood, which could contribute to its conservation and management. The technology could also be adopted for commercial butterfly farming of species used for trade.

In the present study of P. desmondi teita, the highest adult longevity was recorded in the first season and the lowest in the second season. Females generally lived longer than males. The higher longevity in females could be related to the need to find mates, as described by Pereira et al. [63]. Similar findings were reported by Al-Mehmmady [43] for the longevity of male and female E. vittella reared during August and October in Saudi Arabia. However, the present study contrasts with Syed et al. [44], who reported that males of E. vittella, within the same order Lepidoptera, lived longer than females. Jahnavi et al. [64] reported that females of Papilio demoleus (L.) (Lepidoptera: Papilionidae) lived longer than males when reared on acid lime in Tirupati. Females were generally heavier than males in our study. Mackey [65] and Lederhouse et al. [66] reported that females achieve a greater size than males because their larvae feed more and develop for a longer period across the various larval stages than the larvae of males. A previous study by Scriber and Slansky [67] observed that female Lepidoptera often weigh more than males over the majority of their life cycle, a characteristic that has been linked to the function of laying eggs. The swallowtail butterfly P. desmondi teita had a female-biased sex ratio in the present study. This finding showed that T. asiatica was a suitable host plant for the mass production of this endangered swallowtail butterfly species endemic to the Taita Hills.

The results of the present study showed variation in the effect of the host plants on female fecundity of P. desmondi teita exposed to both C. anisata and T. asiatica. Generally, more eggs were laid on T. asiatica than on C.
anisata. The findings of this study provided clear evidence that P. desmondi teita larvae reared on T. asiatica produced females that generally performed better on T. asiatica than on C. anisata, a clear demonstration that T. asiatica was the more profitable host plant for P. desmondi teita female oviposition. Kahuthia-Gathu et al. [42] observed that adult P. xylostella reared on wild crucifers produced females that performed better than those reared on cultivated crucifers. Following a previous study demonstrating that most female butterflies oviposit most heavily within the first two days after mating [18], this study explored oviposition on both host plants within the first two days after mating. Similarly, David and Gardiner [68] found that females carry a greater egg load at this age after mating. Findings reported for the same genus, P. demoleus, by Patel et al. [69] showed a fecundity of 110.80 ± 4.46 eggs per female on citrus. Similarly, Maheswarababu [70] reported a comparable fecundity of 112 eggs per female for P. demoleus on citrus species. In the present study, fecundities ranging between 35.00 and 53.33 eggs per female P. desmondi teita were recorded on the target host plants, lower than those reported by Patel et al. [69] and Maheswarababu [70] for P. demoleus in the same genus.

The findings of this study revealed that P. desmondi teita could complete development only on T. asiatica and was unsuccessful on C. anisata. This was clear evidence that T. asiatica was the most suitable host plant for the species, and the study therefore recommends the conservation of T. asiatica within its natural habitat for the survival of this species, assessed as endangered in the IUCN [9] Red List. With this knowledge, more sustainable conservation strategies for this endangered swallowtail butterfly species may be designed accordingly. Furthermore, policymakers and other stakeholders should use this study's findings as a starting point when developing policies aimed at achieving long-term conservation goals for this butterfly species and its natural environment.

Figure 2: Egg-laying site outside the laboratory at a garden (a) and rearing in the laboratory (b) within Ngangao Forest, Taita Hills.

Toddalia asiatica (L.) Lam., Tabl. Encycl. 2: 116, 1797. (syn. Aralia labordei H. Lév.; Cranzia aculeata (Sm.) Oken; Cranzia asiatica (L.) Kuntze; Cranzia nitida Kuntze; Limonia oligandra Dalzell (unresolved); Paullinia asiatica L.)

2.7. Effect of Host Plant on Adult Longevity, Sex Ratio, Weight, and Oviposition. Adult butterflies were sexed by examining the morphology of the forewing, hindwing, and abdomen. The sex ratio was calculated by counting and recording the number of adult males and females. The adult longevity of males and females fed on a 10% sugar solution was investigated by recording the time from each individual's emergence to death on a daily basis. The number of eggs laid in the first two days was used to calculate oviposition.
Once a day, at 6 pm, when they were less active, sixty newly emerged adults were weighed individually on an electronic analytical balance (0.001 mg accuracy; Type JA203H; JA-SERIES). All experiments were replicated three times and in two seasons.

2.8. Statistical Analysis. All data were analyzed in the R statistical software package, version 4.1.1 [32]. Means and SEM (standard error of the mean) were computed for the duration of developmental stages, egg incubation period, oviposition, egg mortality rate, larval and pupal mortality, and the weight of developmental stages. An independent-sample t-test was used to compare the means from each host plant. The Shapiro-Wilk test and Bartlett's test of homogeneity of variance were applied to the fecundity data on the various substrates, which were then submitted to one-way analysis of variance (ANOVA) using the general linear model (GLM) procedure of the R statistical package. Tukey's multiple comparison test was used to separate the means between substrates. P ≤ 0.05 was designated as the significance level.

3.1. Egg Incubation Period and Mortality. The females of P. desmondi teita laid eggs singly on the branch, the lower and upper sides of the leaves, and the bucket. They laid significantly more eggs on the lower leaf surface (F3,76 = 50.19; P < 0.001) than on the other oviposition substrates when exposed to T. asiatica, while on C. anisata most eggs were laid on the bucket surface, followed by the lower leaf surface, with the fewest laid on the stem surface, as shown in Table 1. In the first season, the host plants significantly influenced (t = 16.15; df = 188; P < 0.001) the incubation period, with the eggs laid on C. anisata hatching in 8.35 days compared with 7.27 days for those laid on T. asiatica. However, during the second season, the host plants did not significantly influence the incubation period (t = −0.53; df = 182; P = 0.595) for eggs laid on C. anisata and T. asiatica. The percentage mortality of eggs laid on the two host plants did not differ significantly (t = −0.14; df = 4; P = 0.895 and t = 0.00; df = 4; P = 0.998) in the two rearing seasons, as outlined in Table 2.

3.2. Larval Development and Mortality Rate. Swallowtail butterfly P. desmondi teita larvae passed through five instars in both seasons.

Table 1: Oviposition preference and mean (±SE) number of eggs laid by Papilio desmondi teita on Clausena anisata and Toddalia asiatica (per plant) in March 2021 under field conditions. Means (±SE) in the same column followed by the same letter do not differ significantly at P ≤ 0.05 (Tukey's test).

Table 2: Mean (±SE) incubation period (days) and mortality rate (%) of P. desmondi teita eggs laid on Clausena anisata and Toddalia asiatica in the first season (March-May 2021) and the second season (June-September 2021) under laboratory conditions.

Table 3: Mean (±SE) larval instar duration (days), total larval period (days), and instar-specific percentage mortality of Papilio desmondi teita reared on Clausena anisata and Toddalia asiatica during the first season (March-May 2021) and second season (June-September 2021) under laboratory conditions.

A similar trend was recorded for the second season, during which females recorded 6.89 ± 0.08 days and males 5.12 ± 0.11 days (t = 12.69, df = 68, P < 0.001). Overall, the development time from egg to adult ranged between 81.13 days in the first season and 112.15 days in the second season, both on T. asiatica (Table 4).
Table 4: Mean (±SE) adult weight, fecundity, and development time to adult of Papilio desmondi teita reared on Toddalia asiatica in the first season (March-May 2021) and second season (June-September 2021) under field conditions (for fecundity) and laboratory conditions (for development time to adult). Means (±SE) followed by the same lowercase letters in the same column and the same uppercase letters in the same row do not differ significantly at P ≤ 0.05.
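The comparisons summarized in Tables 1-4 follow the analysis plan of section 2.8. A minimal sketch of those tests in Python, with hypothetical values; the scipy/statsmodels calls here stand in for the R workflow the authors describe:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Independent-sample t-test comparing incubation period (days)
# between the two host plants (hypothetical values)
toddalia = np.array([7.3, 7.1, 7.4, 7.2, 7.3])
clausena = np.array([8.4, 8.3, 8.5, 8.2, 8.4])
t, p = stats.ttest_ind(toddalia, clausena)
print(f"t = {t:.2f}, P = {p:.4f}")

# Checks before ANOVA: normality (Shapiro-Wilk) and
# homogeneity of variance (Bartlett)
print(stats.shapiro(toddalia))
print(stats.bartlett(toddalia, clausena))

# One-way ANOVA and Tukey's HSD across oviposition substrates
# (hypothetical egg counts per replicate)
counts = np.array([38, 9, 6, 4, 41, 10, 5, 3, 36, 8, 7, 5])
substrate = np.array(["lower leaf", "upper leaf", "branch", "bucket"] * 3)
groups = [counts[substrate == s] for s in np.unique(substrate)]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(counts, substrate))
```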
2023-11-17T16:28:37.500Z
2023-11-14T00:00:00.000
{ "year": 2023, "sha1": "927c435f0a8523e6ba9e4ce922b25d034ceba42b", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/psyche/2023/5538627.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "701a0e6dba374ae5e1a30d66d36a093b3af9f202", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
209892074
pes2o/s2orc
v3-fos-license
The expression of glucocorticoid receptor in patients with small cell lung cancer with or without ectopic adrenocorticotropic hormone syndrome Purpose: Ectopic adrenocorticotropic hormone (ACTH)-secreting syndrome (EAS) is a relatively rare disease. ACTH secretion in EAS is not inhibited by endogenous or exogenous glucocorticoids, which may be its most important characteristic. We aimed to explore the expression of the glucocorticoid receptor (GR) in small cell lung cancer (SCLC) with or without EAS. Materials and Methods: In this study, we first report one patient with EAS caused by SCLC, and we examined the expression of ACTH and GR in pulmonary tissue from normal individuals and from SCLC patients with or without EAS. Results: Immunochemistry analysis showed that there was no obvious difference in the expression of GR in SCLC without EAS compared with normal individuals. In the EAS patient, by contrast, GR expression was absent from the tissue. Conclusions: Our study found lower expression of GR in SCLC with EAS, which may contribute to the lack of inhibition by endogenous or exogenous glucocorticoids. Introduction Ectopic ACTH-secreting syndrome (EAS) is a relatively rare disease with an incidence of one per million per year [1]. EAS is reported to be associated with many malignant tumors [2]. During recent years, several tumors such as small cell lung carcinoma (SCLC), neuroendocrine tumors, phaeochromocytomas, and medullary carcinoma of the thyroid have emerged as causes of EAS [3,4], although the major source is most often demonstrated in the lung. Clinically, the presentation of EAS is similar to Cushing disease (CD) and poses a diagnostic challenge in localization of the ACTH source. The manifestations of EAS include hypertension, edema, hypokalemia, weakness, abnormal glucose tolerance, and so on. Biochemical testing shows markedly increased plasma ACTH and cortisol levels, which cannot be inhibited by endogenous or exogenous glucocorticoids [5]. Glucocorticoids are a class of stress-induced steroid hormones synthesized by the adrenal cortex under the strict control of the hypothalamic-pituitary-adrenal axis [6]. Glucocorticoids in humans are known to regulate diverse cellular functions including development, homeostasis, metabolism, cognition, and inflammation. Endogenous glucocorticoid levels in the serum display a classic circadian pattern, peaking at the beginning of the period of highest activity. High levels of ectopically secreted ACTH cannot be suppressed by endogenous or exogenous glucocorticoids, and this is the cardinal characteristic of EAS. Glucocorticoids mediate their effects through the intracellular GR, which belongs to a large family of transcription factors known as the nuclear hormone receptors. It is well documented that the level of GR protein determines the magnitude of the glucocorticoid response. We therefore hypothesized that differences in the expression of GR may play an important role in EAS. Previous studies in SCLC cell lines showed decreased or absent expression of the GR [7,8], but this evidence is still relatively weak. Therefore, we aimed to compare ACTH and GR expression in SCLC patients with or without EAS. Immunochemistry The study was approved by the Ethics Committee of Beijing Luhe Hospital. We collected pulmonary tissue from healthy individuals and from non-EAS SCLC and EAS SCLC patients. Tissue from the pituitary was collected as a positive control.
The expression of GR and ACTH was examined by immunochemistry. The case and laboratory examinations A 70-year-old female visited her primary care physician for worsening fatigue of the lower limbs of more than 20 days' duration. The patient had a heavy smoking history of more than 60 years, at 10 cigarettes per day. Laboratory testing as an outpatient revealed hypokalemia of 2.15 mmol/l (reference range, 3.5 to 5.5 mmol/l). In addition, the patient showed elevated blood pressure and blood glucose levels and hypoproteinemia. There was no history of diabetes mellitus or hypertension. The patient was then admitted to the hospital. The vitals were a heart rate of 74 bpm, blood pressure of 144/66 mmHg, and respiratory rate of 18 bpm. Physical examination showed edema in the lower limbs; the rest of the exam was within normal limits. After admission, her hypokalemia was refractory despite continuous oral and intravenous potassium supplementation. The 24-h urine potassium was high (97.53 mmol/L), suggesting renal loss. Serum ACTH levels at 0 am, 8 am, and 4 pm were 328.79, 384.08, and 288.35 pg/mL, respectively. Serum cortisol levels at 0 am, 8 am, and 4 pm were 64.38, 70.88, and 68.52 µg/dl, respectively. The 24-h urine free cortisol level was 4101.9 µg. Recumbent test: renin-AII-ALD, 13.17-171.24-119.14 pg/mL; standing test: renin-AII-ALD, 12.82-165.04-105.16 pg/mL. No obvious abnormality was observed on the pituitary MRI plain scan. Parathyroid ultrasonography showed a low-echo nodule in the right parathyroid subgroup. Thyroid ultrasonography showed multiple nodules in the thyroid. A subsequent contrast-enhanced CT scan of the lung showed multiple enlarged lymph nodes in the mediastinal and right lung portal areas, soft tissue density nodules under the pleural membrane of the lower right lung, pulmonary interstitial fibrosis complicated with pulmonary infection, bilateral pleural hypertrophy, and arteriosclerosis. Neither the low-dose nor the high-dose dexamethasone test showed inhibition of cortisol (>63.44 µg/dl). The metabolic findings and high ACTH level, together with the radiologic and histological evidence, made ectopic ACTH syndrome from small cell lung cancer the most likely diagnosis. The primary laboratory results of the patient are summarized in Table 1. ACTH and GR Expression in SCLC Patients with or Without EAS The high-dose dexamethasone suppression test is well known to be an important test in the diagnosis of ectopic ACTH syndrome. We examined ACTH and GR expression in SCLC patients with or without EAS. Pituitary gland tissue was also stained with ACTH and GR antibodies as a positive control. There was no difference in ACTH and GR expression between controls and SCLC patients without EAS (Figure 1). Compared with controls, ACTH expression was markedly increased while GR expression was reduced in the SCLC patient with EAS (Figure 2), suggesting that the reduction of GR expression may contribute to the lack of suppression by high-dose dexamethasone. Discussion The anti-inflammatory and immunosuppressive effects of glucocorticoids are exploited extensively for the treatment of many inflammatory conditions. Endogenous glucocorticoids are stress-induced hormones synthesized under the control of the hypothalamic-pituitary-adrenal axis. High levels of glucocorticoids inhibit the secretion of ACTH, a regulation known as negative feedback, but this regulation is lost in EAS. The high-dose dexamethasone suppression test has been applied in the diagnosis of EAS based on this characteristic. However, the underlying mechanism is not clear. Multiple mechanisms have been proposed to explain the phenomenon.
Reduced GR expression is thought to be an important factor in mediating glucocorticoid resistance. The GR is a ubiquitously expressed protein, found in almost all human cell types and tissues at appreciable levels. It is well established that the level of GR expression is closely correlated with the magnitude of the glucocorticoid response [9]. Therefore, the GR expression level may be an important determinant of the glucocorticoid response [10]. Several studies have shown that reduced GR expression in primary acute lymphoblastic leukemia cells is associated with initial resistance to glucocorticoid therapy, relapse, and poor prognosis [11,12]. In such cell lines, including SCLC cell lines, reduced GR expression has been reported [10,13]. In our study, we collected pulmonary tissue from healthy individuals and from SCLC patients with or without EAS. We also collected tissue from the pituitary as a positive control. Immunochemistry analysis showed that there was no obvious difference in GR expression in SCLC patients without EAS compared with the control group. In the SCLC patient with EAS, by contrast, no obvious GR expression was seen in the pulmonary tissue. The results indicate that reduced GR expression may explain the lack of response to the high-dose dexamethasone suppression test. Several studies have explored the mechanism of GR downregulation in cell lines: glucocorticoid-induced downregulation of GR mRNA has been attributed to reduced transcription of the GR gene as well as decreased stability of the GR mRNA [7,8]. However, no data have been available on GR expression in pulmonary tissue from SCLC patients with EAS; our observation suggests that the reduction of GR expression may contribute to the lack of suppression by high-dose dexamethasone. More samples are still needed to confirm the phenomenon, and further studies are needed to illuminate the mechanism of GR downregulation or absence. Conclusion In SCLC patients with EAS, the reduction of GR expression may contribute to the lack of suppression by high-dose dexamethasone. Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Ethics Committee of Beijing Luhe Hospital. This article does not contain any studies with animals performed by any of the authors.
2020-01-02T21:47:57.929Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "e5fca633b82b00083313c347b1bad0e4c5724999", "oa_license": null, "oa_url": "https://doi.org/10.4103/ed.ed_29_19", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f794dca0df1d1f13c7a240a7b9939941b55c29a9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
25305030
pes2o/s2orc
v3-fos-license
Transcriptional down-regulation of epidermal growth factor receptors by nerve growth factor treatment of PC12 cells. Treatment of PC12 cells with nerve growth factor leads to a decrease in the number of epidermal growth factor receptors on the cell membrane. The mRNA for the epidermal growth factor receptor decreases in a comparable fashion. This decrease appears due to a decrease in the transcription of the epidermal growth factor receptor gene because first, there is no difference in the stability of the epidermal growth factor receptor mRNA, second, newly transcribed epidermal growth factor receptor mRNA is decreased in nerve growth factor-differentiated cells, and third, constructs containing the promoter region of the epidermal growth factor receptor gene are transcribed much less readily in nerve growth factor-differentiated cells than in untreated cells. The decreases in mRNA are not seen in the p140(trk)-deficient variant PC12nnr5 cells nor in cells containing either dominant-negative Ras or dominant-negative Src. Treatment with nerve growth factor also increases the cellular content of GCF2, a putative transcription factor inhibitory for the transcription of the epidermal growth factor receptor gene. The increase in GCF2, like the decrease in the epidermal growth factor receptor mRNA, is not seen in PC12nnr5 cells nor in cells expressing either dominant-negative Ras or dominant-negative Src. The results suggest that nerve growth factor-induced down-regulation of the epidermal growth factor receptor is under transcriptional control, is p140(trk)-, Ras-, and Src-dependent, and may involve transcriptional repression by GCF2. they elaborate neurites, become electrically excitable, stop dividing, and will synapse with appropriate muscle cells in culture (3). Overall, the cells change from a chromaffin-like phenotype to one very similar to that of a mature sympathetic neuron. A great deal of attention has been directed toward the mechanism by which NGF instructs the cells to undergo this global change in character. One interesting property of PC12 cells is the appearance of both NGF receptors and epidermal growth factor receptors (EGFR) on their surface (4). Because NGF inhibits PC12 cell division and epidermal growth factor (EGF) stimulates PC12 cell division (4), this observation has motivated a number of studies on comparative signal transduction in these cells. These experiments have led to the conclusion that the temporal aspects of cellular signaling, as well as the exact nature of the signaling components, are important in determining the eventual changes caused by a ligand on its target cell (5,6). Another question posed by this observation had to do with the consequences of treating the cells simultaneously with an agent that stops their growth and one that stimulates it (4). The result of such dual treatment was that the differentiating agent, NGF, caused a decrease in the receptors for the mitogen, EGF (4). Although the mechanism of this down-regulation was not known, it was suggested that the decrease was, at least in part, the way in which NGF instructed the cells to stop dividing and differentiate, by blinding them to the mitogens that normally control their growth. Recently it has been shown (7) that the down-regulation of the EGFR by NGF is dependent on the Ras-Raf-MAP kinase pathway. That is, the down-regulation does not occur in PC12 variants that cannot signal by this pathway. It was also shown that the down-regulation is mediated by the p140 trk receptor. 
Finally, it was demonstrated that although the down-regulation accompanies NGF-induced morphological differentiation in these cells, it occurs whether or not the cells are allowed to differentiate morphologically. However, the molecular mechanisms controlling NGF-induced EGFR down-regulation remained unknown. In the present work, we have shown that the down-regulation has a transcriptional basis. Further, we have demonstrated that this decreased transcription is mediated by the p140 trk receptor and the Ras-Raf-MAP kinase pathway. Finally, we have found that increases in the transcription inhibitor GCF2 (transcriptional repressor of the epidermal growth factor receptor gene) accompany the decrease in the EGFR and that these increases are also mediated by p140 trk and the Ras-Raf-MAP kinase pathway. EXPERIMENTAL PROCEDURES Materials-Mouse NGF and rat type I collagen were purchased from Becton Dickinson (Bedford, MA). Monoclonal antibody against the EGFR (6F1) was obtained from Medical and Biological Laboratories. A DNA fragment of human GCF2 (GenBank accession number U69609) corresponding to amino acids 51-705 that had been ligated with a BamHI linker was ligated into the BamHI site of pGEX1T (Pharmacia Biotech Inc.). The direction was confirmed by DNA sequencing. The human GCF2 glutathione S-transferase fusion protein (GST-GCF2) was purified on a glutathione-Sepharose affinity column from isopropylthiogalactoside-induced E. coli according to the manufacturer's protocol. The polyclonal antibody was the protein A-Sepharose-purified IgG fraction from antiserum raised in rabbits against GST-GCF2. Cell Culture-PC12 cells were grown in Dulbecco's modified Eagle's medium supplemented with 5% fetal bovine serum, 10% horse serum, 100 µg of streptomycin/ml, and 100 units of penicillin/ml. For NGF treatment, 100 ng of NGF/ml was added to the culture medium. In all experiments involving extended treatment with NGF, the medium was changed and fresh NGF was added every other day. PC12nnr5 cells (8) were grown in RPMI 1640 medium (Life Technologies, Inc.) supplemented with 5% fetal bovine serum and 10% horse serum. The PC12 cell variants M-M17-26, expressing the Ha-ras Asn-17 gene under the transcriptional control of the mouse metallothionein-I promoter (9), and srcDN2, expressing the K295R mutant (kinase-dead) form of chicken Src under the control of the cytomegalovirus promoter (10), were grown, as were PC12 cells, in Dulbecco's modified Eagle's medium supplemented with 5% fetal bovine serum and 10% horse serum. In all experiments, cells were cultured on collagen. Collagen coating of culture dishes and flasks was performed according to the manufacturer's protocol (5 µg/cm2). For mRNA stability experiments, cells treated for 5 days with NGF were incubated with actinomycin D (10 µg/ml) in Me2SO for 2, 4, or 6 h. Untreated control cultures contained the same concentration of Me2SO. Immunoblot Analysis-The cells were harvested with 5 mM EDTA/phosphate-buffered saline, pH 7.4, and washed twice with saline. To prepare whole cell extracts, 1 × 10^7 washed cells were treated with 10% trichloroacetic acid for 20 min at 4°C.
The precipitate was collected by centrifugation, and the pellet was solubilized in 100 µl of 2× SDS sample loading buffer and sonicated. The pH of the sonicated lysate was adjusted to neutral by adding 1 M Tris. Protein concentration was estimated using the Bio-Rad protein assay system. Samples were resolved on a 7.5% SDS-polyacrylamide gel and then transferred to a polyvinylidene difluoride membrane (Millipore, Bedford, MA). After blocking with nonfat dry milk, the blot was incubated with either 6F1 anti-EGFR antibody (0.6 µg/ml) or anti-GCF2 antibody (1:5,000 dilution). Bound antibodies were detected by sheep anti-mouse Ig or donkey anti-rabbit Ig antibody conjugated with horseradish peroxidase (Amersham Corp.) and analyzed with the Supersignal Chemiluminescent Substrate (Pierce). Northern Blot Analysis-Poly(A+) RNA was isolated from untreated control PC12 cells and NGF-treated PC12 cells using FastTrack (Invitrogen, San Diego, CA). The RNA was analyzed on 0.8% formaldehyde-agarose gels, transferred to Hybond-N nylon membranes, and probed with a 32P random-primed 0.85-kb PstI-XbaI fragment of the rat EGFR (nucleotides 463-1311; the kind gift of Dr. H. Shelton Earp), followed by autoradiography. After stripping with a boiled 0.1% SDS solution, the membrane was reprobed with a 0.68-kb fragment of rat GAPDH (nucleotides 379-1060). Competitive RT-PCR-Total cytoplasmic RNA was isolated from untreated control PC12 cells and NGF-treated PC12 cells by RNA STAT-60 (Tel-Test "B", Inc., Friendswood, TX). Single strand DNA was generated from 1 µg of total RNA with the Superscriptase Preamplification System (Life Technologies, Inc.). The PCR MIMIC for competitive RT-PCR was made utilizing a construction kit according to the manufacturer's protocol (CLONTECH; Palo Alto, CA). The target fragment (nucleotides 154-843) derived from EGFR was amplified using an upstream primer, 5′-ATGCGACCCT CAGGGACTGC GAGAAC-3′, and a downstream primer, 5′-GTCGCTAGGG GACCTGCCAC GACAAC-3′. The sizes of the PCR products of the EGFR target and the PCR MIMIC were 690 and 538 bp, respectively. The GAPDH sequence (nucleotides 379-1060) was amplified using an upstream primer, 5′-TGGAGAAGGC TGGGGCTCAC CTGAAG-3′, and a downstream primer, 5′-GCCATGTAGG CCATGAGGTC CACCAC-3′, and the PCR products of the GAPDH target and the PCR MIMIC were 682 and 488 bp, respectively. Six tubes of 2-fold serial dilutions of PCR MIMIC were prepared for each competitive PCR sample. The cycle parameters for PCR were 94°C for 1 min, 56°C for 1 min, and 72°C for 1 min, and the cycle numbers for EGFR and GAPDH were 24 and 20, respectively.
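In competitive RT-PCR as described above, the amount of target transcript is read off at the equivalence point, the MIMIC input at which target and competitor products are equally amplified (determined visually in this study, as noted under "Results"). A minimal sketch of how that point could be interpolated numerically in Python; the dilutions, band-intensity ratios, and the log-log interpolation are hypothetical illustrations, not the paper's procedure:

```python
import numpy as np

# Hypothetical MIMIC inputs (attomoles) across the 2-fold dilution
# series and measured band-intensity ratios (target / MIMIC).
mimic = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.25])
ratio = np.array([0.2, 0.45, 0.9, 1.8, 3.6, 7.5])

# On log-log axes the ratio declines roughly linearly with MIMIC input;
# the equivalence point is where the ratio equals 1 (log ratio = 0).
slope, intercept = np.polyfit(np.log(mimic), np.log(ratio), 1)
equivalence = np.exp(-intercept / slope)
print(f"target abundance ≈ {equivalence:.2f} attomoles of MIMIC equivalent")
```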
Nuclear Run-off Transcription-The double strand cDNA clones for rat EGFR, rat NGFI-B, and rat GAPDH were used as templates for hybridization. The 1.35-kb NsiI-PstI fragment of the rat EGFR (nucleotides 462-1815) and the 1.46-kb NarI-NheI fragment of rat NGFI-B were each ligated into pBluescript II KS(+) utilizing its PstI site and ClaI-XbaI sites, respectively. The GAPDH fragment, which is identical to the target fragment used for competitive RT-PCR, was also ligated into pBluescript II KS(+). 2 µg of linearized template DNA was immobilized on GeneScreen nylon membrane (NEN Life Science Products) according to the manufacturer's instructions, using a slot blot apparatus (Schleicher & Schuell). Nuclei from untreated PC12 cells or PC12 cells treated with NGF (100 ng/ml) or with actinomycin D (10 µM) were prepared using Nonidet P-40 lysis buffer (10 mM Tris-HCl, pH 7.4, 10 mM NaCl, 3 mM MgCl2, 0.5% (v/v) Nonidet P-40) according to standard methodology (11). Isolated nuclei were resuspended in glycerol storage buffer (50 mM Tris-HCl, pH 8. … The labeled RNA was resuspended in 100 µl of 10 mM Tris-HCl, pH 7.4, 1 mM EDTA, and 0.1% SDS and denatured at 95°C for 10 min before hybridization. Labeled RNA (6 × 10^6 cpm) was hybridized to the immobilized template DNAs at 52°C in hybridization solution (6× SSPE, 5× Denhardt's, 0.1% SDS, 0.1% sodium pyrophosphate, 100 µg/ml denatured salmon sperm DNA, 50 µg/ml yeast tRNA, pH 7.6) for 60 h. Because the total radioactivity in the labeled RNAs from actinomycin D-treated PC12 cells was only 1/25th of that from the other cells, all the labeled RNA was used for hybridization. The filters were washed for 20 min once in 2× SSC, 0.1% SDS at 60°C, twice in 2× SSC, twice in 2× SSC containing RNase A (0.25 µg/ml) at room temperature, once in 0.2× SSC containing 0.1% SDS at 60°C, and once in 0.2× SSC containing 1.0% SDS at 60°C. Filters were processed for PhosphorImager analysis using a STORM860 (Molecular Dynamics) and analyzed by autoradiography. Reporter Gene Assay-The plasmids pER6-luc, pER9-luc, and pER10-luc (12) were obtained by inserting the promoter regions from pERCAT6, pERCAT9, and pERCAT10 into the HindIII site of the pGL3-Basic plasmid (Promega). PC12 cells cultured in collagen-coated 6-well plates (Nunc, Naperville, IL) were transfected using LipofectAMINE with 1 µg of pER-luc plasmid and 0.1 µg of the internal control pRL-TK (Promega), which contains Renilla luciferase downstream of the Herpes simplex virus thymidine kinase promoter. After NGF treatment (100 ng/ml) for various periods of time, cells were transfected for 1 h. 9 h after transfection, cell lysates were prepared with the Dual-Luciferase Reporter Assay system (Promega), and both firefly and Renilla luciferase activity were measured in an LB 9507 luminometer (Berthold, Wildbad, Germany). Transfection efficiency was normalized by the Renilla luciferase activity. The data are expressed as the means ± S.D. In some experiments, RNA template-specific PCR of the luciferase gene from PC12 cells transfected with pER6-luc was used (13). Total cytoplasmic RNA from transfected cells was reverse transcribed using a chimeric primer (5′-CCGCGCGGCC GCTCTAGAAC TAGTGGCGCG GTTGTTACTT GACTGGCG-3′), where its 3′-region is complementary to a region of the target luc gene (nucleotides 1614-1593 of pGL3-Basic) and its 5′-region is a tagged sequence. PCR was performed with the sequence-specific first strand cDNA as a template, an upstream primer, 5′-ACCTGCAGCT TCTGGTGGCG CTCCCCTC-3′ (nucleotides 1024-1046 of pGL3-Basic), and the tagged primer, 5′-GCGCGGCCGC TCTAGAACTA GTGGC-3′. RESULTS NGF-induced Down-regulation of EGFR-The level of EGFR during NGF-induced differentiation of PC12 cells was examined by Western blot analysis (Fig. 1). A reduction in the level of the EGFR protein in whole cells became apparent after 1 day of NGF treatment, and this reduction was gradual and progressive. A 67% decrease was seen after 5 days of treatment. This reduction was consistent with the reduction in [125I]EGF binding during NGF-induced differentiation observed in previous reports from this laboratory (7).
NGF-induced Down-regulation of EGFR mRNA-Fig. 2 shows the levels of EGFR mRNA during NGF-induced differentiation, determined both by Northern blot analysis (Fig. 2A) and by competitive RT-PCR (Fig. 2, B and C). The time points are the same as those in Fig. 1. Both methods show a progressive reduction in EGFR mRNA levels during NGF treatment. Three species, one major (9.6 kb) and two minor (6.5 and 5.0 kb), of EGFR mRNA have been reported (14), along with a 2.7-kb species considered to be a truncated variant, the protein product of which lacks both transmembrane and intracellular domains. Northern blot data show that all three species decrease in the course of NGF treatment, and this decrease precedes the reduction in the protein. Competitive RT-PCR, a quantitative procedure for analyzing mRNA levels that is also more sensitive than RNA blot techniques, such as Northern blotting, which provide only semi-quantitative results (15), shows a very similar decrease in EGFR mRNA. Fig. 2B depicts the actual PCR products visualized on gels obtained from untreated control cells and from cells treated for 5 days with NGF. The arrows in Fig. 2B indicate the cross-over points as determined visually. Even by visual estimation, a shift in the crossover point could be clearly seen. The relative level of EGFR mRNA … (Fig. 2C).

[Fig. 2 legend: A, Northern blot analysis was performed as described under "Experimental Procedures." Relative intensity of the major band (9.6 kb) compared with untreated control PC12 cells was as follows: EGFR, 3 h, 105%; 6 h, 63.6%; 1 day, 41.2%; 3 days, 27.9%; 5 days, 18.2%; GAPDH, 3 h, 97.1%; 6 h, 98.7%; 1 day, 100%; 3 days, 92.5%; 5 days, 75.8%. d, day(s). B, changes in EGFR mRNA levels induced by treatment of PC12 cells with NGF for 5 days, as estimated by competitive RT-PCR. First strand cDNA was obtained by reverse transcription with random primers from total RNA of untreated control PC12 cells and of PC12 cells treated with 100 ng/ml of NGF for 5 days (NGF 5d). Serially diluted solutions of competitor fragments (MIMIC) were added to each PCR with first strand cDNA obtained from each PC12 cell culture in the presence of EGFR target primers. PCR products were resolved and visualized by 1. …]

The region selected as a target fragment for PCR amplification was almost identical to the sequence of the EGFR DNA fragment used as a probe for Northern blot analysis. The decrease in EGFR mRNA measured by competitive RT-PCR began after 6 h of NGF treatment (Fig. 2C), and an 85% reduction was seen after 5 days of NGF treatment, a result very similar to that obtained by Northern blot analysis. The reduction became statistically significant after 1 day of treatment. GAPDH mRNA, the internal control, was constant during NGF treatment. EGFR mRNA Stability-The decay of EGFR mRNA in untreated control cells and in cells treated for 5 days with NGF was measured in the presence of actinomycin D by competitive RT-PCR (Fig. 3). The decay of EGFR mRNA was faster than that of GAPDH mRNA. Neither EGFR mRNA nor GAPDH mRNA showed any significant changes in their decay due to NGF treatment. The same result was obtained by Northern blot analysis of samples from cells treated for 3 or 5 days with NGF (data not shown). Nuclear Run-off Transcription of EGFR Gene-The nascent transcript levels of EGFR were measured in untreated control cells and in cells treated with NGF for 45 min or 5 days (Fig. 4). Two different experiments were performed, and the data obtained from these two experiments were almost identical.
The transcription of EGFR was decreased by 55% in PC12 cells treated with NGF for 5 days. By way of control, a 2-fold increase in NGFI-B transcription was observed after 45 min of NGF treatment. NGF treatment had no effect on the transcription of the housekeeping gene, GAPDH, and there were no transcripts from any of these genes in cells treated with actinomycin D. NGF-induced Decrease in EGFR Promoter Activity-To study the EGFR promoter activity and to confirm the decrease in transcriptional activity in NGF-treated cells, three luciferase gene plasmids with different lengths of promoter were used for reporter gene assays of PC12 cells. Compared with the promoter activities from the longer promoter plasmids (pER6-luc and pER9-luc), the activity of the shortest promoter plasmid (pER10-luc) was very low (Fig. 5A). After NGF treatment, the normalized EGFR promoter activity of pER6-luc and pER9-luc transfectants was almost completely repressed (Fig. 5B). The promoter activity of pER10-luc was not as strongly inhibited. To evaluate the time course of the reduction in promoter activity, PC12 cells were treated with NGF for different periods of time before transfection. Even 12 h after NGF treatment, a significant reduction in promoter activity was observed, and this activity decreased with time (Fig. 5C). The slow but progressive decrease in promoter activity was consistent with the changes in EGFR mRNA seen during NGF treatment (Fig. 2). Table I shows the effect of introducing different amounts of promoter plasmid DNA into the cells on the extent of NGF-induced down-regulation of EGFR promoter activity. Untreated PC12 cells and cells treated with NGF (100 ng/ml) for 5 days were transfected with either 1 µg or 0.1 µg of pER9-luc EGFR promoter DNA, and luciferase assays were performed. The down-regulation of EGFR promoter activity was seen for up to 15 h after transfection with 1 µg of DNA but disappeared after 24 h. On the other hand, transfection with 0.1 µg of DNA showed the down-regulation even 24 h after the transfection. Transcriptional Control of NGF-induced Decrease in pER6-luc Expression-Because the NGF-induced decrease in luciferase activity could occur at either the transcriptional or the translational level, the levels of luciferase mRNA were measured by RNA template-specific PCR. Fig. 6A shows an experiment validating the PCR measurement. With the combination of the upstream and tagging primers, sequences derived from luciferase mRNA that had been tagged with a unique sequence during reverse transcription are amplified preferentially, whereas contaminating DNA derived from the transfected plasmid, which lacks the unique tag, is not amplified. Lane 1 of Fig. 6A shows the result of the PCR reaction with pGL3-Basic as a template and the combination of the upstream primer and the tagging primer. Lane 2 of Fig. 6A shows the product (597 bp) amplified by the PCR reaction with pGL3-Basic as a template and the combination of the upstream primer and the downstream chimeric primer. Lane 3 of Fig. 6A shows the product (622 bp) amplified by the reaction with single strand DNA derived from the pER6 transfectant as a template and the combination of the upstream primer and tagging primer. From these results, PCR with the combination of upstream primer and tagging primer selectively amplifies luciferase mRNA. Fig. 6B shows the RNA template-specific PCR products from the pER6-luc transfected PC12 cells after 5 days of culture in the presence or the absence of NGF.
Compared with untreated control cells (Fig. 6B, lane 1), NGF-treated cells showed much lower amplification of the PCR product (Fig. 6B, lane 2), demonstrating that the regulation of the expression of this plasmid is at the transcriptional level. Involvement of p140 trk in Transcriptional Repression of EGFR-PC12nnr5 cells are a variant of PC12 cells that express little or no p140 trk, the high affinity site for NGF (8). 5 days of NGF treatment did not induce the down-regulation of EGFR in these cells (Fig. 7A), nor did it change the level of receptor mRNA (Fig. 7B). PC12nnr5 cells also did not show a significant reduction in EGFR promoter activity after 5-day treatment with NGF (Fig. 7C). NGF-induced Increase in GCF2 Levels-GCF2 is a recently identified transcriptional repressor of the EGFR gene, 2 which migrates as a 160-kDa protein on polyacrylamide gels. To examine the expression level of this protein in PC12 cells during NGF treatment, Western blot analysis was performed. As shown in Fig. 8, GST fusion protein expressed in a bacterial system was recognized by an antibody raised against GCF2. In PC12 cells, a 160-kDa major protein band and a minor protein band at about 150 kDa were detected. The level of these proteins was clearly increased by NGF treatment in a time-dependent manner.

FIG. 5. EGFR promoter activity in untreated control PC12 cells and in cells treated with NGF. A, relative promoter activity of the human EGFR gene and of deletion mutants in PC12 cells. Chimeric reporter genes with various lengths of the EGFR promoter (pER6-luc, pER9-luc, and pER10-luc) were generated by inserting the promoter regions from pERCAT6, pERCAT9, and pERCAT10 into the pGL3-Basic vector at the HindIII site. 1 µg of each reporter gene plasmid and 0.1 µg of the internal control pRL-TK were transfected into PC12 cells. 1 h after transfection, the solution was removed, and culture medium was added. 9 h after transfection, the cells were harvested, and luciferase activity was measured. Relative promoter activity measured by firefly luciferase activity was normalized by the Renilla luciferase activity derived from pRL-TK. The values are the means of triplicate determinations. Luciferase activity of the empty vector (pGL3-Basic) was 7% of the activity obtained with the pER6-luc transfection. B, EGFR promoter activity in NGF-treated PC12 cells. After 5 days in culture in the presence or absence of NGF, PC12 cells were transfected with 1 µg of either pER6-luc, pER9-luc, or pER10-luc and 0.1 µg of pRL-TK. After transfection the solution was removed, and culture medium was added. 9 h after transfection the cells were harvested, and luciferase activity was measured. NGF was present throughout for the NGF-treated cells.

FIG. 6. NGF-induced changes in the luciferase reporter gene mRNA levels in PC12 cells transfected with pER6-luc. A, validation of PCR primers used to amplify the luciferase RNA template derived from pER6-luc transfected cells. Reverse transcription was performed with the chimeric primer composed of the luciferase gene-specific and the tagged sequences. The upstream primer was designed utilizing the 5′ luciferase gene-specific sequence. With the combination of the upstream and tagging primers, sequences derived from luciferase mRNA that had been tagged with a unique sequence during reverse transcription are amplified preferentially, whereas contaminating DNA derived from transfected plasmid that lacks the unique tag is not amplified.
Lane 1 shows the result of the PCR reaction with pGL3-Basic as a template and the combination of the upstream primer and the tagging primer. Lane 2 shows the product (597 bp) amplified by the PCR reaction with pGL3-Basic as a template and the combination of the upstream primer and the downstream chimeric primer. Lane 3 shows the product (622 bp) amplified by the reaction with first strand cDNA derived from the pER6 transfectant as a template and the combination of the upstream primer and tagging primer. B, RT-PCR of pER6-luc transfected PC12 cells. Cells were untreated or were treated with 100 ng/ml of NGF for 5 days, and total RNA was prepared from cells 5 h after the transfection. 1 µg of total RNA was used for reverse transcription with the chimeric primer. After the reverse transcriptase reaction, PCR was performed using a combination of the upstream primer and the tagging primer. Lane 1, untreated control PC12 cells; lane 2, NGF-treated PC12 cells.

The NGF-induced down-regulation of the EGFR in PC12 cells is dependent upon both Ras and Src (7). To begin to explore the relationship between the increase in GCF2 and the decrease in the EGFR, the NGF-induced increase in GCF2 was inspected for its dependence on Ras or Src. Two PC12 cell variants, one stably overexpressing the dominant-negative mutant Ras N17 (M-M17-26) and the other stably overexpressing a kinase-inactive, dominant-negative Src (SrcDN2), were treated with NGF for 5 days, and Western blot analyses of GCF2 and EGFR were performed. Fig. 9 shows the data from these variants and from PC12nnr5 cells as well. Unlike in the wild type, there was neither an up-regulation of GCF2 levels nor a decrease in EGFR levels in these variant cell lines upon NGF treatment. It is interesting to note the very high basal levels of GCF2 in the Src dominant-negative cells and the absence of the 160-kDa GCF2 band in PC12nnr5 cells, even after NGF treatment. DISCUSSION The observation that both mitogenic and anti-mitogenic receptors occur on the same cell, and the further finding that the ligand for the anti-mitogenic receptor, NGF, causes a profound decrease in the levels of the receptor for the mitogen, EGF, permitted the suggestion that this down-regulation could be one way in which NGF instructs its target cells to stop dividing and differentiate (4). The mechanism by which this down-regulation occurs has not been described. There have been two studies (17,18) dealing with rather short term NGF-induced changes in EGFR levels on PC12 cells, but these clearly deal with a different phenomenon than the one described here. The evidence that this regulation is exerted at the transcriptional level seems quite persuasive. There is a decrease in EGFR mRNA levels comparable with the decrease seen in the receptor itself. This decrease is clearly not caused by any difference in mRNA stability. Direct proof was obtained by nuclear run-off assay, in which about a 55% reduction of the nascent mRNA for the EGFR was observed as a result of long term treatment of the cells with NGF. Furthermore, evidence to support the transcriptional regulation was provided by transfecting EGFR promoter constructs linked to luciferase into untreated control and NGF-treated cells. The decreased luciferase activity in the treated cells, together with the direct measurement of luciferase mRNA to confirm that the decrease was at the transcriptional level, provides conclusive proof of the transcriptional regulation of this expression.
It should be noted that this decrease in expression was evident only for short times after the transfection. By 24 h after transfection there was no difference between the NGF-treated cells and the controls. That this was not due to differences in transfection speed or efficiency was clear from the Renilla controls, which were expressed equally at every time point in untreated control and NGF-treated cells. A possible explanation for these data is that the large amounts of EGFR promoter region that are produced in the cells simply exhaust the inhibitory transcription factors available; such depletion has been suggested by observations with the transfected chromogranin A promoter in PC12 cells (19), in which the high dose of the transfected promoter DNA may saturate and deplete the trans-acting factor. Data consistent with such a possibility are presented in Table I, in which transfection with a low dose of EGFR promoter DNA resulted in down-regulation even 24 h after transfection.

FIG. 7. Changes in the level of EGFR protein, mRNA, and promoter activity in NGF-treated PC12nnr5 cells. A, 15 µg of whole cell lysates from untreated control PC12nnr5 cells and from PC12nnr5 cells treated for 5 days with NGF (100 ng/ml) were subjected to Western blot analysis for EGFR. B, 5 µg of poly(A+) RNA isolated from the same batch of PC12nnr5 cells as in Fig. 6A were subjected to Northern blot analysis for EGFR. C, after 5 days in the presence or absence of NGF (100 ng/ml), PC12nnr5 cells were transfected with 1 µg of pER9-luc and 0.1 µg of pRL-TK. After transfection, the solution was removed, and culture medium was added. Luciferase activity was measured 9 h after the transfection. NGF was present throughout all procedures for the NGF-treated cells.

[Legend fragment, figure number lost in extraction: Whole cell lysates were subjected to immunoblot analysis with 6F1 anti-EGFR antibody and anti-GCF2 antibody, as described elsewhere. Because the levels of expression of GCF2 in the SrcDN2 cells were constitutively high, the total amount of protein of this clone was reduced to 20 µg for the GCF2 immunoblotting.]

The regulation of EGFR expression is complex, involving at least five stimulatory transcription factors, Sp1 (20), ETF (21,22), TCF (23), RPF-1 (24), and p53 (25,26), and four inhibitory transcription factors, GCF1 (27), GCF2, 2 ETR (28), and WT1 (29). Interactions of NGF with Sp1 and with p53 have been reported before. NGF has been shown to induce a subunit of the N-methyl-D-aspartate receptor (30) and the gene for the light neurofilament protein (31), at least in part, through an Sp1 site, and the apoptotic death of PC12 cells after NGF withdrawal appears to lower Sp1 levels (32). NGF also appears to interact with p53 in that PC12 cells overexpressing p140 trk show an association between the receptor and p53, and p53 can induce an NGF-like response in these cells in the absence of NGF (33). Further, 3T3 cells show a Raf-dependent phosphorylation of p53 and a potentiation of the transactivation potential of p53 (16), and Raf is part of the Ras-Raf-MAP kinase pathway activated by NGF. But this is the first report of an NGF-induced alteration in one of the inhibitory factors. GCF2 was identified by differential hybridization and library screening 2 using a cDNA for GCF1 (27). Cotransfection assays show that GCF2 acts to repress transcription from the EGFR promoter as well as from those for SV40 and Rous sarcoma virus.
Gel shift assays using His-tagged GCF2 protein have revealed two binding sites in the human EGFR promoter: one is a strong binding site (−384 to −164 relative to the AUG translation initiation codon) and the other is a weaker binding site (−154 to −15). Both pER6-luc and pER9-luc constructs have the promoter region that includes both binding sites, but pER10-luc contains only the weak binding site. These observations could explain the much weaker effect of NGF treatment on the transcription of pER10-luc. Clearly, the data presented here do not prove that GCF2 is involved in the decreased expression of the EGFR in NGF-treated PC12 cells, but the coincidence of the increased expression of GCF2 with the decreased expression of the EGFR, and the fact that both events are dependent on p140trk, Ras, and Src, are consistent with that possibility. Further experiments testing that possibility are underway. In any case, the decreased expression of the EGFR in PC12 cells caused by NGF differentiation is clearly transcriptional in nature. The increasing number of factors that appear to control that transcription indicates that the regulation is quite complex. But because the expression of the EGFR and its homologs, the ErbB family, appears to be involved in the control of the growth of a number of tumors, the details of the control of that expression would seem to be worth pursuing.
2018-04-03T04:19:18.707Z
1998-03-20T00:00:00.000
{ "year": 1998, "sha1": "6b2fdd4d66a6ce62547a241a1df76e924c600d0c", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/273/12/6878.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "1d581c2362e00f052310389b6f476fa87340e9ef", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
284810
pes2o/s2orc
v3-fos-license
Potential worldwide distribution of Fusarium dry root rot in common beans based on the optimal environment for disease occurrence

Root rots are a constraint for staple food crops and a long-lasting food security problem worldwide. In common beans, yield losses originating from root damage are frequently attributed to dry root rot, a disease caused by the Fusarium solani species complex. The aim of this study was to model the current potential distribution of common bean dry root rot on a global scale and to project changes based on future expectations of climate change. Our approach used a spatial proxy of the field disease occurrence, instead of solely the pathogen distribution. We modeled the pathogen's environmental requirements in locations where in-situ inoculum density seems ideal for disease manifestation. A dataset of 2,311 soil samples from commercial farms assessed from 2002 to 2015 allowed us to evaluate the environmental conditions associated with the pathogen's optimum inoculum density for disease occurrence, using a lower threshold as a spatial proxy. We encompassed not only the optimal conditions for disease occurrence but also the optimal pathogen density required for host infection. An intermediate inoculum density of the pathogen was the best disease proxy, suggesting density-dependent mechanisms of host infection. We found a strong convergence in the environmental requirements of both the host and the disease development in tropical areas, mostly in Brazil, Central America, and African countries. Precipitation and temperature variables were important for explaining the disease occurrence (from 17.63% to 43.84%). Climate change will probably move the disease toward cooler regions, which in Brazil are more representative of small-scale farming, although an overall shrinkage in total area (from 48% to 49% in 2050 and 26% to 41% in 2070) was also predicted. Understanding pathogen distribution and disease risks in an evolutionary context will therefore support breeding-for-resistance programs and strategies for dry root rot management in common beans.

Introduction

The spatiotemporal distribution of plant diseases follows the changes undergone by agriculture, attributed to climatic variation and technological shifts. Large monocultures are predominantly subjected to disease outbreaks or frequent disease occurrences, and thus usually rely on high-input technologies to sustain production and avoid yield losses. In these circumstances, maps of disease risk serve as safeguards against large-scale outbreaks, relevant crop problems, and food security threats, supporting crop management under regular, well-defined climatic conditions [1,2]. However, the dynamic nature of plant diseases brings new uncertainties to crop management and to the breeding programs required to ensure yields in the future [3]. The climate requirements of soilborne pathogens, such as temperature and precipitation [4], define disease favorability in specific regions where the host crop is present, and determine the array of practices required for disease management. The disruption of these well-defined requirements for disease occurrence [3,4] by the predicted changes in climate around the world will therefore potentially change disease distribution and disease management in several crops [5,6]. Overall, disease modelling should therefore account for the complex interactions that affect pathosystems, in order to estimate pathogen spread and disease risks in different scenarios [7].
If we aim to prevent yield losses and improve food security, disease scouting is crucial to adjust the research and development efforts that will enhance disease management, supporting decisions at the farm level and public policies [2]. Potential distribution models are commonly used in conservation biology and other research topics to predict species distribution through climate (environmental) requirements [8]. They are correlational approaches based on how environmental constraints may limit the occurrence of specific "objects" (usually species or populations) at broad spatial scales [9]. By modelling the environmental conditions associated with an object's occurrence or abundance, spatial projections of favorability can therefore be assessed. Such models highlight potentially suitable regions for the event of interest, in the current scenario or in climate change forecasts [8]. Although this framework has traditionally been used in conservation-related research, it may conceivably be useful in plant disease epidemiology studies, together with macro-ecological tools [7]. It is possible to estimate both pathogen and disease geographical distributions because both objects may coexist in regions where the climate is more or less favorable for them. Regarding crop protection, potential distribution maps have been used to anticipate risks for vector-borne plant diseases [10] and insect pests using species distribution models [11]. Thus, models of potential distribution may shed light on unanswered epidemiological questions concerning the spatial distribution of diseases and the shifts in the intensity of disease episodes expected with climate change [8].

Disease scouting and risk mapping are especially important for staple foods, such as common beans (Phaseolus vulgaris L.), an essential protein source for developing countries in Latin America and Africa [12]. Soilborne pathogens, such as the Fusarium solani species complex (FSSC), regularly challenge common bean crops [13,14]. Broadly distributed in the world, the FSSC is a generalist pathogen that is well adapted to a wide array of environmental conditions and hosts [15]. The FSSC causes root rot and yield losses of up to 100% [16], leading to chronically lower yields in most regions [13]. In this study, we used the inoculum threshold approach represented by inoculum density, a widely used measure in epidemiological studies [17] that is easily estimated in field-soil samples. Although the FSSC is widely distributed in tropical regions of South America and Africa [18], no consensus has been reached on the lower inoculum threshold associated with dry root rot occurrence in common beans. Moreover, the link between FSSC spatial distribution and dry root rot records has not yet been spatially investigated. Here, we have provided the first attempt to model the worldwide geographical distribution of common bean dry root rot. Using an innovative approach, we asked whether the optimal propagule threshold of the FSSC could be used as a proxy of dry root rot occurrence in common beans in Brazil. That spatial proxy of disease occurrence was then used to model its worldwide distribution. We therefore tested the following hypotheses: 1) an optimal inoculum density threshold is spatially linked to disease occurrence; 2) root rot risk areas coincide with the main regions for common bean cropping; and 3) climate change will affect the current disease distribution.
Materials and methods

In this study, we modeled the potential distribution of common bean dry root rot via species distribution model/ecological niche model procedures, which required field records of the disease, referred to as the object of interest. Precise, geo-referenced records for dry root rot in common beans are usually lacking. Governmental disease reports usually provide information solely on the state or municipality where a disease was detected, from which GIS-based information, i.e., coordinate references, cannot be accurately derived. To overcome this limitation, we derived a method for refining our disease occurrence dataset, using a reliable spatial dataset on the inoculum density of common bean dry root rot. We used the FSSC inoculum density that was most spatially correlated with disease occurrence (explained below), assuming that the common bean is mostly a highly susceptible crop and that all isolates from the main Brazilian common bean growing regions are pathogenic [19].

Spatial proxy calculation

The inoculum density of soilborne pathogens is dependent on the climate and cropping system [14]. Even though some authors report favorable temperatures for the disease (e.g., above 18 °C [14]; between 22 and 32 °C [18]) and for growth, survival, and chlamydospore germination, the FSSC's adaptation to climatic variations in both temperate and tropical regions makes it a cosmopolitan pathogen [15]. Growth drivers, such as temperature and soil moisture, coupled with the nutrient and organic matter content of the soil, directly affect the viability of chlamydospores (the resistance structures of several Fusarium species) and the seasonality of the FSSC [20]. The same variables directly affect host root development in different cropping systems [14]. FSSC chlamydospores are broadly distributed in soil throughout the year, and growing hyphae exhibit great saprophytic ability [21].

In this study, a database of FSSC inoculum density records managed by Embrapa Arroz e Feijão (Santo Antônio de Goiás, Brazil) supported the pathogen spatial proxy estimate. That database was composed of 2,311 soil samples from commercial farms belonging to 103 municipalities in 10 Brazilian states, assessed from 2002 to 2015. Common beans were always present in the cropping history of the sampling sites. In general, samples were taken from the 0-10 cm topsoil of commercial farms where common bean-maize-soybean is the main cropping sequence, chosen by farmers due to commercial demands. Occasionally, cropping sequences included sorghum, millet, vegetables, or forage crops. Each soil sample was submitted to serial dilution and deep plating in Nash-Snyder semi-selective medium. The FSSC colonies were identified, and their colony forming units (CFUs) per gram of soil were used to estimate FSSC inoculum density. Former strain pathogenicity tests with the inoculum layer method [22] showed that all isolates caused average or high dry root rot severity to the common bean [19], supporting the modeling studies with feasible field records of inoculum density (S1 Table).

Disease data were retrieved from publications with field records of dry root rot on common bean crops. Scientific papers, short communications, and technical documents were selected according to the infestation histories of the sampling sites. Only those publications for which experiments had been conducted in sites naturally infested with the FSSC, or in sites with historical records of dry root rot, were kept.
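As a rough illustration of how a CFU-per-gram estimate can be derived from the dilution-plating step described above for the inoculum database, consider the minimal R sketch below. Every number in it (soil mass, diluent volume, plated volume, colony counts) is a hypothetical stand-in, not a value reported by the authors.

```r
# Hypothetical dilution-plating setup: 10 g of topsoil shaken in 90 ml of
# diluent (a ~10^-1 suspension), with 0.5 ml of that suspension spread on
# each Nash-Snyder plate. Counts are illustrative FSSC colony numbers.
soil_g     <- 10
diluent_ml <- 90
plated_ml  <- 0.5
colonies   <- c(190, 210, 205)   # replicate plate counts (hypothetical)

cfu_per_ml <- mean(colonies) / plated_ml        # CFU per ml of suspension
cfu_per_g  <- cfu_per_ml * diluent_ml / soil_g  # CFU per gram of soil
cfu_per_g  # ~3,630 CFU/g, near the intermediate threshold discussed later
```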
Ten publications sufficiently met the selection requisites and provided the geographic coordinates of their experimental sites, and were therefore registered, supporting 46 disease occurrences (S1 Table). To find the best spatial proxy for the disease, with a reliable spatial reference, we estimated Spearman's correlation coefficient between the disease occurrence records and the FSSC inoculum density records. Three inoculum densities were selected according to published data on the disease severity/inoculum density relationship. The inoculum densities adopted were the following: 1200 CFU/g of soil, based on field experiments and controlled-environment tests with the common bean and FSSC [23]; 3700 CFU/g of soil, from greenhouse experiments with the same pathosystem [24]; and 4500 CFU/g of soil, based on greenhouse tests estimating dry root rot severity under increasing concentrations of FSSC chlamydospores [17]. The inoculum density that most correlated with the dry root rot records was then considered a spatial proxy for the disease.

Climate and climate change information

Climate data, used here to calibrate the models of potential distribution, were downloaded from the WorldClim website (www.worldclim.org/current). This database was produced via the interpolation of data from ground weather stations for the years 1950-2000 [25]. On WorldClim, climate data are available as raster files containing grid-based information at different spatial resolutions. We upscaled the downloaded rasters to a resolution of 0.5 degrees of lat/long. Although our inoculum dataset allowed a finer-resolution analysis, we opted for a relatively coarser spatial resolution. Because the exact locations of the sampling sites were not known, the resolution of the spatial data on disease occurrence (extracted from publications) was usually rough. Several papers indicated only the municipality, and not the exact locations (and associated environmental conditions), of the experimental sites. However, most municipalities to which studies referred were smaller than 50 km². We therefore assumed that a 0.5-degree lat/long cell could adequately encompass the average environmental conditions associated with all of the possible locations where the experiments could have been conducted. In this way, the loss of some important local environmental information was compensated for by more accurate patterns on a worldwide scale, due to more generalized relationships between climate and disease occurrence.

In addition to estimating the present-day potential distribution of dry root rot, we also evaluated the potential impacts of climate change on disease spread. To do so, we projected the disease distribution models onto future climate scenarios. Scenarios of climate change were taken from two Representative Concentration Pathways (RCPs) of the Intergovernmental Panel on Climate Change (IPCC). Each RCP is based on a greenhouse gas concentration trajectory that the IPCC adopted for its Fifth Assessment Report (AR5) [26]. RCP 2.6 estimates an increase in global warming of 1 °C by 2050 (average for 2041-2060), whereas RCP 8.5 projects an increase of 2 °C. The same scenarios are used for 2070 (average for 2061-2080), with RCP 2.6 (1 °C) and RCP 8.5 (3.7 °C) [26]. Each RCP scenario is calculated from monthly temperature and precipitation averages generated by the climate forecasts known as Atmospheric and Oceanic General Circulation Models (AOGCMs) [27].
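As a rough illustration of the proxy-selection step described above, the sketch below computes Spearman's ρ between the binary disease records and an infested/non-infested indicator for each candidate inoculum threshold on a shared spatial grid. The data frame and its values are hypothetical stand-ins for the Embrapa inoculum records and the literature-derived disease occurrences.

```r
# Hypothetical grid-level table: one row per 0.5-degree cell, with the observed
# FSSC inoculum density (CFU/g) and a 0/1 flag for published dry root rot records.
grid <- data.frame(
  cfu     = c(500, 1500, 4000, 5200, 900, 3800),  # illustrative densities
  disease = c(0,   0,    1,    1,    0,   1)      # illustrative occurrences
)

thresholds <- c(1200, 3700, 4500)  # candidate lower thresholds (CFU/g of soil)

# For each threshold, code cells as infested (1) or not (0) and correlate that
# indicator with the disease records using Spearman's rank correlation.
rho <- sapply(thresholds, function(t) {
  infested <- as.integer(grid$cfu >= t)
  cor(infested, grid$disease, method = "spearman")
})
names(rho) <- thresholds
best <- thresholds[which.max(rho)]  # threshold most correlated with the disease
```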
Bioclimatic variables are therefore calculated from monthly temperature and precipitation values, representing annual climatic trends, seasonality, and extremes. Those climate parameters are usually considered environmental constraints for biological species and systems [28]. Although AOGCMs have been widely used to project the impacts of climate change on species distribution, different forecasts may provide different outcomes [27]. Variation in the AOGCMs chosen to project distribution models into the future can thus affect maps of potential distribution, creating a well-known source of uncertainty [29]. We accounted for that source of uncertainty by selecting different AOGCMs and projecting the disease distribution models onto all of them. The following climate forecasts were chosen: the Community Climate System Model version 4 (CCSM4) [30], the Hadley Centre Global Environmental Model version 2 (HADGEM2) [31], and the Model for Interdisciplinary Research on Climate (MIROC5) [32]. These climate forecasts are based on robust modelling algorithms, although each one has a different bias in its temperature and precipitation estimates [27]. The CCSM4, for example, has biases in the average precipitation distribution over the tropical Pacific Ocean [30], and the HADGEM2 shows some bias toward warmer temperatures in the continental Northern Hemisphere and a colder bias in South America [31]. Lastly, MIROC5 shows a cooling bias in the North Atlantic [32]. Therefore, no single AOGCM performs best in all respects [27]. Such biases may, however, be ameliorated by considering different AOGCMs as a source of variation in model outcomes, improving the predictions of distribution models [33].

Climate forecasts provide predictions for several environmental variables based on temperature and precipitation: the bioclimatic variables. However, not all of these predictors are used in modelling procedures, to avoid correlation and collinearity-related problems. We therefore selected the predictors to be used in the modelling procedures by using logistic regressions. The logistic regressions served to identify the bioclimatic variables that would be the best disease predictors, i.e., those with significant slopes, little or no collinearity, and a satisfactory biological rationale. All bioclimatic variables were considered as candidate predictors. The response variable in the logistic regressions was disease occurrence (presence or absence), defined by the best spatial proxy described earlier in the text. Logistic regression was performed with the R function glm, the binomial family, and the logit link. The binomial family is adequate for presence-absence data, and the logit link models the natural log of the odds that the response variable equals one of the two categories (zero and one). Non-significant predictor variables were removed. Further, collinearity between predictor variables was avoided with a cutoff on Pearson's correlation coefficient (r < 0.25), which is considered statistically significant (p < 0.05) [34]. These procedures led to the following bioclimatic variables being used as predictors in the subsequent distribution models: isothermality (mean diurnal range/temperature annual range), maximum temperature of the warmest month, precipitation seasonality (coefficient of variation), and precipitation of the warmest quarter. The bioclimatic variables were therefore selected to allow appropriate statistical analyses.
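A minimal sketch of this predictor-screening step: fitting a binomial GLM with a logit link to the presence/absence proxy and then checking pairwise Pearson correlations against the r < 0.25 cutoff. The variable names follow the standard WorldClim "bio" codes (bio3 isothermality, bio5 maximum temperature of the warmest month, bio15 precipitation seasonality, bio18 precipitation of the warmest quarter); the data frame occ itself is a hypothetical stand-in for the records with extracted climate values.

```r
# occ: hypothetical data frame with a 0/1 'presence' column (the inoculum-based
# proxy) and bioclimatic predictors extracted at each record's grid cell.
fit <- glm(presence ~ bio3 + bio5 + bio15 + bio18,
           data = occ, family = binomial(link = "logit"))
summary(fit)  # drop predictors whose slopes are not significant

# Collinearity screen: retain only predictor pairs with |r| < 0.25
round(cor(occ[, c("bio3", "bio5", "bio15", "bio18")],
          method = "pearson"), 2)
```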
However, biological and ecological reasons also exist for justifying their choice as the best climate predictors [35]. Two of the chosen bioclimatic variables are based on maximum and seasonal temperature, which admittedly alters the occurrence of diseases. Precipitation seasonality is also a known driver of soilborne pathogen growth [4]. The rationale of modelling disease distribution based on optimal temperature ranges may not be efficient for predicting where disease does not occur [35]. Climatic constraints (maximum and minimum) may, however, lead to insights into the growth, development, and fitness of species over a period of time [35]. Periods of drought and rain in regions that exhibit high climatic seasonality may therefore influence the FSSC. These climate constraints [36,37] are indeed known to affect chlamydospore and conidia germination, spore viability, and aggressiveness in different cropping systems [14].

Distribution models for the Fusarium solani species complex

In this work, the study model comprised a soilborne pathogen (FSSC) and a host (common bean), used here to assess dry root rot distribution. As explained before, we used a spatial proxy defined by the inoculum threshold that best represented disease occurrence. We therefore modeled the pathogen's environmental requirements at locations where in-situ inoculum density seems ideal for disease manifestation. The environmental requirements of a pathogen are usually linked to disease occurrence via conditions (abiotic factors) and resources (biotic factors, which may be consumed and are subject to competition) required for the pathogen's survival and reproduction [38]. However, models based solely on the ecological niches of pathogens may not lead to accurate maps of disease risk, because the presence of a pathogen does not necessarily result in disease manifestation [39]. Thus, by using not only the pathogen's presence but also the density most correlated with disease manifestation, we modeled the environmental conditions most strongly related to dry root rot occurrence. The resulting host × pathogen × environment interaction is therefore the pathogen niche projected for disease occurrence, represented in maps of the climatic suitability of dry root rot in common beans.

Common bean crops account for about 3.1 million hectares cropped in all Brazilian regions, especially the South, Southeast, and Center-West regions, which themselves are responsible for 3,327.8 thousand tons of the crop [40]. In Brazil, diseased fields are frequently reported, especially due to conducive weather and the omnipresence of susceptible cultivars [41]. In this study, models of FSSC distribution were calibrated with data retrieved for the Brazilian territorial extent, although projections were made worldwide. We did so because access to the exclusive database on inoculum densities allowed us to characterize regions within the pathogen's climatic niche, e.g., its environmental preferences and constraints [42-44], in a way not repeatable with other countries' datasets. By calibrating our distribution models exclusively with Brazilian data, we therefore assume that the relationships among disease occurrence, inoculum density, and environmental requirements can be extrapolated to other regions with similar climatic conditions. This approach, although possibly new in plant disease epidemiology, is widely used in other niche-related areas, such as biological invasion risk assessments [45].
By using such an approach, our aim here is to provide a first visual map of the potential worldwide distribution of dry root rot, not a guide for supporting local farmers and crop managers. Our models probably do not capture local climate peculiarities, due to territorial data restrictions, even though the disease projections may match disease records elsewhere. We therefore caution that the world maps provided here should not be used beyond their intended purpose.

Another known source of uncertainty in distribution models, beyond the AOGCM climate forecasts, is the modelling method used to establish the relationship between the occurrence of the object of interest and the environment it occupies. Different modelling methods may provide dramatically distinct projections [8]. To minimize such discrepancies, we considered a few different methods and weighted their projections according to their performances. That model-weighting procedure, also called "model ensembling", assumes that by encompassing different possible projections and their respective performances, uncertainty in the modelling method is minimized [46]. Such ensembles should not, however, encompass all classes of modelling methods, as these may have different requirements for model building and thus non-comparable results [47]. We therefore accounted for model uncertainty by ensembling projections from different methods while respecting theoretical restrictions regarding outcome comparisons. Our ensembles of distribution models were performed within two main classes of models: 1) statistical methods and 2) machine-learning methods. No ensemble was performed between the two classes.

Statistical or regression-based methods (e.g., the generalized linear model [GLM], the generalized additive model [GAM], and multivariate adaptive regression splines [MARS]) can encompass a large number of relationships between occurrences and environmental factors and usually have good explanatory power regarding the distribution-related ecological processes [47]. In this class of methods, precision and generality are balanced, which leads to moderately flexible although accurate predictions [48]. Machine-learning methods, in contrast, employ data-mining algorithms (such as GARP [Genetic Algorithm for Rule Set Production], random forest, and artificial neural networks) in an attempt to maximize the relationship between environmental predictors and biological responses [47]. Machine-learning methods usually lead to highly accurate but more stringent (rigid) distribution predictions [48]. In machine-learning methods, generality is penalized for the sake of accuracy, and model complexity usually prevents a clear interpretation of parameter relationships [47].

The regression-based methods used in this work were a GAM and a GLM, because they are compatible in their building requirements [48]. The GAM was implemented with the gam function, using the binomial family and selecting 10,000 pseudo-absences randomly sampled from within our defined extent. The GLM was fitted with linear terms and a binomial family, performed stepwise, and also used 10,000 randomly sampled pseudo-absences, to avoid collinearity issues [48]. All analyses were performed with the "sp" [49], "raster" [50], and "Biomod2" [51] packages within the R environment (R Development Core Team).
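The sketch below outlines, in base R, the pseudo-absence logic behind the regression-based models: random background points drawn from the study extent are appended as zeros, and the binomial GLM is then reduced stepwise. In the study itself these steps were handled through the biomod2 package; all objects named here are hypothetical, and the pseudo-absence count is scaled down from the 10,000 used in the paper.

```r
set.seed(42)
# pres: hypothetical data frame of presence cells (presence = 1) with bioclim columns
# bg_pool: hypothetical data frame of all candidate cells in the modelling extent,
#          holding the same bioclim columns as pres
pa <- bg_pool[sample(nrow(bg_pool), 1000), ]  # pseudo-absences (10,000 in the paper)
pa$presence <- 0

dat <- rbind(pres, pa)
glm_full <- glm(presence ~ bio3 + bio5 + bio15 + bio18,
                data = dat, family = binomial)
glm_step <- step(glm_full, direction = "both", trace = FALSE)  # stepwise reduction

# Predicted climatic suitability (0-1) for every cell in the extent
bg_pool$suit <- predict(glm_step, newdata = bg_pool, type = "response")
```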
The machine-learning methods used in this study were random forest and classification tree analysis. Random forest is an ensemble technique based on random decision trees. Here, we used 1,000 pseudo-absences selected randomly from points outside the suitable area estimated with a rectilinear surface envelope fitted to the large number of presences (surface range envelope model, "SRE") [48]. Finally, we performed a classification tree analysis (CTA) within the same framework described above for random forest, to ensure model outcome compatibility. All models were fitted with 100 runs. In a data-splitting process, 75% of our occurrence data were used for training and 25% for testing model performance [48].

Model weighting was based on the True Skill Statistic (TSS), a measure of model performance that is not affected by the prevalence of occurrences [52]. Sensitivity is the probability that the model correctly rates presence data, and specificity is the probability that it correctly rates absence data. Values of TSS (TSS = sensitivity + specificity - 1) range from -1 to +1, where values close to +1 indicate high accuracy, whereas values equal to or smaller than zero are usually considered no better than random. Less biased than other criteria, the TSS is the measure of choice for distribution predictions [53]. We used a cutoff (TSS < 0.5) to select exclusively the models with the best accuracy, i.e., models with TSS below this figure were removed and not considered in the subsequent weighting procedures. A weight (the TSS value) was then attributed to each cell-based prediction of environmental suitability, which resulted in an averaged ensemble for each modelling method. The ensemble models were projected onto current climate maps to provide estimates of the potential distribution of the FSSC. The same ensemble models were then projected onto the previously collected future climate forecasts. Suitable regions were thereby identified for disease distribution in the present day and under different climate change scenarios. Projections were made at both the country (Brazil) and worldwide scales, to predict the risks of dry root rot in common bean growing regions elsewhere [54]. All of the modelling procedures were performed with the "Biomod2" package in R (R Development Core Team).

Uncertainty analysis

Despite considering different projections, i.e., the ensembling procedure, we were also interested in determining the main drivers of total uncertainty in the model outcomes. To disentangle the variation present in our results, we adjusted linear mixed models with residual maximum-likelihood estimation (REML). Prior to that, we performed an estimability test of the effects in the mixed model in R [55], to check whether it could be used to correctly describe factor rankings. We built the mixed model considering "year" (2050 and 2070) as a random factor, and "AOGCMs", "RCPs", and "modelling methods" as fixed factors. A Gauss-Markov function of the parameters was considered estimable if it could be built as a linear combination of mathematical expectations. Empirical best linear unbiased estimates (eBLUEs) were obtained for the fixed factors, and empirical best linear unbiased predictors (eBLUPs) for the random factor. Thus, we separated the percentage of variation attributed to each factor. Because our model outcomes are maps, uncertainty was calculated for each grid cell.
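To make the weighting scheme concrete, here is a small sketch of the TSS computation (TSS = sensitivity + specificity - 1) on a held-out test split, followed by a TSS-weighted ensemble of per-cell suitability predictions. The model objects and the 0.5 probability cutoff used to binarize predictions are illustrative assumptions; biomod2 performs the equivalent steps internally.

```r
# obs: 0/1 labels of the 25% test split; prob: predicted suitability for the same cells
tss <- function(obs, prob, cutoff = 0.5) {
  pred <- as.integer(prob >= cutoff)
  sens <- sum(pred == 1 & obs == 1) / sum(obs == 1)  # true positive rate
  spec <- sum(pred == 0 & obs == 0) / sum(obs == 0)  # true negative rate
  sens + spec - 1
}

# suit: matrix of per-cell suitability, one column per model run (hypothetical)
# tss_vals: TSS of each of the 100 runs on its test split (hypothetical)
keep <- tss_vals >= 0.5                      # discard runs with TSS < 0.5
w    <- tss_vals[keep] / sum(tss_vals[keep]) # TSS-based weights
ensemble <- suit[, keep, drop = FALSE] %*% w # weighted-average suitability map
```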
Uncertainty maps could therefore be obtained for the total variation and for the variation attributed to each factor. All maps presented in this paper are original and were created in R and ArcGIS 10.1 (ESRI, Redlands, CA, USA).

Results

All three inoculum thresholds of the FSSC showed correlation with disease occurrence: 1200 propagules/soil gram (ρ = 0.79; p = 0.001), 3700 propagules/soil gram (ρ = 0.85; p = 0.001), and 4500 propagules/soil gram (ρ = 0.81; p = 0.001). The best correlation between propagule density and root rot occurrence in the common bean was therefore obtained at 3700, as a lower threshold, considering the current disease spatial distribution. The 3700 lower threshold was thus chosen as the proxy of common bean dry root rot occurrence in the distribution models (78 occurrences). Indeed, the area of the disease distribution varied according to the proxy chosen: from 3700 to 1200 propagules/soil gram the area decreased by 3%, and from 4500 to 3700 propagules/soil gram it decreased by 21%. We also projected the distribution of the disease using the three proxies for 2050 and 2070, with the statistical method. For 2050, there was no consistent pattern between the two RCPs, i.e., no systematic increase or decrease in the disease range with the increase in average temperature between RCPs. For 2070, however, we found the following pattern: increasing the inoculum threshold (from 1200 to 4500) was associated with a greater shrinkage of the disease distribution (S2 Table, S1 Fig).

Although the predictions of the current disease distribution in Brazil were not identical between the statistical and machine-learning models, both methods predicted high disease risk in the central, southeastern, and southern portions of the country (Fig 1). Statistical methods predicted larger distributional areas for disease occurrence than machine-learning methods did. A similar pattern was found in the worldwide maps of the current disease distribution. Areas predicted as climatically highly suitable for disease occurrence were quite convergent between statistical and machine-learning methods (Fig 2). Again, on a worldwide scale, statistical methods produced less stringent distribution predictions than machine-learning methods.

We considered two future scenarios according to two extreme greenhouse gas emission rates (the "optimistic" RCP 2.6 and the "pessimistic" RCP 8.5) from the IPCC-AR5, estimated for the years 2050 and 2070. In Brazil, an overall reduction in the area suitable for root rot occurrence in the common bean is expected in the future (Fig 3). In 2050, a 49% reduction of the potential distribution is expected under the optimistic greenhouse gas emission rates, and up to 48% under the pessimistic climate change scenario. The high-climate-suitability area moved toward Southern Brazil, keeping disease occurrence outside the Center-West Region of Brazil. The same reduction is observed in the 2070 projections (Fig 4). In 2070, under RCP 2.6, reductions average 26% of the total area, and under RCP 8.5 the reduction in the disease distribution area reached 41%. More details about the projections are given in Table 1. The estimation methods showed high TSS values, 75.5 for the statistical and 92.3 for the machine-learning models; machine-learning thus produced more precise models than the statistical methods did.
Most models had high overall accuracy (TSS > 0.5), and the bioclimatic variables had different weights in the estimation according to the modelling method, with isothermality (up to 43.5%), maximum temperature of the warmest month (up to 43.84%), and precipitation seasonality (up to 42.92%) being the most important variables. Only isothermality showed congruence between the machine-learning and statistical methods (Table 2).

The uncertainty analysis showed differences in climate suitability estimation by method, AOGCM, and climatic scenario. However, the greatest source of uncertainty was the modelling method (statistical versus machine learning). In addition, uncertainty varied geographically in a splash-like pattern, concentrated in regions predicted as inadequate for disease occurrence. That splashed pattern was evident in both 2050 and 2070. Moreover, the relative contribution attributed to each factor (i.e., the variation ranking) did not change with the variation of the fixed factors observed via the eBLUEs (Table 3). The climate suitability of the disease was lower under RCP 8.5 than under RCP 2.6, which is expected given that RCP 8.5 is the pessimistic scenario, i.e., it projects a greater rise in average temperature. The low eBLUP values also show low interference of the random factor with climate suitability as the response variable in 2050 and 2070. According to the eBLUPs, climatic favorability was 4.17 units above average in 2050 and 4.17 units below average in 2070, indicating that the year alone explains little about the climate suitability of the disease.

Discussion

Plant pathogens and their diseases will likely follow the climate-mediated distributional shifts of crops [4], creating a dynamic scenario for the management of plant disease epidemics in the face of climate change. Highlighting which regions are expected to be most at risk of disease is therefore crucial for disease management policies [6], and maps that anticipate disease risks can potentially increase efficiency and reduce the costs of disease management strategies in the future [2].

[Figure legend: The model fitted in Brazil was used to predict the dry root rot distribution in other places where the host crop is grown, such as Central America, the northern USA, Europe, Africa, and Asia. The legend shows the climatic suitability: 1 is the most suitable, and 0 the least suitable.]

In this study, we modelled the distribution of common bean dry root rot, which stems from the FSSC. To do so, we used a spatial proxy for disease occurrence, represented by the inoculum density most correlated with disease distribution. An intermediate inoculum density (3700 propagules per soil gram) was the best proxy of disease occurrence, perhaps a more representative estimate of infestation by the FSSC in Brazilian common bean fields. The potential distribution of the disease in both the present and the future was also remarkably convergent with the tropical areas (Central and South America and the African continent) predicted as highly suitable for common bean crops [54].
Even though records of the spatial distribution of dry root rot are scarce in the literature, our results on the world distribution of the disease also match disease reports in countries such as Kenya [56], Rwanda [56], Burundi [56], Zaire [56], Mexico (e.g., the Aguascalientes, Veracruz, and Guanajuato states) [57], and the USA (e.g., Minnesota and North Dakota) [58]. All reports highlighted the importance of dry root rot as a source of relevant yield losses and difficulties of control.

In this study, the dry root rot distribution in the common bean was linked to all inoculum densities, but the intermediate threshold was the best spatial proxy for disease occurrence. The correlation between the inoculum threshold and disease occurrence therefore exhibited an intermediate saturation baseline [59]. The density-dependent relationships of the population growth of soilborne pathogens may be the cause of such patterns in epidemiology [60], as may interspecific relationships in the soil community [61]. Soilborne pathogens may be endemic and cause diseases in natural ecosystems, but their main nutrient resource, the host, is obviously a crucial driver of their distribution. However, pathogen populations are also dependent on a certain within-host density that does not overcome the host's carrying capacity, to avoid its complete depletion [59]. This is the case of FSSC × common bean, which results in stunted, low-yield plants and rarely in plant death. A host supportability therefore exists, so that mechanisms of intra- and interspecific competition may regulate the abundance of soilborne pathogens on a small scale [62].

That intermediate aggressiveness paradigm has been observed in several root rot pathosystems in different regions of the world. For example, the inoculum densities of Verticillium dahliae, Cylindrocladium clotalariae, Rhizoctonia oryzae, and F. oxysporum f.sp. gossypii and phaseoli, and the incidences of wilt in cotton [63], root rot in peanuts [64], root rot in barley [65], and wilt in cotton [66] and the common bean [17], respectively, are all modulated in a density-dependent manner, suggesting a general pattern. The intermediate aggressiveness of pathogens may result from evolutionary mechanisms that regulate density-dependent populations to maintain their fitness in the long term [59].

Using the intermediate inoculum density as a spatial proxy for disease occurrence allowed us to project the distribution of dry root rot under future scenarios of climate change. Its occurrence will probably be reduced overall in the future, based on the reduction of areas that are climatically suitable for the disease. In Brazil, the common bean dry root rot distribution is expected to shift toward the southern and southwestern regions of the country, which are more representative of small-scale farming than Center-West Brazil. On the other hand, regions that nowadays account for high common bean yields, such as the Brazilian Center-West, will probably lose climatic suitability in the future. Other diseases are also projected to move toward colder regions due to climate change [67].
Crop diseases stemming from soilborne pathogens, such as F. nivale, F. culmorum, Macrophomina phaseolina, Sclerotinia minor, and Pythium ultimum, are all predicted to migrate toward cooler areas on the European continent [4]. Such results suggest that policies on disease management might benefit from focusing on the areas predicted to gain agricultural relevance, as in the Brazilian scenario. Meanwhile, the breeding of resistant cultivars and the development of other environmentally friendly practices for integrated disease management plans may anticipate such changes and reduce food security risks.

We found a strong convergence between the tropical areas our statistical modelling predicted as highly suitable for disease occurrence and the areas an independent study found suitable for common bean cropping. Besides predicting similar areas throughout the world, that study, based on physiological mechanistic models, projected for common beans the same shrinkage pattern we found here [54]. Projections in temperate regions, however, showed less convergence and were not as clear, probably because our dataset did not encompass temperate environmental conditions, underrating disease occurrence in North American [58] and Canadian regions, where dry root rot is relevant [68]. So far, no comparative data exist on disease severity between tropical and temperate regions. The European region, however, is of low importance for the common bean, because other crops, such as wheat, are more important there [69].

At least in the tropics, both the host and the pathogen seem to share the same environmental requirements and constraints. The integration of an environment conducive to pathogen infection with pathogen infectivity, via pathogen-host co-evolution mechanisms, may be one of the main drivers of dynamic disease distribution in the face of climate change [70]. Climate seasonality also affects the density of soilborne pathogens; consequently, disease occurrence may shift according to climate seasonality. Temperature and precipitation regimes are also important drivers of soil pathogen distribution. Understanding the resilience of such soilborne pathogens against abiotic stress can potentially guide disease management plans [4]. The overlap of the predicted distributions of the disease and the host therefore suggests that climate-mediated ecological and evolutionary mechanisms are the likely drivers of the distribution of common bean dry root rot. Indeed, co-evolutionary mechanisms between natural pathogen populations and their hosts [71], coupled with pathogen evolution in agricultural landscapes [72], have been regarded as the main drivers of disease occurrence. Although a discussion of host and pathogen co-evolution is not our purpose here, our results indicate high climatic favorability of dry root rot in the Mesoamerican region, a diversity hot spot where wild P. vulgaris has its origin [73]. This is an area where the relevance of dry root rot and the diversity of Fusarium species are well documented [57]. Climate change will affect the distribution of the common bean [54]. Here, we found that the distribution of common bean dry root rot will probably follow the same pattern. In such cases, identifying the regions of higher disease risk will direct greater attention to crop and disease management.
Maps of disease risk are therefore crucial if we are to prevent economic losses, stemming from climate change, in regions that currently do not exhibit high disease pressure [6,74]. Here, we provide the first worldwide maps of the potential distribution of common bean dry root rot. The statistical-based map has straightforward applications for disease management, especially in developing countries in tropical regions, such as Latin America and the African continent [75]. By anticipating maps of disease risk, our work may help with the prioritization of financial and technological resources toward high-risk areas, thus possibly reducing the costs of disease management in the future [2]. Moreover, our approach may be adjusted to other pathosystems to predict disease occurrence and improve food security.
2018-04-03T04:58:10.012Z
2017-11-06T00:00:00.000
{ "year": 2017, "sha1": "a4c552bdcd272e77da281c6cee9398a754822746", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0187770&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14dbda617d57b56a03e8206ce13df2dc143326d0", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267357852
pes2o/s2orc
v3-fos-license
Increasing Awareness of Sustainable Oral Health in The Elderly With The Peer Group Methods at Klinik Pratama 'Aisyiyah Moyudan Sleman Yogyakarta 2023

The link between disease and age is extremely complicated. Age explains more variation in disease incidence than any other known factor, including oral diseases. Periodontal infections and dental caries are the most frequent human diseases and the leading causes of tooth loss. Both disorders can impair nutrition and have a detrimental impact on self-esteem and quality of life. The aim of this study is to maximize self-efficacy and health literacy through health empowerment. Elderly adults with diabetes mellitus (DM), hypertension, or other systemic disorders require dental health promotion; by creating dental and oral health cadres, the goal is to raise the elderly's awareness of dental and oral health through peer groups. Forming and educating cadres in oral health was one of the tasks done. Two men and two women, who were patients at the Klinik Pratama 'Aisyiyah Moyudan Sleman Yogyakarta, Indonesia, and also members of the Chronic Disease Management Program (PROLANIS), were chosen as cadres. The cadres were given tooth brushing aids and dental and oral health posters, which were then used to train 30 elderly participants on how to brush their teeth properly and correctly. The training participants worked on pre-test and post-test questions before and after the training. Oral health cadres were selected and trained to provide education to fellow elderly participants. The education conducted by the cadres for the 30 other participants generated very good results: the average post-test score (90) was markedly higher than the average pre-test score (60). In the second and third months after the program's execution, the cadres' activities were independently monitored. The dental and oral health cadres could teach people how to brush their teeth effectively. Promotive and preventive efforts carried out by cadres who are fellow elders, employing peer group methods, achieved excellent results; education carried out from within and by the community itself was more targeted and highly effective.

Introduction

The number of elderly people in Indonesia was 20 million, equivalent to 8.03% of the entire population of Indonesia; in 2020 it reached 28.8 million, equivalent to 11.34% of the entire population 1. The Special Region of Yogyakarta ranks first in the proportion of elderly people, at 14.7% in 2020 1. Epidemiological studies show that the prevalence of DM and Impaired Glucose Tolerance (GTG) increases with age 2. Complications of DM in the oral cavity include periodontitis, periapical lesions, dental caries, xerostomia or hyposalivation, taste disturbances, burning mouth syndrome, and oral mucosal lesions. Dental caries and periodontal disease are sensitive alarms for unhealthy diets and predict future disease onset 3.
The main risk factors for periodontitis include poor oral hygiene, smoking, DM, medication, stress, and aging. Gingival inflammation, tooth luxation, and halitosis greatly affect quality of life 4. Periodontitis is the main cause of tooth loss, which leads to masticatory dysfunction. Decreased masticatory efficacy is a predisposing factor for malnutrition. Tooth loss causes malocclusion and TMJ disorders and is directly related to a decrease in a person's quality of life 5. Periodontal infections and dental caries are the most frequent human diseases and the leading causes of tooth loss. Both disorders can impair nutrition and have a detrimental impact on self-esteem and quality of life 6. Periodontal disease can affect an individual's quality of life; the greater the disease severity, the greater the impact 7. Hypertension drugs consumed over the long term cause xerostomia, gingival hyperplasia, salivary gland pain, changes in the sense of taste, and paraesthesia. The number of people with hypertension is expected to continue to increase to 1.5 billion in 2025, with a mortality rate of 9.4% 8.

Elderly people who suffer from DM and/or hypertension, or other systemic diseases, need oral and dental health promotion to achieve optimal self-efficacy and health literacy through health empowerment. Health promotion and health empowerment are needed in the elderly, both in general health and in dental and oral health, to improve quality of life 9,10. Health promotion is a process of assisting individuals and communities in improving their abilities and skills to control the various factors that affect their health, so as to improve their health status. Health promotion is a combination of health education approaches and organizational, economic, and environmental approaches, all of which support the creation of conducive behaviour in the health aspect. Health empowerment, health literacy, and health promotion are placed within a comprehensive approach framework 11. Education related to DM needs to be carried out by involving the community 12. The formation and training of health cadres for the elderly community aims to create resilient elderly people during the Covid-19 pandemic 13. The formation and training of health cadres, especially dental and oral health cadres, is very useful for realizing community dental and oral health empowerment. Community empowerment through dental and oral health cadres can increase the knowledge, awareness, and good behavior of community dental and oral health 14,15.

Methodology

The activities conducted included the formation and training of dental and oral health cadres, followed by dental and oral health education by the cadres for all members of PROLANIS (Fig. 1). Four PROLANIS participants, two men and two women, were selected to act as dental health cadres. Each cadre was expected to provide education and training on how to brush teeth and maintain dentures properly and correctly. The cadres selected were part of the PROLANIS participants and were expected to invite their friends, so that they would know, understand, and become aware of the importance of maintaining good dental and oral health behavior, thereby improving the quality of life of the elderly. The four cadres were trained in how to provide counselling and how to brush teeth and maintain dentures properly and correctly. The cadres who had been selected and trained then conducted education and training on proper tooth brushing and denture maintenance for 30 elderly PROLANIS participants (Fig. 2).
These educational activities covered daily healthy behavior, both in general and in the oral cavity, and were carried out using posters and props. The cadres were given toothbrushes, which were used to train the 30 elderly PROLANIS participants on how to brush their teeth and clean dentures properly and correctly.

Figure 2. Education and Counselling by Cadres

The indicator of success for this community service activity was the increase in knowledge and awareness of the community in general, and of the PROLANIS participants in particular. These indicators were measured using pre-test and post-test questions, which the training participants worked on before and after the training. The measurements showed an increase in the average score from pre-test to post-test, as shown in Table 1 below. The evaluation and follow-up of the cadres' independent activities were carried out in the second and third months after the implementation of the program. The dental and oral health cadres were able to educate the general public on how to brush their teeth properly and correctly (Fig. 3).

Results and Discussion

The result of this community service, which aimed to increase awareness of dental and oral health in the elderly through peer groups and the formation of dental health cadres, was an increase in the knowledge and awareness of the elderly regarding dental and oral health. The education in this program used the peer group method: health cadres were selected from within the community itself, namely PROLANIS, in the expectation that they could invite their friends and the surrounding community to behave healthily in everyday life, in both general and dental health. As noted in the introduction, elderly people who suffer from DM and/or hypertension, or other systemic diseases, need oral and dental health promotion to achieve optimal self-efficacy and health literacy through health empowerment 9,10,11.
Empowerment is a dynamic process that starts with the community learning directly from action. Communities must understand the various types of diseases, including how they are contracted, transmitted, and treated. Understanding leads the community to make the right decisions about the actions that must be taken, and communities are then expected to communicate health issues to other communities. Education related to DM therefore needs to involve the community 12, and the formation and training of health cadres for the elderly also aims to create resilient elderly people, as during the Covid-19 pandemic 13. Community empowerment through dental and oral health cadres can increase community knowledge, awareness, and behavior in dental and oral health 14,15,5.

Conclusion

Empowerment is a process of helping to strengthen the ability of the community, so that it can bridge the communication gap between providers and target groups. Health empowerment carried out through the peer group method, by forming dental and oral health cadres, is able to increase knowledge and awareness of dental and oral health in the elderly.

Figure 1. Forming and Training Cadres
Figure 3. Dental Cadre Conducts Tooth Brushing Education in the 2nd and 3rd Months after the Implementation of the Program
Table 1. Average Pre-test and Post-test Scores (pre-test average: 60; post-test average: 90)
2024-02-01T16:21:58.282Z
2023-12-30T00:00:00.000
{ "year": 2023, "sha1": "3c16609794588f3aea30e7ad6f3272b170f161c6", "oa_license": "CCBY", "oa_url": "https://prosiding.umy.ac.id/iccs/index.php/iccs/article/download/192/216", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c0574ea6fededb0b67343feb310fd21f81a2ab9f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
2888595
pes2o/s2orc
v3-fos-license
Hypoacetylation, hypomethylation, and dephosphorylation of H2B histones and excessive histone deacetylase activity in DU-145 prostate cancer cells

Background: Hypoacetylation on histone H3 of human prostate cancer cells has been described. Little is known about the modifications of other histones from prostate cancer cells. Methods: Histones were isolated from the prostate cancer cell line DU-145 and the non-malignant prostatic cell line RC170N/h. Post-translational modifications of histone H2B were determined by liquid chromatography-mass spectrometry (LC-MS)/MS. Results: The histone H2B of the prostate cancer cell line DU-145 was found to be hypoacetylated, hypomethylated, and dephosphorylated as compared with the non-malignant prostatic cell line RC170N/h. H2B regained acetylation on multiple lysine residues, phosphorylation on Thr19, and methylation on Lys23 and Lys43 in the DU-145 cells after sodium butyrate treatment. Conclusions: The histone H2B of DU-145 prostate cancer cells is hypoacetylated, hypomethylated, and dephosphorylated. A histone deacetylase inhibitor reversed this phenotype. Epigenetic agents may therefore be useful for prostate cancer therapy and are worth further investigation.

Electronic supplementary material: The online version of this article (doi:10.1186/s13045-016-0233-x) contains supplementary material, which is available to authorized users.

Keywords: Hypoacetylation, Prostate cancer

To the editor

The histone H3-H4 hetero-tetramer is flanked on each side by an H2A-H2B hetero-dimer, with H3-H4 and H2A-H2B each interacting with different parts of the nucleosomal DNA. Aberrant activities of DNA methyltransferases and acetyltransferases lead to epigenetic remodeling of chromatin and have been implicated in carcinogenesis [1-3]. At this time, not much is known about the post-translational modifications of histones other than H3 in prostate cancer cells. Recent studies in yeast have revealed the importance of H2B in transcriptional regulation [4]. In this study, post-translational modifications of histone H2B from the human prostate cancer cell line DU-145 and the non-malignant prostatic cell line RC170N/h were analyzed by liquid chromatography-mass spectrometry (LC-MS/MS) (see Additional file 1) [5-7].

The status of H2B acetylation in DU-145 cells is illustrated in Fig. 1a, with acetylation determined on a single lysine (K) residue at amino acid sequence position 20 (K20). Lysine at position 23 (K23) was found to be di-methylated, as shown in Fig. 1a. The acetylation at K20 and di-methylation at K23 were observed on the tryptic peptides KAVTKAQK (residues 16-23) and AVTKAQKKDGKK (residues 17-28). Analyses of H2B from RC170N/h cells revealed acetylation at K5, K16, and K20 (Fig. 1b). The methylations found were tri-methylation at K15 and K120, di-methylation at K23, and mono-methylation at K116 (Fig. 1b). The same peptides KAVTKAQK (residues 16-23) and AVTKAQKKDGKK (residues 17-28) that were analyzed for the DU-145 cells, together with other peptides including PDPSKSAPAPKKGSKKAVTKAQK (residues 1-23) and HAVSEGTKAVTK (residues 109-120), were examined for these modifications. The non-malignant RC170N/h cells clearly had more acetylated and methylated lysine residues on H2B than the DU-145 cancer cells. To evaluate the histone deacetylase (HDAC) activity, DU-145 cells were treated with sodium butyrate (NaB), an inhibitor of HDACs. After butyrate treatment of these cells, acetylation on lysine residues K5, K11, K12, K16, K20, and K27 of H2B was identified (Fig. 1c).
Specifically, the acetylation changes were detected in the following peptides: SAPAPKKGSK (residues 6-15), KAVTKAQK (16-23), PEPAKSAPAPK (1-11), and AVTKAQKKDGKK (17-28). The fact that acetylation of H2B in the DU-145 cells was detected on multiple additional lysine residues after HDAC inhibition by sodium butyrate suggests that there was excessive HDAC activity in the DU-145 cells. Acetylation of K5, K16, and K20 was also observed in the non-malignant RC170N/h cells (Fig. 1b, c). These data showed that the DU-145 cancer cells had a single acetylation site at K20, compared to the non-malignant RC170N/h cells, which had three sites at K5, K16, and K20. The NaB-treated DU-145 cells had six acetylation sites at K5, K11, K12, K16, K20, and K27. The differences in the acetylation sites were detected at the N termini, without involving the alpha helices, which start at amino acid residue 37 of H2B (Fig. 1, underscored sequences). Small cell lung carcinoma cells have six sites at K5, K11, K12, K15, K16, and K20 [8]; Jurkat cells have three sites at K12, K15, and K20 [9]; whereas the untreated DU-145 cells in this study have one acetylated site at K20. These results indicate that there are clear differences in acetylation sites among human cell lines. These differences constitute the epigenetic signatures of individual neoplastic clones. We next examined changes in the H2B methylation status in the DU-145 cells upon HDAC inhibition. After sodium butyrate treatment, additional methylation on K43 was found (Fig. 2c), as compared to only K23 methylation in the untreated DU-145 cells (Fig. 1a). The increase in phosphorylation of H2B in DU-145 cells after NaB treatment is particularly intriguing. This provides the first evidence of possible crosstalk between HDACs and serine/threonine kinases. As illustrated in Fig. 2, we postulate that the abnormal HDAC activity could turn the reversible histone modifications into irreversible ones, leading to perpetually aberrant epigenomes. An enhanced HDAC activity or a reduced K-acetyltransferase activity would tip the balance towards deacetylation. These changes could perpetuate the epigenetic alterations and result in carcinogenesis. Targeting epigenetic modifications by inhibiting HDACs and DNA methyltransferases has become a novel cancer therapy [2,3,10-12]. This study also suggests that HDAC inhibitors may be a potential therapeutic option for prostate cancer. In conclusion, the histone H2B of DU-145 prostate cancer cells is hypoacetylated, hypomethylated, and dephosphorylated. A histone deacetylase inhibitor reversed this phenotype. Epigenetic agents may therefore be useful for prostate cancer therapy and are worth further investigation.

Fig. 2. Hypothetical pathways of carcinogenesis from prostatic stem cells. Histone hypoacetylation leads to disruption of the normal epigenome in prostatic stem cells. The aberrant epigenome with hypoacetylation may be established when the reversible alterations in acetylation become irreversible. This is due to the abnormal histone deacetylase activities. As a result, the caretaker phenotype and critical genes are inactivated. These eventually lead to carcinogenesis.
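The cross-cell-line comparison of acetylation sites above is essentially a set comparison. The sketch below tabulates only the positions quoted in the text (the small cell lung carcinoma and Jurkat site lists are the cited literature values [8,9]) and derives the sites gained after NaB treatment; it is an illustration, not part of the original analysis.

```python
# Compare the H2B lysine acetylation-site sets quoted in the text.
sites = {
    "DU-145":          {20},
    "DU-145 + NaB":    {5, 11, 12, 16, 20, 27},
    "RC170N/h":        {5, 16, 20},
    "small cell lung": {5, 11, 12, 15, 16, 20},   # literature value [8]
    "Jurkat":          {12, 15, 20},              # literature value [9]
}

# Sites regained in DU-145 after HDAC inhibition by sodium butyrate
gained = sites["DU-145 + NaB"] - sites["DU-145"]
print("acetylation gained after NaB:", sorted(gained))

# Pairwise overlap illustrates the 'epigenetic signature' of each line
for name, s in sites.items():
    shared = s & sites["RC170N/h"]
    print(f"{name:>15}: sites {sorted(s)}, shared with RC170N/h {sorted(shared)}")
```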
2016-05-12T22:15:10.714Z
2016-01-12T00:00:00.000
{ "year": 2016, "sha1": "57f45ea56f8310cfe072000f3fd2101f9688382e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13045-016-0233-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "57f45ea56f8310cfe072000f3fd2101f9688382e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231961945
pes2o/s2orc
v3-fos-license
Bone marrow-derived mesenchymal stem cells modulate autophagy in RAW264.7 macrophages via the phosphoinositide 3-kinase/protein kinase B/heme oxygenase-1 signaling pathway under oxygen-glucose deprivation/restoration conditions

Abstract

Background Autophagy of alveolar macrophages is a crucial process in ischemia/reperfusion injury-induced acute lung injury (ALI). Bone marrow-derived mesenchymal stem cells (BM-MSCs) are multipotent cells with the potential for repairing injured sites and regulating autophagy. This study aimed to investigate the influence of BM-MSCs on autophagy of macrophages in the oxygen-glucose deprivation/restoration (OGD/R) microenvironment and to explore the potential mechanism.

Methods We established a co-culture system of macrophages (RAW264.7) with BM-MSCs under OGD/R conditions in vitro. RAW264.7 cells were transfected with a recombinant adenovirus (Ad-mCherry-GFP-LC3B), and the autophagic status of RAW264.7 cells was observed under a fluorescence microscope. The autophagy-related proteins light chain 3 (LC3)-I, LC3-II, and p62 in RAW264.7 cells were detected by Western blotting. We used microarray expression analysis to identify the differentially expressed genes between OGD/R-treated macrophages and macrophages co-cultured with BM-MSCs. We investigated the gene heme oxygenase-1 (HO-1), which is downstream of the phosphoinositide 3-kinase/protein kinase B (PI3K/Akt) signaling pathway.

Results The LC3-II/LC3-I ratio of OGD/R-treated RAW264.7 cells was increased (1.27 ± 0.20 vs. 0.44 ± 0.08, t = 6.67, P < 0.05), while the expression of p62 was decreased (0.77 ± 0.04 vs. 0.95 ± 0.10, t = 2.90, P < 0.05); PI3K expression (0.40 ± 0.06 vs. 0.63 ± 0.10, t = 3.42, P < 0.05) and the p-Akt/Akt ratio (0.39 ± 0.02 vs. 0.58 ± 0.03, t = 9.13, P < 0.05) were also decreased. BM-MSCs reduced the LC3-II/LC3-I ratio of OGD/R-treated RAW264.7 cells (0.68 ± 0.14 vs. 1.27 ± 0.20, t = 4.12, P < 0.05), up-regulated p62 expression (1.10 ± 0.20 vs. 0.77 ± 0.04, t = 2.80, P < 0.05), and up-regulated PI3K expression (0.54 ± 0.05 vs. 0.40 ± 0.06, t = 3.11, P < 0.05) and the p-Akt/Akt ratio (0.52 ± 0.05 vs. 0.39 ± 0.02, t = 9.13, P < 0.05). A whole-genome microarray assay identified the differentially expressed gene HO-1, which is downstream of the PI3K/Akt signaling pathway, and the alterations in HO-1 mRNA and protein expression were consistent with the data on the PI3K/Akt pathway.

Conclusions Our results suggest the existence of a PI3K/Akt/HO-1 signaling pathway in RAW264.7 cells under OGD/R circumstances in vitro, revealing the mechanism underlying BM-MSC-mediated regulation of autophagy and enriching the understanding of potential therapeutic targets for the treatment of ALI.

Introduction

The pathological processes of acute lung injury (ALI) are normally accompanied by broad infiltration, increased autophagy, decreased phagocytosis, and excessive production of inflammatory mediators by pulmonary macrophages, which largely contribute to severe pulmonary edema. [1-3] One of the biological characteristics of macrophages is their role in autophagy. Generally, a regulated amount of autophagy is considered to be a survival mechanism that protects cells from hypoxia, starvation, and infection, [1,2] but overactivated autophagy causes apoptosis or necrosis. [4,5] To date, an increasing number of studies have shown that the level of autophagy mediated by alveolar macrophages (AMs) is crucial in ALI induced by ischemia/reperfusion injury (IRI).
[4,6,7] Bone marrow-derived mesenchymal stem cells (BM-MSCs) are multipotent cells that can home to injured sites, exert anti-apoptotic effects, suppress inflammatory factors, and participate in the regulation of autophagy. [9-13] For example, studies have demonstrated that BM-MSCs appear to be effective for therapy [14,15] in the restoration [9-11] and treatment [16-18] of lung injuries such as pulmonary hypertension and chronic obstructive pulmonary disease. However, the regulation of autophagy in macrophages by BM-MSCs, and the related signaling pathways under IRI-induced ALI, remain unclear.

The phosphoinositide 3-kinase/protein kinase B (PI3K/Akt) signaling pathway is involved in the regulation of a wide range of cellular processes, [19] including nutrition, metabolism, proliferation, and apoptosis. [20,21] Furthermore, the PI3K/Akt pathway also regulates autophagy through downstream molecules. [20,21] It has been recognized that heme oxygenase-1 (HO-1) is a molecule downstream of the PI3K/Akt pathway, and the up-regulation or overexpression of HO-1 might play a protective role under the stimulation of hypoxia, oxidative stress, and IRI. [22-24] Previous studies have shown that reactive oxygen species and transforming growth factor-beta 1 up-regulate the expression of HO-1 via the PI3K/Akt pathway, [25] and that activated PI3K/Akt up-regulates the expression of HO-1 via Nrf2. [25-27] Interestingly, HO-1 is also involved in the modulation of cellular autophagy. In liver IRI, HO-1 was found to induce autophagy to protect hepatic cells from injury. [28] Conversely, in acute renal injury, knocking out HO-1 increased oxidative stress and autophagy, causing cellular death, whereas knocking down HO-1 decreased autophagy and reduced oxidative stress. [29] However, it remains an open question whether HO-1 is expressed in macrophages under IRI circumstances. Thus, in the present study, we established a co-culture system of macrophages (RAW264.7) with BM-MSCs under oxygen-glucose deprivation/restoration (OGD/R) circumstances in vitro to determine whether BM-MSC-mediated regulation of autophagy in RAW264.7 cells occurs via the PI3K/Akt signaling pathway, and to screen for the downstream gene HO-1 with a whole-genome microarray assay.

Cell culture

Human embryonic lung fibroblasts (HELFs) (catalog No. GNHu5) and RAW264.7 mouse macrophages (catalog No. TCM13/SCSP5036) were purchased from the cell bank of the Chinese Academy of Sciences. The culture conditions for HELFs and RAW264.7 cells were the same: they were cultured in high-glucose Dulbecco modified Eagle medium (DMEM) supplemented with 10% FBS and 1% P/S and were incubated at 37°C in 5% CO2 for 6 to 7 days until 70% to 80% confluence was reached. HELFs and RAW264.7 cells (passages 3-6) were used for experiments. All procedures were performed according to the manufacturer's instructions.

In vitro co-culture of RAW264.7 cells with BM-MSCs or HELFs under OGD/R conditions

A six-well Transwell co-culture system (0.4-µm pore polycarbonate; Corning) was used to establish the co-culture of RAW264.7 cells with BM-MSCs or HELFs. RAW264.7 cells were grown (50%-70% cell density) in six-well plates. BM-MSCs at passages 3 to 6 (1 × 10^5 cells/well) were seeded in the upper inserts of the six-well Transwell system and cultured overnight. The inserts containing BM-MSCs were then transferred onto the six-well plate containing RAW264.7 cells, producing a co-culture system. The co-culture system of RAW264.7 cells with HELFs was established in the same way as that with BM-MSCs.
The OGD/R model was established as previously reported. [7,30,31] Briefly, the standard DMEM culture medium was replaced with an equal volume of glucose-free DMEM and incubated at 37°C in 1% O2 for 8 h (to mimic ischemia). Then, the glucose-free DMEM was replaced with standard medium and incubated at 37°C in 5% CO2 for 12 h (to mimic reperfusion).

Adenovirus infection

Detailed procedures were performed according to the manufacturer's instructions. Briefly, RAW264.7 cells were infected with Ad-mCherry-GFP-LC3B under optimized infection conditions, based on the multiplicity of infection (MOI) formula: plaque-forming units = cell number × MOI. RAW264.7 cells were cultured overnight in high-glucose DMEM at 37°C in 5% CO2 to reach 50% confluence. Based on the MOI values being tested, the corresponding amount of Ad-mCherry-GFP-LC3B virus was added to the wells containing RAW264.7 cells, which were then incubated at 37°C in 5% CO2, protected from light. The best infection conditions were determined, including the best MOI, the best observation time point, and a 20% to 70% adenovirus infection efficiency, which enabled the autophagic activity of RAW264.7 cells to be investigated.

Western blotting

Proteins were extracted from cultured RAW264.7 cells using radioimmunoprecipitation assay buffer (Beyotime Biotechnology). The Western blotting analysis was performed according to standard procedures, as described in earlier studies. Briefly, equal amounts of protein from cells were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto polyvinylidene fluoride membranes (Bio-Rad, Hercules, CA, USA). The membranes were blocked with blocking solution (Beyotime Biotechnology) for 1 h and subsequently incubated with primary and secondary antibodies. Immunoreactive bands were visualized with a chemiluminescence detection system (Thermo Scientific Pierce, Rockford, IL, USA). All antibodies used in these studies were purchased from Cell Signaling Technology (Danvers, MA, USA).

Inhibition of the PI3K/Akt pathway with LY294002 in RAW264.7 cells

LY294002 (20 µmol/L; Selleck Chemicals, Houston, TX, USA), a PI3K inhibitor, was applied to selectively inhibit the PI3K/Akt pathway in RAW264.7 cells. The procedures were performed according to the manufacturer's instructions.

Extraction and clean-up of total RNA from RAW264.7 cells

RAW264.7 cells were lysed, and mRNA was extracted using TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA) according to the manufacturer's instructions. The concentration and integrity of the RNA in each group were measured from the OD260/280 and OD260/230 ratios with an ultramicro-spectrophotometer (NanoDrop ND-1000; Agilent, Santa Clara, CA, USA). Total RNA clean-up was performed using a QIAGEN RNeasy Mini Kit (Qiagen 74104, Germany) according to the manufacturer's protocol. Purified RNA should have an OD260/280 ratio between 1.8 and 2.1 and an OD260/230 ratio >1.8. Clear bands representing the ribosomal RNAs 5S, 18S, and 28S were visualized without degradation on formaldehyde denaturing gel electrophoresis.

Cy3-UTP-labelled RNA and labeled cRNA quality control

Purified RNA was labeled and amplified with Cy3-uridine triphosphate (UTP) using a Quick Amp Labeling Kit, One-Color (p/n 5190-0442; Agilent), according to the manufacturer's instructions. Cy3-UTP-labeled cRNA was purified using a QIAGEN RNeasy Mini Kit (74104; Qiagen) according to the manufacturer's protocol.
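Two of the quantitative checks in these Methods are simple enough to express directly. The sketch below encodes the quoted MOI formula and the RNA purity thresholds; the cell number matches the seeding density above, while the MOI value of 100 is purely illustrative, not taken from the paper.

```python
# Minimal sketch of two arithmetic checks from the Methods; the numeric
# thresholds are taken from the text, the example inputs are assumptions.

def pfu_needed(cell_number: float, moi: float) -> float:
    """Plaque-forming units required for a target MOI:
    PFU = cell number x MOI (formula quoted in the Methods)."""
    return cell_number * moi

def rna_passes_qc(od_260_280: float, od_260_230: float) -> bool:
    """Purity criteria quoted above: 1.8 <= OD260/280 <= 2.1
    and OD260/230 > 1.8."""
    return 1.8 <= od_260_280 <= 2.1 and od_260_230 > 1.8

# Example: 1e5 RAW264.7 cells per well at an assumed MOI of 100
print(pfu_needed(1e5, 100))      # 1e7 PFU per well
print(rna_passes_qc(1.95, 2.0))  # True
```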
PCR analysis of mRNA expression of differential genes

After extraction and measurement of RNA from RAW264.7 cells, quantitative real-time PCR (qRT-PCR) was performed. The primer sequences were designed by BioTNT Corporation (Shanghai, China), and β-actin served as an internal reference. A ReverTra Ace qPCR RT Kit (Hitachi, Toyobo, Japan) was used for RNA reverse transcription, which was performed according to the manufacturer's instructions.

Microarray expression analysis

One microgram of total RNA from each sample was used to generate amplified and biotinylated sense-strand complementary DNA (cDNA) from the entire expressed genome, according to the manufacturer's instructions (Quick Amp Labeling Kit, p/n 5190-0442; Agilent; QIAGEN RNeasy Mini Kit, 74104; Qiagen). Agilent Mouse 4 × 44K Gene Expression Microarrays (v2), which include nearly 40,000 mouse genome probes and transcripts based on the RefSeq, GenBank, and RIKEN databases, were hybridized with the cDNA samples for 17 h in a 65°C incubator. They were then washed, stained, and finally scanned using the Agilent Microarray Scanner (p/n G2565BA; Agilent) with Feature Extraction software (Agilent). Scanned fluorescence values were transferred into GeneSpring GX (ver. 12.1; Agilent). The extracted data were used to compare the signal ratio (fold change) between groups; the fold change reflects genome-wide expression changes. Differentially expressed genes (DEGs) were defined by fold change ≥ 2.0 and P ≤ 0.05 as determined by Student's t test. Based on the R package, [32] gene ontology (GO) analysis was used for locating and identifying DEGs within cells, [33] and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis [34] was used to identify the pathway distributions of the differential genes.

Statistical analysis

The values of all measurements are presented as the mean ± standard deviation. Comparisons between groups were performed with Student's t test. The analysis was performed with GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA, USA). Each experiment was performed in triplicate. Values of P < 0.05 were considered statistically significant.

Whole-genome expression profile and summary of DEGs

Based on the patterns of the genes identified by the microarray, the fluorescence intensity at each genetic locus was recorded as original data and processed with Agilent GeneSpring GX software to obtain the level of transcription of each gene in RAW264.7 cells. Cluster detection was performed on the gene transcription levels. A clustering diagram showed the overall difference in gene transcription between the OGD/R and control groups, as well as between the OGD/R + BM-MSC and OGD/R groups [Figure 5A]. Volcano plots were used to visualize genome-wide gene expression. Through the detection of the whole-genome expression profile, more than 26,000 genes and transcripts were screened. Afterward, the filtration and comparison of DEGs were performed by analyzing the fold change and P value, as illustrated with volcano plots. The criteria for DEGs were based on the level of change in gene transcription (fold change ≥ 2.0 and P ≤ 0.05) between the compared groups [Figure 5B]. The results showed that 1785 genes were up-regulated and 1856 genes were down-regulated in the OGD/R group compared to the control group, whereas 699 genes were up-regulated and 735 genes were down-regulated in the OGD/R + BM-MSC group compared to the OGD/R group.
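The DEG filter described above (fold change ≥ 2.0 and P ≤ 0.05 by Student's t test) can be sketched as follows. The expression values are simulated stand-ins for the Agilent signal data, and a fold change ≤ 0.5 is used for down-regulation; this is an illustration of the criterion, not the authors' pipeline.

```python
# Sketch of the DEG filter: fold change >= 2.0 and P <= 0.05 per gene,
# using simulated triplicate expression matrices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes = 1000
ogd = rng.lognormal(mean=5.0, sigma=0.4, size=(n_genes, 3))     # OGD/R group
cocult = rng.lognormal(mean=5.0, sigma=0.4, size=(n_genes, 3))  # OGD/R + BM-MSC

fold_change = cocult.mean(axis=1) / ogd.mean(axis=1)
_, p_values = stats.ttest_ind(cocult, ogd, axis=1)   # Student's t per gene

up = (fold_change >= 2.0) & (p_values <= 0.05)
down = (fold_change <= 0.5) & (p_values <= 0.05)
print(f"up-regulated: {up.sum()}, down-regulated: {down.sum()}")
```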
Statistical analysis of the expression of DEGs related to autophagy

The selected DEGs between the OGD/R + BM-MSC and OGD/R groups were subjected to GO biological process (BP) analysis to confirm the up- and down-regulation of autophagy-related genes. There were a total of 18 genes with significantly different expression between the OGD/R + BM-MSC group and the OGD/R group (fold change ≥ 2.0 and P ≤ 0.05), including 16 up-regulated genes (such as HO-1 and Mapk3) and two down-regulated genes (Atg9b and Wdr45b). The results are shown in Supplementary Table 1, http://links.lww.com/CM9/A349, and are visualized in a cluster diagram [Figure 5C] and a volcano plot [Figure 5D].

Identification of the DEG HO-1

In our previous study, and in a growing body of research, the target gene HO-1 has been implicated in IRI. We selected the HO-1 gene for further confirmation of its differential expression through pathway analysis with the KEGG databank, to identify a link between HO-1 expression and PI3K/Akt under OGD/R conditions. Fluorescent qPCR and Western blotting were used to determine the mRNA and protein expression of the HO-1 gene. The results showed significant up-regulation of both HO-1 mRNA (0.89 ± 0.17 vs. 0.40 ± 0.07, t = 4.62, P < 0.05) and protein expression in RAW264.7 cells when they were co-cultured with BM-MSCs under OGD/R conditions (0.70 ± 0.09 vs. 0.48 ± 0.02, t = 4.13, P < 0.05) [Figure 6A and 6B], which was in accordance with the data from the whole-genome expression profile.

Discussion

There are complex mechanisms in IRI-induced ALI. Clinically, a major problem in ALI is excessive inflammation. In particular, broad infiltration, increased autophagy, and the increased release of inflammatory mediators by pulmonary macrophages are leading causes of ALI, and these factors largely contribute to severe pulmonary edema. [1-3] Therefore, the reduction of excessive inflammation, and more specifically the modulation of the autophagic level in macrophages, could be the central point in the alleviation of IRI-induced ALI. In the present study, based on a co-culture system of RAW264.7 cells with BM-MSCs under OGD/R conditions in vitro, we verified that BM-MSCs modulate autophagy in RAW264.7 cells via the PI3K/Akt signaling pathway and tentatively identified the downstream expression of the HO-1 gene in the PI3K/Akt signaling pathway with a whole-genome microarray assay. Following pre-treatment with LY294002, the effect of BM-MSCs on autophagy in RAW264.7 cells was blocked. These results confirmed the existence of the PI3K/Akt/HO-1 signaling pathway, and BM-MSCs down-regulated the autophagic level in RAW264.7 cells in this way. This is interesting because it is the first study to uncover the mechanism underlying the BM-MSC-mediated modulation of autophagy in RAW264.7 cells; more broadly, this study enriches our understanding of the autophagy of macrophages in ALI, as well as of targets in BM-MSCs for ALI therapy. Autophagy of macrophages is involved in multiple lung diseases. [3,35-37] A regulated, moderate amount of autophagy is considered to be a survival mechanism that protects cells from hypoxia, starvation, and infection, [1,2] but overactivated autophagy causes apoptosis or necrosis. [4,5] Thus, precise modulation of autophagy is a key link in the maintenance of cellular homeostasis. [38-40]
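The reported t statistics can be reproduced from the quoted mean ± SD values with a pooled two-sample Student's t test, assuming n = 3 per group (the experiments were stated to be performed in triplicate). This is a consistency check, not the authors' code.

```python
# Two-sample Student's t statistic from summary data (mean, SD),
# assuming n = 3 per group as stated in the Methods.
import math

def t_from_summary(m1, sd1, m2, sd2, n=3):
    pooled_var = ((n - 1) * sd1**2 + (n - 1) * sd2**2) / (2 * n - 2)
    se = math.sqrt(pooled_var * (2 / n))
    return (m1 - m2) / se

# HO-1 mRNA: 0.89 +/- 0.17 vs. 0.40 +/- 0.07   -> reported t = 4.62
print(round(t_from_summary(0.89, 0.17, 0.40, 0.07), 2))
# HO-1 protein: 0.70 +/- 0.09 vs. 0.48 +/- 0.02 -> reported t = 4.13
print(round(t_from_summary(0.70, 0.09, 0.48, 0.02), 2))
```

Both calls reproduce the published values, which supports the triplicate assumption.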
An increasing number of studies have shown that the modulation of the autophagic level of AMs is a crucial process in IRI-induced ALI [4,6,7]; therefore, finding an effective therapeutic strategy related to this process has recently become a focal point. In the present study, under the stimulation of OGD/R-induced hypoxia in RAW264.7 cells, the LC3-II/LC3-I ratio was found to increase significantly and p62 expression to decrease, accompanied by the up-regulation of autophagy; however, co-culture with BM-MSCs significantly reduced the LC3-II/LC3-I ratio and increased p62 expression in RAW264.7 cells. The alterations in autophagy induced by IRI were similar to those observed in previous studies. [40-42] In our recent work, pre-treatment with BM-MSCs down-regulated autophagy in AMs and alleviated the inflammatory response in mice with IRI-induced ALI; therefore, the present research provides further evidence for an effective therapeutic strategy based on BM-MSCs in IRI-induced ALI. [14,15]

The PI3K/Akt signaling pathway is recognized as a typical pathway involved in the regulation of a wide range of cellular processes, [22,43,44] including nutrition, metabolism, proliferation, and apoptosis. [20,21] Moreover, through the screening of downstream molecules in the PI3K/Akt pathway with a whole-genome microarray assay, the DEG HO-1, which is a downstream molecule of the PI3K/Akt signaling pathway, was identified. Therefore, the existence of a PI3K/Akt/HO-1 signaling pathway was confirmed in RAW264.7 cells based on KEGG database analysis. The final results confirmed that the mRNA and protein expression of HO-1 were also up-regulated, and the alteration of HO-1 was consistent with its involvement in the PI3K/Akt pathway. This finding was interesting because, as a downstream molecule of the PI3K/Akt pathway, HO-1 is involved in the modulation of cellular autophagy, and the up-regulation or overexpression of HO-1 might play a protective role following stimulation by hypoxia, oxidative stress, and IRI. [22-24] However, in the present study, some other DEGs related to autophagy, such as Mapk3 (extracellular regulated protein kinases 1/2, ERK1/2) and Bnip3/Bnip3l (Nip3-like protein X, Nix), were also identified, which might implicate the mitogen-activated protein kinase/ERK and hypoxia-inducible factor-1α/Bnip3/Beclin-1 pathways, respectively; future investigations of the different genes related to different signaling pathways are therefore needed.

There were some limitations in this study, and further investigations are needed, for instance to understand the communication between BM-MSCs and RAW264.7 cells in the co-culture system. Overall, in this study, we verified that BM-MSCs modulate the autophagy of RAW264.7 cells via the PI3K/Akt pathway and the downstream signaling molecule HO-1 under OGD/R circumstances in vitro. These findings provide evidence for BM-MSC therapy as a clinical option for IRI-induced lung injury.

Funding

The work was supported by a grant from the National Natural Science Foundation of China (No. 81490533).
2021-02-20T06:16:19.503Z
2021-02-17T00:00:00.000
{ "year": 2021, "sha1": "3a251a5393376dd0c06d292e56dba9127759f14a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/cm9.0000000000001133", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "14175060cd05d4b2c416d257bc27f782c7840932", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15525937
pes2o/s2orc
v3-fos-license
Chandra Observation of the Starburst Galaxy NGC 2146

We present six monitoring observations of the starburst galaxy NGC 2146 using the Chandra X-ray Observatory. We have detected 67 point sources in the 8.'7 x 8.'7 field of view of the ACIS-S detector. Six of these sources were Ultra-Luminous X-ray Sources, the brightest of which has a luminosity of 5 x 10^{39} ergs s^{-1}. One of the sources, with a luminosity of ~1 x 10^{39} ergs s^{-1}, is coincident with the dynamical center location, as derived from the ^{12}CO rotation curve. We suggest that this source may be a low-luminosity active galactic nucleus. We have produced a table reporting the positions and main characteristics of the Chandra-detected sources. The comparison between the positions of the X-ray sources and those of compact sources detected in the NIR or radio does not indicate any definite counterpart. Taking advantage of the relatively large number of sources detected, we have derived a logN-logS relation and a luminosity function. The former shows a break at ~10^{-15} ergs cm^{-2} s^{-1}, which we interpret as due to a detection limit. The latter has a slope above the break of 0.71, similar to those found in other starburst galaxies. In addition, diffuse X-ray emission has been detected in both the soft (0.5-2.0 keV) and hard (2.0-10.0 keV) energy bands. The spectrum of the diffuse component has been fitted with two (hard and soft) components. The hard power-law component, with a luminosity of ~4 x 10^{39} ergs s^{-1}, likely originates from unresolved point sources, while the soft component is better described by a thermal plasma model with a temperature of 0.5 keV and high abundances of Mg and Si.

Introduction

ASCA observations have shown that X-ray spectra of starburst galaxies generally possess a hard component above 2 keV (Dahlem et al. 1998; Ptak et al. 1999). However, the origin of such a component is still unclear. In the prototype starburst galaxy M82, we found that most of the hard component is due to the most luminous X-ray source, M82 X-1, and that this object may belong to a new class of black holes, with 10^3-10^6 M_sun, named Intermediate-Mass Black Holes (IMBHs; Matsumoto et al. 2001). In another well-studied starburst galaxy, NGC 253, it is hot plasma with a temperature of 6 keV that accounts for the hard component (Pietsch et al. 2001).

The optical extent of NGC 2146 (down to a surface brightness of 25 mag arcsec^-2, after correction for Galactic extinction) is about 6.'0 x 3.'4 (de Vaucouleurs et al. 1991). The dynamical center of the galaxy lies toward a dense dust lane (Young et al. 1988). NGC 2146 has an outflow of hot gas along the minor axis driven by supernova explosions and stellar winds in the starburst region (Armus et al. 1995; Della Ceca et al. 1999). The X-ray luminosities of NGC 2146 derived from the ASCA observation were ~1.3 x 10^40, ~1.8 x 10^40, and ~3.1 x 10^40 ergs s^-1 in the soft (0.5-2.0 keV), hard (2.0-10.0 keV), and total (0.5-10.0 keV) energy bands, respectively (Della Ceca et al. 1999). The unprecedented spatial resolution (0.''5) of Chandra makes it the most suitable instrument to determine whether the hard X-ray component of NGC 2146 is due to point sources such as IMBHs or to a hot diffuse plasma. In this paper, we present a detailed catalog of point sources detected in the monitoring observations of NGC 2146 with Chandra and discuss their physical nature. Furthermore, we investigate the diffuse emission component and try to shed light on the origin of the hard X-ray component.
Errors and uncertainties in this paper refer to 90% confidence limits (Δχ^2 = 2.706) unless otherwise stated.

Observations

NGC 2146 was observed six times from August to December 2002 with the Advanced CCD Imaging Spectrometer (ACIS; Garmire et al. 2003) on board the Chandra X-ray Observatory (CXO; Weisskopf et al. 2002). The exposure time of each observation was about 10 ks, yielding a total observing time of 60 ks. Observation dates, exposure times, and background levels are listed in Table 1. The nominal position of ACIS-S3 (a back-illuminated CCD on the spectroscopic array (ACIS-S) with good charge-transfer efficiency and good quantum efficiency below 0.5 keV) was selected to coincide with the body of NGC 2146.

Imaging

For the following discussion it is worth emphasizing that, as explained in detail in the next section, there are no appreciable positional offsets among the observations performed at the six different epochs. Hence, we were justified in combining the images. Using the dmimgcalc and csmooth programs in the CIAO package version 2.3 (see http://asc.harvard.edu/ciao/), we produced two adaptively smoothed maps of NGC 2146 in two energy bands: 0.3-2.0 keV (Figure 1(a)) and 2.0-10.0 keV (Figure 1(b)). When compared to the optical image of the galaxy, the X-ray emission is found to be concentrated toward the nuclear region of NGC 2146. As shown in Figures 1(a) and 1(b), the soft diffuse emission blows out of the galactic plane, while the hard diffuse radiation lies along the galactic plane.

Source Detection

In order to establish the extent of the positional offsets between our multi-epoch Chandra maps, we ran the wavdetect program in the CIAO package on the 0.3-10.0 keV image of each epoch. Wavelet scales were from 1 to 16 pixels in multiples of 2. We found that among the six observations the positions of the two brightest sources are consistent within 0.''2, which is comparable to the Chandra positional accuracy of 0.''1. Images at different epochs can be combined to increase the signal-to-noise ratio and allow the detection of faint sources. We therefore ran the wavdetect program on the combined images in three energy bands (total, 0.3-10.0 keV; soft, 0.3-2.0 keV; hard, 2.0-10.0 keV), obtaining 62, 55, and 42 detected sources, respectively. Since five sources are detected only in the soft (three sources) or hard (two sources) energy bands, we have a total of 67 X-ray sources in the 8.'7 x 8.'7 ACIS-S field of view (FOV). Forty-one point sources are within the D_25 ellipse at the 5σ level. We considered these 41 sources to belong to NGC 2146 and used them to produce a log(N)-log(S) distribution and a luminosity function for NGC 2146. Among these sources, about five are expected to be background sources falling within the D_25 ellipse by chance (Giacconi et al. 2001). In our analysis we have adopted the following assumption: the extent of the source regions has been taken to be two times the standard deviation of the point spread function (PSF) at the detected positions. In most cases, we used as the background region an ellipse twice as large as that of the source, excluding the source region. At the galactic center, instead, the background region was selected close to the source, because the crowding of the field made it impossible to do otherwise.

Timing Analysis

Light curves of the point sources have been created using the mean count rate of each observation and fitted with a constant count-rate model.
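A minimal sketch of this variability test follows: a constant (weighted-mean) count-rate fit and a χ^2 threshold at 95% confidence for 5 degrees of freedom, which evaluates to about 11.07 and matches the 11.05/5 criterion quoted below. The example light-curve values are made up.

```python
# Constant-count-rate fit and chi-square variability criterion for a
# six-epoch light curve (5 degrees of freedom).
import numpy as np
from scipy import stats

def is_variable(rates, errors, conf=0.95):
    """Fit a constant (weighted mean) to the epoch count rates and
    test whether chi^2 exceeds the confidence threshold."""
    rates, errors = np.asarray(rates), np.asarray(errors)
    mean = np.sum(rates / errors**2) / np.sum(1.0 / errors**2)
    chi2 = np.sum(((rates - mean) / errors) ** 2)
    dof = rates.size - 1
    return chi2, chi2 > stats.chi2.ppf(conf, dof)

print(stats.chi2.ppf(0.95, 5))   # ~11.07, cf. the quoted 11.05 for 5 dof
# Illustrative six-epoch light curve (counts per second, made-up values)
print(is_variable([0.02, 0.05, 0.03, 0.08, 0.02, 0.03], [0.01] * 6))
```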
For five sources we obtained χ^2 values larger than 11.05 for 5 degrees of freedom, indicating, at a confidence level of more than 95%, significant variability with time. The light curves of these five variable sources are shown in Figure 2.

Spectral Analysis

We extracted the spectrum of each point source at each epoch and analyzed it with XSPEC version 11.0.1 in XANADU. In order to use χ^2 statistics, we grouped the energy bins of the spectra so that each bin contains at least twenty counts. We fitted the spectra simultaneously with an absorbed power-law model. For each epoch, we used common values of the absorption column density N_H and the photon index Γ, while the normalization was left as a free parameter. For bright sources having more than 180 total counts in the six observations, all parameters (N_H, Γ, and the normalizations) were free. For sources having fewer than 180 total counts, we fixed Γ to 2.0. For the faintest sources, with fewer than 100 total counts, we combined the six spectra and fitted them with Γ = 2.0 and N_H = 4.0 x 10^21 cm^-2, which is the best-fit N_H of the diffuse emission (see below). The observed flux and absorption-corrected luminosity of the point sources in the 2.0-10.0 keV and 0.5-10.0 keV bands are given in the source list (Table 8). Seven sources (ID29, 32, 33, 35, 37, 42, 60) have luminosities over 10^39 ergs s^-1; all except ID60 are within the D_25 ellipse and are considered to be Ultra-Luminous X-ray Sources (ULXs). ID60 is so remote from NGC 2146 that it should be a background AGN.

Diffuse Source

We extracted the spectra of the diffuse emission from a circular region at the center of NGC 2146 with a radius of 1.'8, excluding the inner point sources. The background spectrum for each observation was extracted from a source-free rectangular region (about 5 arcmin^2 in area) around the position (detX, detY) = (830, 830) in detector coordinates. The background level of the third observation was 0.081 counts s^-1 arcmin^-2 and quite variable, with an amplitude of 0.03 counts s^-1 arcmin^-2, while those of the other observations were ~0.01 counts s^-1 arcmin^-2 and stable during the observations. This effect can be ignored for point sources but not for the diffuse component, because of the larger area over which counts are accumulated. We therefore did not use the third observation in the analysis of the diffuse emission. The derived spectra of the diffuse component are shown in Figure 3. We identified the strong spectral features as produced by Mg XI (1.33-1.35 keV) and Si XIII (1.84-1.87 keV). This suggests that the emission originates from an optically-thin thermal plasma. It was not possible to fit the spectra with an absorbed single-temperature plasma model with variable abundances ("vmekal" model; Mewe et al. 1985a; Liedahl et al. 1995). Hence, we used instead a combination of a "vmekal" plus a power-law model with the same absorption column density for both components. The equivalent width of the Fe K line (6.4 keV) was estimated to be ≤ 3 keV.

Fig. 3. Spectra of the diffuse emission (except the third observation) and the best-fit thin-thermal plasma plus power-law model. Features are seen between 1 and 2 keV, which are considered to come from Mg XI (1.33-1.35 keV) and Si XIII (1.84-1.87 keV) emission lines.

An additional attempt has been made to fit the spectra with the "vmekal" plus a thermal bremsstrahlung model. The derived parameters for both fits are summarized in Table 2.
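The two-component fit just described used XSPEC v11's command language; the sketch below re-expresses it with the modern PyXspec interface, assuming PyXspec is installed and a background-subtracted spectrum file named diffuse.pi exists. Starting values are placeholders informed by the quoted best-fit numbers.

```python
# PyXspec sketch of the absorbed vmekal + power-law fit to the diffuse
# emission; file name and starting values are illustrative assumptions.
from xspec import Spectrum, Model, Fit

spec = Spectrum("diffuse.pi")          # hypothetical spectrum file
spec.ignore("**-0.5 10.0-**")          # restrict to 0.5-10.0 keV

model = Model("phabs*(vmekal+powerlaw)")
model.phabs.nH = 0.4                   # 4e21 cm^-2, as quoted in the text
model.vmekal.kT = 0.5                  # soft thermal component, keV
model.vmekal.Mg.frozen = False         # let Mg and Si abundances vary,
model.vmekal.Si.frozen = False         # since their lines are prominent
model.powerlaw.PhoIndex = 2.0          # hard component

Fit.statMethod = "chi"
Fit.perform()
Fit.error("2.706 1")                   # 90% confidence (delta chi^2 = 2.706)
```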
Some line-like residuals are still found in Figure 3; however, the χ^2 value cannot be improved by adding another thermal component. The origin of this line component will be discussed in §4.3.2.

Comparison with the ASCA results

In order to compare our Chandra data with the previous ASCA results (Della Ceca et al. 1999), we extracted the spectra of the entire NGC 2146 region from a circle at the center of the galaxy with a radius of 1.'8, including the inner point sources. Since we spatially resolved the point sources, we performed a simple spectral analysis to examine the overall flux variability. We ignored energies below 0.6 keV, where ASCA has essentially no effective area. We used the same fitting model as Della Ceca et al. (1999): an optically-thin thermal plasma model plus a power-law model. At first, all parameters except the normalizations were fixed at the ASCA best-fit values, N_H = 7 x 10^20 cm^-2, a plasma temperature of kT = 0.82 keV, solar abundance, and Γ = 1.7. Figure 4 shows the Chandra spectra of NGC 2146 together with the ASCA model and the residuals after the model is applied. It is noticeable that the model leaves large residuals below 1 keV. We believe that this excess is real, since, below 1 keV, the Chandra ACIS-S has a larger effective area than ASCA. Next, we fitted the spectra with all parameters free. The result is reported in Table 3. No significant difference in flux between the two models is found, while the residuals in the low-energy band are improved. The power-law index Γ and the observed flux are comparable with the ASCA results. Therefore we conclude that the X-ray emission from the entire region of NGC 2146 shows no significant flux variability between our Chandra observations and the ASCA observation.

Search for an IMBH

The IMBH candidate M82 X-1 is a very hard source with a luminosity of over 10^41 ergs s^-1, flux variations on a time scale of ~10^4 s, and an off-center position (Matsumoto et al. 1999). In our observations of NGC 2146, six ULXs are found, and three of them (ID29, 32, 42) show time variations. The luminosity of the highest count-rate source, ID42, is 2 x 10^39 ergs s^-1, but it is a rather soft (HR ~ -0.14) source. On the other hand, ID29 and ID32 are hard sources (HR ~ 0.55 and 0.44, respectively) with luminosities of 3 x 10^39 ergs s^-1 and 4 x 10^39 ergs s^-1, respectively. The light curve of ID29 (Figure 2) has two flare-like peaks, and the count rate at the maximum peak is about three times larger than the average count rate. We find no point source whose luminosity is comparable to that of M82 X-1.

Identifications of the Chandra sources

In order to understand the nature of each point source, we searched for NIR and radio counterparts using the 2MASS All-Sky Point Source Catalog (PSC) and the results of MERLIN+VLA observations, and found no positional coincidence between the detected X-ray point sources and those found in the NIR or radio bands. The correlation method is explained in the following paragraph. Figure 5 shows the distribution of NIR sources overlaid on the Chandra image. We selected the closest NIR source to each X-ray source and vice versa. Forty pairs were commonly selected by both methods. Among these pairs, we found three with a distance of less than 0.''5 between the X-ray and NIR sources, and none with a distance of less than 0.''2.
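The pairing procedure just described is a nearest-neighbour cross-match with a 0.''5 acceptance radius; a small sketch follows, with made-up coordinates standing in for the real source lists.

```python
# Nearest-neighbour cross-match between X-ray and NIR source lists;
# the coordinates below are illustrative placeholders.
import numpy as np

def match(ra1, dec1, ra2, dec2, radius_arcsec=0.5):
    """Return pairs (i, j, separation) closer than radius_arcsec,
    using the small-angle approximation with a cos(dec) correction."""
    pairs = []
    for i, (r1, d1) in enumerate(zip(ra1, dec1)):
        dra = (np.asarray(ra2) - r1) * np.cos(np.radians(d1))
        ddec = np.asarray(dec2) - d1
        sep = np.hypot(dra, ddec) * 3600.0   # degrees -> arcsec
        j = int(np.argmin(sep))
        if sep[j] < radius_arcsec:
            pairs.append((i, j, float(sep[j])))
    return pairs

xray = ([94.656, 94.662], [78.357, 78.360])   # RA, Dec in degrees
nir = ([94.6561, 94.700], [78.35701, 78.355])
print(match(*xray, *nir))                      # only the first pair matches
```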
Next we checked the absolute positional offset of the ACIS-S coordinates. Using the 10 comparably nearby pairs within 3.''0, we shifted the ACIS-S frame to minimize the total distances of these pairs and searched for pairs again. However, the same three pairs within 0.''5 remained, and there were still no pairs within 0.''2. We then decided to use the original ACIS-S coordinates.

Table 3. Fits to the spectra of the entire NGC 2146 region.

N_H (10^22 cm^-2) | kT (keV) | Abundance (solar) | Γ | F(0.5-2.0 keV) | F(2.0-10.0 keV) | χ^2/dof
0.07 (fixed) | 0.82 (fixed) | 1.0 (fixed) | 1.7 (fixed) | 5.9 | 9.9 | 260/110
0.15 (0.10-0.20) | 0.59 (0.55-0.63) | 0.63 (0.10-5.24) | 1.7 (1.5-1.9) | 5.8 | 9.8 | 174/106

Note. Parentheses indicate the 90% confidence limit. Fluxes are observed fluxes in units of 10^-13 ergs cm^-2 s^-1.

The positional accuracy of Chandra is 0.''1, while that of 2MASS sources is ~80 mas. Therefore we concluded that there were no identifications with the NIR sources. We also searched for radio counterparts using the results of the MERLIN and VLA observations, in the same way as for the NIR identification. As the positional uncertainties of MERLIN and the VLA are less than 0.''1, we used the same criterion as for 2MASS. We found no radio counterparts either.

The galactic center source

We find the X-ray source ID33, with a luminosity of 1 x 10^39 ergs s^-1, at the dynamical center of NGC 2146 (R.A. = 06h 18m 37.4s ± 0.4s, Dec. = 78° 21' 23.''2 ± 2'') derived from a ^12CO rotation curve. Since the total counts of ID33 are only 113, we cannot make a complete spectral analysis. However, this source is hard (HR ~ 0.55), and the nearby source ID35 is also hard (HR ~ 0.58). As the dense dust lane (Young et al. 1988) crosses these sources, it is natural to consider that they are strongly absorbed nuclear sources. The absorption column density, under the assumption of a canonical AGN power-law spectral model (Γ = 2.0), is comparable with the value obtained from radio observations (Tarchi et al. 2004). ID33 may be a low-luminosity AGN (LLAGN). We note that the X-ray observations with Chandra have revealed the nuclear region for the first time, even under the heavy absorption.

The log N - log S distribution and Luminosity Function

Figure 6 shows the log N - log S distribution for NGC 2146, where S is the observed 2.0-10.0 keV flux. The log N - log S has a sharp break at ~2 x 10^-15 ergs cm^-2 s^-1, which might be due to an observational detection limit. In order to investigate this hypothesis, we have estimated an approximate value for such a limit. Within 0.'5 from the center of NGC 2146, where the surface brightness of the diffuse emission is quite large, the typical number of counts due to the diffuse emission, integrated over 60 ks within the typical PSF size (0.''5), is three. Therefore, a source in this region with more than 12 counts, corresponding to ~1.3 x 10^-15 ergs cm^-2 s^-1, will be detected with high significance (more than 5σ). Since this flux is very close to the break point, we conclude that the break is probably due to the detection limit. In the total band image, the flux of the faintest source in the central region of high diffuse emission is ~2.0 x 10^-15 ergs cm^-2 s^-1. In the following discussion, we take this value as a conservative detection limit.
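The 5σ detection-limit estimate can be checked with Poisson statistics: the sketch below evaluates the chance probability of a source contributing more than 12 counts on top of roughly 3 diffuse-background counts, the numbers assumed in the text.

```python
# Poisson significance of the quoted detection limit.
from scipy import stats

background = 3.0      # typical diffuse counts in 60 ks within the PSF
source = 13.0         # "more than 12 counts"
observed = int(background + source)              # 16 counts in total
p = stats.poisson.sf(observed - 1, background)   # P(N >= 16 | mu = 3)
sigma = stats.norm.isf(p)                        # one-sided Gaussian equivalent
print(f"p = {p:.1e}  (~{sigma:.1f} sigma)")      # just above 5 sigma
```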
We also derive a luminosity function (LF) for NGC 2146 using the absorption-corrected 0.5-10.0 keV luminosities. The LF slope is 0.71. Hence, the LF for NGC 2146 is flatter than that of normal spiral galaxies, while it is consistent with those of other starburst galaxies (Kilgard et al. 2002). This fact indicates that the starburst galaxy NGC 2146, similarly to other starburst galaxies, has a large fraction of high X-ray luminosity sources.

Combined point source spectra

We combined the spectra of the point sources with fluxes larger than the detection limit and fitted them with a two-temperature bremsstrahlung model with all parameters free, since neither a power-law model, a single bremsstrahlung model, nor their combination can describe the spectra. The spectra and the best-fit model are displayed in Figure 7. The best-fit parameters are listed in Table 4. The total observed flux of the point sources in the 2.0-10 keV band is 7.8 x 10^-13 ergs cm^-2 s^-1. Since the total flux coming from NGC 2146 in the same band is 9.8 x 10^-13 ergs cm^-2 s^-1, about 80% of the hard emission resides in the point sources resolved by the high spatial resolution of Chandra.

Fig. 5. Spatial distributions of the Chandra (green cross), 2MASS (red circle), and MERLIN and VLA (magenta diamond) sources. The magnified image around the galactic center (rectangular area in the left image) is shown on the right.

Table 5 shows the ratios of the 2-10 keV emission in Chandra-detected point sources to the total emission for six starburst galaxies. The hard emission of starburst galaxies can be divided into two groups: point-source dominant and diffuse-source dominant.

Hard Component

The hard spectral component of the diffuse emission constitutes 20% of the hard (2.0-10.0 keV) emission produced in the whole galaxy, with a luminosity of ~4 x 10^39 ergs s^-1. This luminosity is greater than that of the hard diffuse emission of M82 (~2 x 10^39 ergs s^-1; Griffiths et al. 2000) and NGC 253 (~1 x 10^39 ergs s^-1; Pietsch et al. 2001). Considered as a thermal plasma, the temperature is ~2.4 keV (Table 2). These features are quite similar to M82. To determine whether the origin of the hard component is a hot diffuse gas or a superposition of point sources unresolved even with Chandra, we estimate the contributions of the unresolved X-ray point sources. At first, we try a spectral approach. We assume that the shape of the spectrum produced by a number of unresolved sources can be described with the same model used for the combined spectra of the resolved sources. Hence, we fit the diffuse emission in the same way as described in §4.2.5. The model implying a collection of unresolved sources cannot fit the observed spectra, leaving large residuals below 1 keV. Hence, we add the vmekal model, repeat the fitting procedure, and obtain an acceptable fit.

Fig. 8. Spectra of the diffuse source with the vmekal model plus the unresolved-sources model.

The best-fit model is shown in Figure 8 and the best-fit parameters are listed in Table 6. The observed flux of the unresolved-source component in the 2.0-10.0 keV band is 3.2 x 10^-13 ergs cm^-2 s^-1. Next, we estimate the contributions of the unresolved X-ray sources from the log N - log S distribution.
The best-fit model function of the log N - log S above 2 x 10^-15 ergs cm^-2 s^-1 can be described by a power law (equation (2)). According to equation (2), the flux of the unresolved sources between 0 and 2 x 10^-15 ergs cm^-2 s^-1 is F_unres = 2.9 x 10^-13 ergs cm^-2 s^-1, which is very close to the flux derived from the spectral approach. Therefore we conclude that the hard emission of NGC 2146 is probably due to a superposition of unresolved point sources.

Soft Component

As shown in Figure 8, clear residuals between 1.0 and 1.3 keV are still present after fitting the spectra. When we fitted this residual feature with a Gaussian function, we obtained a central position and width (FWHM) of 1.22 (+0.01, -0.02) keV and ≤ 96 eV, respectively. Since this feature is present at all epochs, we consider it real. In this energy range there are Ni emission lines; however, the feature cannot be accounted for by increasing the Ni abundance, because Ni emission lines also exist at other energies. We also tried to shift the gain of ACIS by -22 eV (which would move the feature to ~1.25 keV), but the line could still not be identified. This problem will be resolved by the high energy resolution of the Astro-E2 XRS (see http://www.isas.jaxa.jp/e/enterp/missions/astro-e2/).

We then estimated the physical parameters simply by using the parameters reported in Table 6. The plasma temperature is determined to be kT ~ 0.5 keV. The plasma has high abundances of Mg and Si. The emission integral (EI = ∫ n_e^2 dV) was calculated to be 8.0 x 10^62 cm^-3. We assumed the total volume of NGC 2146 to be a sphere with a radius of 1.'8 (= 6.3 kpc), and we parameterized the clumpiness of the plasma by a volume (V) filling factor, f. From these values, we determined the plasma density [n_e ~ (EI/Vf)^(1/2)], the plasma pressure (p ~ 2 n_e kT), the plasma mass (M ~ n_e m_p V f), and the thermal energy of the plasma (E ~ 3 n_e kT V f). These physical parameters are listed in Table 7. The origin of the diffuse soft X-ray emission has been discussed previously and is considered to be gas outflowing along the minor axis of NGC 2146 (Armus et al. 1995).

Table 6 (excerpt). Best-fit parameters of the diffuse emission:
soft component: F(0.5-2.0 keV) (10^-13 ergs cm^-2 s^-1) = 4.9; L(0.5-2.0 keV) (10^40 ergs s^-1) = 1.3.
unresolved sources (absorbed two-temperature bremsstrahlung): N_H (10^22 cm^-2) = 0.99 (fixed); kT_brems1 (keV) = 0.14 (fixed); kT_brems2 (keV) = 6.7 (fixed); F(2.0-10.0 keV) (10^-13 ergs cm^-2 s^-1) = 3.2; L(2.0-10.0 keV) (10^40 ergs s^-1) = 0.57.
Note. Parentheses indicate the 90% confidence limit.

Summary

We observed the starburst galaxy NGC 2146 with the ACIS-S on board Chandra at six different epochs, with a total exposure time of 60 ks. We obtained the following results:
1. We detected a total of 67 point sources in the ACIS-S field of view and compiled an X-ray point-source catalog for NGC 2146.
2. We did not detect any source as luminous as that found in the prototype starburst galaxy M82 (M82 X-1; luminosity ~10^41 ergs s^-1).
3. We found no positional coincidence, and hence no association, between the detected X-ray point sources and those found in the NIR (2MASS All-Sky PSC) or radio (MERLIN+VLA observations) bands.
4. We found a hard X-ray source coincident in position with the dynamical center of the galaxy. It has a luminosity of ~1 x 10^39 ergs s^-1 and may represent a possible low-luminosity AGN (LLAGN) candidate.
5. We derived a log N - log S distribution and a luminosity function (LF) for NGC 2146. The former shows a break that we demonstrated to be likely caused by the detection limit.
The slope of the LF is 0.71, consistent with those of other starburst galaxies. This indicates that NGC 2146, like other starbursts, hosts a larger fraction of luminous sources than normal galaxies.
6. We have mapped the diffuse emission in both the soft (0.5-2.0 keV) and hard (2.0-10.0 keV) energy bands. The spectra were fitted using a two-component model: a soft and a hard one.
7. The point sources produce most of the hard emission in NGC 2146, and even the hard component of the diffuse emission, with a luminosity of ~4 x 10^39 ergs s^-1, is probably accounted for by unresolved point sources.
8. The soft component of the diffuse emission is described by a thermal plasma model with a temperature of kT ~ 0.5 keV and high abundances of Mg and Si. We also determined the physical parameters of the soft X-ray-emitting gas.
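The soft-component physical parameters referenced in items 7 and 8 follow from the relations stated before the Summary; the sketch below evaluates them for the quoted EI, temperature, and radius, assuming a volume-filling factor f = 1 since f is left free in the text (n_e scales as f^-1/2, M and E as f^1/2).

```python
# Order-of-magnitude evaluation of the soft-component plasma parameters.
import math

KPC = 3.086e21            # cm per kpc
KEV = 1.602e-9            # erg per keV
M_P = 1.673e-24           # proton mass, g
MSUN = 1.989e33           # solar mass, g

EI = 8.0e62               # emission integral, cm^-3 (from the text)
kT = 0.5 * KEV            # kT ~ 0.5 keV, in erg
R = 6.3 * KPC             # sphere of radius 1.'8 = 6.3 kpc
f = 1.0                   # volume-filling factor (assumed)

V = 4.0 / 3.0 * math.pi * R**3
n_e = math.sqrt(EI / (V * f))        # plasma density, cm^-3
p = 2.0 * n_e * kT                   # plasma pressure, erg cm^-3
M = n_e * M_P * V * f                # plasma mass, g
E = 3.0 * n_e * kT * V * f           # thermal energy, erg

print(f"n_e ~ {n_e:.1e} cm^-3, p ~ {p:.1e} erg cm^-3")
print(f"M ~ {M / MSUN:.1e} Msun, E ~ {E:.1e} erg")
```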
2014-10-01T00:00:00.000Z
2004-05-01T00:00:00.000
{ "year": 2004, "sha1": "e1797c0b12f9c41c8a1b6314ef067b2c3ef06ec9", "oa_license": null, "oa_url": "http://arxiv.org/abs/astro-ph/0412249", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "93e32c86d1e685d27078b93f31f7135082ac5621", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
226200257
pes2o/s2orc
v3-fos-license
Implications of $b\to s\mu\mu$ Anomalies for Future Measurements of $B \to K^{(*)} \nu \bar \nu$ and $K\to \pi \nu \bar \nu$

We investigate the consequences of deviations from the Standard Model observed in $b\to s\mu\mu$ transitions for flavour-changing neutral-current processes involving down-type quarks and neutrinos. We derive the relevant Wilson coefficients within an effective field theory approach respecting the SM gauge symmetry, including right-handed currents, a flavour structure based on approximate $U(2)$ symmetry, and assuming only SM-like light neutrinos. We discuss correlations among $B \to K^{(*)} \nu \bar \nu$ and $K\to \pi \nu \bar \nu$ branching ratios in the case of linear Minimal Flavour Violation and in a more general framework, highlighting in each case the role played by various New Physics scenarios proposed to explain $b\to s\mu\mu$ deviations.

Introduction. Recent experimental data in B physics hint toward deviations from Lepton Flavour Universality (LFU) in semi-leptonic decays [1], as measured by LHCb [2-4] at significances from 2.3σ to 2.6σ. Belle has also recently reported measurements of $R_K$ [5] and $R_{K^*}$ [6] in agreement with the LHCb measurements, but with much larger uncertainties. In addition to these LFU ratios, LHCb data exhibit deviations close to 3σ from the Standard Model (SM) expectation in the $P_5'$ angular observable of the $B \to K^*\mu\mu$ decay [7], and milder deviations are also seen in the branching ratios of exclusive $b \to s\mu\mu$ decays [8-12]. Deviations are also hinted at in Belle data for $B \to K^*\mu\mu$ [13,14]. These deviations can be interpreted model-independently in terms of specific contributions to the effective weak Hamiltonian (see e.g. Refs. [15,16]) at the scale $m_b$, where the relevant long-distance operators are

$$\mathcal{O}_9^{\ell} = \frac{\alpha_{\rm em}}{4\pi}(\bar s \gamma_\mu P_L b)(\bar\ell \gamma^\mu \ell)\,,\qquad \mathcal{O}_{10}^{\ell} = \frac{\alpha_{\rm em}}{4\pi}(\bar s \gamma_\mu P_L b)(\bar\ell \gamma^\mu \gamma_5 \ell)\,,$$

while the chirality-flipped operators $\mathcal{O}_{9',10'}$ are obtained from the above expressions with the replacement $P_L \to P_R$, with $P_{L,R} = (1 \mp \gamma_5)/2$. The global fits to $b \to s$ data (see Ref. [17] and references therein) show that these deviations exhibit a consistent pattern favouring a significant additional New Physics (NP) contribution to the short-distance Wilson coefficient $C_9^\mu$ (of the order of 25% of the SM contribution), together with smaller contributions to $C_{10}^\mu$ and/or $C_{9'}^\mu$. Among the scenarios improving the description of the data by 5σ or more compared to the SM, one can find one-dimensional scenarios favoured on the basis of their pulls with respect to the Standard Model, with best-fit points and 68% confidence intervals given in Ref. [17]. Two-dimensional scenarios achieving similarly high pulls with respect to the SM are obtained for NP contributions to $(C_9^\mu, C_{10}^\mu)$ and $(C_9^\mu, C_{9'}^\mu)$. Smaller contributions to electron operators are allowed by the data, but not required to achieve a good description. Similar results have been obtained by other groups performing such fits with different theoretical inputs, experimental subsets, and statistical frameworks [18-20].

We recall that the branching ratios of the rare kaon decays $K_L \to \pi^0\nu\bar\nu$ and $K^+ \to \pi^+\nu\bar\nu$ should obey the Grossman-Nir bound $\mathcal{B}(K_L \to \pi^0\nu\bar\nu) \leq 4.3\,\mathcal{B}(K^+ \to \pi^+\nu\bar\nu)$, in which the numerical factor results from the difference in the total decay widths of $K_L$ and $K^+$, isospin-breaking effects, and QED radiative corrections [29]. In particular, the Grossman-Nir bound constitutes an additional very strong theoretical constraint on $\mathcal{B}(K_L \to \pi^0\nu\bar\nu)$. The NA62 experiment plans to eventually measure the rate of $K^+ \to \pi^+\nu\bar\nu$ with O(10%) precision [30]. For the neutral decay mode $K_L \to \pi^0\nu\bar\nu$, KOTO and KLEVER also aim at making significant progress [31] and at resolving the current somewhat ambiguous situation with respect to possible NP effects in these modes [32]. Precise results for all these rare semileptonic $b \to s$ and $s \to d$ transitions will give much better insight into the possible NP effects observed in $R_{K^{(*)}}$. Particularly interesting is the question whether NP is present only in $b \to s$ transitions or also in other Flavour-Changing Neutral Currents (FCNCs). Measurements of $s \to d\nu\bar\nu$ and $b \to s\nu\bar\nu$ rates will help to differentiate among NP models with different flavour and chiral structures in both the quark and lepton sectors. This issue has already been raised in many studies, which mostly relied on particular models of NP [21,33-36]. The main goal of our approach is to determine the impact of $R_{K^{(*)}}$ on future measurements of $B \to K^{(*)}\nu\bar\nu$ and $K \to \pi\nu\bar\nu$ in a general effective field theory framework and to illustrate the potential correlations among these measurements.
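Since the Grossman-Nir bound above is a purely multiplicative constraint, a one-line helper illustrates it; the K+ branching ratio used in the example is a hypothetical input, not a measurement.

```python
# Grossman-Nir ceiling on B(K_L -> pi0 nu nubar) implied by a given
# B(K+ -> pi+ nu nubar); the factor 4.3 is the one quoted in the text.
def grossman_nir_ceiling(br_kplus: float) -> float:
    """Upper bound on the K_L mode implied by a K+ branching ratio."""
    return 4.3 * br_kplus

br_kplus = 1.0e-10   # hypothetical K+ branching ratio, for illustration
print(f"B(K_L -> pi0 nu nu) <= {grossman_nir_ceiling(br_kplus):.2e}")
```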
For the neutral decay mode K L → π 0 νν KOTO and KLEVER also aim at making significant progress [31] and resolving the current somewhat ambiguous situation with respect to possible NP effects in these modes [32]. Precise results for all these rare semileptonic b → s and s → d transitions will allow to get much better insight into possible NP effects observed in R K ( * ) . Particularly interesting is the question whether NP is only present in b → s transitions or also in other Flavour-Changing Neutral Currents (FCNC). The measurements of s → dνν and b → sνν rates will help to differentiate among NP models with different flavour and chiral structures in both the quark and lepton sectors. This issue has been already raised in many studies which mostly relied on particular models of NP [21,[33][34][35][36]. The main goal of our approach is to determine the impact of R K ( * ) on future measurements of B → K ( * ) νν and K → πνν in a general effective theory framework and to illustrate the potential correlations among these measurements. where we choose to write the operators in the down-quark and charged lepton mass basis [33,39,40] we assume that the same flavour structure encoded in λ q ij and λ αβ holds for all operators. As it will become clear from the discussion below, this assumption actually does not result in any loss of generality of our main results. It also turns out to be beneficial to classify the NP flavour structure in terms of an approximate U (2) q=Q,D flavour symmetry acting on quark fields, under which two generations of quarks form doublets, while the third generation is invariant. One can write q ≡ (q 1 1) . In the exact U (2) q limit only λ q 33 and λ q 11 = λ q 22 in Eq. (7) are nonvanishing. Since a specific pattern of U (2) q breaking (by the SM Yukawas) is required to accommodate the first two generation quark masses and the CKM matrix, there is an ambiguity in the definition of the singlet field with respect to the down-quark mass basis, which, if chosen arbitrarily, may still result in unacceptably large mixing among generations. To avoid excessive effects in neutral kaon oscillation observables, we thus furthermore impose the leading NP U (2) q breaking to be aligned with the SM Yukawas, yielding a General Minimal Flavour Violating (GMFV) [41] structure where θ q and φ q are fixed but otherwise arbitrary numbers. Therefore respecting the U (2) q symmetry with GMFV breaking one has d 3 [41] is recovered by taking θ q = 1 and φ q = 0 (taking V tb = 1). In (G)MFV, right-handed FCNCs among down-type quarks are suppressed so that we may set C RL = C RR = 0 then. Departures from the (G)MFV limit may manifest through additional explicit U (2) q breaking effects appearing as λ q i =j = 0 and we normalise such effects by the U (2) q symmetric (λ q 33 ) contribution by defining r ij = λ q ij /λ q 33 . For the lepton sector we assume an approximate U (1) 3 symmetry (broken only by the neutrino masses) yielding λ i =j 0 as required by stringent limits on lepton flavour violation. We consider here only (SM-like) lefthanded neutrinos. As discussed in the Introduction, current (LFU) NP hints in rare semileptonic B decays only indicate significant non-standard effects in muonic final states. While a smaller effect in electrons is not excluded, b → sτ + τ − transitions are at present only poorly constrained and could in principle exhibit even much larger deviations than those observed in R K ( * ) [42]. 
However, the corresponding neutrino flavours are not tagged in current and upcoming rare meson decay experiments. In order to correlate FCNC processes involving charged leptons and neutrinos we need to assume specific ratios of U (1) 3 charges (λ ). In the following we will consider three well known examples from the existing literature: 1. The simplest λ ee = λ τ τ = 0 scenario implies significant NP effects only in muonic final states. Correspondingly only a single neutrino flavour ν (e.g. in the sums of Eqs. (13) and (17)) receives NP effects. This is usually assumed in model-independent EFT analyses. 2. The anomaly-free assignment λ µµ = −λ τ τ and λ ee = 0 allows for gauging of the leptonic flavour symmetry and is thus well suited for UV model building [43,44]. In this case two of the neutrino flavours in Eqs. (13) and (17) receive NP effects, equal in magnitude, but opposite in sign. 3. The hierarchical charge scenario λ ee λ µµ λ τ τ is motivated by models of partial lepton compositeness and flavour models accounting for hierarchical charged lepton masses [45,46]. In this case NP effects in Eqs. (13) and (17) are again dominated by a single (τ ) neutrino flavour, however the effects can be much larger than indicated by the deviations in R K ( * ) . For concreteness in the following we consider λ τ τ /λ µµ = m τ /m µ and again neglect the small effects in λ ee for this scenario. The Wilson coefficients appearing in Eq. (2) can now be expressed compactly as The rare B decays B → K ( * ) νν can be conveniently expressed in presence of NP of the form in Eq. (7) as [21] B(B → Kνν) =(4.5 ± 0.7) × 10 −6 1 where F L is the longitudinal K * polarisation fraction in B → K * νν decays. For each flavour of neutrino ν = ν e , ν µ , ν τ , the two NP parameters can in turn be expressed as where C ν L,R = C ν,SM L,R + C ν,NP L,R and C ν,SM L = −6.38 and C ν,SM R = 0 at µ = m b . Including leading U (2) q breaking effects we can write again with α = e, µ, τ . We wrote these expressions neglecting the small neutrino mass effects setting effectively U PMNS to the identity matrix. Note that any deviations from SM in F L or non-universal deviations in B(B → (K, K * , X s )νν)/B(B → (K, K * , X s )νν) SM would signal the presence of right-handed quark currents (C RL = 0) and thus departures from the (G)MFV limit. Similarly, the rare kaon decays K + → π + νν and K L → π 0 νν can be conveniently expressed in presence of NP of the form in Eq. (7) as [34] B(K + → π + νν(γ)) = (8.4 ± 1.0) × 10 −11 where X i are defined in Ref. [47] and s W ≡ sin θ W 0.48, c W ≡ cos θ W . Numerically, X t = 1.469(17) [47] and (X c + δX c,u ) = 0.00106(6) [48,49] . For each neutrino flavour ν = ν e , ν µ , ν τ , C ν,NP sd receives contributions from three operators of the weak effective Hamiltonian yielding where α = e, µ, τ , we have again neglected neutrino mass effects. Since s → d transitions only appear at quadratic order in GMFV breaking of U (2) q , in this case we have included up to quadratic U (2) q breaking terms, but at the same time kept only the linear U (2) q breaking contributions beyond GMFV, since these suffice for our following discussion. A short comment regarding the recent intriguing results on the K L → π 0 νν from the KOTO collaboration is in order at this point. As noted in Ref. 
[32], at the 68% CL, the result, if combined with the NA62 bound on K + → π + νν, violates the Grossman-Nir bound [29] and cannot be explained without invoking isospin breaking NP [50,51] and additional long-lived neutral final states in the K L decay beyond the three SM neutrinos, see e.g. Ref [52][53][54]. None of the NP scenarios we consider can thus fully accommodate both measurements. At most we can comment on ways to approach the Grossman-Nir bound. In particular, we note that new CP phases in s → d transitions only appear beyond the (G)MFV limit [41]. In the case of K → πνν decays we can see this explicitly in Eq. (18) since only terms proportional to r ij may carry additional phases. These terms should thus dominate over the first row indicating large departures from the (G)MFV limit. Unfortunately, little can be said about the implications of b → sµµ data model independently in this part of parameter space. A potential future experimental confirmation of C µ,N P 9 = 0 could at best provide circumstantial evidence for the presence of U (2) q breaking beyond (G)MFV. Results in the linear MFV case. We first consider the limit of (linear) MFV in which b → sνν and s → dνν FCNC transitions are rigidly correlated via the corresponding CKM prefactors in Eqs. (15) and (18) and C RL = C RR = 0. Even before considering the implications of R K ( * ) , this immediately implies a very general correlation between B → h s νν and K → πνν rates, driven by the combination of Wilson coefficients C S − C T in Eq (7). For conciseness, we consider the branching ratios normalised to their SM values by intro- The allowed region for these ratios is shown shaded in darker (2ν) and lighter (3ν) grey in Fig. 1, where arbitrary MFV NP effects in two (2ν) or three (3ν) neutrino flavours (with arbitrary λ q 33 λ ) have been considered, respectively. The two ratios R are bounded by the same minimal value (1 − N ν /3) where N ν is the number of neutrino flavours affected by NP. Also shown are the present experimental constraints coming from NA62 [26] and B-factories [23] respectively. An interesting observation is that a pair of future B → h s νν and K → πνν rate measurements outside of this (albeit large) region would be a clear indication of non-MFV NP. On the same plot we also superimpose the three specific U (1) 3 scenarios. In the b → s + − analysis, the MFV limit corresponds to the (C µ,NP 9 , C µ,NP 10 ) scenario. In terms of the EFT operator basis in Eq (7) R K ( * ) measurements (and more generally b → s data) favour non-zero values for both C S + C T and C LR . Since B → h s νν and K → πνν depend on the orthogonal C S −C T combination, interesting implications can only be derived in specific scenarios allowing us to convert the information from b → s + − observables into a constraint on C S and C T . The simplest possibilities (C S = 0 or C T = 0) are indicated in Fig. 1 for U (1) 3 scenarios 1 and 3 respectively. On the other hand, in scenario 2, no significant deviations are expected in either case. We observe that the pure SU (2) triplet (C S = 0) scenario 3 (λ τ τ /λ µµ = m τ /m µ ) was already close to being probed by searches for B → K ( * ) νν at the B-factories. The final projected sensitivity of Belle II could be sufficient to eventually also distinguish between the pure SU (2) triplet (C S = 0) and singlet (C T = 0) limits of scenario 1. Results with right-handed currents. Beyond the linear MFV limit any correlation between b → s and s → d FCNCs is lost in general. 
Nonetheless, the potential presence of right-handed $b \to s$ FCNCs in the $(C_9^{\mu,\rm NP}, C_9'^{\mu,\rm NP})$ scenario, as well as the leptonic flavour structure of NP, can both still be probed using correlations among two $B \to h_s \nu\bar\nu$ modes, as shown in Fig. 2 for the case $R(B \to K\nu\bar\nu)$ vs. $R(B \to K^*\nu\bar\nu)$. First note that in the MFV limit relative NP effects in both modes are expected to be identical, as indicated by the diagonal line. Beyond MFV however, the amount of deviation from the diagonal would directly indicate the number of lepton flavours affected by NP. In scenario 1 (where only muons couple significantly to NP) and scenario 2 (where muons and taus have opposite NP couplings), the $b \to s$ fit for $(C_9^{\mu,\rm NP}, C_9'^{\mu,\rm NP})$ singles out a narrow region around the diagonal in this plane, whereas scenario 3 leaves a much larger region allowed. Conversely, a measurement of the two $b \to s\nu\bar\nu$ modes outside of the region for scenario 1 would indicate significant (right-handed FCNC) NP couplings to other neutrino species, e.g. $\nu_\tau$. In the absence of information on the size of the right-handed FCNCs from the $b \to s\mu^+\mu^-$ modes, in principle the whole region within the grey $1\nu$ contour could be accessible, with limits corresponding to $\eta_\nu = -1/2$ and $+1/2$ (MFV corresponding to $\eta_\nu = 0$). In the presence of significant couplings also to tau neutrinos, as e.g. in scenario 3, the whole region within the grey dashed $2\nu$ contour is possible, even when the existing constraints coming from $b \to s\mu^+\mu^-$ modes are taken into account. Finally, in the presence of significant right-handed FCNCs coupling to all three neutrino flavours, the whole region within the grey dotted $3\nu$ contour would be possible in principle.

Figure 1. Correlation between the ratios $R(B \to h_s\nu\bar\nu)$ ($h_s = K, K^*, X_s$) and $R(K^+ \to \pi^+\nu\bar\nu)$ in the linear MFV limit. The SM value is represented by the black square. The region allowed for arbitrary NP effects in $\nu_\mu$ and $\nu_\tau$ only (all three neutrino flavours) is shown in dark grey (light grey, respectively). Curves are drawn for the specific $U(1)^3$ scenarios 1 (NP only in muons, red), 2 (opposite NP effects in muons and taus, purple) and 3 (hierarchical NP effects according to the generation, dashed brown). Scenarios with $C_S = 0$ or $C_T = 0$ are indicated as black × and + respectively in the inset plot for scenario 1, and by the corresponding brown symbols for scenario 3.

Figure 2. Correlation between the ratios $R(B \to K\nu\bar\nu)$ and $R(B \to K^*\nu\bar\nu)$. The diagonal blue line corresponds to the (G)MFV case. The 1σ region allowed by $b \to s\mu\mu$ transitions yields an allowed region depending on the assumption on the couplings to leptons: inside the solid green line for scenario 1 (NP only in muons), dashed purple for scenario 2 (opposite NP effects in muons and taus) and dot-dashed red for scenario 3 (hierarchical NP effects according to the generation). Without information on the size of the right-handed FCNCs from $b \to s\mu^+\mu^-$, the allowed region assuming significant NP couplings to 1, 2, 3 neutrinos is above and on the right of the solid, dashed, dotted grey contours, respectively. The horizontal and vertical bands correspond to the 90% CL limits on the observables for $R(B \to K^*\nu\bar\nu)$ (orange) and $R(B \to K\nu\bar\nu)$ (blue) [23].

Conclusions. In this article, we have investigated the consequences of deviations from the SM observed in $b \to s\mu\mu$ transitions for FCNC processes involving down-type quarks and neutrinos.
Motivated by the results from the global fits to $b \to s$ observables as well as measurements and bounds on FCNC processes with neutrinos, we have considered a general EFT description of FCNC transitions in terms of $SU(2)_L$ gauge invariant operators, including those with right-handed quarks and charged leptons. This allowed us to describe $b \to s\mu\mu$, $b \to s\nu\bar\nu$ and $s \to d\nu\bar\nu$ with the same short-distance Wilson coefficients. We have briefly touched upon the status of $K_L \to \pi^0\nu\bar\nu$, which is only affected by CP-violating NP, requiring new flavour dynamics beyond (G)MFV. In this case, there is no clear correlation with the other FCNC modes discussed here. The recent KOTO results that violate the Grossman-Nir bound are particularly challenging to explain in conjunction with the NA62 bound on $K^+ \to \pi^+\nu\bar\nu$, and they cannot be accommodated within our framework. Assuming (G)MFV in the quark sector, we have studied the correlation between the branching ratios for $B \to h_s\nu\bar\nu$ and $K^+ \to \pi^+\nu\bar\nu$. Such a correlation is already present without assuming any specific structure for the neutrino NP couplings, but it can be made even more precise once specific NP scenarios assign specific values to these couplings. Moreover, for scenarios with no triplet ($C_T = 0$) or singlet ($C_S = 0$) contributions, the fits to $(C_9^{\mu,\rm NP}, C_{10}^{\mu,\rm NP})$ can be immediately converted into predictions for these two branching ratios in terms of $R_{\nu\nu} \equiv [R(B \to h_s\nu\bar\nu), R(K^+ \to \pi^+\nu\bar\nu)]$. In scenario 1, where NP couples only to muons, we find $R_{\nu\nu} \approx (0.95, 0.97)$ if $C_S = 0$ and $R_{\nu\nu} \approx (1.05, 1.03)$ if $C_T = 0$. In scenario 2, where muons and taus have opposite couplings, the values remain very close to the SM. In scenario 3, where hierarchical NP couplings proportional to the lepton mass are assumed, we find $R_{\nu\nu} \approx (0.64, 0.65)$ if $C_S = 0$ and $R_{\nu\nu} \approx (2.4, 1.8)$ if $C_T = 0$. Moving beyond the (G)MFV limit, we have investigated the correlation between $B \to K\nu\bar\nu$ and $B \to K^*\nu\bar\nu$, in particular showing that, depending on the NP lepton couplings, the scenario with NP in $(C_9^{\mu,\rm NP}, C_9'^{\mu,\rm NP})$ can also yield a tight correlation between the two modes when the $b \to s$ measurements are taken into account. For example, in scenarios 1 and 2, the ratio $R(B \to K\nu\bar\nu)/R(B \to K^*\nu\bar\nu)$ cannot deviate from unity by more than 8%. More generally however, such measurements could establish NP flavour breaking beyond (G)MFV as well as indicate the number of lepton flavours affected by NP. We hope that our results will strengthen the case for more accurate measurements of $b \to s\nu\bar\nu$ and $s \to d\nu\bar\nu$ modes, in order to determine which direction should be followed to develop viable NP models describing the hints of deviations in $b \to s\mu\mu$ while providing a viable connection with other quark generations.
Application of Nanobiosensors in Detection of Pathogenic Bacteria: An Update Bacterial infections remain a critical public health concern worldwide, necessitating the development of efficient and sensitive diagnostic tools. Nanobiosensors, comprising nanomaterials, offer a novel approach to bacterial pathogen detection. The present review aimed to explore the current research and applications of nanobiosensors for bacterial pathogen detection. Recent discoveries in nanotechnology have facilitated the development of nanobiosensors with remarkable sensitivity and specificity. These nanoscale sensors are designed to detect specific bacterial pathogens through various mechanisms, including aptamers, antibodies, and molecular recognition elements. Furthermore, miniaturization and integration with microfluidic systems have enabled the rapid and point-of-care detection of bacterial infections. Incorporating nanomaterials such as carbon nanotubes, quantum dots, and graphene into biosensing platforms has significantly enhanced their performance, leading to ultrasensitive detection of bacterial antigens and nucleic acids. Additionally, using nanobiosensors with advanced analytical techniques, such as electrochemical, optical, and piezoelectric methods, has expanded the possibilities for accurate and real-time monitoring of bacterial pathogens. Nanobiosensors represent a promising frontier in the battle against bacterial infections. Their exceptional sensitivity, rapid response times, and potential for multiplexed detection make them invaluable tools for the early diagnosis and monitoring of bacterial pathogens. Developing cost-effective and portable nanobiosensors for resource-limited settings becomes increasingly possible as nanotechnology advances. Introduction As a result of antibiotic (AB) discovery, bacterial infections in humans, livestock, and agriculture were controlled 1,2 .However, multi-resistant bacteria (MDR) have become a global public health issue over the past few years due to the mismanagement of AB.This issue has challenged the use of AB 3 .Over 70% of bacteria are resistant to known anti-bacterial agents, making it necessary to develop new antimicrobial agents or use highly toxic antimicrobial therapies to achieve effective treatment, especially in critically ill individuals 4 .Recent studies estimate that without the development of new molecules, antimicrobial drug-resistant infections will cause the death of 10 million people in the world every year and cost about USD 100 trillion by 2050 4,5 .The World Health Organization (WHO) has established measures to prevent the spread of MDR infections, including controls on AB sales, dosage, and administration 6,7 .Most doses are currently uniformly administered to patients without considering infection progression or clinical characteristics, resulting in treatment failures, which may lead to subtherapeutic or toxic doses 8,9 . 
The use of therapeutic drug monitoring (TDM) is one of the solutions that measure drug toxicity by tracking changes in pharmacokinetic parameters (PK) of drugs that have narrow therapeutic index (TI) 10 .There are a variety of methods for monitoring, including single or mass-coupled chromatography with various detectors, such as ultraviolet detection, fluorescent detection (explained below), and immunoassays 11,12 .The United States Food and Drug Administration (FDA) has approved several techniques 13 .Despite this, these expensive techniques require trained personnel and specialized laboratories.Nanobiotechnology can be used to overcome this problem, specifically biosensors that can measure drug concentration in body fluids (including blood, urine serum, and plasma) 14 .In addition to being sensitive, specific, and low-cost, these devices can also be miniaturized so that doctors and care providers can easily carry them to patients' bedsides 15,16 . Nanotechnology has received widespread interest in bioanalytical chemistry due to its prominent application 17 .A more efficient chemistry reduces reagent consumption and overall costs from an economic perspective 18 .Nanomaterials can enhance the performance of various bioassays, and improvements in micro-and nanofabrication techniques may facilitate the development of miniaturized devices that can be used in the field 19,20 .Biosensors have several advantages, including low sample volume, reduced reagent consumption, minimal invasive sample collection methods, multiple analyte detection, and short analysis times 21 .In addition to these features, they provide real-time decision-making for individualized therapy 22 .Using nanobiosensors, health, and economic sectors benefit from shortened hospital stays, lower treatment costs, and reduced MDR strain infections that cost health systems millions of dollars annually 23,24 .Consequently, biosensor monitoring offers many advantages, and these devices may one day become indispensable equipment, reducing hospital costs for the health system in the future 25 .Nanobiosensors are extremely small devices, with dimensions of one billionth of a meter, capable of detecting and responding to physical stimuli 26 .It is possible to use nanosensors for food analysis by using them for detecting pathogens, toxins, nutrients, environmental characteristics, heavy metals, particulates, and allergens 27 .There have been several mechanisms reported to exploit nanosensor advances for food analysis. Nanomaterials-based techniques are commonly used in combination with existing technologies, and their high level of compatibility may result in significant improvements 28,29 .The current review aimed to focus on developments in sample preparation techniques and significant detection used in nanobiosensors and nanobioassays for food pathogens. 
Biosensors and nanobiosensors Biosensors have proven an effective platform for identifying pathogenic bacteria in previous years 30 .As a result of the advancement in bacterial sensing, microfluidic bioassays have been developed to detect pathogenic microorganisms rapidly 31 .Although these advancements have been made, commercial devices have yet to be demonstrated to work in real-world settings.In the ecological niche, bacteria are in low concentrations, and interfering components are present, sabotaging diagnostic performance.As nanotechnology progressed, researchers developed sensitive and effective detection techniques by studying the unique properties of nanomaterials (like their large surface area-to-volume ratio.).As a result, nanoscale materials make it possible to miniaturize sensing devices and build sensitive and rapid diagnostic systems for detecting pathogens 32 .As a result, it is essential to understand how nanobiosensors work. Principle of nanobiosensors Nanobiosensors were developed by combining traditional biosensors with nanotechnology, which is growing rapidly 33 .Nanobiosensors have a biological recognition element and a transduction unit that detects biological molecules at the nanoscale.Nanobiosensors consist of physicochemical transducers and receptors.Molecule recognition is the basis of biosensors 34 .The biological receptors can detect bacteria only when the receptor and the bacteria have a specific molecular recognition.In molecular recognition, lock and key models are the best examples of interaction between antibody and antigen.Bioreceptors are the parts of the sensor that interact with the target.There is an immovable fixation of bio-receptors on the surface of the transducer so that they can bind the target entity (enzymes, antibodies, deoxyribonucleic acid (DNA), cells, and aptamers) stable under various storage conditions 35 .Various methods are employed to immobilize the biological recognition element, such as adsorption, entrapment, cross-linking, microencapsulation, and covalent bonding.In the preparation of nanobiosensors, immobilization of nano components is a challenge.Biologically originated molecules can replace biologically created receptors, including engineered artificial proteins, imprinted polymers 36 recombinant antibodies, synthetic catalysts, and ligands 37 .The performance of these receptors determines a biosensor's selectivity and sensitivity 38 .Transducers (electrodes, semiconductor pH electrodes, thermistors, photon counters, and piezoelectric devices.)detect molecular recognition effects (changes in heat, mass, light, pH, or electroactivity).Measurable signals are converted into energy from the receptor, acting as an interface.Transducers modified with nanoparticles are the highlights of nanobiosensors, allowing rapid detection in a short period.Compared to simple biosensors, nanobiosensors can detect the quantity and presence of analytes 34 . 
Furthermore, a detector has an electronic component that amplifies or analyzes the electrical signals produced by the transducer and a microprocessor that measures it.Various amplifiers and filters are used to convert analog signals to digital signals.The data is displayed on the device as concentration units or stored as an image, numeric, graphic, or tabular.Detectors based on smartphones have been introduced for detecting analytes in nanobiosensors on-chip or at the point of care 39 .As a result of the characteristics of nanobiosensors, their performance can be enhanced indirectly.They are selectivity, reproducibility, sensitivity, stability, and linearity.Selectivity refers to the ability of the sensor to identify a specific analyte among several others 40 .The detection limits of nanobiosensors are determined by their sensitivity, which correlates with their robustness 41 .When repeated accurately and precisely, the reproducibility of a nanobiosensor result is correlated with its reliability.Working ranges or linear dynamic ranges where concentration is directly proportional to signals are indicators of linearity or accuracy.As a result of sensor stability, analytes can be quantified and detected under different conditions of measurement disturbances without compromising precision and accuracy. Nanobiosensors for Pathogenic Agents Detection The first biosensors were reported in the 1960s, and today they are predominantly utilized for biological detection and environmental monitoring purposes 42 .In biosensors, biological recognition is combined with digital signals, which are translated into information through software 43 .Biosensors can detect substances present in living or non-living systems, the analytes, through their properties, such as electricity, magnetic, electrochemistry, chemicals, optical, or vibration 44 .The device is usually composed of a biorecognition sensor and a transducer.An interaction between the bioreceptor and analyte will generate an electronic signal that can be measured by the transducer.It is achieved by immobilizing the biorecognition elements through covalent interaction, encapsulation, or adsorption 45 .These biorecognition units, or receptors, found within cells (such as glycopeptides, lipoproteins, lipids, glycoproteins, carbohydrates, and receptor proteins), serve various roles.They play a part in infection processes, adhere to cell surfaces and non-cellular substrates, evade the immune system, and facilitate nutrient intake and transport 46 .In addition to their extracellular exposure, receptors have one significant feature in common.They are used as biorecognition elements during the assembly of biosensors.Nanomaterials are used in the construction of biosensors to increase their detection limits.Large surfaces, high electronic conductivity, and plasmonic properties, such as the ability to store light in confined areas, contribute to this 47 .Moreover, nanomaterials as biosensors are capable of transmitting optical or mechanical signals.In the context of biosensors, a nanobiosensor is a material with a size of less than 100 nm 48 .These operate using the fundamentals of optics, spectroscopy, and mechanics.Small detection surface, nanobiosensors require a smaller amount of analyte to detect a measurable result 49 .It is generally more efficient for small spaces to allow higher-density arrays, which can detect more analytes in a single test by maximizing their density.Moreover, the intricacy and expenses associated with pathogen detection tests can 
be diminished through the use of nanobiosensors, which eliminate certain conventional sample processing steps 49 .Nanobiosensors generally rely on interactions between enzymes, nucleic acids, cells, substrates, bacteria, antibody, and antigen interactions, using biomimetic materials replicating biological processes. Nanobiosensors mechanism Nanobiosensors (NanoBioSS) are analytical devices with a biological sensor and a physicochemical converter 47 .As an essential function of NanoBioSS, it generates a digital electrical signal directly proportional to the sum of one or several molecules being analyzed 50 .These NanoBioSS are assisting some key analytic advances that are being aided as well as supported by advances in nanotech, adding to the evidence that they are both expanding applications and facilitating machinery.This BioSS/ NanoBioSS can precisely and rapidly detect nanomaterials (NMs), making it useful in various industrial, ecological, agricultural clinical, biomedical /healthcare, and other scientific applications 51 .The NanoBioSS design/fabrication process is as diverse as its applications, with each NanoBioSS category containing its advantages and limitations as a result of limitations based on the applications and the parameters essential to their optimum performance.Therefore, BioSS/Nano-BioSS should be selected based on sensitivity, specificity, output mode, dynamic range, usage simplicity, activation time, and engineering simplicity 52 .The NanoBioSS is used in various human endeavors, including diagnosing and managing different diseases and quality environmental and food effluent 53,54 .A significant difference exists between the surface dimension ratios of most commonly used nanomaterials in NanoBioSS, such as quantum dots (QD), noble metal nanoparticles (NPs), and carbon-based nanoparticles as opposed to their bulk arrangement, leading to different and better properties (electrical, chemical, and optical) 55 .As a result of these NMs' enhanced properties, NanoBioSS can detect nanoparticles more rapidly and reproducibly.By incorporating NMs in these bioanalytical devices, NMs enhance the performance and quality of BioSS/NanoBioSS (ETC, magnetic, mechanical, and optical) 56 .Thus, BioSS are more compact and sensitive 56 .There have been several papers describing the use of nanotech BioSS/NanoBioSS in clinical, biomedical, and healthcare applications (for example, identifying pathogen microbes and viruses, detection of cancerous cells, and breath analysis mechanism), environmental science (detection of water, soil, and air pollution), and agricultural applications (climate-smart organic agriculture and identification of animals, plants pests and diseases) 52 .In addition, modern materials science, particularly nanotech, has been suggested as a valuable tool used in COVID-19-related research because it has played a dynamic role in minimizing COVID-19 complications 57 . 
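As a concrete illustration of the performance characteristics discussed above (sensitivity, linearity, detection limit), here is a minimal sketch of how a limit of detection can be estimated from a linear calibration curve using the common 3σ/slope convention; all numbers below are hypothetical.

```python
import numpy as np

# Hypothetical calibration of a nanobiosensor: output signal (a.u.)
# versus analyte concentration within the linear dynamic range.
conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])    # e.g. ng/mL
signal = np.array([0.05, 1.1, 2.0, 4.1, 8.0])     # sensor readout

slope, intercept = np.polyfit(conc, signal, 1)    # sensitivity = slope

# 3-sigma convention: LOD = 3 * sd(blank) / slope
sigma_blank = 0.03                                # assumed blank noise
lod = 3.0 * sigma_blank / slope
print(f"sensitivity = {slope:.3f} a.u./(ng/mL), LOD ~ {lod:.2f} ng/mL")
```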
Nanobiosensors types The categorization of nanobiosensors encompasses a broad spectrum, primarily contingent on the type of nanomaterials integrated into the biosensing process.In addition, the classification here is more complex than with biosensors.Biosensors can be classified based on two criteria, namely, the type of material being analyzed and the mechanism used for signal transduction.For instance, if researchers screen any enzyme or antigen through the biosensors, they can find electrochemical, calorimetric, optical, and acoustic sensors when researchers classify biosensors based on their sensing mechanisms 58 .Each class is associated with various sensor categories overlapping according to the transduction mechanism.Potentiometric and Amperometric biosensors are electrochemical sensors, and optical biosensors are based on surface plasmon resonances or optical fibers 59 .As we observe in classifying nanobiosensors, the criterion for classification is the type of nanomaterials used to improve their sensing abilities.An example of nanoparticle-based biosensors is metallic nanoparticles that enhance the detection of biochemical signals.A nanobiosensor in which carbon nanotubes are used as enhancers of the reaction's efficiency and specificity is called a nanotube sensor 60 .In contrast, a nanowire biosensor uses nanowires as carriers and charge carriers 61 .Below are some of the significant nanobiosensors developed to date, along with those with no practical application.Quantum dots are employed as contrast agents in quantum dots-based sensors for improved optical responses . Acoustic wave biosensors Acoustic wave biosensors can increase the overall precision of biological detection limits by amplifying the sensing responses.With sensors like these, stimulusbased effects can occur in many ways.These sensors are designed to work with antibodies-modified sol particles, which can be conjugated with the electrode surfaces so that antibody molecules are immobilized over the electrode surface in a manner that binds themselves to the electrode surface, which has been complexed with analyte particles.By binding large amounts of particles to the antibody, the quartz platform is subjected to a change in vibrational frequency that serves to detect changes.It is typically preferred for antibody particles to have a diameter between 5 and 100 nm.The preferred particles are titanium dioxide, cadmium sulfide, platinum, and gold 62,63 . Magnetic biosensors Specially designed magnetic nanoparticles are used in magnetic biosensors . 
Materials based on ferrite are used either separately or in combination, and applications in biomedical science make these sensors very useful. Several analytical applications can be performed using magnetic materials. Magnetic compounds used in screening pair iron with other transition metals and therefore exhibit different properties 64. Incorporating magnetic nanoparticles into conventional detection devices has enhanced their sensitivity and performance. A few transition metal alloys containing iron and other materials have unpaired electrons in their d-orbitals and have been widely studied for their magnetic properties 65. Magnetic bioassay techniques commonly use magnetometers to isolate magnetically labeled targets, and new kinds of magnetic materials have emerged for this purpose 66. The magnetic properties of magnetic nanoparticles enable them to rapidly detect biological targets through superconducting quantum interference devices. These devices can screen mixtures for specific antigens by binding antibodies to magnetic nanoparticles 67. Specifically, nanoscale particles exhibit superparamagnetic effects due to their magnetic properties. Electrochemical biosensors These sensors facilitate or analyze biochemical reactions using improved electrical methods, and nanoparticles are primarily used in these devices. Metallic nanoparticles make it possible to perform chemical reactions between biomolecules quickly and efficiently, contributing significantly to the immobilization of reaction products. By making these reactions very specific, unwanted side effects are eliminated 68. The overall biosensor is significantly enhanced by colloidal gold-based nanoparticles, which improve the immobilization of DNA on gold electrodes and thereby substantially lower the detection limit 69. It has been proposed to develop biosensors that identify glucose, xanthine, and hydrogen peroxide with enzyme-conjugated gold nanoparticles 70. A recent study by Xu et al. examined the electrochemistry of enzyme systems containing horseradish peroxidase immobilized on gold electrodes containing carbon nanoparticles 70. Based on the results of this study, horseradish peroxidase showed a faster amperometric response and improved electrocatalytic reduction ability, resulting in better sensitivity and smaller detection limits than in biosensors without nanoparticles.
Nanotube-based sensors Carbon nanotubes are a popular nanomaterial in material science and optoelectronics.Because of their extraordinary properties, since their discovery in the 1990s, they have attracted worldwide attention.Among the most important properties are their electronic conductivity, flexible geometries, and dynamic physicomechanical properties, such as high aspect ratios, excellent functionalization capabilities, and high mechanical folding and strength properties.Due to these characteristics, single-wall and multi-wall nanotubes have been used to develop better biosensors 71 .In recent years, the design of glucose biosensors that utilize nanotubes as immobilizing surfaces for the enzyme glucose oxidase has become one of the most popular sensing advances.This enzyme is used to calculate glucose concentrations from several body fluids.Conventionally, enzyme-based sensors predicted glucose concentrations in significant body tissues, but nanotube assemblies have been successfully utilized to determine glucose concentrations even in scarce body fluids like tears and saliva 72 .Among such arrangements, single-walled nanotubes have been used to detect glucose enzymatically, and this innovation has improved enzyme activity significantly 73 .Analyzed the biosensor and found its enhanced performance was mainly due to its high enzyme loading and improved electrical conductivity.The better and smoother electron transfer characteristics of carbon nanotubes have enabled carbon nanotubes to enhance structural flexibility and electrical detection of sensing phenomena.A investigation delved into notable enhancements achieved in catalytic biosensors.These advancements elevated oxidoreductase activity, enabling glucose oxidase and flavin adenine dinucleotide precursors to bind to substrates more efficiently and with enhanced control 60 . Nanowire-Based sensors Nanowires are cylindrical arrangements and measure a few micrometers to centimeters in length and diameter.A nanowire is a one-dimensional nanostructure with excellent electron transport properties.A significant difference between bulk materials and nanowires is the motion of charge carriers.Nanowire sensors are very few, but literature has reported a few exciting examples of nanowires that have improved biological detection and performance .Using silicon nanowires doped with boron, Cui and Lieber reported the performance of biosensors for detecting biological and chemical species using silicon nanowires 74 .The utilization of semiconductor nanowires has been investigated extensively, and they have also been applied to coupling a variety of biomolecules into specific substrates for identification 75 .Streptavidin molecules from a mixture have been detected and isolated with silicon nanowires coated with biotin .In addition to their small size and ability to detect pathogens, these nanowires can also be used to analyze a wide range of biological and chemical data in real time, thus vastly improving the accuracy of current in vivo diagnostic procedures .The materials used for these sensing applications are exact in their dimensions, so they can be used within living cells and in vivo applications .Researchers have used nanosized fibers coated with antibodies in one study to detect toxicants within single cells 76 . Cullum et al. 
used gold electrodes coated with ZnO nanowires to detect hydrazine using amperometric responses 77 .Compared to conventional sensor systems, they propose high sensitivity, low detection limit, and much shorter response time than those reported at the time of the conventional sensor systems.Two significant advantages of nanowires over nanotubes are their versatility and their performance.By controlling their operational parameters during synthesis, they provide a range of design modifications.Additionally, their surfaces are compatible with a more excellent range of materials, which allows them to be further functionalized.Even though nanowires can be synthesized very quickly, their applications for sensing devices face several challenges.Many related studies report that adding nanowires to sensing systems is difficult, so overall electrical conductivity improvements cannot be realized 78 .According to the Lieber group, semiconductor nanowires were synthesized using combinations of previously known methods in a very advanced study .To detect serum-bone cancer antigens at low levels, a sophisticated onedimensional structure was devised, integrating a minimum of 200 distinct electrical nanowire assemblies 74 (Figure 1). Advantages of nanobiosensors Because of their nanoscale dimensions, nanobiosensors show remarkable sensitivity.Due to this sensitivity, bacterial pathogens can be detected at deficient concentrations, making them valuable diagnostic tools 79 .High-specificity Bacterial pathogens can be identified by nano biosensors that recognize specific molecular markers or receptors.Ensuring a high specificity minimizes falsepositive results, and accurate identification is achieved 80 .Rapid detection As a result of their rapid detection capabilities, nanobiosensors often produce results within minutes of their use.This swift response is critical for timely intervention in bacterial infections or outbreaks 81 .Multiple biomarkers or bacteria can be detected simultaneously by nanobiosensors.In addition, modern materials science, particularly nanotech, has been suggested as a valuable tool used in COVID-19-related research because it has played a dynamic role in minimizing COVID-19 complications 82 . Limitations of nanobiosensors Fabricating nanobiosensors can be time-consuming and technically challenging.Specialized expertise and equipment are required to manipulate nanoscale materials and integrate biological recognition elements 83 .In some cases, cleanroom facilities are also necessary for producing nanobiosensors, which can be expensive.The high cost can prevent widespread adoption, especially in resourceconstrained healthcare settings 84 .By recognizing specific bacteria, the nanobiosensor's recognition elements may have to be customized to accommodate their unique molecular signatures or surface markers.Pathogen optimization is labor-intensive and requires thorough knowledge of the pathogen being targeted 85 .There are some limitations to the shelf life of nanobiosensors, as well as their vulnerability to environmental factors, such as temperature or humidity.It is challenging to maintain their longevity and stability 83 .Nanomaterials and Biorecognition elements are used in diagnostic devices, raising ethical and regulatory concerns regarding safety, data privacy, and environmental impact (Figure 2) 85 . 
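Before turning to antibiotic quantification, a short sketch of how an enzymatic sensor reading is converted into a concentration may be useful: amperometric enzyme electrodes, such as the glucose-oxidase devices discussed earlier, are often modeled with a Michaelis-Menten response, which can be inverted below saturation. The constants here are hypothetical calibration values, not taken from any cited study.

```python
# Michaelis-Menten response of an enzymatic amperometric sensor:
#   I = I_MAX * C / (K_M + C), inverted to recover concentration C.
I_MAX = 50.0   # saturation current, microamps (assumed)
K_M   = 12.0   # apparent Michaelis constant, mM (assumed)

def current(conc_mM: float) -> float:
    return I_MAX * conc_mM / (K_M + conc_mM)

def concentration(i_uA: float) -> float:
    """Invert the response; valid only below saturation (i < I_MAX)."""
    return K_M * i_uA / (I_MAX - i_uA)

print(concentration(current(5.0)))  # round-trips to 5.0 mM
```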
Antibiotic quantification with nanobiosensors Recently, biosensors have become an invaluable tool various industries, such as agriculture and food, as well as clinical diagnostics 86 .These devices are also easy to use, portable, automated, and can be miniaturized, as well as being durable and long-lasting.Sample analysis is inexpensive, requires no complicated pretreatment, and takes a short time 87,88 . Compound quantification with biosensors is made possible by these features .According to the International Union of Pure and Applied Chemistry (IUPAC), Biosensors detect chemicals via specific biochemical reactions mediated by isolated organelles, enzymes, or whole cells, immune systems, and tissues, usually through electrical, optical signals, or thermal 89 .An analytical device called a biosensor incorporates a biological recognition element closely coupled to or integrated with a transducer that allows signal processing based on the interaction between the ligand and the recognition element 90 .Thus, biosensors are classified based on their biological components and transduction systems 58 .Biocatalytic and affinity components are classified as biological components.There are various biocatalytic components, including whole cells, enzymes or multi-enzyme systems, organelles in cells, or tissues in plants or animals.Signals are obtained by measuring the products generated by catalyzed chemical reactions between enzymes and substrates 91 .The affinity bioreceptor generates an analyte-receptor complex through the interaction of the recognition element and analyte, which can be detected by labeling (fluorescent or enzymatic) or observing the transducer's physical-chemical properties .The most common biological components are an antibody, microorganism, aptamer, nucleic acid, and receptor protein 92 .As for transduction systems, it is the biosensor mechanism that converts changes in chemical or physical properties caused by analyte-ligand interactions into a signal.Transducers come in several types, including electrochemicals (amperometry, potentiometry, and impedimetry), opticals (fiber optics, biosensors using total internal reflection fluorescence (SERS), piezoelectrics (quasi crystal microbalances), surfaceenhanced Raman scattering, and nanomechanicals (nanolevers) 90 .An appropriate device can be selected based on the sample type and analyte-ligand interaction . 
Future Many fields have benefited from nanotechnology's revolutionary potential.A novel analytical tool can be provided by nanomaterials in the detection of food pathogens, and their use can enhance existing methods.While nanotechnology has gained widespread popularity, many pathogen nanosensors or assays are still in their early stages of development.Despite this, nanotechnology has contributed to varying degrees of improvement .Some technologies demonstrate dramatic improvements, whereas others show only modest improvements, particularly in whole-cell detection due to fewer access points and bulkier geometry and reaction centers.As detection becomes more sensitive, matrix interference increases proportionally, compromising certain bacteria's specificity and sensitivity.This challenge further highlights the effective preparation of samples.In addition to the need for systematic studies focused on sample preparation techniques, few studies have examined how samples perform in natural food systems or contexts of competing bacteria.Nanotechnology is multidisciplinary, contributing to this deficiency.Researchers from engineering, chemistry, and material science have contributed the majority of publications on pathogen nanosensors and assays because they need more resources to evaluate and validate large-scale downstream methods.Despite this, advances in rapid detection will continue to be driven by nanotechnology as these issues are resolved.In the future, detection methods will boast high levels of sensitivity and specificity, high sample throughput, minimal instrumentation, robustness, and quantitative capabilities.The flexible nature of nanomaterials and nanofabrication could offer excellent solutions to a wide range of problems associated with the effective use of nanotechnology for foodborne pathogen detection.Two green methods for Ag-GO nanocomposites were compared.Innovative approach Ag-GO-П exhibited superior anti-bacterial and cytotoxic behavior, controlling nucleation 93 .Another study investigates the antioxidant and anticancer properties of black peel pomegranate extract and explores its potential as a dual reducing and stabilizing agent in biosynthesizing silver nanoparticles, expecting enhanced biological activity 94,95 . Conclusions The sensitivity and versatility of nanobiosensors make them useful in a wide range of fields, including clinical, environmental detection, and food safety.Two key factors determined nanobiosensors effectiveness.Firstly, advanced nanomaterials like carbon nanotubes, gold nanoparticles, and quantum dots offer functionalization potential.The second factor is unique properties and optimized biological recognition elements like aptamers and antibodies.Nanobiosensors are expected to become more sensitive, facilitate multiplexed detection, provide point-of-care diagnostics, and provide real-time monitoring in the future. Figure 1 . Figure 1.The diversity of nanoparticle-based sensors Figure 2 . Figure 2. The advantages and limitations of using nanobiosensors for bacterial detection
Fast Charged Particle Detector with High Dynamic Range at the Horizon-10T Cosmic Rays Detector System. Introduction Horizon-10T [1], or H10T, is the cosmic ray detector system constructed at the Tien Shan High-altitude Science Station (TSHSS), Almaty, Kazakhstan. It is an upgraded version of the Horizon-T (HT) detector system [2,3,4], aimed at studying Extensive Air Showers (EAS) arriving at a wide range of zenith angles (0° to 85°) with the energy of the primary particle of the detected events above 10^16 eV. The system is located at ~3340 meters above sea level and consists of 10 charged particle detection points, separated by up to 1.3 km. An aerial view of the detector system is presented in Figure 1. Each H10T detection point is equipped with a 1 m² polystyrene-based scintillator detector. Points 1-5 have an additional detector based on optically thick glass of 2 cm thickness. All detectors are equipped with a Hamamatsu [5] R7723 photomultiplier tube (PMT). These PMTs have a 1.7 ns pulse rise time and 1.1 ns time spread, and operate at a maximum 2 kV bias. Data is recorded using a 500 MHz CAEN [6] DT5730 digitizer. EAS Temporal Structure and Detector Resolution Requirement As an EAS develops in the atmosphere, it shapes into a disk. However, that disk has a temporal profile (passage time, or width) as well. The schematic of this disk temporal profile at the detection level is represented in Figure 2. It is easy to note that the disk is very thin near the EAS axis and becomes extended with distance. For the purpose of this study, we accept as the standard type [7] an EAS event as defined by the simulation package CORSIKA [8], in order to obtain the dependencies of the EAS disk parameters. From the simulation details, we show that additional information about the EAS can be obtained by analyzing its temporal structure. For this analysis, high time resolution is required. The CORSIKA simulation provides the arrival time for each particle in the disk. The EAS arrival time at a detector is defined as the time at which 50% of the particles have arrived at that detector. The arrival time for a vertical EAS is proportional to the square of the distance from the axis. This arrival time data is normally used to obtain the EAS arrival direction from the data of 3 or more detectors; time resolution at the ~ns level is required for this. The disk width is defined as the time difference between the arrival of 10% and 90% of the particles at each detector; it depends linearly on the distance from the EAS axis and can assist in determining the EAS axis location. Simulated results for disk arrival time and disk width as a function of distance from the EAS axis are shown in Figure 3. The time resolution required to accurately measure EAS temporal characteristics is on the order of a few ns. A large detection area and detection range are also required. Figure 4 (left) is an illustration frame from the full simulation of the glass-based detector. The green rectangle is the 2 cm thick glass (chosen due to availability), red lines are paths of individual photons from a passing particle, and the black circle is the PMT. Blue lines are the support structures and outer casing, as seen in Figure 4 (right). Glass-based Detector with High Time Resolution The purpose of the simulation was to optimize the PMT placement, namely the distance from the glass and the position on the same side as arriving particles (top) or on the opposite side (bottom). Each ultra-relativistic charged particle produces Cherenkov light in the glass that is directed along its path, from top to bottom. The light detector is thus placed at the bottom to intercept the incoming light.
However, both the simulation and prototyping showed that placing the PMT at the top and painting the bottom face of the glass with TiO2 drastically increases detection uniformity. The uniformity is defined as the similarity in detector response to a Minimally Ionizing Particle (MIP) arriving anywhere across the glass top face. Here it is presented as the average ratio of the MIP pulse areas from the detector corners to the MIP pulse areas from the center (directly under the PMT face). Figure 5 (left) is the simulation result for the dependence of detection uniformity on the PMT distance from the glass face. This distance is limited by the plot in Figure 5 (right), which shows that the number of detected photons per MIP decreases with distance. A single photoelectron (PE) spectrum was taken with an LED for the R7723 PMT at different bias voltages (Figure 6, left). The detector was then calibrated using MIP signals, so the average number of photons detected by the PMT per MIP detection is known. The next step was the calibration of the PMT response against a PIN diode reference exposed at the same time to the same light source (details are provided in [2]). The results are shown in Figure 6 (right). The numbered points are: 1 - deviation from linearity is less than 2%; 2 - deviation is less than 10%; 3 - ADC saturation for the PMT signal is reached. When operated at ~1500 V bias, the response of the glass detector is linear within 10% up to 300 MIP signals. With the chosen parameters, the glass detectors used for the upgrade have a ~2 ns signal rise time and linear detection of up to 3000 MIP [2]. These characteristics are among the fastest for PMT-based detectors and are suitable for H10T purposes. Conclusion A fast glass-based charged particle detector with a large detection area has been simulated, tested, and constructed. The best time resolution achieved for the PMT + glass with 0.25 m² detection area and 2 cm thick glass is a ~2 ns pulse rise time. The linearity of response for this detector extends up to 3000 MIP, which is beneficial for EAS detection.
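As a complement to the arrival-time discussion above, here is a minimal sketch of how an EAS arrival direction can be reconstructed from arrival times at several detection points using a plane-front approximation. The detector coordinates, timing jitter, and true angles below are hypothetical, chosen only to illustrate why ~ns resolution matters over ~km baselines.

```python
import numpy as np

C = 0.2998  # speed of light, m/ns

def fit_direction(xy, t_ns):
    """Least-squares plane-front fit: c*t_i = c*t0 - u*x_i - v*y_i,
    where (u, v) = (sin(theta)cos(phi), sin(theta)sin(phi))."""
    A = np.column_stack([np.ones(len(t_ns)), -xy[:, 0], -xy[:, 1]])
    (ct0, u, v), *_ = np.linalg.lstsq(A, C * np.asarray(t_ns), rcond=None)
    theta = np.degrees(np.arcsin(min(1.0, np.hypot(u, v))))
    phi = np.degrees(np.arctan2(v, u))
    return theta, phi

# Hypothetical layout of four detection points (meters) and a shower
# arriving at zenith 30 deg, azimuth 45 deg, with 2 ns Gaussian jitter:
rng = np.random.default_rng(0)
xy = np.array([[0, 0], [800, 0], [0, 800], [800, 800]], dtype=float)
u, v = np.sin(np.radians(30)) * np.array([np.cos(np.radians(45)),
                                          np.sin(np.radians(45))])
t = -(xy @ [u, v]) / C + rng.normal(0.0, 2.0, len(xy))
print(fit_direction(xy, t))  # ~ (30, 45) within the timing resolution
```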
Third-Order Gas-Liquid Phase Transition and the Nature of Andrews Critical Point The main objective of this article is to study the nature of the Andrews critical point in the gas-liquid transition in a physical-vapor transport (PVT) system. A dynamical model, consistent with the van der Waals equation near the Andrews critical point, is derived. With this model, we deduce two physical parameters, which interact exactly at the Andrews critical point, and which dictate the dynamic transition behavior near the Andrews critical point. In particular, it is shown that 1) the Andrews critical point is a switching point where the phase transition changes from the first order to the third order, 2) the gas-liquid co-existence curve can be extended beyond the Andrews critical point, and 3) the liquid-gas phase transition going beyond Andrews point is of the third order. This clearly explains why it is hard to observe the gas-liquid phase transition beyond the Andrews critical point. Furthermore, the analysis leads naturally the introduction of a general asymmetry principle of fluctuations and the preferred transition mechanism for a thermodynamic system. Introduction Phase transition is one of the central problems in nonlinear sciences. Many systems have different phases, and the most commonly encountered phases are gas, liquid and solid phases. A natural system which possesses these three phases is the physical-vapor transport (PVT) system. As we know, a P V T system is a system composed of one type of molecules, and the interaction between molecules is governed by the van der Waals law. The molecules generally have a repulsive core and a short-range attraction region outside the core. Such systems have a number of phases: gas, liquid and solid, and a solid can appear in a few phases. The most typical example of a P V T system is water. A P T -phase diagram of a typical P V T system is schematically illustrated by Figure 1.1, where point A is the triple point at which the gas, liquid, and solid phases can coexist. Point C is the Andrews critical point at which the gas-liquid coexistence curve terminates [3,4]. Classical view on the termination of the gasliquid coexistence curve at the critical point amounts to saying that the system can go continuously from a gaseous state to a liquid state without ever meeting an observable phase transition, if we choose the right path. It is, however, still an open question why the Andrews critical point exists and what is the order of transition going beyond this critical point. In [1], a mathematical theory is derived to address this problem. In this article, we explore the physical implications of the mathematical theory derived in [1], give a theory on the nature of the Andrews critical point, and introduce the asymmetry principle of fluctuations and the preferred transition mechanism. First, the modeling is based on 1) the Landau mean field theory, 2) a unified dynamic approach for equilibrium phase transitions, 3) the classical phase diagram in Figure 1.1, and c) the van der Waals equation. It is worth mentioning two important aspects of the model we derived. One is that the new model can be used to study liquid-solid and gas-solid transitions as well, by choosing different parameters. Second is the consistency of the model with the van der Waals equation. Namely, near the Andrews critical point C, the steady state equation of the homogeneous model (2.12) is exactly the van der Waals equation. 
This consistency gives a good validation of the mean field model. In addition, the dynamic approach leading to the model provides much richer information. For example, the model (2.7) can be used to study the heterogeneity of the system. Second, with the dynamic model at our disposal, we introduce two new physical parameters λ = λ(T, p) and a₂ = a₂(T, p), where the temperature T and the pressure p are control parameters. These two physical parameters determine the phase transition behavior near the Andrews point and provide the key ingredient to characterize its nature. It is remarkable that these two parameters reproduce the location of the Andrews critical point C, the same as derived by van der Waals in his classical work, although the method we use is the dynamic approach based on the Landau mean field theory, different from the one used by van der Waals. Coincidentally, these two parameters are the second and third-order derivatives of the Gibbs energy at the equilibrium state ρ₀; see (3.3). Third, the two parameters λ and a₂ correspond to two curves in the pT-phase plane, which intersect exactly at the Andrews critical point C. Then with the dynamic transition theory developed recently by the authors, we deduce a theory on the Andrews critical point C: 1) the critical point is a switching point where the phase transition changes from the first order with latent heat to the third order, and 2) the gas-liquid phase transition beyond the Andrews critical point is of the third order. This explains why it is hard to observe the liquid-gas phase transition beyond the Andrews point. Fourth, physical intuition and the theory lead us to introduce the asymmetry principle of fluctuations and the preferred transition mechanism.

A Dynamic Model for Gas-Liquid Transition

The classical and simplest equation of state which can exhibit many of the essential features of the gas-liquid phase transition is the van der Waals equation:
$$\Big(p + \frac{a}{v^2}\Big)(v - b) = RT, \qquad (2.1)$$
where v is the molar volume, p is the pressure, T is the temperature, R is the universal gas constant, b is the revised constant of inherent volume, and a is the revised constant of attractive force between molecules. If we adopt the molar density ρ = 1/v to replace v in (2.1), then the van der Waals equation becomes
$$p = \frac{RT\rho}{1 - b\rho} - a\rho^2. \qquad (2.2)$$
Now, we shall apply thermodynamic potentials to investigate the gas-liquid phase transitions in PVT systems, and we shall see later that the van der Waals equation can be derived as an Euler-Lagrange equation for the minimizers of the Gibbs free energy for PVT systems at gaseous states. Consider an isothermal-isopiestic process. The thermodynamic potential is taken to be the Gibbs free energy. In this case, the order parameters are the molar density ρ and the entropy density S, and the control parameters are the pressure p and temperature T. The general form of the Gibbs free energy for PVT systems is given as (2.3), where g and α are differentiable with respect to ρ and S, Ω ⊂ R³ is the container, and αp is the mechanical coupling term in the Gibbs free energy, which can be expressed by (2.4), where b = b(T, p) depends continuously on T and p. In fact, this mechanical coupling term should be p. In view of the van der Waals equation (2.2) and the mathematical analysis based on the new dynamical transition theory, phenomenologically we need to adjust the term by adding a coefficient α, leading to (2.4) as the first two terms in the Taylor expansion.
Although the van der Waals equation works for gaseous states only, by choosing the dependence of the coefficient b on the temperature and the pressure, the energy applies to the liquid and solid states as well. This term is very subtle from the physical point of view in deriving a feasible free energy. Based on both the physical and mathematical considerations, we take the Taylor expansion of g(ρ, S, T, p) in ρ and S as in (2.5), whose coefficients β1 and β2 depend continuously on T and p. In a PVT system, the order parameter is u = (ρ, S), where ρi and Si (i = 0, 1) represent the density and entropy, and ρ0, S0 are reference points near the coexistence curve of the gas and liquid states. Hence the conjugate variables of ρ and S are the pressure p and the temperature T. Thus, by the le Châtelier principle, we derive from (2.3)-(2.5) the dynamic model (2.7) for a PVT system. A physically meaningful boundary condition for the system is the Neumann boundary condition.

An important special case for PVT systems is that the pressure and temperature functions are homogeneous in Ω. Thus we can assume that ρ and S are independent of x ∈ Ω, and the free energy (2.3) with (2.4) and (2.5) can be expressed as in (2.9). From (2.9) we obtain the dynamical equations (2.10). Because β1 > 0 for all T and p, we can eliminate the second equation of (2.10); then (2.10) is equivalent to the single equation (2.12). It is clear that if α1 = 0, 2β1⁻¹β2 = R, α2 = a, and α3 − 2β2²β1⁻¹ = ab, then the steady state equation of (2.12) reduces to the van der Waals equation. We remark that (2.12) can be considered as the dynamic version of the van der Waals equation, although we used the Landau mean field theory together with the le Châtelier principle. The approach provides much richer information. For example, the model (2.7) can be used to study the heterogeneity of the system. In addition, the model here can be used to study liquid-solid and gas-solid transitions as well, by choosing different parameters.

Two New Physical Parameters and the Andrews Critical Point

In this section we use (2.12) to derive two new physical parameters, which dictate the dynamic transition behavior near the Andrews critical point. Let ρ0 be a steady state solution of (2.12) near the Andrews point C = (Tc, pc). We take the transformation ρ → ρ0 + ρ′; then equation (2.12) becomes (dropping the prime) an equation for the deviation ρ with coefficients λ and a2, where α1 is close to zero. Here we emphasize that ρ0 and (λ, a2) are all functions of the control parameters (T, p). These are two important physical parameters, which are used to fully characterize the dynamic behavior of the gas-liquid transition near the Andrews point. In fact, from the derivation of the model, we obtain immediately the physical meaning of these two parameters: up to positive factors, λ and a2 are the second- and third-order derivatives of the Gibbs energy with respect to ρ at the equilibrium state ρ0 = ρ0(T, p); see (3.3). In the pT-plane, near the Andrews point C = (Tc, pc), the critical parameter equation λ(T, p) = 0 defines, for some δ > 0, a continuous function T = φ(p) such that λ changes sign as T crosses φ(p). Equivalently, this is the principle of exchange of stabilities, which, as we have shown in [1], is the necessary and sufficient condition for the gas-liquid phase transition. One important component of our theory is that the Andrews critical point is determined by the system of equations λ(T, p) = 0 and a2(T, p) = 0. Then, by a direct computation, it is easy to see that the critical point C is given by ρc = 1/(3b), Tc = 8a/(27Rb), pc = a/(27b²). This is in agreement with the classical work by van der Waals. Here we obtain the Andrews point using a dynamic approach.
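Since λ and a2 are, up to positive factors, the second- and third-order derivatives of the Gibbs energy at ρ0, the system λ = 0, a2 = 0 is equivalent to the classical condition that the critical isotherm has an inflection point with horizontal tangent. A short verification from the van der Waals equation alone, included here only as a consistency check of the values quoted above:

```latex
% Critical point of (p + a/v^2)(v - b) = RT: horizontal inflection of the isotherm.
\frac{\partial p}{\partial v}\Big|_T = -\frac{RT}{(v-b)^2} + \frac{2a}{v^3} = 0, \qquad
\frac{\partial^2 p}{\partial v^2}\Big|_T = \frac{2RT}{(v-b)^3} - \frac{6a}{v^4} = 0.
% Dividing the two conditions gives (v-b)/2 = v/3, hence v_c = 3b; substituting back:
v_c = 3b, \qquad T_c = \frac{8a}{27Rb}, \qquad p_c = \frac{a}{27b^2},
\qquad \rho_c = \frac{1}{v_c} = \frac{1}{3b}.
```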
Theory of the Andrews Critical Point

We now explain the gas-liquid transition near the Andrews critical point C. First, we have shown in (3.6) that at the equilibrium point ρ0, the two curves given by λ(p, T) = 0 and a2(p, T) = 0 intersect exactly at the critical point C, as shown in Figure 4.1, and the curve segment AB of λ = 0 is divided into two parts AC and CB by the point C such that a2(T, p) > 0 for (T, p) ∈ AC and a2(T, p) < 0 for (T, p) ∈ CB. Here the curve AC is the classical gas-liquid coexistence curve.

Second, on the curve AC, excluding the critical point C, a2(T, p) > 0. The phase transition of the system is of mixed type if we take a path crossing AC; see Figure 4.2: (1) For T > T1, the gaseous state ρ0, corresponding to ρ = 0 in the figure, is stable. This is the only stable physical state in this temperature range, and the system is in the gaseous state. (2) For T0 < T < T1, there are two metastable states given by the gaseous phase ρ0 and the liquid phase ρ0 + ρ+. (3) For T < T0, there are three states: the unstable basic gaseous state ρ0, and the two metastable states ρ0 + ρ− and ρ0 + ρ+. One important component of our theory is that the only physical phase here is the liquid phase represented by the metastable state ρ0 + ρ+. Although, mathematically speaking, the gaseous state ρ0 + ρ− is also metastable, it does not appear in nature. The only possible explanation for this exclusion is the asymmetry principle of fluctuations, to be further explored in the next section. (4) Hence we have shown that as we lower the temperature, the system undergoes a first-order transition from a gaseous state to a liquid state with an abrupt change in density. In fact, there is an energy gap between the gaseous and liquid states; see [1]. This energy gap |∆E| stands for a latent heat, and ∆E < 0 shows that the transition from a gaseous state to a liquid state is an isothermal exothermal process, and that from a liquid state to a gaseous state is an isothermal endothermal process.

Third, at the critical point C, we have a2 = 0. Then the dynamic transition is as shown in Figure 4.3; see [1] for the detailed mathematical analysis leading to this phase diagram: (1) As in the previous case, for T > T0, the only physical state is given by the gaseous phase ρ0, corresponding to the zero deviation shown in the figure. (2) As the temperature T is lowered crossing T0, the gaseous state loses its stability, leading to two metastable states: one is the liquid phase ρ0 + ρ+, and the other is the gaseous phase ρ0 + ρ−. Again, the gaseous phase ρ0 + ρ− does not appear, and the asymmetry principle of fluctuations is valid in this situation as well. (3) The phase transition here is of the second order, as the energy is continuous at T0. In fact, the energy of the transition liquid state is given in [1] by an expression with a coefficient α > 0, and the difference of the heat capacity at T = Tc is finite and nonzero. Namely, the heat capacity has a finite jump at T = Tc, and therefore the transition at T = Tc is of the second order.

Fourth, on the curve BC, a2(T, p) < 0, and the phase transition diagram is given by Figure 4.4: (1) For T > T1, the system is in the gaseous phase, which is stable. (2) For T0 < T < T1, there are two metastable gaseous states given by ρ0 and ρ0 + ρ−. As before, although it is metastable, the gaseous state ρ0 + ρ− does not appear. (3) For T < T0, the gaseous phase ρ0 loses its stability, and the system undergoes a dynamic transition to the metastable liquid state ρ0 + ρ+.
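The three regimes above can be read off from the reduced equation for the deviation ρ from the equilibrium state. The sketch below is schematic rather than the paper's exact computation: it assumes the cubic truncation dρ/dt = −(λρ + a2ρ² + a3ρ³) with a3 > 0 and suppresses the sign conventions tying a2 to the AC and CB branches; the thresholds T0 and T1 correspond exactly to the λ = 0 and saddle-node conditions.

```latex
% Steady states of  d\rho/dt = -(\lambda\rho + a_2\rho^2 + a_3\rho^3),  a_3 > 0:
\rho = 0 \quad\text{or}\quad
\rho_{\pm} = \frac{-a_2 \pm \sqrt{a_2^2 - 4\lambda a_3}}{2a_3}.
% For a_2 \neq 0 (away from C): the branches rho_pm already exist for
% 0 < lambda < a_2^2/(4 a_3); the saddle-node lambda = a_2^2/(4 a_3) corresponds
% to T_1 and lambda = 0 to T_0, so metastable states coexist for T_0 < T < T_1
% and rho jumps at the transition (first order, with latent heat).
% For a_2 = 0 (at C): rho_pm = \pm\sqrt{-\lambda/a_3} branch off continuously at
% lambda = 0, so the transition is continuous, with a finite heat-capacity jump
% (second order).
```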
Mathematically, the gaseous state ρ0 + ρ− is also metastable. However, it does not appear either, due to the asymmetry principle of fluctuations. (4) The dynamic transition in this case is of the third order: the free energy is continuously differentiable up to the second order at T = T0, and the transition is of the third order. This implies that for (T0, p0) ∈ CB, the third-order transition at (T0, p0) cannot be observed by physical experiments.

In summary, we have obtained a precise characterization of the phase transition behavior near the Andrews point, and have derived precisely the nature of the Andrews critical point. In particular, we have shown the following: (1) The transition is of the first order before the critical point, of the second order at the critical point, and of the third order after the critical point. (2) The curve λ(T, p) = 0 always defines the gas-liquid coexistence curve on both sides of the critical point C. We note that in the classical theory, the coexistence curve terminates at the critical point, whereas with our theory we are able to determine the gas-liquid transition behavior and the coexistence curve beyond the critical point.

Asymmetry Principle of Fluctuations and the Preferred Transition Mechanism

We have shown in the last section that in all three cases (both sides of the critical point and at the critical point), the metastable state ρ0 + ρ− does not appear. Hence the only possible physical explanation is the asymmetry of the fluctuations. In fact, for ferromagnetic systems we also see this asymmetry of fluctuations [2]. This observation leads to the following important principle:

Physical Principle (Asymmetry of Fluctuations). The symmetry of fluctuations for general thermodynamic systems may not be universally true. In other words, in some systems with multiple metastable equilibrium states, the fluctuations near a critical point occur only in one basin of attraction of some equilibrium states, which are the ones that can be physically observed.

An alternative explanation of this principle is related to phase transitions in a certain preferred direction in a given thermodynamic system, which we call the preferred transition mechanism. We conjecture that this mechanism is universal as well. Here we use this mechanism to explain the asymmetry principle of fluctuations in the gas-liquid transition and, in return, to explain the meaning of the preferred transition mechanism. In the gas-liquid transition, as the temperature is lowered, the system prefers phase transitions to the denser phase. This can be considered as one aspect of the preferred transition mechanism. Another important aspect of the mechanism is the preferred transition at a critical point T*, as shown in Figure 5.1, which is reproduced from Figure 4.2(a). This critical point lies between T0 and T1, and for water under one atmosphere of pressure, T* is 100 °C. For T* < T < T1, the liquid state ρ0 + ρ+ is called superheated liquid, and for T0 < T < T*, the gaseous state ρ0 is called supercooled gas. The preferred transition mechanism consists of the following: (1) As the temperature decreases, the system is forced to undergo a first-order transition at T = T*, from the gas state ρ0 to the liquid state ρ0 + ρ+. (2) As the temperature increases, the system is forced to undergo a first-order liquid (ρ0 + ρ+) to gas (ρ0) transition at the same critical point T = T*. (3) The supercooled gas and the superheated liquid can rarely occur.
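The transition orders quoted in the summary follow from the free energy associated with the reduced equation. A minimal Landau-type computation, assuming λ ≈ c(T0 − T) near T0 with c > 0 (a standard mean-field assumption rather than a value computed here):

```latex
% Free energy of the reduced equation:
F(\rho) = \tfrac{1}{2}\lambda\rho^2 + \tfrac{1}{3}a_2\rho^3 + \tfrac{1}{4}a_3\rho^4 .
% On CB (a_2 < 0, bounded away from zero), the branch bifurcating at \lambda = 0 is
\rho^\ast = -\frac{\lambda}{a_2} + O(\lambda^2), \qquad
F(\rho^\ast) - F(0) = \frac{\lambda^3}{6\,a_2^2} + O(\lambda^4).
% With \lambda \approx c\,(T_0 - T), the energy difference is O((T_0 - T)^3): no
% latent heat and no heat-capacity jump, but a jump in the third T-derivative of
% F, i.e. a third-order transition. At C (a_2 = 0) one instead finds
% F(\rho^\ast) - F(0) = -\lambda^2/(4 a_3) = O(\lambda^2), which gives the finite
% heat-capacity jump of a second-order transition.
```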
Physically, this is related to the phenomenon of hysteresis.
Comparing Innovative Versus Conventional Ham Processes via Environmental Life Cycle Assessment Supplemented with the Assessment of Nitrite Impacts on Human Health

Global sustainability indicators, particularly in human health, are necessary to describe the footprint of agrifood products. Nitrosamines are toxic molecules that are often encountered in cured and processed meats. As they are frequently consumed, meat-based products need to be assessed to evaluate their potential impact on human health. This article provides a methodological framework based on life cycle assessment for comparing meat product processing scenarios. The respective contributions of each step of the product life cycle are extended with a new human health indicator, nitrosamine toxicity, which has not been previously included in life cycle assessment (LCA) studies and tools (software and databases). This inclusion allows for the comparison of conventional versus innovative processes. Nitrosamine toxicity was estimated to be 2.20 × 10⁻⁶ disability-adjusted life years (DALY) for 1 kg of consumed conventional cooked ham and 4.54 × 10⁻⁷ DALY for 1 kg of consumed innovative cooked ham. The potential carcinogenic and noncarcinogenic effects of nitrosamines from meat products on human health are taken into account. Human health indicators are an important step forward in the comprehensive application of LCA methodology to improve the global sustainability of food systems.

Sustainability Impact Assessment in the Agrifood Industry and Meat Sector

Food accounts for 20% to 30% of the environmental impact of total household consumption in the European Union and could contribute more than 50% of some impact categories, such as acidification. Meat products are partly responsible for this high environmental impact [1]. Pork (fresh meat and sausage) was still the most consumed meat worldwide in 2015, with a consumption rate of 15.3 kg per year in carcass equivalents, representing approximately 37% of total meat consumption [2]. Pork offers a wide variety of products at a low price and fairly stable demand, and 34% of meat product volumes were purchased in a processed form [3]. Ham production therefore has significant impacts on human health and the environment. Much work has been done on the environmental assessment of meat production through life cycle assessment (LCA) studies. For example, in a study by Weidema [4], household storage accounted for 20% of the total energy consumption in the life cycle of meat products. Calderon et al. [5] identified transportation as a second-priority step in the environmental impact of canned pork. According to Davis and Sonnesson [6], reducing the environmental impact of eating a poultry-meat-based meal is as important as improving the environmental performance of manufacturing a ready-to-eat product. Some improvements have been proposed in three aspects of the life cycle of meat products: utilization (reducing food waste and transportation), agricultural steps (reducing water use and emissions to the environment, as well as land use), and energy consumption (reducing agricultural consumption, food processing, distribution, and household consumption) [4]. However, most studies still stop at the farm gate [7][8][9].
Life cycle steps occurring before the farm gate (livestock feed and animal husbandry) are mainly responsible for the total environmental impact, with over 80% contribution to acidification and eutrophication and 60% to 80% contribution to greenhouse gas emissions [4,[10][11][12][13]. The life cycle steps after the farm gate (slaughter, production, distribution, and consumption) account for 10% to 40% of greenhouse gas emissions and 10% to 70% of energy consumption, with a very small contribution to the acidification and eutrophication impacts [4,6,7,[10][11][12][14][15][16][17][18]. On the other hand, water depletion has recently been included as an impact category in LCA studies of meat products. The contribution of the post-farm steps to this category is 15-36% [19][20][21], depending on the species; for example, the contribution of the post-farm steps to the carbon footprint of packaged fresh beef is negligible [22]. The contribution of the life cycle steps to the global impact of the product is therefore influenced by the specific characteristics of the meat (species, energy sources, type of processing, packaging, storage, transport, and consumption mode) and by the methodological choices of the studies (system limits, type of allocation, and assumptions). These conditions prevent the comparison of environmental impacts between products. Other discriminating indicators are still needed to better characterize and compare the sustainability of products. This is why this study focuses on a human health indicator based on nitrosamines from nitrites in ham. Those substances have been selected because they are widely discussed in the literature [23].

Human Health Footprint: The Case of Nitrosamines in Ham Production

Meat products may contain nitrites, which are used for their technological role (color and oxidation prevention), sensory attributes, and the preservation of those products. Nitrites are precursors of nitrosamines: nitrosamines are formed from the reaction between nitrites and amino acids or secondary amines. Nitrosamines are molecules with known toxicity, especially carcinogenic effects. In the environment, the main sources of nitrosamines are food, cigarettes, and occupational activities (rubber manufacturing), and they are also pollutants of water sources [24,25]. Indeed, one exposure pathway is the consumption of meat products [26]. This is why the authorized amount of sodium nitrite in meat products is regulated and limited by the European Food Safety Authority (EFSA) to a maximum of 150 mg·kg⁻¹ of product [27]. The nitrosamines most commonly found in the meat matrix are N-nitrosodimethylamine (NDMA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), and N-nitrosopyrrolidine (NPYR) [28]. Among these substances, NDMA and NDEA are recognized as probable carcinogens by the International Agency for Research on Cancer (IARC) [29][30][31]. NPIP and NPYR also have the potential to pose cancer risks. However, some agents, such as ascorbic acid and alpha-tocopherol, influence the reaction and decrease the formation of nitrosamines. Daily consumption of nitrites may increase the risk of gastrointestinal cancer due to the in vivo formation of nitrosamines. The endogenous formation of N-nitroso compounds in the stomach is complex and depends on several factors, such as gastric pH, bacteria, and the presence of antioxidants [32]. The amount of those compounds is therefore very complex to evaluate, model, and predict [33].
In addition, the processing and storage of meat can increase the amount of amines available for nitrosamine formation [34]. Therefore, many alternatives have been developed to limit the use of nitrites in cured and processed meat products [35]. Figure 1 illustrates the general risks and benefits associated with the nitrite treatment of ham and, more generally, of cooked and processed meat products. Cooked ham has been widely used as a marker of the exposure to nitroso compounds because it is largely consumed, and thus exposure to nitrosamines could be considered significant [36][37][38][39].

High-Pressure Treatment: A Scenario to Reduce Nitrite Content

Because of the potentially harmful effect of nitrites, new technologies have been developed to decrease or eliminate nitrite contents in meat products. High-pressure treatment represents a promising alternative to the addition of nitrites in ham due to its preservation ability during the life of the product. The resistance of bacteria in the food medium, as well as the recovery of bacterial activity after pasteurization, are two major problems in meat products. It was shown as early as 1895 that, starting from 400 MPa, a high-pressure treatment could significantly slow down the growth recovery of microorganisms, such as spoilage or pathogenic bacteria. It thus increases the shelf life of the products [40][41][42][43]. Deactivation depends on many parameters, such as the type of pathogen, pressure level, process temperature, treatment time, and pH. For example, some studies have shown that a storage temperature lower than 6 °C, combined with a treatment of 600 MPa, can prevent the resumption of growth of Listeria monocytogenes in cooked ham [44,45]. High-pressure treatment cannot always replace conventional techniques but may be a complementary treatment that increases the shelf life of meat products. Currently, meat products represent 30% of the high-pressure products marketed [46], and high-pressure cooked ham was one of the first such products marketed in Europe at the end of the twentieth century [47]. The addition of a high-pressure operating unit in ham production could have an effect on the environmental impact of the final product and especially on human health consequences. It therefore seemed important to discriminate between the different processes and to evaluate the environmental and human health impacts of the life cycle of cooked ham. The environmental performance of high-pressure processing technology for food was compared with traditionally used food preservation technologies such as thermal pasteurization and modified atmosphere packaging [48]. Based on the methodology of life cycle analysis, involving in particular primary data on the flows and consumption of high-pressure processing (HPP) plants, the evaluation of the environmental performance of sliced Parma ham shows that high-pressure treatment appears to have a lower environmental impact in almost all impact categories than the other processes. Indeed, packaging under a modified atmosphere requires a large amount of packaging materials and food gases, and thermal pasteurization a large amount of energy. This study aims to highlight the environmental benefits of high-pressure treatment, a well-known non-thermal technology that is still limited in terms of use.
Further studies confirming its low impact on the environment are required and should be supplemented with human health analyses. In this article, a combination of two innovative processes was evaluated, high-pressure processing and biopreservation with lactic acid bacteria, as recommended by Simonin et al. [49].
The objectives of this study were the following: (i) comparison of the potential environmental impact of ham production through two different technological paths, conventional production versus production with an innovative process (high pressure + biopreservation); and (ii) development of a characterization factor to take nitrite contents into account in the human health impact and to help compare different technological processes.

Environmental Life Cycle Footprint of Ham Production

Goal and scope: The reference system corresponded to the production of a conventional, superior ham.

Functional unit: The main function of cooked ham is to feed the population by meeting implicit and/or explicit consumer needs. Cooked ham is mainly sold sliced, and the weight of the unit of sale most often purchased is 0.18 kg (or four slices). For our study, and to compare alternative systems, all emissions and resources were linked to a reference quantity of product. The functional unit was defined as "1 kg of consumed cooked ham", i.e., the superior ham mainly consumed in France, free of polyphosphates, as in Rakotondramavo et al. [50].

System boundaries: The system boundary determination was based on the methodology presented by Hospido et al. [51]. The following four steps were modeled: raw material production, cooked ham production, distribution, and use phase (consumption). The raw material production stage included conventional breeding (birth, post-weaning, and fattening) of pigs in France. The elements included in the subsystem were energy and water production, livestock infrastructure, and feed production. However, veterinary products, cleaning products, food distribution, power cables, and infrastructure for the production of food were not included. The cooked ham production stage included transportation of live pigs to the slaughterhouse, slaughtering and cutting operations (electrical stunning, scalding, dehairing, buckling, eviscerating, bleeding, and cutting), refrigerated transport from the slaughterhouse to the ham production plant, and cooked ham production operations (receiving, deboning, brining, molding, baking, cooling, slicing, packaging, and storing and transporting the ingredients and primary packaging). The annual production data at the slaughterhouse were related to the quantity of ham pieces per mass allocation. The transport distance between groups of pork producers from the Brittany region and the slaughterhouse was estimated by weighting according to the regional distribution of production. Slaughter-cutting waste accounts for 15% of the live pig weight [52]. The production plant also produces other meat products. The annual production data at the plant were related to the quantity of cooked ham per mass allocation. This study assumed no losses during the production of ham. This study also excluded the manufacture of additives (sodium ascorbate and aromatics) present in the processing of the product, as well as their transport, because they represent less than 0.1% of the final composition of the product. The infrastructure of the facilities, the equipment, the packaging used for temporary storage, and the products used for cleaning the facilities were not included. The influence of combined biopreservation and high-pressure processing was evaluated in terms of the life cycle of cooked ham from industrial-scale data to assess the human health and environmental impacts of its potential application.
In the conventional ham production scenario, an embedded amount of 0.1 g of nitrites per kilo was included. No nitrites were assumed to be used in the innovative scenario. A high-pressure program of 600 MPa for 3 min at room temperature was used in this study (data from Villamonte 2014 [53], and from a confidential report disclosed by Hiperbaric). A spray of 25 mg of commercial lactic starter slurry per kilo of ham was used for the biopreservation of the product. Biopreservation is based on the principle of using natural microbial cultures to avoid food spoilage and increase food security. The ferments therefore tend to lengthen the shelf life of food by avoiding product degradation. The cultures of lactic ferments used for ham are specially developed for food application. The extension of the shelf life associated with this innovative treatment (60 days) is comparable to that associated with the conventionally treated product [54,55]. This process did not alter the rest of the life cycle of the product (distribution or consumption, for example). The distribution stage included transport from the plant's warehouses to the distribution platform, storage on the platform (two days), transport to the supermarket, and storage time (energy consumption) in a display case (linear) at the supermarket (seven days). The infrastructure and packaging for transport and storage were not included. The distribution platform was estimated using the Brunel University study approach [56] and information from a study by Rizet et al. [57]. For point-of-sale storage, estimates were obtained from summer energy evaluation studies of a supermarket in France. Round-trip transport with vehicles subject to the European EURO 4 emission standard was considered. The use phase included transportation from the supermarket to the consumer's home, and storage of the product (seven days) in a type A energy label refrigerator was included. The transport characteristics corresponded to an urban consumer in France traveling from the point of sale. The study assumed 3.15% waste of the product, estimated according to the nature of food and waste in France [58]. The end of life of the packaging waste associated with the consumption of the product was included. The data on the energy consumption of refrigeration were estimated according to the model of Nielsen et al. [52] for a type A refrigerator. The waste was treated by incineration with energy recovery (52.4%) or by storage (47.6%). The steps included in the system limits are shown in Figure 2.

Allocation method: mass allocation.

Inventory of inputs, emissions, and resources: The system was described and modeled with SimaPro 8.5.0.0 software (PRé Sustainability, Amersfoort, The Netherlands) [59]. The life cycle inventory included the flow of materials and energy within the system boundaries, adjusted to 1 kg of consumed cooked ham. The primary (real) data were measured and/or provided by officials of the French pork industry (French National Pork Institute, IFIP) and a high-pressure processing equipment company (Hiperbaric, Spain). Secondary data (from databases) were supplemented by the database and bibliography available for the reference technology path. The specifications and modalities of the reference and innovative processes were based on the data obtained from the partners, for example, the high-pressure treatment schedules and the inventory of resources needed to cultivate the preservation ferment.
The data for the raw material production step were collected from the Agribalyse French agricultural products database (version 1.3); the information obtained on pig slaughtering corresponded to an average French slaughterhouse. The data for cooked ham production were collected from conventional French meat producers. The technological level used in the slaughtering and manufacturing steps was modern or automated, and the site capacities were representative of French production. The generic data originated from the EcoInvent 3.3 database. The energy source was that of France and was acquired from this database. The quality of the related data was categorized into generic (pig breeding, distribution, and use) and specific (slaughter-cutting, production of cooked ham, and high-pressure treatment). This information was temporally representative of current production technologies and consumption habits, and based on available sources: livestock (Agribalyse V1.1, February 2014; ILCD quality final note 4.5/5), slaughter-cutting and cooked ham production (from IFIP, 2014), distribution (from Villamonte, 2014 [53]), and use (data from Villamonte, 2014 [53]). To summarize, the flows that change between the different scenarios studied are, on the one hand, those linked to the unitary high-pressure treatment operation (i.e., for a kg of cooked ham, 0.0695 kWh of electricity, compressed air at 700 kPa, and 0.377 L of water) and, on the other hand, the production flows of the lactic ferment for biopreservation (taken into account in the study: the raw materials for the manufacture of the culture medium; liquid nitrogen, in particular for keeping the packaging cold; the energy and water required; emissions into the air and water (BOD and COD); and the reduction from avoided impacts thanks to the spreading of the used culture medium on land in direct proximity to the production of the ferment).

Impact assessment: All aspects of environmental issues were divided into impact categories. For the food industry, resource use is an essential question in the current context. Therefore, the cumulative demand for renewable and nonrenewable energy, abiotic resource depletion, and water depletion were utilized as impact categories for the study. Reference impact categories, such as climate change, acidification, and eutrophication, were also included. The environmental impact categories were obtained by the ReCiPe method (version 1.12), at the midpoint (problems) and endpoint (damages, expressed in DALY for human toxicity) levels. The water depletion impact category represented only the amount of water used. The cumulative demand for nonrenewable energy was determined by the cumulative energy demand method, version 1.09. This method estimated the energy use in the life cycle from renewable and nonrenewable energy resources. The cumulative demand for energy concerned all renewable and nonrenewable energy sources (nuclear, oil, coal, natural gas, etc.) potentially used throughout the product life cycle. Climate change was evaluated as the concentration of substances, such as carbon dioxide and methane, that disrupt the climate balance and contribute to the greenhouse effect. Metal depletion represents the extraction of mineral resources (ores), expressed in kg of iron equivalents. The indicator for the acidification category was the increase in the concentration of acidifying substances (air pollutants) causing tree dieback, expressed in g of sulfur dioxide (SO2) equivalents.
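To make the scenario comparison concrete, the scenario-specific inventory flows listed above can be collected in a small data structure and scaled to the functional unit. The sketch below uses only the flows quoted in the text (0.0695 kWh of electricity, compressed air at 700 kPa, and 0.377 L of water per kg of ham for the high-pressure unit; 0.1 g of nitrite per kg in the conventional recipe; 25 mg of lactic starter per kg in the innovative one); the structure, function, and field names are illustrative rather than taken from the SimaPro model.

```python
# Hypothetical sketch of the per-kg inventory flows that differ between the
# conventional and innovative (HPP + biopreservation) cooked-ham scenarios.
# Values are those quoted in the text; names and structure are illustrative.

FUNCTIONAL_UNIT_KG = 1.0  # "1 kg of consumed cooked ham"

SCENARIO_DELTAS = {
    "conventional": {
        "sodium_nitrite_g": 0.1,        # embedded nitrite, g per kg of ham
    },
    "innovative": {
        "hpp_electricity_kWh": 0.0695,  # 600 MPa, 3 min, per kg of ham
        "hpp_water_L": 0.377,           # process water per kg of ham
        "hpp_compressed_air_kPa": 700,  # operating pressure of the air supply
        "lactic_starter_mg": 25,        # biopreservation culture per kg of ham
    },
}

def scale_deltas(scenario: str, kg_consumed: float) -> dict:
    """Scale the scenario-specific flows to a given amount of consumed ham."""
    flows = SCENARIO_DELTAS[scenario]
    # Pressure is an operating condition, not a quantity flow, so it is
    # reported unscaled; all other entries scale linearly with mass.
    return {
        k: (v if k.endswith("_kPa") else v * kg_consumed / FUNCTIONAL_UNIT_KG)
        for k, v in flows.items()
    }

if __name__ == "__main__":
    for name in SCENARIO_DELTAS:
        print(name, scale_deltas(name, kg_consumed=0.18))  # one 4-slice pack
```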
The eutrophication category, expressed in g of phosphorus (P) or nitrogen (N) equivalents, reflected the quantity of nutrients released into the environment that favor the proliferation of certain species and cause the disruption of ecosystems. The water depletion impact category evaluated the total amount of freshwater (m³) used in the life cycle of cooked ham.

Human Health Footprint of Ham Production

To take into account the effect of substances in ham, such as nitrites, a new characterization factor was developed to be included in the life cycle assessment in the human health damage category. The methodology for the development of this new characterization factor is described below.

Fate and Exposure

The causal chain of the potential toxicity of nitrosamines via ingestion of meat products containing nitrites is presented in Figure 3. In general, chemical fate includes the transport of the substance through different environmental compartments. The fate factor associates the emissions of a substance in compartment n with the increase of the quantity of the substance in this compartment (air, soil, water, and food) [60]. The particularity of the effect of substances such as nitrosamines in foodstuffs is partly due to the mechanism of food ingestion. In fact, the entire emission (nitrosamines formed in the food) is ingested by the consumer. Thus, the transfer of the substance and its accumulation in another compartment were not taken into account. Indeed, nitrosamines are formed in meat products (exogenous nitrosamines) via the reaction of a nitrosating agent (nitrogen oxides) with amino compounds from proteins. These nitrosating agents are derived from nitrites, and their formation is favored by heat treatments. Human exposure represents only the fraction of the substance transferred from the food (nitrosamines via the ingestion pathway) to the entire human population. The potential risk to human health was calculated as the probable number of cases per kg of substance ingested by the population. The carcinogenic and noncarcinogenic effects were estimated from toxicity data for the substance obtained in epidemiological studies. The severity of these effects (carcinogenic and noncarcinogenic) was expressed in terms of equivalent years of life lost per case of the disease (DALY·case⁻¹), a unit used by the World Health Organization [60,61]. DALY stands for disability-adjusted life years and is also frequently used in LCA via damage-oriented and/or endpoint methods, which group the impacts according to the results in the cause-and-effect chain and clearly show the impact on a specific category of individuals. The damage factors on human health transform kilograms of the equivalent substance into years of healthy life lost (DALY). For the needs of this study, the French population was considered. The indicator of the potential impact of the toxicity of nitrosamines on humans was defined from the expression established by Udo de Haes [62] for the damage-oriented human health category (Equation (1)):

Si = Fi × EFi, (1)

where Si is the impact category indicator (DALY per functional unit), Fi is the intake fraction (kg taken per functional unit), and EFi is the effect factor (DALY per kg taken) of nitrosamine type i. The results are the aggregation of the independent estimates for each nitrosamine compound i in the meat product.

Intake Fraction

The intake fraction of nitrosamine i (iF) is defined as the ingested dose of nitrosamine i per functional unit (FU).
The dose considers the concentration of nitrosamine i per unit of food (exogenous nitrosamines). In our study, the fraction of ingested nitrosamines was calculated from Equation (2). The concentration of nitrosamines (µg·kg⁻¹) at the time of ingestion was defined by the literature reviews summarized in Table 1.

Effect and Severity Factor

The USEtox 1.01 model [67,68] was used to calculate the characterization factors. The USEtox characterization model is based on a consensus to address the problems associated with the variability of human toxicity characterization model methodology [69]. UNEP/SETAC (United Nations Environment Programme/Society of Environmental Toxicology and Chemistry) created this model from other models, such as CalTOX, IMPACT 2002+, USES-LCA, BETR, EDIP, WATSON, and EcoSense. The aim was to create a simple model with a solid scientific basis [69,70] to obtain problem-oriented characterization factors. The fate and exposure factors of the model were modified according to the previously described characteristics. The effect factors of the human toxicity characterization models for nitrosamines from the USEtox analysis were used (Table 2). These factors represent the variation in the probability of disease from the change in nitrosamine uptake over the course of the life of the targeted population. The unit for the human toxicity potential of nitrosamines was case·kg⁻¹. The model expresses toxicity in comparative toxic units (CTUs). The effect factor is based on the extrapolation of the ED50 (the dose that produces an effect in 50% of a given population). The carcinogenic effects were obtained from the Carcinogenic Potency Database. As the dose-response curve was assumed to be linear, the EF (case·kg⁻¹ taken) was calculated according to Equation (3) [67]:

EF = 0.5/ED50. (3)

To evaluate the significance of the indicator in relation to the other indicators of the damage-oriented human health impact category from the life cycle analysis of cooked ham, the severity factor for the carcinogenic effect of nitrosamines was taken to be 11.5 DALY·case⁻¹, according to the study conducted by Huijbregts et al. [71] on carcinogenic chemicals, including nitrosamines.

Table 3 shows the potential environmental impact results of the life cycle of cooked ham production for the conventional and innovative scenarios. The nonrenewable energy demand of cooked ham production in this study was similar for the conventional and innovative scenarios. For both scenarios, 63% of this impact came from fossil energy and 37% from nuclear energy. Cooked ham has an impact on greenhouse gas emissions of 13.4 kg CO2 eq·kg⁻¹ for both scenarios (13.422 for the conventional scenario and 13.439 for the innovative scenario). The cooked ham induced a depletion of metal resources on the order of 0.28 kg Fe eq·kg⁻¹ of product for both scenarios (0.276 for the conventional scenario and 0.277 for the innovative scenario). The total amount of freshwater (m³) used in the life cycle of cooked ham for both scenarios was around 11 m³ of water (10.958 for the conventional scenario and 11.465 for the innovative scenario). For all impact indicators except nitrosamine toxicity, the variation between the two scenarios was less than 5%, which is considered negligible. The potential effect of the ham life cycle directly on human health is outlined in Section 3.3 (Sensitivity analysis). All of these damage-oriented impact categories were then expressed in DALY.
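Gathering Equation (1), the effect-factor formula of Equation (3), and the severity factor into one calculation gives a short script: for each nitrosamine, multiply the ingested mass per functional unit by 0.5/ED50 and by the severity factor, then sum. In the sketch below, the severity factor (11.5 DALY·case⁻¹) and the 1 kg functional unit come from the text; the concentrations stand in for the Table 1 values and the ED50 values stand in for the USEtox inputs, so both are labeled placeholders.

```python
# Hedged sketch of the nitrosamine human-health indicator (Equation (1)):
#   S = sum_i  iF_i * EF_i * SF
# where iF_i is the ingested mass of nitrosamine i per functional unit (kg/FU),
# EF_i = 0.5 / ED50_i is the USEtox-style effect factor (cases per kg ingested),
# and SF is the severity factor (DALY per case). Numbers marked PLACEHOLDER are
# illustrative, not values from the study.

SEVERITY_DALY_PER_CASE = 11.5   # carcinogenic severity factor (from the text)
FU_KG_HAM = 1.0                 # functional unit: 1 kg of consumed cooked ham

nitrosamines = {
    #        conc. in ham (µg/kg, PLACEHOLDER)    ED50 (kg, PLACEHOLDER)
    "NDMA": {"conc_ug_per_kg": 0.31, "ed50_kg": 0.5},
    "NDEA": {"conc_ug_per_kg": 0.10, "ed50_kg": 0.3},
}

def health_indicator_daly(compounds: dict, fu_kg: float) -> float:
    """Return the potential impact in DALY per functional unit."""
    total = 0.0
    for data in compounds.values():
        intake_kg = data["conc_ug_per_kg"] * 1e-9 * fu_kg  # µg/kg -> kg per FU
        effect_cases_per_kg = 0.5 / data["ed50_kg"]        # Equation (3)
        total += intake_kg * effect_cases_per_kg * SEVERITY_DALY_PER_CASE
    return total

print(f"{health_indicator_daly(nitrosamines, FU_KG_HAM):.3e} DALY per FU")
```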
Effects on ecosystems were not estimated directly in this study (but could still be calculated from the problem-oriented impact categories).

Environmental Life Cycle Assessment of Cooked Ham Production

To show how similar the impacts calculated by environmental LCA are for the two scenarios compared, Table 4 presents an analysis of the contributions of the different stages of the product life cycle. This analysis of ham life cycle hotspots shows how the environmental life cycle analysis makes it difficult to distinguish between the two scenarios, conventional and innovative. The production of raw material was the most important step, accounting for 94% of the potential impact, due to emissions occurring during pig farming. The environmental results of our life cycle analysis study on ham production are comparable to those of previous related research. According to Carlsson-Kanyama [10], the energy requirement for pork in Sweden was 32 MJ. In general, important differences are observed between countries due to differences in the applied energy sources. These differences with the data from Sweden can therefore be explained by the different energy sources used by the two countries. Additionally, that study considered all steps up to the point of sale, but the product was not processed after slaughter. Compared to cooked ham, the difference is probably largely due to the extent of the contribution of ham production and storage to the distribution and product use stages. Therefore, in our study, we considered more impacts. In addition, according to Reckmann et al. [72], the nonrenewable energy demand is 19.5 MJ·kg⁻¹ for slaughtered pork, with a contribution of 13% for the slaughtering stage. In this respect, the conventionally cooked ham had a very high energy consumption (87.7%), with a contribution of 10% from slaughtering. The breed from the previous study conducted in Sweden was not the same type of breed that is used in France; according to the Agribalyse database, the nonrenewable energy consumption for a conventional pig raised in France is equal to 17.3 MJ·kg⁻¹ pork (live weight). These differences originate from the processed meat product, due to the high raw material requirements (pieces of ham) and the steps required (cutting, cooking, and slicing) to produce a superior cooked ham. Despite the importance of limiting energy demand in sustainable food production [73], this impact category was not evaluated in the available studies on processed meat products. Studies on the energy demand of meat products are needed to identify and control or reduce preservation processes with a high potential environmental impact. The carbon footprint of cooked ham (13.422 and 13.439 kg CO2 eq·kg⁻¹) was similar to that obtained in other studies. In France, the carbon footprint of cooked ham was studied by the research firm BioIntelligence Service. Their superior cooked ham had a carbon footprint that varied from 7.71 to 8.55 kg CO2 eq·kg⁻¹ (equivalent corresponding to the functional unit of this study), depending on the product (different numbers of slices, with or without the rind) [74]. The steps taken into account in the study by BioIntelligence Service were agriculture, manufacturing, distribution, transportation, and packaging. Between 63.1% and 75.9% of the potential for depletion of nonrenewable resources comes from the ham production phase and, more particularly, from the raw material production phase (in particular, pork).
Electricity and gas consumption on the farm (34.9%) and fuel consumption on the pig farm (26.4%) contribute the most to the depletion of nonrenewable resources. According to Carlsson-Kanyama [10], pork in Sweden has an impact of 6.1 kg CO2 eq·kg⁻¹ meat (Roy et al. 2012). The CO2 emission impacts are also dependent on the energy sources of each country. In this case, France could be associated with low CO2 emissions because its electricity is predominantly nuclear. In terms of contribution, the main step was the production of raw material, followed by packaging, cooking, transportation, slaughtering, and storage. Distribution contributed 3.3% of the impacts in this case. A rough estimate was given by our study for the distribution stage (4%). Another study, on pork pie (2-3 kg CO2 eq·kg⁻¹) produced in France, showed that the production of raw materials also contributed 80% of the impact of the life cycle and that the processing stage contributed 10%. The influence of transportation was negligible [13]. Our results were consistent with those of other studies: most of the impact of meat products comes from the agricultural production stage of the raw material. With regard to the depletion of abiotic resources, comparison with other studies on meat products was not possible because the depletion of nonrenewable mineral natural resources was not taken into account in those studies. However, the overall trend showed a decrease in the concentration of mineral ore [75]. According to the study by Reckmann et al. [72], slaughtered pork represents 57.1 g SO2 eq·kg⁻¹. The slaughter yield and cooked ham production could be responsible for the difference in these emissions. The important contribution of raw material production to the eutrophication impact category was due to environmental emissions (nitrates in water and ammonia in the air) from intensive pig farming [76]. The emissions from a cooked ham were at least twice as high as those of a poultry product (19.9 to 29.9 g of PO4³⁻·kg⁻¹ product [77]) and as those of slaughtered pork (23.3 g PO4³⁻·kg⁻¹ product [72]).

The Potential Impact of Nitrosamines on Human Health

This study estimated that the exposure to exogenous nitrosamines (NDMA, NDEA, NPYR, and NPIP) through the daily consumption of cooked ham (16 g) was 0.15 ± 0.11 µg per day [78]. The variability in the product was associated with the quantification of the various nitroso compounds. Other studies on cooked ham were limited to the estimated NDMA concentration. For example, in our estimation, the exposure was lower than what was reported in the work of Catsburg et al. [39], in which the contribution of exogenous nitroso compounds was between 0 and 2.1 µg per day. In our study, this contribution ranged from 0.003 to 0.34 µg per day and corresponded only to the consumption of cooked ham. In addition to their use for preserving meat, nitrites are added to other foods to preserve them by limiting the proliferation of pathogenic microorganisms, in particular Clostridium botulinum. Nitrates alone are used to prevent some cheeses from swelling during fermentation. They are also naturally present in some vegetables, with the highest concentrations occurring in leafy vegetables, such as spinach or lettuce. Moreover, these substances can enter the food chain as environmental contaminants in water due to their use in intensive agricultural practices, animal production, and wastewater discharge.
According to the European Food Safety Authority (EFSA) [79], based on realistic data, i.e., concentration levels actually observed in foods, the intake of nitrates in the form of a food additive represents less than 5% of the global exposure to nitrates through food. The development of gastrointestinal cancer is associated with exogenous exposure to nitroso compounds. The ingestion of exogenous nitrosamines in subjects with cancer was 0.0591 ± 0.0485 µg per day; however, this estimate corresponds to the NDMA compound in the diet [80]. Jakszyn et al. [81] also estimated dietary exposure to NDMA in a Spanish population at 0.114 µg per day. The foods that contributed most to this exposure were meat products (14%), beer (11%), and refined cheese (13%). In addition, a study on a European population showed that dietary exposure to NDMA was 0.26 ± 0.34 µg per day and 0.19 ± 0.31 µg per day for subjects with gastric cancer and for healthy subjects, respectively [82]. The exposure determined for the consumption of cooked ham in our study seemed high in comparison to these evaluations. However, the quantification of a single type of nitrosamine (NDMA) causes an underestimation of the risk of exogenous nitrosamines. In our case, the exposure to nitrosamines through the consumption of cooked ham decreased drastically if NDMA was the only nitroso compound taken into account: 0.065 ± 0.063 µg per day. In France, the exposure (NDMA) due to ham was estimated to be 0.0038 µg per day, with the presence of NDMA at 0.31 µg per kg of ham and an annual ham consumption of 4.45 kg during the period 1987-1992. The exposure to NDMA in the diet was determined to be 0.19 µg per day, and meat products contributed 12.5% of NDMA exposure [83]. A preliminary study estimated a daily exposure of 0.25 µg (NDMA) per day per person in France [84]. In general, these studies only considered NDMA in assessing nitrosamine exposure. Only the exposure to preformed nitrosamines was considered for the estimation of the indicator. The severity of nitrosamines (2.2 × 10⁻⁶ DALY·kg⁻¹), considering the effect on an entire population and not limited to a sensitive population, was in line with a report from the International Agency for Research on Cancer (IARC) [31]. These experts recognized that there is strong evidence to support changing the recommendations for processed meat product consumption to achieve moderation or a reduction in consumption. Indeed, an increased risk of colorectal cancer is associated with a higher consumption of meat products. The Global Fund for Cancer Research suggests that a reduction of 50 g in meat product consumption per day could represent a 20% decrease in the number of colorectal cancer cases [85,86]. With regard to the human health indicators developed in this study, the potential impacts of food safety in the use phase were not negligible. The main aim of this study was to evaluate the potential contribution of nitrosamines to the damage to human health throughout the ham production life cycle. The impact on morbidity associated with contaminants from food processes is not currently known. However, the meat production industry is exploring alternatives to reduce or replace nitrites while preserving the microbiological and sensory quality of the products.
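The quoted French NDMA figure can be reproduced from the two numbers given with it, which is a useful sanity check on the exposure arithmetic; the small script below does the conversion (only the 0.31 µg/kg concentration and the 4.45 kg/year consumption come from the text).

```python
# Cross-check of the quoted French NDMA exposure from ham:
# 0.31 µg NDMA per kg of ham, 4.45 kg of ham consumed per year.
conc_ug_per_kg = 0.31
annual_ham_kg = 4.45

daily_exposure_ug = conc_ug_per_kg * annual_ham_kg / 365.0
print(f"{daily_exposure_ug:.4f} µg NDMA per day")  # ~0.0038, matching the text
```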
An innovative combination of biopreservation and high-pressure processing has been explored to reduce nitrites in ham; it reduces the total potential human health impact by almost 8%, with the potential impact resulting from nitrosamine toxicity reduced to roughly 20% of its conventional value. Another option is the use of plant extracts with active molecules capable of inactivating microorganisms and/or preventing the negative effects of contaminants [87]. An additional option is to improve animal nutrition in such a way as to provide substances with a beneficial effect on health, or even vitamins, to counteract the formation of these carcinogenic compounds [88,89]. Finally, a recent study has shown the influence of food processing conditions on the risk of cancer: a decrease in carcinogenesis in rats was linked to the anaerobic packaging of ham, compared with unpackaged food and food exposed to air [90]. The innovative ham production process combining high-pressure treatment and biopreservation could be a solution that does not alter the concentration of residual nitrites in cooked ham [91][92][93][94]. Several high-pressure-treated meat products are marketed with the claim "nitrites/nitrates not added". They contain natural preservatives, such as celery juice, that replace the additives conventionally used in their preservation [95]. High-pressure microbial inactivation can allow for the production of sausages without nitrite use while maintaining food safety [92,96]. This technology can also be used to improve the antimicrobial role of salt [97] or of natural antimicrobials in ham [44,45]. Some potential impacts on human health could thus be avoided by specific innovative treatments involving these new technologies.

Table 5 shows the impact results of the life cycle of cooked ham production in France in terms of the damage-oriented impact categories of the ReCiPe method. All indicators are expressed in DALY and correspond to the chosen functional unit (1 kg of ham consumed). They describe the damage to human health attributable to each impact category. Climate change, particulate matter formation, and nitrosamine toxicity are the major contributors to the human health impact category. In particular, nitrosamine toxicity presented a significant variation between the two scenarios, conventional ham versus high-pressure processed ham. The results for conventional ham were similar to those reported in the work of Weidema et al. [4]: the damage-oriented environmental impact of meat products in the European Union was mainly due to land use (32-49%), respiratory effects (inorganics, 28-49%), and climate change (15-23%). However, tackling the effect of nitrosamines on human toxicity is new and thus cannot be compared with the literature. Nitrosamines may become the third source of potential harm to human health after climate change and particulate matter formation, with an estimated contribution of up to 10% to the potential impacts. However, the integration of new impacts into LCA requires a thorough knowledge of the underlying mechanisms, and gaps in this knowledge limit the development of characterization factors [98]. For example, the International Agency for Research on Cancer has concluded that ingested nitrates or nitrites are likely human carcinogens under conditions that induce the production of endogenous nitrosamines (IARC, 2010), which was not considered in this study. This is why efforts are needed to improve the understanding of these phenomena.
Thus, the nitrosamine severity factor (11.5 DALY·case⁻¹) used to quantify the damage to human health increases these uncertainties: the severity of colorectal cancer could be as high as 8.8 DALY per case, and that of stomach cancer as high as 13.6 DALY per case [99].

Conclusions

Food safety is a predominant attribute of a product in the implicit or explicit evaluation by the consumer, by other stakeholders, and even by the individuals involved in the product life cycle. This study aimed to propose a multicriteria LCA-based approach, including environmental and human health indicators, to measure and compare the potential effects of a food product under different processing scenarios. A conventional LCA did not show a significant difference between the two ham production scenarios: for all impact indicators except nitrosamine toxicity, the variation between the two scenarios was less than 5%, which was considered negligible. The addition of human health indicators, in contrast, allowed the scenarios to be distinguished. The impacts of life cycle steps on human health should be included in more food LCA studies. Indeed, these indicators are not negligible, because this study showed that the product characteristics have a potential impact on human health that is comparable to other sources of environmental impact. Exposure to nitrosamines comes from various sectors of the environment. However, their presence in food is specific, because the substances are preformed and because food is a source of the precursors that allow their formation in vivo. The complexity of these mechanisms makes it impossible to establish more reliable models to measure total exposure and to define a more predictable degree of severity. Additional work aimed at better understanding these mechanisms is necessary. The definition of indicators describing the potential impact of product life cycle steps on human health makes it possible to compare and evaluate several scenarios as part of a product-process innovation approach by highlighting the consequences of implementing such alternatives. A separate publication dedicated to the construction of several human health indicators linked to the addition or not of nitrites in ham will be proposed following this work. The potential impacts of a ham processed using new technologies could then be compared to those of a conventionally cooked ham. A comparative analysis of innovative processes as new life cycle steps would allow us to consider changes in product quality that may have consequences for human health and the environment.
Regulation of p53: a collaboration between Mdm2 and Mdmx.

p53 plays an important role in the regulation of the cell cycle, DNA repair, and apoptosis, and is an attractive cancer therapeutic target. Mdm2 and Mdmx are recognized as the main negative regulators of p53. Although it remains unclear why Mdm2 and Mdmx are both required for p53 degradation, a model has been proposed whereby these two proteins function independently of one another: Mdm2 acts as an E3 ubiquitin ligase that catalyzes the ubiquitination of p53 for degradation, whereas Mdmx inhibits p53 by binding to and masking the transcriptional activation domain of p53, without causing its degradation. However, Mdm2 and Mdmx have been shown to function collaboratively. In fact, recent studies have pointed to a more important role for an Mdm2/Mdmx co-regulatory mechanism of p53 regulation than previously thought. In this review, we summarize current progress in the field concerning the functional and physical interaction between Mdm2 and Mdmx, their individual and collaborative roles in controlling p53, and inhibitors that target Mdm2 and Mdmx as a novel class of anticancer therapeutics.

INTRODUCTION

The p53 tumor suppressor plays a pivotal role in regulating cellular processes including cell cycle arrest, apoptosis, cell metabolism, and senescence. Mutation of the TP53 gene or inactivation of the p53 signaling pathway occurs at a high frequency in many human tumors, suggesting that p53 plays a critical role in preventing normal cells from becoming cancerous. p53 is a stress-inducible protein; it is inactive under normal physiological conditions and activated in response to various types of stresses, such as DNA damage and ribosomal stress [1]. Activated p53 can either induce cell cycle arrest and inhibit cell growth or promote apoptosis, depending on the type of stress and the cellular context. Multiple mechanisms have been revealed that collectively accomplish the regulation of p53 [2,3]; these ultimately determine the selectivity of p53 for specific transcriptional targets, resulting in precise control of p53 activity. p53 is the most frequently inactivated tumor suppressor gene in human cancer; clinical studies have shown that it is mutated in approximately 50% of human cancers. Mdm2 and MdmX (also known as MDM4) are two structurally related proteins that play a critical role in downregulating p53 activity in embryonic cells and stem cells under normal conditions [4]. Accordingly, amplification and/or aberrant expression of Mdm2 and MdmX occurs in a number of tumors of diverse origin, especially in tumors that retain wild-type p53. Mdm2 (murine double minute 2) was discovered on double minute chromosomes in a derivative cell line of NIH-3T3 cells [5,6]. Mdm2 belongs to the family of E3 ubiquitin ligases that contain a RING (really interesting new gene) domain [7] and serves as the major E3 ubiquitin ligase for p53 degradation. Several studies have illustrated the importance of Mdm2 in the control of p53 activity. The mechanism by which Mdm2 suppresses p53 has classically been thought to involve two distinct modes: binding to the N-terminal domain of p53 and masking p53's access to the transcriptional machinery, and ubiquitinating p53 and targeting it for proteasomal degradation [8][9][10][11]. However, recent research has shown that Mdm2-p53 binding alone, in the absence of Mdm2 E3 ubiquitin ligase activity, is insufficient to suppress p53 activity [12].
MdmX was identified as a highly homologous gene closely related to Mdm2 [13,14]. Similarly to Mdm2, MdmX possesses a p53-binding domain at its N-terminus and a RING finger domain at its C-terminus, through which it heterodimerizes with Mdm2. However, unlike Mdm2, MdmX does not have appreciable ubiquitin ligase activity. Because of its sequence similarity with Mdm2 and its ability to inhibit p53-induced transcription when overexpressed, MdmX has been hypothesized to act as a negative regulator of p53 through physical binding [15].

MODELS FOR THE REGULATION OF P53 BY MDM2 AND MDMX

Genetic evidence has shown that Mdm2 and MdmX are the two essential negative regulators of p53, since the concomitant deletion of p53 can rescue the embryonic lethality caused by the deletion of either Mdm2 or MdmX. The fact that neither Mdm2 nor MdmX can compensate for the other in vivo to inhibit p53 suggests that Mdm2 and MdmX perform critical, non-overlapping functions in p53 suppression. The requirement for both Mdm2 and MdmX raises the question of why p53 needs two highly similar regulators. A proposal has been put forward whereby these two homologous proteins function independently: Mdm2 acts as an E3 ubiquitin ligase that catalyzes the ubiquitination of p53, MdmX, and itself for proteasomal degradation [16][17][18], whereas MdmX functions mainly by binding to and masking the transcriptional activation domain of p53. Mdm2 and MdmX physically interact with and functionally affect each other. Mdm2 can form a homodimer in vitro, but it is also capable of forming a more stable heterodimer with MdmX through their RING domains [19,20]. In vitro transfection studies have indicated that MdmX stabilizes Mdm2 by interfering with Mdm2 autoubiquitination. However, MdmX has also been reported to be ubiquitinated and degraded by Mdm2 [18]. Other studies have shown that MdmX is able to inhibit Mdm2-mediated p53 degradation by competing with Mdm2 for p53 binding, resulting in the accumulation of p53 [20][21][22]. Many lines of evidence point to an intricate interaction between Mdm2 and MdmX in p53 regulation [23][24][25][26][27]. Mdm2 and MdmX are found in cells predominantly in the form of a heteroduplex [25], and structural studies have predicted that the formation of an Mdm2-MdmX heteroduplex is structurally favored over the formation of homoduplexes of either protein [28]. It has been shown that Mdm2 alone is a relatively inefficient E3 ubiquitin ligase [25], but becomes more efficient at ubiquitinating p53 after heterodimerization with MdmX [26]. A genetic study of the development of the mouse central nervous system (CNS) revealed a synergistic role between Mdm2 and MdmX, as well as independent functions of Mdm2 and MdmX in p53 inhibition. In this study, mice lacking Mdm2 in the CNS developed hydranencephaly at embryonic day 12, whereas mice in which MdmX was deleted in the CNS showed a porencephaly phenotype at embryonic day 17.5. Interestingly, the simultaneous deletion of both genes resulted in an even earlier and more severe CNS phenotype. All of these phenotypes were rescued by the concomitant deletion of p53. These observations strongly support a synergistic relationship between Mdm2 and MdmX in the inhibition of p53 activity during the development of the CNS [29]. Based on both in vitro and in vivo studies, another, perhaps more convincing model was proposed in which Mdm2 and MdmX work together to control p53 activity [30,31].
To determine whether Mdm2-p53 binding alone is sufficient to suppress p53 activity, or whether Mdm2-mediated ubiquitination is also required, Itahana et al. [12] generated knock-in mice harboring a single point mutation (C462A) in one of the zinc-coordinating residues in the C-terminal RING domain of Mdm2 that is critical for its E3 ubiquitin ligase activity. The homozygous C462A mutation was embryonic lethal, and the lethality was rescued by the concomitant deletion of p53, providing evidence that the Mdm2 RING domain is required for the regulation of p53 activity in vivo. This study used an inducible p53-ER system that allowed the investigators to induce the expression of p53 ex vivo in mouse embryonic fibroblast (MEF) cells in the Mdm2 mutant background. Upon induction of p53 in MEF cells, Itahana et al. demonstrated that the Mdm2 RING mutant protein, although deficient in the ability to ubiquitinate p53, is fully capable of binding to p53, proving that Mdm2 cannot suppress p53 transcriptional activity through binding alone. However, the authors also showed that the C462A mutation alters the structure of the Mdm2 RING domain to the extent that the Mdm2 C462A mutant is unable to heterodimerize with MdmX. Therefore, the study cannot address whether the Mdm2-MdmX interaction is required for p53 suppression. Recently, two studies using MdmX RING domain mutant knock-in alleles demonstrated that the RING domain of MdmX, like that of Mdm2, is also critical for regulating p53 activity during early embryogenesis [32,33]. In the Huang et al. study, mice harboring an MdmX C462A mutation in one of the critical zinc-coordinating residues of the RING domain died at approximately day 9.5 of embryonic development as the result of an increase in apoptosis and a decrease in cell proliferation. The concomitant deletion of p53 completely rescued the embryonic lethality of the MdmX C462A mutation [32]. Importantly, the authors showed that the MdmX C462A mutant protein does not bind to Mdm2, yet it retains the ability to bind to p53 to the same degree as wild-type MdmX. These results indicate that even though both Mdm2 and MdmX are fully capable of binding to p53 individually, disruption of the Mdm2-MdmX heterocomplex causes p53 activation in vivo. In a similar study performed by Pant et al., the authors used a tamoxifen-based Cre-inducible MdmX ΔRING allele to investigate the role of Mdm2-MdmX heterodimerization in Mdm2 and p53 regulation. They found that although the heteroduplex is essential during embryonic development, heterodimerization is dispensable during the adult life of the mouse. Together, these studies provide compelling evidence that the action of the Mdm2-MdmX heterodimer, and not necessarily the independent action of either protein, is crucial to the appropriate control of p53. However, these studies cannot answer the remaining question of whether the Mdm2 E3 ubiquitin ligase function is still required for p53 suppression, because the Mdm2 RING mutation simultaneously disrupts its E3 ligase function and its binding to MdmX, and because in vitro studies have shown that Mdm2 by itself is a relatively weak E3 for p53 degradation whose heterodimerization with MdmX enhances its E3 activity [25,26].
Thus, the generation and analysis of mutations that block the ubiquitin ligase activity but do not affect the heterodimerization between Mdm2 and MdmX in vivo, if technically possible, would be essential for understanding the importance of the in vivo cooperation between Mdm2 and MdmX.

THE MDM2/MDMX RATIO DETERMINES P53 STABILITY AND ACTIVITY

Although Mdm2 and MdmX have a synergistic relationship that effectively inhibits p53, as discussed above, they also have independent roles in the regulation of p53. MdmX can inhibit p53 transcriptional activity by interfering with the ability of p53 to interact with the basal transcription machinery, while Mdm2 can target p53 for degradation. Several studies have reported that elevated MdmX levels stabilize p53 by inhibiting Mdm2-mediated p53 degradation without interfering significantly with Mdm2-dependent p53 ubiquitination [21,22,34,35]. Transfection studies have also provided evidence that MdmX can stabilize Mdm2 by interfering with the auto-ubiquitination and degradation of Mdm2 [34]. Conversely, results from Linares et al. have shown that MdmX stimulates Mdm2-mediated ubiquitination of p53, as well as Mdm2 self-ubiquitination, in vitro [26]. These inconsistent data reflect the complex relationship between Mdm2 and MdmX and are often difficult to reconcile because of the nature of in vitro overexpression studies. Quantitative analysis has demonstrated that endogenous MdmX is present at different proportions relative to Mdm2 in several types of human cell lines [36]. This observation might well account for the discrepancies among in vitro studies that alter MdmX abundance, and it also indicates that the relative levels of Mdm2 and MdmX are crucial for controlling p53 stability and activity, as further demonstrated by recent crystal structure studies. Linker et al. revealed that the primary and secondary interfaces in Mdm2 homodimers or Mdm2/MdmX heterodimers are crucial for the binding of the ubiquitin E2 enzyme and for ubiquitylation of the partner subunit. Mdm2 homodimers present two sets of primary and secondary interfaces for E2 binding, so the E2 enzyme can be recruited by either monomer, leading to ubiquitylation of the other subunit. In the Mdm2/MdmX heterodimer, however, only Mdm2 can provide the primary E2 interaction site, while the secondary interface is provided by MdmX, which does not cause the ubiquitylation and degradation of Mdm2. Therefore, the Mdm2/MdmX ratio can be used to explain the status of Mdm2 in different situations: when the ratio is high, Mdm2 forms homodimers and degrades itself through autoubiquitination; when the ratio is low, Mdm2 is stabilized [37]. Both in vivo and in vitro experiments have demonstrated that p53 can bind to p53-responsive elements located within the Mdm2 gene and promote its transcription, thereby setting up a negative feedback regulatory loop [38,39]; in contrast, p53 cannot transactivate MdmX. Because of this, the protein level of Mdm2 fluctuates widely upon p53 activation, whereas the level of MdmX, which is not a p53 transcriptional target, remains relatively constant. Thus, the stress-induced up-regulation of p53 increases the level of Mdm2, thereby modulating the ratio of Mdm2 to MdmX and serving as a negative feedback loop by which p53 can regulate itself.
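The dynamics of this feedback loop can be sketched numerically. The following toy simulation is only illustrative: the two rate equations, the Euler integration, and all rate constants are arbitrary choices made for this sketch (MdmX is omitted for brevity), not a fitted biological model.

```python
# Toy sketch of the p53 -> Mdm2 -> p53 negative feedback loop described
# above. All rate constants are arbitrary illustrative values and MdmX
# is omitted; this is not a fitted biological model.

def simulate(stress=0.0, dt=0.01, t_end=50.0):
    """Euler-integrate p53 and Mdm2 levels (arbitrary units) over time."""
    p53, mdm2 = 1.0, 1.0
    t = 0.0
    while t < t_end:
        # p53: constant basal production, degraded by Mdm2; 'stress'
        # (e.g. DNA damage) attenuates Mdm2-mediated degradation.
        dp53 = 1.0 - (2.0 / (1.0 + stress)) * mdm2 * p53
        # Mdm2: transactivated by p53 (the feedback arm), first-order decay.
        dmdm2 = 1.5 * p53 - 1.0 * mdm2
        p53 += dp53 * dt
        mdm2 += dmdm2 * dt
        t += dt
    return p53, mdm2

# Without stress the loop settles at a low p53 level; under stress, p53
# rises, which raises Mdm2, which in turn pushes p53 back down.
print("no stress: p53=%.2f, Mdm2=%.2f" % simulate(stress=0.0))
print("stress=4:  p53=%.2f, Mdm2=%.2f" % simulate(stress=4.0))
```

The qualitative behaviour matches the loop described above: stress relieves Mdm2-mediated degradation, p53 rises, and the resulting transactivation of Mdm2 restores the balance.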
When the level of MdmX is higher than that of Mdm2, MdmX inhibits Mdm2-mediated p53 degradation, which stabilizes p53 and, through p53, the level of Mdm2. In this proposed feedback loop, MdmX acts as a sensor of the concentration of Mdm2 and controls the balance between Mdm2 and p53.

MDM2 AND MDMX IN P53 UBIQUITINATION

It has been widely accepted that Mdm2 antagonizes p53 by promoting its ubiquitination and proteasome-dependent degradation [8,10]. In addition to the polyubiquitin-dependent degradation of p53, Mdm2 can also promote monoubiquitination of p53; this does not directly cause p53 degradation, but can promote the export of p53 from the nucleus to the cytoplasm, especially to the mitochondria, and further promote other modifications of p53 [40,41]. In vitro studies have shown that DNA damage can destabilize Mdm2 by means of autoubiquitination [39,42], and that Mdm2 ubiquitinates MdmX to mark it for proteasomal degradation [43,44]. As discussed above, MdmX does not have appreciable ubiquitin ligase activity, but it has been proposed to inhibit p53 by binding to the N-terminal transcription activation domain of p53 [13]. This binding inhibits p53 activation by hampering the interaction of p53 with p300, as acetylation of p53 by p300 can lead to p53 activation and increased transcriptional activity [45]. Consistent with this, increased endogenous p53 acetylation is observed in MdmX-null cells. Furthermore, it has been reported that several lysine residues in the C-terminal region of p53 involved in acetylation are also sites of Mdm2-mediated ubiquitination [46]. Thus, it is conceivable that MdmX might indirectly stimulate the Mdm2-mediated ubiquitination of p53 by decreasing its acetylation. Together with the observation that the Mdm2-MdmX heterodimer is a more effective E3 ligase for p53 ubiquitination than Mdm2 alone [26], these data support a model in which the Mdm2-MdmX complex is more efficient in targeting p53 for ubiquitination and degradation. Ubiquitin-conjugating enzymes (E2s) have a dominant role in determining which lysine residues are used for polyubiquitination. Like many other RING domain proteins, the Mdm2 RING domain can promote the transfer of ubiquitin molecules from an E2 conjugating enzyme directly to lysine residues of the target substrates [47]. Because the E2 enzyme determines the type and length of the ubiquitin linkage [48], it is important to identify which E2s are recruited to Mdm2-MdmX complexes. In vitro studies have shown that UbcH5 functions as an E2 enzyme for Mdm2-induced p53 ubiquitination and degradation [49]. Whether UbcH5 functions in vivo as the main E2 for Mdm2, or whether other E2 enzymes interact with Mdm2, remains to be determined. A recent study [50] has shown that an E2 enzyme, in the absence of the appropriate E3 ubiquitin ligase, is sufficient to promote the ubiquitination of the substrate. Based on these results, the authors conclude that the main functions of E3 ligases include specifying the lysines to be ubiquitinated, specifying the conformation of ubiquitination, specifying mono- versus polyubiquitination, and defining the target region on the substrate to be ubiquitinated.

PHOSPHORYLATION OF MDM2 AND MDMX

In addition to ubiquitination as a mechanism of controlling Mdm2 and MdmX, the activity of these proteins depends on their phosphorylation status. A number of kinases have been reported to phosphorylate Mdm2 and MdmX at different residues.
DNA damage stimulates the activation of multiple kinases, including ataxia telangiectasia mutated (ATM) [51], checkpoint kinases 1 and 2 (Chk1 and Chk2) [52], DNA-dependent protein kinase (DNA-PK), and c-Abl kinase, which leads to the phosphorylation of both Mdm2 and MdmX [53,54]. Evidence supports the idea that the phosphorylation of Mdm2 by ATM inhibits Mdm2 RING domain homodimerization, which prevents the polyubiquitination of p53 [55]. In addition, ATM and Chk2 have been shown to phosphorylate and destabilize MdmX [56][57][58][59]. Another protein shown to regulate the phosphorylation status of Mdm2 and MdmX is Wip1, a phosphatase that can specifically dephosphorylate Mdm2 at Ser395 and MdmX at Ser403, increasing their stability and inhibiting p53 activity [60,61]. The dimerization of Mdm2 and MdmX and the E3 ligase function of Mdm2 also appear to be regulated by phosphorylation. In an MdmX3SA (Ser-341, -367 and -402 to alanine) knock-in mouse model, Mdm2 retains the ability to bind to MdmX but is significantly reduced in its capacity to degrade MdmX, resulting in an increase in the concentration of Mdm2-MdmX heterodimers [62]. Thus, the observed defect in p53 stabilization in MdmX3SA mice could be due to the presence of high levels of Mdm2-MdmX complexes. This study also found an approximately 50% reduction in basal p53 activity in MdmX3SA mice, indicating that the stoichiometric balance between Mdm2 and MdmX is crucial for p53 activation and its response to DNA damage stress in vivo.

INHIBITORS TARGETING MDM2 AND MDMX

Although approximately 50% of cancers harbor p53 mutations, the other 50% retain WT p53, yet they escape p53-mediated tumor suppression. This is generally accomplished through the overexpression of Mdm2 or MdmX by gene amplification or mutation. It has been accepted, at least theoretically, that reactivation or restoration of p53 function in tumors is a promising cancer therapeutic strategy. Proposed strategies include repressing the expression of Mdm2, blocking the p53-Mdm2 interaction, and inhibiting the ubiquitin ligase activity of Mdm2 [63,64]. For example, Nutlin, a small molecule that inhibits Mdm2, can trigger cell-cycle arrest and apoptosis and exhibits antitumor efficacy in a murine xenograft model [65]. Several studies have also revealed that rational combinations of Nutlin-3a and other drugs could potentiate chemotherapy with mitotic inhibitors against cancer and protect normal cells from cytostatic agents [66,67]. However, several issues have been raised by studies of Nutlin. One of them is the high toxicity of inhibiting Mdm2. Studies in mice indicate that Mdm2 loss leads to the induction of p53 activation and p53-dependent pathologies in both proliferating and quiescent cells, such as erythroid progenitor cells, neurons, and smooth muscle cells [68]. Another limitation is that although Nutlin kills cancer cells that express elevated Mdm2, tumor cells overexpressing MdmX respond poorly to Nutlin because of its low binding affinity for MdmX compared with Mdm2. Overexpression of MdmX has been observed in tumor cells as an alternative route for p53 inhibition, and this decreases the efficacy of anti-Mdm2-based cancer therapy. Therefore, the development of compounds targeting both Mdm2 and MdmX in tumors retaining WT p53 has become a promising therapeutic goal. Recently, Bernal et al.
[69] showed in both in vitro and in vivo experiments that a "stabilized alpha-helix" of p53 peptide, SAH-p53-8, preferentially inhibits the binding of p53 to MdmX and reduces cancer cell viability, thereby overcoming MdmX-mediated cancer resistance. SAH-p53-8 is derived from the so-called "stapled" SAH-p53 peptides, which were designed based on the peptide sequence of the p53 transactivation domain. This peptide shows protease resistance combined with increased cellular uptake owing to a chemical design strategy termed "hydrocarbon stapling", which mimics the biological function of the native α-helical structure. Co-immunoprecipitation experiments indicate that this peptide can bind to both Mdm2 and MdmX within cells. Although SAH-p53-8 exhibits a 25-fold greater binding preference for MdmX over Mdm2, it has been shown to kill cancer cells that overexpress Mdm2, MdmX, or both proteins. More importantly, SAH-p53-8 has been shown to efficiently induce a tumor-suppressive response in vivo. The study provides a clue to reactivating p53 tumor suppressor function by synergistically applying Mdm2 and MdmX inhibitors in cancer cells, and affords new therapeutic opportunities for simultaneously inhibiting both Mdm2 and MdmX to restore p53 using drug combinations or dual-inhibitory drugs [70][71][72]. Thus, efficient rescue of p53 by pharmacological agents that target Mdm2-MdmX hetero-oligomers is conceptually viable.

CONCLUDING REMARKS

Over the past decade, considerable progress has been made towards understanding the regulation of p53 by Mdm2 and MdmX, much of which has come from data obtained from various mouse models. It is generally accepted that the ubiquitination of p53 is a fundamental mechanism of p53 control and that Mdm2 is the principal p53 ubiquitin ligase [36,73]. A study of an Mdm2 RING finger mutant knock-in mouse model [12] has shown that Mdm2 is in fact not regulated by autoubiquitination in vivo, nor is it capable of blocking p53 activity by binding alone. This is consistent with an earlier report that small molecules that inhibit the E3 ubiquitin ligase activity of Mdm2 can activate p53 [10,74]. However, the exact mechanism underlying the degradation of p53, the regulation of the RING domain of Mdm2, and the role of MdmX in this process are still unclear. Therefore, it is essential to fully understand how the RING domain of Mdm2 regulates p53: whether there is an independent mechanism whereby Mdm2 modifies p53 directly by ubiquitination and degradation, or whether Mdm2 requires MdmX binding in order to regulate p53 activity through the Mdm2-MdmX heterodimer. Recent studies using MdmX RING mutant knock-in mouse models account for part of the story. They show that an intact Mdm2 RING domain and intrinsic E3 ligase activity are not sufficient for the inhibition of p53 activity in the absence of interaction with MdmX. These studies provide the first in vivo evidence that the association of Mdm2 with MdmX, and not the Mdm2 E3 ligase activity alone, is necessary for p53 control, at least during the developmental stage of mice, which is consistent with previous data based on in vitro experiments. Nevertheless, several questions remain: whether degradation must occur for p53 to be rendered inactive, or whether ubiquitination without degradation is sufficient for the inhibition of p53; how the Mdm2-MdmX heterodimer enables Mdm2 to ubiquitinate p53 more efficiently; and in what way the Mdm2-MdmX heterodimer affects p53 ubiquitination.
Although much has already been learned about the regulation of p53 by Mdm2 and MdmX, much still remains unknown. Crystal structure studies are needed to further understand at the molecular level how exactly the Mdm2-MdmX-p53 ternary complex is formed and why the Mdm2-MdmX complex is a more efficient E3 ligase complex than Mdm2 alone. The histone acetyltransferase PCAF [75] has been identified as an E3 ubiquitin ligase that mediates the degradation of Mdm2. However, can Mdm2 be ubiquitinated and degraded by an as yet undefined E3 ubiquitin ligase? Under which circumstances is p53 monoubiquitinated and polyubiquitinated? Do Mdm2 and MdmX have additional functions independent of regulating p53? The answers to these questions will be important for understanding the importance of the Mdm2-MdmX heterodimer in tumorigenesis and for determining the feasibility of the Mdm2-MdmX heterodimer as a target for cancer therapy.
Assessment of the Food-Swallowing Process Using Bolus Visualisation and Manometry Simultaneously in a Device that Models Human Swallowing

The characteristics of the flows of boluses with different consistencies, i.e. different rheological properties, through the pharynx have not been fully elucidated. The results obtained using a novel in vitro device, the Gothenburg Throat, which allows simultaneous bolus flow visualisation and manometry assessments in the pharynx geometry, are presented to explain the dependence of bolus flow on bolus consistency. Four different bolus consistencies of a commercial food thickener, 0.5, 1, 1.5 and 2 Pa s (at a shear rate of 50 s−1), corresponding to a range from low honey-thick to pudding-thick consistencies on the National Dysphagia Diet (NDD) scale, were examined in the in vitro pharynx. The bolus velocities recorded in the simulator pharynx were in the range of 0.046-0.48 m/s, which is within the range reported in clinical studies. The corresponding wall shear rates associated with these velocities ranged from 13 s−1 (pudding consistency) to 209 s−1 (honey-thick consistency). The results of the in vitro manometry tests using different consistencies and bolus volumes were rather similar to those obtained in clinical studies. The in vitro device used in this study appears to be a valuable tool for pre-clinical analyses of thickened fluids. Furthermore, the results show that it is desirable to consider a broad range of shear rates when assessing the suitability of a certain consistency for swallowing.

Introduction

Thickening of liquids for consumption is a common approach to manage and nourish individuals who are suffering from dysphagia [1,2]. The thickeners used for dysphagia are based on the fundamental concept of increasing bolus consistency, thereby reducing the velocity of the flow during the swallowing process. This allows sufficient time for muscular adjustments in individuals who are suffering from dysphagia; such individuals are particularly susceptible to low-viscosity fluids [3]. In contrast, a bolus of too-high viscosity demands that extra force be exerted by the tongue and pharyngeal muscles to push the bolus through the oropharynx. Individuals who lack pharyngeal muscle or tongue strength may experience post-swallow residues [4], requiring a secondary clearing swallow [5]. Most of the published studies on pharyngeal bolus velocity report it as being in the range of 0.1-0.5 m/s at different locations in the pharynx. Higher velocities have been recorded for water, which decrease as the thickener concentration increases [6][7][8]. Clinical studies using ultrasound have demonstrated that increasing the bolus viscosity results in lower and flatter (i.e. less variation between the maximum and minimum velocities) velocity profiles above the epiglottis. The lower velocities are due to a high fluid viscosity, while the velocity profiles are flatter due to shear thinning [5]. Shear thinning, a term that is less familiar to the medical community, is crucial, as almost all fluid foodstuffs have viscosities that decrease with increasing shear rate (velocity) [1,9]. Commercially available thickening powders are used to manage the delayed pharyngeal response. Thickening powders are usually gum based or starch based. Starch molecules swell upon hydration, thereby increasing the viscosity of a solution, whereas gums form a network of entanglements that arrest water.
Starch-based thickeners have been shown to break down during digestion, especially in the oral phase, due to the amylase enzyme present in saliva [10], which results in a decreased viscosity of the thickened fluid. Gum-based thickeners transit relatively unchanged during oral processing [11]. Thickeners, whether gum or starch based, are shear thinning, so the shear rate should always be stated for a given consistency of a fluid, as recognised by the European Society for Swallowing Disorders (ESSD) in its recently published White Paper [8]. Very little has been published on shear rate measurements for bolus transport in the pharynx. To our knowledge, the shear rate during swallowing has been determined in only a few simulation studies, e.g. those conducted by Meng et al. [12] and Salinas et al. [13]. Meng et al. used a 2D geometry for the simulation and only reported the shear rate at the upper oesophageal sphincter (UES). Simulation studies cannot capture the complexity of bolus flow in humans. Zhu et al. [7] studied commercial thickeners and glucose mixed with contrast media during video fluoroscopic analysis of three patients. From these analyses, the velocities and associated shear rates were calculated. The use of contrast media is restricted to clinical examinations, so the results might differ in practical situations when real fluid foodstuffs are swallowed. To determine the bolus shear rate, we have applied a unique, non-invasive Ultrasound Velocity Profiling (UVP) technique [14]. UVP measures the real-time flow in tubes, ducts, and similar geometries. The technique is based on the reflection of ultrasound waves from reflecting particles/bubbles in a flowing fluid; thus, UVP does not require any contrast media. UVP measures the velocity profile directly in a given geometry, and it can be applied to bolus flow to determine accurately the shear rate distribution during swallowing. Manometry is an important tool for measuring pressure variations during swallowing using an in-dwelling catheter [15,16]. Studies have shown that when the bolus volume and viscosity are increased, the recorded intra-luminal pressure also increases, which means that greater force is needed to transport a bolus of larger volume [17,18]. Ergun et al. studied the interactions of UES shape, velocity, and bolus volume using ultra-fast computed tomography, and concluded that 15 ml of compulsory air were swallowed with the bolus [19]. Bolus consistency influences the maximum pharyngeal pressure, UES contraction, UES opening/closing duration, and the duration of pharyngeal pressure [20]. The well-known catheter method of pressure measurement is invasive, and under certain conditions it might obstruct the bolus flow [21,22]. Moreover, with the catheter method, laborious calibration steps must be followed for individual patients [18]. Furthermore, ethical concerns regarding safety always arise in clinical studies [23]. Consequently, improved thickener formulations and the incorporation of novel rheological attributes, such as fluid elasticity, yield stress, and shear thinning (as proposed in the ESSD White Paper), are challenging to test on patients directly. As mentioned before, clinical studies can produce different results even when studying the same hypothesis. Variations in the results are mainly attributed to the complexity of the swallowing process and inter-subject variability. In a previous study, we examined the influence of elasticity on safe swallowing for patients with dysphagia [24].
Fluid elasticity promoted safe swallowing, although the inter-subject variability was large. Therefore, an in vitro swallowing device that can generate a realistic bolus flow and that allows the performance of clinical types of measurements represents a perfect balance between the two extremes of in vitro simulations and medical examinations. This study describes a unique approach to performing thorough investigations of in vitro swallowing using manometry and bolus visualisation techniques simultaneously and non-invasively.

Materials

A powdered thickener from Nutricia Nordic AB (Stockholm, Sweden) was used in the experiment. Four different bolus consistencies (Table 1) of the given thickener were used for UVP, while two of them were used for the manometry. The thickener was added to water so that the consistency was set in the range of 0.5-2.0 Pa s at a shear rate of 50 s−1, thereby covering the consistency range from lower honey thick (51-350 mPa s) to pudding thick (> 1750 mPa s), according to the National Dysphagia Diet (NDD) scale [25]. Since the three viscosities used in the current experiments lie in the honey-thick range (351-1750 mPa s), this range is further categorised into low, medium, and high consistencies (Table 1), to simplify the interpretation in the Discussion. The viscosity was measured using a conventional ARES-G2 instrument from TA Instruments (New Castle, DE, USA). A cone-and-plate geometry was used, with a diameter of 40 mm and an angle of 0.04 radians. The shear rate was varied from 1 s−1 to 1000 s−1, and the temperature was set to 25 °C. The liquids were all shear thinning, and fitting the power-law model, τ = K·γ̇^n (shear stress = consistency coefficient × shear rate raised to the flow index), gave a flow index of n = 0.33. Therefore, in subsequent sections, the viscosity mentioned is always that at a shear rate of 50 s−1.

Methods

The in vitro simulator, called the Gothenburg Throat, is described in detail in a separate paper [26]. Nevertheless, to guide the reader, a brief account of how the simulator works is provided here (see Fig. 1). The given fluid is stored in a tank that is connected to the simulator. The fluid is transported to a syringe pump, simulating the oral phase. A bolus of set volume and velocity is injected into the model pharynx, mimicking the thrusting action of the tongue. While filling the syringe, the slide valve is kept closed to prevent fluid flow by gravity until the barrel is filled with the fluid. The slide valve is opened momentarily, ensuring that the fluid flows only due to the thrust exerted by the syringe. Two valves mimic the openings to the larynx and the nasopharynx, a movable epiglottis closes during swallowing, and a clamping valve mimics the upper oesophageal sphincter (UES). The UES is closed at the start and is opened for the model bolus to eject, while the epiglottis closes from its original open position. The epiglottis does not hermetically seal the entrance to the larynx opening. As the bolus passes the pharynx, the velocity profile is measured by UVP, while the pressure transducers record the pressures at four locations. A 3-s interval is set between each bolus injection.

In Vitro Device Settings

In the current work, the bolus volume was 15.0 ± 2.5 ml, unless otherwise stated. A camera (DSC-RX100M5; Sony, Tokyo, Japan) installed at a distance of 10 cm from the flow simulator was used to capture the syringe speed, which was equal to the initial bolus speed.
In this work, the piston speed, as regulated by the compressed-air regulator, was set to remain at 1.25 ± 0.03 m/s, irrespective of the bolus volume and viscosity. During photographic sequencing of the bolus flow, the camera was operated in the high-frame-rate mode, capturing images at 50 frames/s. The bolus was dyed blue for better contrast, after ensuring that the colour addition did not influence the bolus rheology. The acquired slow-motion videos were analysed using Media Player, MPC-HC ver. 1.7.13, in high-precision mode. With knowledge of the frame rate and the distance travelled by the bolus head, the velocity can be calculated (v = distance travelled ÷ (number of frames / frame rate)).

Shear Rate Calculation

The velocity profiles were acquired using the latest UVP instrument from Incipientus Ultrasound Flow Technologies AB (Gothenburg, Sweden). A novel non-invasive transducer was specifically designed for the Gothenburg Throat Simulator; it consists of a 5-MHz, 6-mm piezo transducer that generates a 30.5° beam in the pharynx. The UVP transducer captured velocity profiles in real time. From the gradient of the velocity profile, v(r), the shear rate (γ̇) was calculated as a function of the radius [14,[27][28][29]: γ̇(r) = |dv(r)/dr|. To determine the gradient and smooth the experimental data, a second-order polynomial was fitted to the data. Figure 2 shows an example of the shear rate calculation from the experimental data. The shear rate distribution inside the bolus is shown, and the maximum shear rate reported here is calculated from the velocity gradient at the wall of the model pharynx. Only the half of the velocity profile closest to the transducer is needed to calculate the shear rate distribution, as the geometry is assumed to be symmetrical about the centreline. The data acquired closer to the transducer are more accurate, since the ultrasound energy decreases with distance in the other half, due to absorption and attenuation. Figure 3 is presented as an example of data acquisition with the ultrasound transducer used in the present work. The base frequency of the ultrasound transducer was 5 MHz. On average, 128 ultrasound pulses were used to construct each velocity profile. Based on off-line experiments, the sound velocity, the radius at the short axis of the elliptical geometry, and the Doppler angle were set at 1500 m/s, 8.4 mm, and 69°, respectively.

Manometry Analysis

Manometry was performed using the pressure transducers and monitoring software (Oscilloscope from Pico Technology, Cambridgeshire, England). Pressure transducers at three locations (pharynx entrance, mid-pharynx, and close to the UES) inside the model pharynx, plus one in the nasal cavity, were calibrated against a digital reference pressure transducer for air using the DPI 705 unit (Amtele Engineering AB, Stockholm, Sweden). The accuracy of the transducers was < 6% in the pressure range of 1-48 kPa, which was considered acceptable for this application. The pressure transducer in the nasal cavity does not come into contact with the bolus and is used as a control to differentiate bolus pressure from air pressure; thus, corrected pressure values are reported here. The pressure peak duration (seconds) was measured by determining the onset and offset of pressure wave generation.

Statistical Analysis

Comparison of mean values was performed using the t-test in Microsoft Excel 2010.
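To make the wall-shear-rate procedure concrete, the following is a minimal sketch in Python. The velocity profile below is synthetic (an idealised power-law pipe-flow profile standing in for UVP data, with the radius, flow index, and centreline velocity taken from values reported in this work); only the second-order polynomial smoothing and the differentiation step mirror the procedure described above.

```python
import numpy as np

# Synthetic stand-in for a measured UVP velocity profile: an idealised
# power-law pipe-flow profile. R, n and the centreline velocity are taken
# from values reported in this work; the profile itself is NOT real data.
R = 8.4e-3                    # radius at the short axis of the pharynx [m]
n = 0.33                      # power-law flow index from the rheometry
r = np.linspace(0.0, R, 32)   # radial positions, centreline to wall
v = 0.48 * (1.0 - (r / R) ** ((n + 1.0) / n))   # velocity profile [m/s]

# Smooth with a second-order polynomial, as described above, then
# differentiate the fitted polynomial to estimate the wall shear rate.
fit = np.poly1d(np.polyfit(r, v, deg=2))
wall_shear_rate = abs(fit.deriv()(R))           # gamma_dot at the wall [1/s]

# Apparent viscosity at that shear rate for a fluid with a nominal
# consistency of 0.5 Pa*s at 50 1/s: eta(gd) = K * gd**(n - 1).
K = 0.5 * 50.0 ** (1.0 - n)                     # consistency coefficient
eta_wall = K * wall_shear_rate ** (n - 1.0)

print(f"estimated wall shear rate: {wall_shear_rate:.0f} 1/s")
print(f"apparent viscosity at the wall: {eta_wall:.2f} Pa*s")
```

One caveat on this sketch: a quadratic fit smooths the steep near-wall gradient of a strongly shear-thinning profile, so the estimate is conservative. The last lines also illustrate why a nominal consistency quoted at 50 s−1 corresponds to a noticeably lower apparent viscosity at the higher wall shear rates reported in the Results.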
Velocity Profiles and Shear Rates During Bolus Flow

The velocity profiles of the boluses thickened to consistencies of 0.5, 1.0, 1.5, and 2.0 Pa s (at 50 s−1) are shown in Fig. 4 as power spectra, where the brightness level is proportional to the energy of the ultrasound beam. The velocity profiles were measured from the transducer side to the centre of the model pharynx. Negative velocities were noted when the bolus flow took place in the direction opposite to the ultrasound beam, which is inclined at 69°. The average fluid velocities ranged from 0.046 ± 0.02 m/s to 0.48 ± 0.05 m/s, increasing with decreasing bolus consistency (Table 2). The differences in velocity between the boluses with consistencies in the ranges of 0.5-1.5 Pa s and 0.5-2.0 Pa s were statistically significant (p = 0.05). The velocity profiles acquired for low-viscosity fluids are somewhat noisy, as bolus transport differs from, for example, a continuously flowing fluid. Bolus transport is a rapid process, occurring in less than 1 s in healthy subjects and in the in vitro simulator. To capture this rapid movement, faster data acquisition and faster on-screen display are desirable; therefore, a lower number of ultrasound pulses (128) was used to capture a single Doppler spectrum, i.e. velocity profile (displayed in Fig. 4). Furthermore, air inevitably accumulated during the pumping of boluses, which caused a decrease in the Doppler frequency range due to the slower velocities of the entrained bubbles. It is noteworthy that most of the Doppler noise was in the lower velocity range, especially in the cases of boluses with low viscosities (0.5 Pa s and 1.0 Pa s); the noise levels decreased for high-viscosity boluses. In general, the increased velocity that resulted from a low viscosity yielded higher shear rates, as expected. A significantly higher (p = 0.05) shear rate (~209 s−1), calculated from the gradient of the velocities recorded at the wall, was seen for the bolus with the lowest consistency. The pudding consistency (viscosity of 2.0 Pa s at 50 s−1), the highest consistency used in the current work, yielded the lowest shear rate, 13 ± 6 s−1; this difference was also statistically significant (p = 0.05) compared with the other consistencies. The high honey-thick consistency (1.5 Pa s at 50 s−1) yielded a wall shear rate of ~24 s−1, while the medium honey-thick consistency gave a wall shear rate of ~74 s−1; these shear rate values are significantly different (p = 0.05) from those for the lowest and highest bolus consistencies used in the current work, 0.5 Pa s and 2.0 Pa s, respectively.

Influences of Fluid Viscosity and Syringe Speed on Pressure Levels in the Model Pharynx

Table 3 shows that the high-viscosity fluid (2.0 Pa s at 50 s−1) resulted in higher intra-bolus pressures at all locations in the model pharynx. The pressure values were always slightly higher (statistically non-significant at p = 0.05) at the pharyngeal entrance (Fig. 6, A and B). The mid-pharynx transducer was mounted behind the epiglottis, which was closed and thereby restricted the air flow; the short pressure pulses therefore do not allow sufficient time for equilibration, resulting in slightly lower pressure readings. To demonstrate the actual data acquired from the measuring system and the shapes of the pressure peaks, Fig. 6 is presented as a representative example.
The more viscous bolus resulted in visibly higher intra-bolus pressures at the different locations, and the opposite was seen for the low-viscosity fluid; a detailed breakdown of this observation is provided in Table 3. The two panels of Fig. 6 (upper and lower) further show that the overall pressure inside the model pharynx (location D) increases above atmospheric pressure (~100 kPa) as a result of the bolus flow. Furthermore, Fig. 6 shows a clear difference in pressure peak width: the pressure peak is broader for the fluid with a consistency of 2.0 Pa s when presented on the same time scale.

Effect of Bolus Volume

Bolus volume had a profound effect on the pressures recorded in the model pharynx (Fig. 7). The data could be categorised into two sets, low and high bolus volumes: 5, 10 and 15 ml are considered low-volume boluses, while 20 ml and 25 ml are regarded as high-volume boluses. In the low-bolus-volume category, the pressure values increased incrementally but not significantly (p < 0.05; Transducer A). Within the high-bolus-volume category, the pressure values were significantly different (p < 0.05; Transducer A) between the 20-ml and 25-ml volumes. Between the high- and low-category boluses, statistically significant (p < 0.05; Transducer A) differences were noted for all the volumes, except between the 10-ml and 20-ml volumes. Similar trends of high volume and high pressure were noted for the transducers located at the mid-pharynx and pharynx exit. The pressure values were lower at the mid-pharynx with bolus volumes of 5, 10, 15, and 20 ml, owing to the location of the transducer: the mid-pharynx transducer is located behind the epiglottis, which was closed to prevent fluid flow into the model airways, thereby restricting the air flow and resulting in a lower measured pressure. The pressure levels at the UES were higher than at the mid-pharynx and lower than at the model pharynx entrance with respect to the different fluid volumes.

Effect of Pressure Peak Duration at the UES Due to a Change in Viscosity

The pressure peak duration was longer for an increased fluid viscosity, as presented in Table 4. This effect was significant (p < 0.05) at the pharyngeal entrance and exit sensors of the model pharynx. At the mid-pharynx, the pressure peak duration for the 2-Pa s fluid was longer than that for the 1-Pa s fluid; however, this difference was not statistically significant (p > 0.05).

Influence of UES Area Contraction

The elliptical area of the UES (Fig. 8, A and B), which originally was ~374 mm², was decreased to 267 mm² and 133 mm² to simulate UES area contraction and the subsequent influence on pressure in the region, as shown in Fig. 8C. Two levels of fluid consistency, 1 Pa s and 2 Pa s, were used to study this effect at the lowermost pressure transducer in the model pharynx. As expected, decreasing the area of the UES resulted in an increased pressure build-up at the UES. This effect was more pronounced for the fluid with a viscosity of 2 Pa s.

Discussion

This work involved a thorough investigation with an in vitro type of swallowing analysis that takes both bolus visualisation and manometry into account. The acquired bolus head velocities (0.48 ± 0.05 m/s) are similar to those observed in clinical studies, such as those conducted by Tashiro et al. [3] (0.246-0.488 m/s) and Hasegawa et al. [30] (average velocity 0.46 m/s).
Both of these previous studies used the Pulsed Ultrasound Doppler technique, as was the case in the current study. Typical values from studies of bolus velocities in the pharynx lie in the range of 0.1-0.5 m/s [7]. In the present study, the largest reduction in velocity was found for the bolus with pudding consistency (2 Pa s). A similar observation was made by Rofes et al. [31] for a xanthan gum-based thickener, the same one as used here. In the present work, the ultrasound beam was directed towards the meso-pharynx. The velocity profile and the subsequent shear rate are most interesting in the meso-pharynx, due to the higher risk of aspiration in individuals who have slower airway closure. The thickened fluids used in dysphagia management are always shear thinning, i.e. at higher shear rates the bolus will transit faster and be perceived to be thinner. Shear rates of > 200 s−1 for the low honey-thick consistency and ~75 s−1 for the medium honey-thick consistency are above the level of 50 s−1 specified in the NDD scale. Thickening of fluids to highly honey-thick and pudding-thick consistencies (viscosities of 1.5 Pa s and 2 Pa s, respectively, at 50 s−1) resulted in shear rates of < 50 s−1. The therapeutic effects of thickeners are linked to the slowing of the bolus velocity through the pharynx [31]. This is further explained in our work by the low shear rate, which increases the perception of thickness with the high honey-thick and pudding consistencies. The bolus flow is dominated by viscous forces, as evidenced by the low Reynolds number. The Reynolds number, Re = ρ·v·D/η, where ρ is the density, v is the velocity, D is the hydraulic diameter, and η is the apparent viscosity, indicates whether a flow is dominated by viscous or inertial forces. It was calculated as ~20 for the least-viscous bolus, having a viscosity of 0.5 Pa s at 50 s−1, with ρ = 1014 kg/m³, v = 0.48 m/s (from the ultrasonic analyses), and D = ~0.02 m (taking into account the non-circular geometry of the pharynx). A higher Reynolds number would be reflected in greater fluctuations in the velocities. The velocity profiles presented in Fig. 3 are clearly more stable, especially those acquired with high-consistency boluses, indicating that the estimated Reynolds number is reasonable. Thus, the thickened fluids used for dysphagia management provide a sufficient increase in viscosity to ensure that the bolus flow is dominated by viscous forces. This finding is predicted and discussed in detail by Burbidge et al. [5]. Bolus consistency, which is a manifestation of internal resistance to flow, is the best-known strategy for managing dysphagia. The main influence noticed in clinical trials is the longer bolus transit time, or in other words, the slower resultant velocities with high-consistency boluses, as observed in our UVP measurements. Photographic sequencing of bolus transport was performed to follow and confirm the physical events that occur inside the model pharynx, including the simultaneous detection of the pressure peak as the bolus hits the transducer and the disintegration of boluses having different compositions. When the photographic image sequences were analysed for transit times, they were found to be in good agreement with the velocity profiles determined using the UVP technique. The transit times measured from the photographic image sequences were 325 ms and 908 ms for the boluses with consistencies of 1 Pa s and 2 Pa s, respectively.
Knowing the length of the model pharynx and the velocities from the UVP technique, the transit times are calculated as approximately ~250 ms and ~1 s, respectively, for the two displayed consistencies of 1 Pa s and 2 Pa s. Clinical studies have reported typical pharyngeal transit times as short as 100 ms [32] and as long as 1 s as the bolus consistency increases. Therefore, the transit times reported here are within the range reported in clinical studies. Adaptation to bolus consistency was not simulated in our experiments: the syringe speed, although adjustable in the in vitro device, was kept the same. This resulted in an increased intra-bolus pressure with increasing bolus consistency, as expected from fluid mechanics. However, from the biological perspective, this is not always the case, as multiple factors, such as peristaltic action, the arrival of the contraction wave, laryngeal movement, and epiglottal and UES movements, are involved. This explains why, when the body acts like a machine, i.e. the tongue thrust simply pushes the bolus towards the pharynx, higher pressures at the pharynx entrance are expected. These in vitro manometry results are therefore not directly comparable to the results of in vivo examinations. The swallowing process in humans is obviously much more complex than that in the device used for this work. For this very reason, clinical studies do not always produce reproducible results with respect to changes in bolus consistency. For instance, in the clinical study conducted by Butler et al. [20], no significant association was detected between bolus consistency and the pressure values. Contrarily, Lan et al. [22] reported results opposite to those found in the current study, i.e. that a water bolus applies more pressure than do thick liquids. The explanation given by Lan et al. [22] was that the swallowing muscles act as a buffer to control the free flow of water: to accomplish this, the contractile muscles have to contract more, which results in a higher applied pressure. The change in pressure is not proportional to the change in viscosity, owing to the fluid being shear thinning (n = 0.33). The tendency towards increased intra-bolus pressure with increasing bolus consistency was the same at every location in the model pharynx. The higher intra-bolus pressure at the entrance transducer demonstrates that the flow in the simulator is pressure-driven and not gravity-induced. Bolus volume is an important variable that has been considered in many studies [18,[33][34][35][36]. A low bolus volume may trigger uncontrolled swallowing, whereas a high bolus volume increases the risk of penetration/aspiration for patients with neurological problems [34]. Hoffman et al. [36] noticed an increase in the velopharynx pressure when the bolus volume was increased from 5 ml to 20 ml. However, using a bolus volume of up to 10 ml, Butler et al. [18] did not observe any significant differences in pharyngeal pressure; in that study, the bolus volume was varied between 3 ml and 10 ml, as opposed to between 5 ml and 25 ml here. We speculate that the bolus volumes of up to 15 ml used in most clinical studies do not have significant impacts on the pharyngeal pressure and the subsequent biomechanical events, such as UES opening duration and total swallow duration, while bolus volumes > 15 ml do influence these events. A longer pressure duration for a viscous bolus was observed by Al-Toubi et al. [16], analogous to the findings of the present study.
In the present study, higher values for pressure duration were noted than those reported by Al-Toubi et al. [16]; this is due to the much higher viscosities of the boluses used here (1 Pa s and 2 Pa s), as compared to the saliva and water swallows investigated by Al-Toubi and colleagues. A similar study performed by Lin et al. did not find any significant differences in pressure duration associated with bolus consistency or volume [34]. According to Bhatia and Shah [37], ~70% of patients with dysphagia have a malfunctioning UES, and the intra-bolus pressure is influenced by the consistency and area of the bolus. In the present work, decreasing the UES area and increasing the bolus consistency yielded results similar to those reported by Chen et al. [21], who described how the pressure values at the UES contraction increased twofold as the consistency increased from water to solid boluses. A similar trend was noticed in the current study, where a nearly twofold increase in intra-bolus pressure was noticed at all three levels of the modified UES area. The pressure values reported here are measured slightly above the UES, at the beginning of the UES contraction, and the sensor is embedded in the model pharynx wall, as opposed to the situation in clinical studies, where a manometry catheter is used. Our results suggest that by creating a realistic bolus flow in a pharyngeal geometry similar to the biological counterpart, concrete conclusions regarding the shear rate can be drawn non-invasively. Moreover, the in vitro manometry performed here is unique, never having been performed previously, to the best of our knowledge. The analysis resembles the classical four-sensor manometry used in clinical studies, with the added advantage of being non-invasive, since the pressure transducers are embedded in the model pharynx wall. The current study is the first to involve clinically equivalent bolus visualisation and manometry performed non-invasively. The device itself represents a valuable tool for the manufacturers of food thickeners to test novel formulations, as mentioned in the White Paper on fluid thickening [8].

Conclusion

We show that, for thickened fluids with consistencies ranging from honey-like to pudding-like, the velocity profiles give shear rates that range from 13 ± 6 s−1 to 209 ± 46 s−1. Thus, fluid characterisation at several shear rates, other than just at 50 s−1, is warranted. Moreover, non-invasive in vitro manometry performed with the in vitro swallowing device, with a focus on bolus volume, consistency, and modified UES area, gives results similar to those seen in some clinical studies. Therefore, the device could be used as a pre-clinical examination tool to improve our understanding of the linkages between bolus rheology and the biomechanics of swallowing.
Suppression of allograft rejection by CD8+CD122+PD-1+ Tregs is dictated by their Fas ligand-initiated killing of effector T cells versus Fas-mediated own apoptosis

Mounting evidence has shown that naturally occurring CD8+CD122+ T cells are regulatory T cells (Tregs) that suppress both autoimmunity and alloimmunity. We have previously shown that CD8+CD122+PD-1+ Tregs not only suppress allograft rejection, but also are more potent in suppression than conventional CD4+CD25+ Tregs. However, the mechanisms underlying their suppression of alloimmunity are not well understood. In an adoptive T-cell transfer model of mice lacking lymphocytes, we found that suppression of skin allograft rejection by CD8+CD122+PD-1+ Tregs was mostly dependent on their expression of Fas ligand, as either lacking Fas ligand or blocking it with antibodies largely abolished their suppression of allograft rejection mediated by transferred T cells. Their suppression was also mostly reversed when effector T cells lacked Fas receptor. Indeed, these FasL+ Tregs induced T cell apoptosis in vitro in a Fas/FasL-dependent manner. However, their suppression of T cell proliferation in vitro was dependent on IL-10, but not FasL expression. Furthermore, adoptive transfer of CD8+CD122+PD-1+ Tregs significantly extended allograft survival even in wild-type mice if the Tregs lacked Fas receptor or if recipients received recombinant IL-15, as these two measures synergistically expanded adoptively-transferred Tregs in recipients. Thus, this study may have important implications for Treg therapies in clinical transplantation.

INTRODUCTION

CD4+CD25+ regulatory T cells (Tregs) prevent allograft rejection and are essential for tolerance in animal models [1][2][3][4][5][6][7][8]. However, mounting evidence has demonstrated that naturally occurring CD8+CD122+ T cells are also Tregs that inhibit conventional T cell responses [9][10][11][12][13][14], antitumor immunity [15], as well as autoimmunity [16,17]. We have previously shown that CD8+CD122+ T cells are not only Tregs [18,19], but also are more potent in suppression of allograft rejection than are conventional CD4+CD25+ Tregs [20]. In particular, we have demonstrated that the PD-1-positive component within the CD8+CD122+ T cell population is mainly responsible for their regulatory activities, while antigen-specific CD8+CD122+PD-1− T cells are memory T cells [18]. Therefore, CD8+CD122+ Tregs likely correspond to their CD4+CD25+ counterparts, since CD122 is the β subunit of the IL-2 receptor on T cells while CD25 is the α subunit of the same receptor [21]. More accurately, CD8+CD122+PD-1+ Tregs likely correspond to their CD4+CD25+FoxP3+ counterparts, and they probably cooperate to maintain immunologic homeostasis and keep autoimmune responses in check. However, the mechanisms underlying the suppression of alloimmunity by CD8+CD122+PD-1+ Tregs remain not well understood, although it has been shown that IL-10 is partially involved in their suppression of allograft rejection [18]. Therefore, it is imperative to fully understand the mechanisms responsible for the Treg suppression so that they can be fully exploited to inhibit allograft rejection in an immune-competent recipient or even in humans.

In an adoptive T-cell transfer model of Rag1-/- mice, we found that suppression of skin allograft rejection by CD8+CD122+PD-1+ Tregs was mostly dependent on their expression of Fas ligand. Their suppression was also largely reversed when effector T cells lacked Fas receptor.
The FasL+ Tregs indeed induced conventional T cell apoptosis in vitro in a FasL/Fas-dependent manner. Moreover, the Treg adoptive transfer extended allograft survival even in wild-type mice when the Tregs themselves lacked Fas receptor or when recipients received recombinant IL-15, since these two approaches synergistically expanded the Tregs that were transferred to wild-type recipients.

Fas ligand expression on CD8+CD122+PD-1+ Tregs is critical for their suppression of allograft rejection

To search for the mechanisms underlying immunosuppression mediated by memory-like CD8+CD122+PD-1+ Tregs, we determined a role for Fas ligand (FasL) in their suppression of allograft rejection. Rag1-/- mice on the B6 background were transplanted with a Balb/C skin graft and received syngeneic CD3+ T cells and/or CD8+CD122+PD-1+ Tregs. Some recipients received Tregs derived from FasL-/- (gld) mice while others were treated with a blocking anti-FasL antibody. As shown in Figure 1A, the transfer of CD8+CD122+PD-1+ Tregs significantly delayed skin allograft rejection mediated by CD3+ T cells (MST = 39 vs. 13 days, n = 8-9, P < 0.05). As controls, transfer of the Tregs alone did not reject the allografts. However, the suppression of allograft rejection by CD8+CD122+PD-1+ Tregs was mostly diminished by either utilization of FasL-deficient Tregs (MST = 24 vs. 39 days, n = 8, P < 0.05) or treatment with a blocking anti-FasL mAb (MST = 26 vs. 39 days, n = 7-8, P < 0.05). Isotype control mAb did not alter the allograft survival (data not shown). Moreover, the Tregs were much less effective in suppression of allograft rejection when CD3+ effector T cells lacked Fas receptor (MST = 21 vs. 39 days, n = 7-8, P < 0.05). On the other hand, a lack of perforin on the Tregs did not alter their capacity to prolong skin allograft survival. Also shown is a representative image of an accepted (Figure 1B) or rejected (Figure 1C) skin allograft. Indeed, most of the purified CD8+CD122+PD-1+ Tregs expressed FasL prior to their adoptive transfer (Figure 1D). Thus, these data indicate that the FasL/Fas, but not the perforin/granzyme, pathway plays an important role in CD8+CD122+PD-1+ Treg-mediated suppression of allograft rejection.

CD8+CD122+PD-1+ Tregs promote CD3+ effector T cell apoptosis in a Fas/FasL-dependent manner

Since we found that the Fas/FasL pathway was critical for CD8+CD122+PD-1+ Treg-mediated suppression of allograft rejection, we asked whether or not CD8+CD122+PD-1+ Tregs would directly induce effector T cell apoptosis via engagement of the Fas/FasL pathway. FACS-sorted CD8+CD122+PD-1+Thy1.1+ Tregs and CD3+Thy1.1− T cells were cultured and activated by anti-CD3 and anti-CD28 mAbs for 72 hours. Thy1.1− T cells were then analyzed for their apoptosis using a TUNEL method. As shown in Figure 2A & 2B, CD8+CD122+PD-1+ Tregs significantly induced effector T cell apoptosis while their FasL-deficient counterparts failed to do so. Similarly, anti-FasL blocking mAb reversed T cell apoptosis induced by the Tregs when compared to the isotype control. On the other hand, CD8+CD122+PD-1+ Tregs also failed to promote the apoptosis of Fas-deficient T cells. These findings suggest that CD8+CD122+PD-1+ Tregs induce the apoptosis of effector T cells via interactions between their surface FasL and the Fas receptor on effector T cells.
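The kind of survival comparison reported above (median survival times plus a log-rank test) can be reproduced with standard tools. The sketch below uses the lifelines package on synthetic graft survival times, which stand in for, and are not, the study's raw data.

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical skin-graft survival times (days); all grafts were rejected,
# so every event indicator is 1.
t_cells_only  = np.array([12, 12, 13, 13, 13, 14, 14, 15])
t_cells_tregs = np.array([35, 37, 38, 39, 39, 41, 42, 44])

print("MST, T cells only:", np.median(t_cells_only), "days")
print("MST, T cells + Tregs:", np.median(t_cells_tregs), "days")

res = logrank_test(t_cells_only, t_cells_tregs,
                   event_observed_A=np.ones_like(t_cells_only),
                   event_observed_B=np.ones_like(t_cells_tregs))
print(f"log-rank p = {res.p_value:.4f}")   # p < 0.05 -> significant delay of rejection
```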
Fas/FasL pathway is not required for CD8+CD122+PD-1+ Treg suppression of T cell proliferation

To determine whether or not Fas signaling also plays a role in the suppression of T cell proliferation in vitro by CD8+CD122+PD-1+ Tregs, a one-way MLR was set up using these Tregs as suppressors, enriched T cells as responders or effectors (Teff), and irradiated Balb/C splenocytes as stimulators. In some groups, cell cultures were treated with anti-FasL or anti-IL-10 mAb. As shown in Figure 3A, CD8+CD122+PD-1+ (Triple+) Tregs drastically inhibited T cell (Teff) proliferation three days following the culture. Interestingly, neither a lack of FasL on the Tregs nor anti-FasL blocking mAb significantly altered the Treg suppression of T cell proliferation, indicating that the Fas/FasL signaling pathway is not required for CD8+CD122+PD-1+ Treg-mediated suppression of T cell proliferation. However, neutralizing IL-10 abolished their inhibition of T cell proliferation, suggesting that IL-10, but not Fas/FasL interaction, is essential for the Treg-mediated suppression of T cell proliferation. The same findings were seen five days following the cell culture (Figure 3B).

Depriving Tregs of Fas death signaling or supplying recipients with recombinant IL-15 inhibits allograft rejection in immunologically competent wild-type mice

To be clinically relevant, Tregs need to be effective in suppression in immune-competent wild-type animals. We asked whether or not lacking the Fas death receptor would enhance CD8+CD122+PD-1+ Treg suppressive function in wild-type recipients. We also examined whether administering recombinant IL-15 (rIL-15) would increase their suppressive capacity, given that IL-15 has been shown to be critical for the generation and maintenance of CD8+ memory (CD8+CD122+CD44high) T cells. Wild-type B6 mice were transplanted with Balb/C skin, received Fas-replete or Fas-deficient CD8+CD122+PD-1+ Tregs, and were treated with rIL-15. As shown in Figure 4, the adoptive transfer of the Tregs derived from Fas-deficient mice (MST = 22 vs. 12 days, n = 7-8, P < 0.05), but not wild-type mice (MST = 14 vs. 12 days, n = 7-8, P > 0.05), significantly delayed skin allograft rejection. Administration of rIL-15 alone also prolonged skin allograft survival (MST = 20 vs. 12 days, n = 7-8, P < 0.05). Importantly, the combined approach with both Fas-deficient Treg transfer and administration of rIL-15 further extended the allograft survival (MST = 30 vs. 22 days, n = 8, P < 0.05). To determine if these measures enhanced the Treg suppression of allograft rejection by promoting their expansion in vivo, similarly transplanted wild-type recipients received Fas-replete or Fas-deficient CD8+CD122+PD-1+Thy1.1+ Tregs and/or rIL-15. As shown in Figure 5A, the numbers of Fas-deficient Thy1.1+ Tregs in both spleens and draining lymph nodes (dLN) of recipients were increased compared to those of Fas-replete Thy1.1+ Tregs 10 days following transplantation. Administration of rIL-15 also significantly augmented the Treg numbers, while the combined measures with both transfer of Fas-deficient Tregs and administration of rIL-15 further increased their numbers. Similar findings were also observed 20 days after transplantation (data not shown). On the other hand, the Fas-deficient Thy1.1+ Tregs derived from dLNs of recipients were increasingly resistant to apoptosis compared with the control Tregs (Figure 5B), whereas administration of IL-15 did not alter their apoptotic rates.
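For reference, percent suppression in a thymidine-uptake MLR of this kind is conventionally computed from counts per minute (cpm) relative to the effector-only culture. The cpm values in the sketch below are hypothetical, chosen only to mirror the pattern described above (FasL loss does not relieve suppression, IL-10 neutralization does).

```python
# Percent suppression = (1 - cpm_with_tregs / cpm_effectors_alone) * 100.
cpm_teff_alone = 48000.0          # responders + stimulators, no Tregs (hypothetical)
conditions = {
    "Teff + Tregs (1:4)":        9500.0,
    "Teff + FasL-/- Tregs":      10100.0,   # suppression preserved
    "Teff + Tregs + anti-IL-10": 45500.0,   # suppression abolished
}

for label, cpm in conditions.items():
    suppression = (1.0 - cpm / cpm_teff_alone) * 100.0
    print(f"{label}: {suppression:.1f}% suppression")
```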
These data suggest that Fas-deficient CD8+CD122+PD-1+ Tregs undergo faster expansion than do the Fas-replete Tregs, especially in the presence of exogenous IL-15.

DISCUSSION

Using skin allotransplant and adoptive T-cell transfer models of lymphocyte-deficient mice as well as wild-type recipients, we studied the mechanisms by which CD8+CD122+PD-1+ Tregs exert their suppression of alloimmune responses. We found that inhibition of skin allograft rejection by the Tregs was mostly dependent on their expression of Fas ligand. Their suppression was also largely reversed when effector T cells lacked Fas receptor. FasL+ Tregs induced conventional T cell apoptosis in vitro in a FasL/Fas-dependent manner. Moreover, Treg adoptive transfer significantly extended the allograft survival even in wild-type murine recipients if the CD8+ Tregs lacked Fas receptor or recipients received recombinant IL-15, because these two approaches synergistically expanded the Tregs transferred to wild-type recipients. Therefore, this study revealed a new mechanism underlying the CD8+ Treg suppression of allograft rejection and could have implications for Treg therapies in clinical transplantation.

The mechanisms underlying CD8+CD122+PD-1+ Treg suppression remain not well understood. IL-10 plays an important role in CD4+CD25+ Treg-mediated suppression [22][23][24]. IL-10 production by CD8+CD122+ Tregs has also been shown to be one of the mechanisms underlying their suppression [10,18,25]. CD8+CD122+ Tregs suppressed the proliferation of CD8+ T cells by producing IL-10 [10]. They also recognized conventional T cells through their interaction with MHC class I-αβ TCR and regulated T cell responses through IL-10 production [25]. Others demonstrated that CD8+CD122+ Tregs from RasGRP1-/- mice inhibited the proliferation of CD8+CD122− T cells also through IL-10 [26]. We previously found that suppression of allograft rejection by CD8+CD122+PD-1+ Tregs was partially dependent on IL-10 [18] and that PD-1 signaling was required for their maximal production of IL-10. Hence, other mechanisms may also be involved in CD8+CD122+PD-1+ Treg-mediated suppression of allograft rejection.

CD8+ cytotoxic T lymphocytes (CTLs) exert their effector functions through two signaling pathways: perforin/granzyme and FasL/Fas. Granzymes enter the target cell cytoplasm, where their serine protease activity triggers the caspase cascade, leading to target cell apoptosis. Engagement of Fas with FasL initiates the recruitment of the death-inducing signaling complex (DISC). Then Fas-associated death domain (FADD) translocates with the DISC and recruits pro-caspases 8 and 10, which in turn activate the effector caspases 3 and 6 etc., eventually leading to the apoptosis of Fas+ target cells. Indeed, we found that suppression of alloimmune responses by CD8+CD122+PD-1+ Tregs was mostly dependent on their expression of FasL, but not perforin, and that the Tregs also induced conventional T cell apoptosis in vitro in a FasL/Fas-dependent manner. Previous studies demonstrated that CD4+ Treg cells restricted effector T cell generation through a Fas ligand-dependent mechanism [27] and maintained allograft tolerance in a granzyme B-dependent manner [28], suggesting that both pathways may be involved in CD4+ Treg-mediated suppression. It remains to be defined why the FasL/Fas, but not the perforin/granzyme, pathway is involved in CD8+CD122+PD-1+ Treg-mediated suppression of alloimmune responses.
Figure 3: CD8+CD122+PD-1+ Treg suppression of T cell proliferation in vitro. One-way MLR was set up using CD8+CD122+PD-1+ Tregs as suppressors, enriched T cells as responders or effectors (Teff), and irradiated Balb/C splenocytes as stimulators. The ratios of Treg to Teff were 1:4. In some groups, cell cultures were treated with anti-FasL or anti-IL-10 mAb, as described in the methods. T cell proliferation was analyzed using a thymidine-uptake method three (A) and five (B) days after the culture. Data are presented as mean ± SD. One representative of two separate experiments is shown.

Perhaps the suppression of alloimmune responses through FasL-Fas interactions in our experimental models simply results from the physical contacts between CD8+CD122+PD-1+ Tregs and Fas+ effector T cells.

Our data indicate that administration of rIL-15 at low doses enhances the suppression of allograft rejection in wild-type mice by CD8+CD122+PD-1+ Tregs, which otherwise would have been ineffective in immune-competent recipients. That their transfer alone did not work in immune-competent recipients may be due to insufficient numbers. It is also possible that some CD8+CD122+PD-1+ Tregs may lose their expression of PD-1 following adoptive transfer. However, the number of adoptively transferred Tregs was much higher after administering rIL-15 than that of control Tregs without IL-15, suggesting that IL-15 enhances the Treg suppression by expanding them in vivo, likely through promoting their homeostatic proliferation. Hence, our findings could have important implications for Treg cell therapies in clinical transplantation.

Previous studies have shown that IL-15 is critical for the generation and maintenance of memory CD8+ T cells [29,30], while administration of recombinant IL-15 increases their precursor frequency in vivo [31,32]. Therefore, it is understandable that IL-15 increases the suppressive capacity of CD8+CD122+PD-1+ Tregs by expanding these Tregs, since they also exhibit a CD8+ memory phenotype. However, IL-15 could also promote the generation of endogenous and donor-specific memory CD8+ T cells that do more harm than good to an allograft. In our studies, we successfully used low doses of IL-15 to promote expansion of CD8+CD122+PD-1+ Tregs that significantly inhibited allograft rejection, indicating that administration of IL-15 does not significantly increase donor-specific memory CD8+ T cells, which would otherwise damage an allograft. Perhaps adoptively transferred CD8+CD122+PD-1+ Tregs can easily outnumber endogenous, harmful and donor-specific CD8+ memory T cells, since endogenous T cell memory generally develops in very small numbers.

Skin transplantation

Skin donors were 7-8-week-old wild-type BALB/c mice, while skin allograft recipients were 7-8-week-old Rag1-/- or WT C57BL/6 mice. Full-thickness trunk skin was transplanted to the dorsal flank area of recipient mice and secured with a Band-Aid bandage (Johnson & Johnson, New Brunswick, NJ). Skin rejection was defined as graft necrosis greater than 90%, as described in our previous publications [18,33].

Analysis of T cell apoptosis by a TUNEL method

To detect cell apoptosis in vitro, FACS-sorted CD8+CD122+PD-1+Thy1.1+ Tregs and CD3+ T cells were cultured in the presence of anti-CD3 and anti-CD28 Abs (2.5 ng/ml) for 72 hours. For cell apoptosis in vivo, dLN cells were directly isolated from recipients. Cells were stained for surface markers Thy1.1 and CD8.
They were then fixed in 2% paraformaldehyde, permeabilized with 0.1% Triton X-100 solution, and labeled with fluorescein-tagged deoxyuridine triphosphate (dUTP) by the terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) method according to the manufacturer's instructions (Roche Applied Science, Mannheim, Germany).

Statistical analysis

Comparisons of means were performed using ANOVA. The analysis of graft survival was conducted using the Kaplan-Meier method (log-rank test). All analyses were performed using Prism-6 software (GraphPad Software, La Jolla, CA). Data are presented as mean ± SD. A value of P < 0.05 was considered statistically significant.
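A minimal sketch of the mean comparisons described above, assuming synthetic group values: Prism's one-way ANOVA corresponds to scipy's f_oneway (the Kaplan-Meier/log-rank step is sketched earlier in this section).

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical per-mouse values for three treatment groups.
group_wt   = np.array([14, 13, 15, 14, 12, 14, 15])
group_fas  = np.array([22, 21, 24, 23, 22, 20, 25])
group_il15 = np.array([20, 19, 21, 22, 18, 20, 21])

f_stat, p = f_oneway(group_wt, group_fas, group_il15)
print(f"F = {f_stat:.2f}, p = {p:.4g}")   # p < 0.05 -> statistically significant
```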
Effective fluid mixture of tensor-multi-scalar gravity

We apply to tensor-multi-scalar gravity the effective fluid analysis based on the representation of the gravitational scalar field as a dissipative effective fluid. This generalization poses new challenges as the effective fluid is now a complicated mixture of individual fluids mutually coupled to each other and many reference frames are possible for its description. They are all legitimate, although not all convenient for specific problems, and they give rise to different physical interpretations. Two of these frames are highlighted.

Introduction

It is well known that the scalar-tensor gravity field equations can be written as effective Einstein equations with an effective dissipative fluid in their right-hand side, built out of the Brans-Dicke-like scalar field φ present in the theory and of its first and second covariant derivatives [7,8,2,9]. The formalism has been generalized to "viable" Horndeski gravity [10,11,12] and applied to Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology [13], to theories containing nonpropagating scalar degrees of freedom [14,15], and to specific scalar-tensor solutions [16,17]. But what is the analogue of a multi-component fluid? Naturally, the simplest multi-fluid equivalent of a theory of gravity is tensor-multi-scalar gravity. Here we extend the effective fluid formalism to this class of theories. The task is much less obvious than it would appear at first sight because all the gravitational scalar fields couple to gravity, which makes them all couple to each other. In general, there can also be direct mutual couplings through their kinetic and potential terms in the action.

In the presence of multiple real fluids decoupled from each other, one can describe this mixture in the frame of an observer with timelike four-velocity u^µ. This four-velocity can be that of the comoving frame of one of the fluids, or it can be associated with any other observer. In general, it is difficult to define an average fluid [20]. This means that the total stress-energy tensor T_{µν} of the effective fluid mixture, which is a tensor defined unambiguously, can be decomposed in many ways according to the four-velocity u^µ selected. Each of these descriptions is legitimate but the description of the total mixture and its physical interpretation will depend on the observer u^µ selected to decompose T_{µν}. In particular, the density, pressure, heat flux density, and anisotropic stresses of each fluid as "seen" from a particular observer u^µ will differ from those measured in the comoving frame of that fluid.

To appreciate the difference between the descriptions of a fluid in different frames, consider a perfect fluid with four-velocity u*^µ that, in its comoving frame, is described by the stress-energy tensor

T*_{µν} = (ρ* + P*) u*_µ u*_ν + P* g_{µν}.

In the frame of a different observer with timelike four-velocity u^µ related to u*^µ by

u^µ = γ (u*^µ + v^µ), γ = (1 − v^α v_α)^{−1/2},

this perfect fluid (now "tilted") will appear dissipative, with the different stress-energy tensor decomposition [22,23,24]

T_{µν} = ρ u_µ u_ν + P h_{µν} + q_µ u_ν + q_ν u_µ + π_{µν},

where [22,23,24] h_{µν} ≡ g_{µν} + u_µ u_ν is the 3-metric of the space orthogonal to u^µ,

ρ = T_{µν} u^µ u^ν is the energy density,

P = (1/3) h^{µν} T_{µν} is the pressure,

q^α = −T_{µν} u^µ h^{να} is the energy flux density, and

π_{αβ} = T_{µν} h^µ_α h^ν_β − P h_{αβ} is the anisotropic stress tensor.

It is clear that the (spatial) vector q^µ arises solely due to the relative motion between the two frames, i.e., to the (spatial) vector v^µ.
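The tilted-fluid decomposition written above can be checked numerically. The sketch below boosts a comoving perfect fluid in Minkowski space and extracts ρ, P, and q^α with the projector h; the density, pressure, and 3-velocity values are arbitrary illustrations.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])            # metric g_{mu nu} (Minkowski)
rho_s, P_s = 1.0, 0.2                           # comoving density and pressure (illustrative)
us = np.array([1.0, 0.0, 0.0, 0.0])             # comoving four-velocity u*^mu
us_d = eta @ us                                 # u*_mu

# Comoving perfect-fluid stress-energy tensor T_{mu nu}
T = (rho_s + P_s) * np.outer(us_d, us_d) + P_s * eta

v = 0.3                                         # relative 3-velocity (illustrative)
gamma = 1.0 / np.sqrt(1.0 - v**2)
u = gamma * np.array([1.0, v, 0.0, 0.0])        # tilted observer u^mu
u_d = eta @ u                                   # u_mu
h_d = eta + np.outer(u_d, u_d)                  # h_{mu nu}
h_u = eta @ h_d @ eta                           # h^{mu nu} (eta is its own inverse)

rho = np.einsum('mn,m,n->', T, u, u)            # rho = T_{mu nu} u^mu u^nu
P   = np.einsum('mn,mn->', h_u, T) / 3.0        # P = (1/3) h^{mu nu} T_{mu nu}
q   = -np.einsum('mn,m,na->a', T, u, h_u)       # q^alpha = -T_{mu nu} u^mu h^{nu alpha}

print(f"tilted frame: rho = {rho:.4f}, P = {P:.4f}, q^x = {q[1]:.4f}")
print("orthogonality check q.u =", np.einsum('a,a->', q, u_d))   # ~0 by construction
```

For v = 0 the effective quantities reduce to the comoving ones and q vanishes, illustrating that the heat flux is purely convective.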
In this context it is problematic to interpret this purely convective current as a heat flux according to Eckart's generalization of Fourier's law [1] where T is the temperature and K is the thermal conductivity. This law expresses the fact that heat conduction is caused not only by spatial temperature gradients but also by an "inertial" contribution due to the fluid acceleration [1]. The situation becomes more complicated when multiple fluids are coupled to each other and even more when they are effective fluids and they all couple explicitly with the curvature 2 (more precisely, with the Ricci scalar R) and to each other, which is the situation in tensor-multi-scalar gravity. In this work we discuss two possibilities, but other frames may be more convenient for specific problems. Rather surprisingly, in tensor-multi-scalar gravity formulated in the Jordan conformal frame, one can obtain a particular frame as a sort of fictitious "average" frame, which is generally not possible with real fluids [20]. It is obtained by identifying the coupling function of the scalars to R (which depends on all the scalar fields in the theory) with a new field ψ and amounts to a redefinition of the scalar fields. This procedure is routine in tensor-single-scalar gravity, in which the only Brans-Dicke-like field is redefined for convenience, without much consequence or interpretation. In tensor-multi-scalar gravity, instead, this redefinition takes a new meaning. It identifies a four-velocity and a sort of "average" frame because there is only one Ricci scalar R and all the scalar fields in the theory couple to it. This ingredient is missing for real fluids, which do not couple to the curvature and have no "average" frame [20]. In the following we analyze tensor-multi-scalar gravity in its Jordan (conformal) frame formulation. It is possible to discuss it from the point of view of the "average" observer, or from the comoving frame of each fluid, or from that of any other timelike observer u µ . It is important to remember that these descriptions will be different and will provide different physical interpretations of the mechanical and thermal aspects of the fluid mixture, and that these are all legitimate (hence one should not strive to identify the "correct" one). The point is that some of these formulations (originating different decompositions of the total T µν based on different u α ) will be more convenient, and some others will be less convenient, for specific physical problems. One should adopt the formulation that is most convenient for the particular problem at hand without prejudice. For example, analyses of the quark-gluon plasma created in heavy ion collisions universally employ the Landau (or energy) frame [26,27,28,29,30] in which there is no heat flux 3 while in FLRW cosmology, where comoving coordinates are the standard, relativistic fluids are routinely described in their comoving (or Eckart) frame [21,20]. Here we are interested in the fluid-mechanical equivalent and in the thermal description of tensor-multi-scalar gravity, where the fluids in the mixture are effective fluids and they all couple explicitly with R and with each other. This is a very specific situation and our choices, although convenient in this problem, are not meant to be recipes with universal convenience (although aspects of our discussion may apply to other situations as well). 
After this discussion, we present an alternative view of the first-order thermodynamics of tensor-multi-scalar gravity in the Einstein conformal frame, while the last section summarizes our conclusions. 2 Tensor-multi-scalar gravity in the Jordan conformal frame Let us begin with a convenient Jordan frame formulation of tensor-multi-scalar gravity (without derivative couplings). We adopt most of the notations specific to tensor-multi-scalar gravity used in Ref. [32]. There are N scalar fields of gravitational nature φ A , with A = 1, 2, ... , N, all coupled nonminimally with the Ricci scalar R and between themselves, as described by the action where capital indices A, B, J , ... label the scalar fields in the multiplet φ 1 , ... , φ N , g is the determinant of the spacetime metric g µν , ∇ µ is the associated covariant derivative, and V is a scalar field potential. The Einstein summation convention is used also on the multiplet indices J. The coupling function F φ 1 , ... , φ N depends on all the φ A , i.e., ∂ F/∂ φ I = 0 ∀I ∈ {1, ... , N}, or else some of the scalar fields would not be coupled directly to R and would lose their status of gravitational scalar fields. 4 F is assumed to be positive to keep the effective gravitational coupling G eff ≃ 1/F positive. The matrix Z AB φ 1 , ... , φ N acts as a Riemannian metric on the scalar field space of coordinates φ 1 , ... , φ N . Z AB can be taken to be symmetric without loss of generality because it multiplies the combination of kinetic terms ∇ α φ A ∇ α φ B symmetric in A and B. The elements of Z AB are all positive to avoid introducing unstable phantom fields. In general, also the potential V φ 1 , ... , φ N depends on multiple fields (although it is not important that it depends on all these fields, which is instead crucial for the coupling function F). Since the matrix Z AB is real and symmetric, it can be diagonalized at each spacetime point x µ and has positive eigenvalues, turning the sum of kinetic terms appearing in the action (12) into where a bar denotes fields in the system of principal axes of the matrix Z AB in field space, and is the diagonal form of Z AB . This diagonalization, however, is not crucial and we will not use it explicitly, retaining the non-diagonal form of Z AB in our formulae. Multi-fluid decomposition The total stress-energy tensor is obtained by varying the action (12) with respect to g µν . Using ∂ A ≡ ∂ /∂ φ A and D AB ≡ Z AB + ∂ AB F, the associated equation of motion reads where δ g µν is the matter stress-energy tensor and The nature of these scalar fields (gravitational or not) depends on the conformal frame [33]. Here we refer to the Jordan conformal frame. The equation of motion obtained by variation of the action with respect to φ A reads We can obtain the expression of the Ricci scalar from (15), µν is the trace of the matter stress-energy tensor. With this expression, Eq. (17) turns into The goal of the decomposition given here is to separate T µν so that each part can be decomposed in the frame of a given fluid. Each fluid then receives an individual stress-energy tensor contribution. The number of purely convective terms is minimised by such a decomposition to allow for a clearer description of the intrinsic dissipative properties of each fluid. Assuming the gradient of each scalar field to be timelike, we define the φ A -fluid four-velocity At this point, in order to avoid ambiguities, all the multiplet summations in this section will be written with an explicit summation symbol. 
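Since Z_AB is real and symmetric with positive eigenvalues, the pointwise diagonalization mentioned above is a standard eigen-decomposition; the sketch below illustrates it on a hypothetical 2x2 field-space metric.

```python
import numpy as np

# Hypothetical field-space metric Z_AB at one spacetime point (N = 2 scalars).
Z = np.array([[2.0, 0.5],
              [0.5, 1.0]])

eigvals, O = np.linalg.eigh(Z)        # Z = O @ diag(eigvals) @ O.T, O orthogonal
assert np.all(eigvals > 0), "positive eigenvalues: no phantom fields"

Z_diag = O.T @ Z @ O                  # Z_AB in the system of principal axes
print(np.round(Z_diag, 12))           # diagonal, entries equal to the eigenvalues
```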
The above identification between a scalar field and an associated effective fluid allows us to rewrite the scalar field derivatives in term of kinematic quantities [11]. The second derivative ∇ µ ∇ ν φ A in Eq. (16) can be expanded as where h A µν ≡ g µν + u A µ u A ν is the three-metric of the hypersurface orthogonal to the four-vector u A µ , Θ A ≡ ∇ µ u A µ is the expansion tensor associated with the A-fluid, and σ A µν ≡ where Since this equation does not depend on four-velocity gradients, we can interpret it as a purely inviscid contribution to the stress-energy tensor mixture. If we rewrite the metric as and we define then, writing explicitly the summations, the stress-energy tensor assumes the form which is interpreted as a mixture of interacting imperfect fluids. "Average" or "ψ-" description Let us discuss another possible procedure. In the following we redefine the fields φ A but, before proceeding, it is essential to note (and remember through the rest of this work) that all these fields couple directly with the Ricci scalar R through F and they all play a role of in determining the prop-erties of the effective fluid equivalent to the tensor-multiscalar theory and the effective gravitational coupling G eff ≡ F −1 . (Their role may be different as, in general, F φ 1 , ... , φ N is not symmetric in all its arguments.) In particular, the effective temperature of this multi-component fluid is determined by all the fields φ A and the upcoming redefinition of these fields does not change this fact. We proceed to redefine the scalar field multiplet as in Ref. [32], which is standard practice in single-scalar-tensor gravity. We can rename the coupling function by electing it to be a Brans-Dicke-like scalar, and we then have the N scalar fields ψ, φ 1 , ... , φ N−1 . This mathematically convenient procedure effectively makes only the field ψ couple explicitly to R but the reader should not be fooled into believing that the remaining fields φ A do not couple to gravity. In fact, all the fields φ A are coupled to ψ (and also to each other), which makes them couple also to gravity. Indeed, they were explicitly coupled to gravity before the field redefinition ψ ≡ F and the physics does not change. The action (12) is recast as [32] S TMS = 1 2κ where The field equations for g µν , ψ, φ A obtained by varying the action (28) are [32] where we have used the notation ∂ A ≡ ∂ /∂ φ A and ∂ ψ ≡ ∂ /∂ ψ. Using the metric field equations we can express the Ricci scalar in terms of the matter and effective stress-energy tensors, where µν . Then, the equation of motion for ψ turns into Finally, we define the effective stress-energy tensor as We can now move to the effective fluid picture. Comoving (Eckart) frame of ψ-fluid Assume that the gradient of ψ is timelike; using we define the effective fluid four-velocity which is normalized, u µ u µ = −1 (but the sign of the righthand side of this definition must be adjusted to keep u µ a future-oriented vector, which is crucial in discussions of dissipation which is time-irreversible). In general, the φ A -fluids are tilted with respect to the ψ-fluid, i.e., u Aµ and u µ have different directions. Using the u µ of the ψ-fluid we perform the usual 3 + 1 splitting of spacetime into the time direction and the 3-space "seen" by the observer with four-velocity u µ . 
This 3-space has Riemannian metric The kinematic quantities (expansion tensor Θ µν , expansion scalar Θ = ∇ µ u µ , shear tensor σ µν , shear scalar, and accelerationu µ ) associated with u µ are the same as those calculated for single-scalar-tensor gravity in [2]. In fact, their definitions are purely kinematic and theory-independent since they do not use the field equations but only the definition (39) of u µ . These kinematic quantities are straightforward, although lengthy to compute. Since they are used here, we report them in Appendix A. The field equations (32) have the form of effective Einstein equations with an effective stress-energy tensor in their right-hand side, which can be seen as the stress-energy tensor of a dissipative multi-component fluid of the form is the effective energy density, is the effective heat current density describing heat conduction, is the effective stress tensor, is the effective isotropic pressure, and the trace-free part of the stress tensor is the effective anisotropic stress tensor. q µ , Π αβ , and π αβ are purely spatial with respect to u µ . The fluid description is obtained by expressing the derivatives of ψ in terms of the relative effective fluid four-velocity (39) and kinematic quantities, Furthermore, we have therefore the ψ-equation of motion readṡ We need these equations to eliminate the dependence of T µν onẊ and on ⊔ ⊓ψ. Indeed, prior to using the equation of motion for ψ, one obtains Using the decomposition ∇ µ = h µ ν ∇ ν − u µ u ν ∇ ν , defininġ φ A ≡ u α ∇ α φ A , and taking into account the symmetry Z AB = Z BA , the interacting terms contribute to the density, pressure, heat flux and anisotropic stress, and the stress-energy tensor reads Then, it is straightforward to obtain the effective fluid quantities where an overdot denotes differentiation along the lines of the ψ-fluid, i.e.,φ A ≡ u α ∇ α φ A . At this point, we can identify the various contributions to the effective energy tensor as P = P inv + P vis + P φ (58) where are the bulk and shear viscosity coefficients, respectively. In this particular case in which the Lagrangian is linear in X, the ψ-equation of motion reveal that ⊔ ⊓ψ does not contain derivatives of the ψ-fluid four-velocity, therefore it only contributes to the inviscid pressure. However, it contains φterms related to the interactions. Finally, the φ -terms contribute only to the inviscid part of the effective stress-energy tensor (because P φ and ρ φ depend only on first derivatives of the fields), to the heat flux, and to the shear viscosity. In the general case of the previous section, all the φ fields contribute to both viscous and inviscid part. Conclusions The picture of the effective fluid equivalent of tensor-multiscalar gravity that emerges from the previous sections is the following. Because all the N original gravitational scalar fields couple explicitly to the Ricci scalar, they are automatically coupled to each other. In addition, they may have explicit couplings to each other through the functions Z AB and V , but this is not necessary for them to be mutually coupled. In the multi-fluid interpretation, this property could correspond to these fields being thermalized, but this interpretation is not corroborated in any obvious way by the field equations and remains rather arbitrary.
Extruded Aquaculture Feed: A Review

Agro-industrial by-products are processed materials that can have high protein content or other nutrients. The agro-industrial by-products are traditionally sold at low prices for animal feed consumption. These residues of the agro-industry have a high concentration of nutritional and bioactive compounds, which can be applied as fishmeal substitutes. In this chapter, it is shown how extrusion can be an alternative process for aquaculture feed production, increasing the digestibility and functional properties of the aquaculture feed, such as water stability and floatability. The thermal process during extrusion decreases the antinutritional factors present in legumes or other agro-industrial by-products, such as trypsin inhibitors and lectins. This chapter reviews research related to new protein sources that can potentially complement or substitute fishmeal for aquaculture feed. The use of bean (Phaseolus vulgaris) protein and cottonseed meal as fishmeal substitutes is shown, as well as the optimization of the extrusion process for aquaculture feed production. The incorporation of plant protein into aquaculture production contributes to a more sustainable process. The effect of the extrusion parameters on the final product and quality is explained.

Introduction

The Food and Agriculture Organization (FAO) considers that about 16% of the consumed animal protein comes from fish proteins [1]. With increasing population, demand for fish consumption will increase. Aquaculture is a good alternative to wild fish production. Aquaculture is a growing economic activity with estimated sales in the USA in 2013 of $1.3 billion [2]. About 43% of the world's fish production now comes from farms, a share that has been increasing in the past decade, especially in Asia and Africa [3]. Worldwide, aquaculture production grew in 2013 to 97.2 million tons (live weight), with a value of 157 billion US dollars. Asia is the major aquaculture producer in the world. Aquaculture production, such as that of finfish and crustaceans, requires a high amount of fishmeal [1]. Fishmeal represents about 60% of the cost of aquaculture feed and is also a limited resource. The world market consumes about 68% of fishmeal for aquaculture products, such as shrimp, trout, salmon, and other species [4], and this share is expected to grow in the next decades. Several investigations have looked into the use of alternative protein sources that could either supplement or substitute for fishmeal. Agro-industrial by-products have been successfully used for animal feed [5], but can also be applied in aquaculture. Soybean has been successful to a certain point as a substitute for fishmeal as a protein source for different aquaculture species. Reports show the use of soybean in feeding trout and shrimp. About 47% of world soybean production is GMO-soy [6]. Up to 69% of the soybean in the global market is genetically modified, while 85% of the soy produced in the USA is genetically modified. Consumers are also looking for non-GMO products for consumption. Different legumes represent an alternative source of protein usable for aquaculture feed. Agro-industrial by-products have high protein concentration and are sold at low cost for animal feed. Bean (Phaseolus vulgaris) is a common legume grown and consumed in many countries and different climates. Bean plants have low water requirements and are a staple food in many areas of the world.
Small and damaged beans have no economic value and represent an agricultural by-product. Beans are an excellent protein source for aquaculture feed if processed thermally to inactivate the antinutritional factors present in the kernel [7,8]. Cottonseed meal is a by-product of the oil industry. After oil extraction, the cottonseed meal (CSM) can have a protein content of up to 55%. CSM is sold at low prices for cattle feed and other small ruminants. The presence of gossypol, an antinutritional agent in CSM, limits its use in aquaculture. Breeding programs have developed cotton varieties with low gossypol content, acceptable for use in aquaculture [9]. Some aquaculture species have low amylase activity, which limits the enzymatic breakdown of starches. Extrusion applies high temperature in short processing times. Extrusion is an alternative for feed production, increasing the digestibility and functional properties of the aquaculture feed, such as water stability and floatability. The thermal process during extrusion decreases the antinutritional factors present in legumes or other agro-industrial by-products, such as trypsin inhibitors and lectins [8]. The chapter reviews agro-industrial by-products that can substitute or complement fishmeal for aquaculture feed. The optimization of the extrusion process for aquaculture feed production is discussed.

Chemical composition and diet requirements in extruded aquaculture products

Agro-industrial by-products are processed materials, where some of the main compounds have been either extracted or are products that do not comply with certain quality requirements. The oil extraction industry produces by-products with high protein, while distillery by-products have less sugar after fermentation, but high protein concentrations. The bean (P. vulgaris) processing industry discards small kernels with no commercial application. The agro-industrial by-products are traditionally sold at low prices for animal feed consumption. These residues of the agro-industry have a higher concentration of nutritional and bioactive compounds, which can be either used as supplements in functional foods or extracted and used for nutraceuticals. After oil extraction, soybean meal, canola meal, and flax seed meal contain high concentrations of proteins, minerals, and fiber. The brewing and distillery industry also offers by-products with high protein concentration. Cottonseed meal is obtained after oil extraction from the cottonseed. Glandless cottonseed has a low concentration of gossypol and is suitable for consumption by humans as well as livestock and aquaculture products. The levels of gossypol present in glandless cottonseed meal are not toxic for monogastric animals. The concentration of protein in glandless cottonseed meal (GCSM) can be up to 55%, which is more than a 100% increase in protein compared to the whole cottonseed. GCSM has more protein than canola meal and is comparable to soybean meal (Table 1). Soybean seeds can have 40% protein [10], but their protein content increases after oil extraction. Cottonseed meal has low starch content and a high mineral content, which makes it an excellent complement for aquaculture feed. Small and cracked beans do not have the required quality for consumer acceptance, but have a high protein and starch content (Table 1). Although the protein content is lower than that of the other agro-industrial by-products presented in Table 1, the high starch content makes bean flour more suitable for extrusion than by-products with low starch content.
The extruded bean flour can be used for human and animal consumption, as well as for the aquaculture feed industry. The extrusion process totally inactivates the trypsin inhibitors and lectins in the bean flour [8]. Aquaculture diets must provide proteins and essential fatty acids, normally from fish oil, as well as minerals and vitamins. Aquaculture feed contains about 62% fishmeal, 20% wheat flour, 20% fish oil, 3.4% milk whey, 2.1% vitamins and minerals, and 0.5% choline chloride for cell function and structure (Table 2). The use of plant proteins should supply enough protein to cover the nutritional requirements of the aquaculture products. Fishmeal is a limited resource, but alternative protein sources can substitute for fishmeal in aquaculture diets.

Hardness and functional properties of extruded products

Hardness, Water Absorption Index, and Water Solubility Index are essential functional properties of aquaculture feed. The feed should have a certain hardness for the trout or shrimp to be able to eat it. The hardness of the extruded product depends on extrusion moisture and extrusion temperature. High extrusion temperature and high moisture content result in a hard extruded product. Softer products are the result of extruding at low temperatures and high moisture content (Figure 1). Aquaculture feed products need a specific Water Absorption Index (WAI) to facilitate consumption, while the Water Solubility Index (WSI) correlates well with the stability of the feed in an aqueous environment. The extruded products need to be stable in the water; a high WAI also produces a high WSI of the extrudates. Studies show that extruded products with lower WSI are obtained at low extrusion temperatures and low moisture content, which also produces harder extruded products. If we compare Figures 1 and 2, we can conclude that although crystallinity is lower in the product extruded at higher temperatures, the hardness tends to be high. The results indicate a probable high degree of denaturation, where proteins unfold, allowing the protein to restructure into a harder matrix.

Extruded bean flour

Studies show that with a fishmeal substitution of 15, 30 and 45% bean flour or soy protein, there are no significant (p > 0.05) changes in protein content (Table 3). Even a 45% fishmeal substitution with bean flour/soy protein had similar protein content as the diet with no vegetable protein. The fat content of the aquaculture feed ranged between 15.9 and 18.8%, and the dry matter content was 91.6% or higher. The mineral content was significantly lower in the diets with 45% bean flour and with soy protein, compared to the diet with only fishmeal (Table 3). Bean flour is a better source of minerals for fish diets than soy protein concentrates. The functional properties of the extruded products are affected (p < 0.05) by the substitution of fishmeal with vegetable proteins. The Expansion Index (EI) decreases depending on the extrusion moisture. The EI decreases when substituting 30 and 45% of the fishmeal with bean flour at 18 and 22% extrusion moisture (Figure 2). When extruding at 22% moisture, the EI and the bulk density (BD) were not affected (p > 0.05) by the substitution of fishmeal by bean flour (Figure 3) [18]. The Water Absorption Index (WAI) is also affected (p < 0.05) by the presence of bean flour (Figure 4). An increase in bean flour and decrease in fishmeal decrease (p < 0.05) the WAI and increase (p < 0.05) the Water Solubility Index (WSI) at 22% extrusion moisture.
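As an aside on measurement: the WAI and WSI quoted throughout this section are conventionally obtained from a centrifugation assay (an Anderson-type method is assumed here, since the chapter does not spell out the protocol). The bench weights below are hypothetical.

```python
# WAI = grams of sediment gel per gram of dry sample after centrifugation;
# WSI = percent of the dry sample recovered as dissolved solids from the
# supernatant. All weights are hypothetical bench values.
dry_sample_g = 2.50    # ground extrudate suspended in water
gel_g        = 13.20   # sediment gel after centrifugation
dissolved_g  = 0.31    # solids left after evaporating the supernatant

wai = gel_g / dry_sample_g                  # g gel per g dry sample
wsi = dissolved_g / dry_sample_g * 100.0    # percent of dry matter dissolved

print(f"WAI = {wai:.2f} g/g, WSI = {wsi:.1f} %")   # lower WSI -> more water-stable feed
```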
The WSI does not change (p > 0.05) at 18% extrusion moisture, and it even decreases at 22% extrusion moisture. The stability of the extrudates depends on the solubility and water absorption indices. The lower the WSI, the more stable the feed will be (Figure 5). Another quality parameter in aquaculture feed is the sinking velocity of the product. The sinking velocity of the aquaculture feed is different for each aquaculture species. Some fish require slow-sinking feed that will resemble the movement of small fish or other living organisms. In the case of shrimp, the feed should sink to the bottom of the ponds for better use. Extrusion temperature affects (p < 0.05) the sinking velocity of the feed (Figure 6). Extrusion at 120°C increases (p < 0.05) the sinking velocity compared to extruded feed at 150°C. Bean flour can also affect (p < 0.05) the sinking velocity of the extruded feed. When feed contains bean flour and is required to have a low sinking velocity, the most recommended extrusion temperature is 150°C.

The independent variables to be considered in aquaculture extrusion are temperature, moisture content, and screw speed. Optimization is based on these independent variables and the dependent variables (Expansion Index, bulk density, and sinking velocity), with target ranges of 0.88 to 1.11 for the EI, 0.55 to 0.97 g/cm³ for the bulk density, and 2 to 6.2 cm/s for the sinking velocity. The best extrusion conditions with a single laboratory extruder (Brabender, Germany) are 120°C and 22% moisture content at a screw speed of 140 rpm with a diet formulation containing 62% fishmeal and no vegetable protein. Diets containing 15 and 30% bean flour require less moisture (18%), but the same temperature and screw speed, to obtain an optimum aquaculture feed. Diets containing 45% bean flour and 15% soy protein are best extruded at 120°C and 18% moisture at a lower screw speed (80 rpm). In this case, the lower mineral content appears to affect the screw speed. A sharp increase in soy protein concentrate to 30 and 45% requires high extrusion temperatures of 135 and 150°C, respectively. Extrusion of 30% soy protein requires 20% moisture content and a screw speed of 110 rpm. The moisture requirements are also high (22%), as is the screw speed (140 rpm), for extrudates with 45% soy protein concentrate. The specific mechanical energy (SME) is the energy input in the form of work in the extrusion process. In aquaculture feed, the SME is affected (p < 0.05) by the extrusion temperature. High extrusion temperature decreases (p < 0.05) the SME. The extrusion moisture does not affect (p > 0.05) the SME, except with samples containing 15% bean flour. When extruding samples containing bean flour, an increase in screw speed will increase (p < 0.05) the SME, but not with those materials containing fishmeal and no bean flour [19].

Starch pregelatinization

Both pregelatinized and non-pregelatinized starch have been added to balanced aquaculture feed to study the effect of pregelatinization on the functional properties of the extrudates. The studies have shown that pregelatinization of starch before extrusion has a positive effect (p < 0.05) on the Water Solubility Index (WSI) (data not shown). The WSI decreases, which can be beneficial for the aquaculture industry.
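Before continuing with pregelatinization, the optimization described earlier in this section amounts to screening runs against three response windows. The sketch below encodes the target ranges quoted in the text, while the example operating points and their measured responses are hypothetical.

```python
# Target windows from the text: EI 0.88-1.11, bulk density 0.55-0.97 g/cm3,
# sinking velocity 2-6.2 cm/s. A run is acceptable when every response fits.
TARGETS = {"EI": (0.88, 1.11), "BD": (0.55, 0.97), "sink": (2.0, 6.2)}

runs = [
    # (temperature C, moisture %, screw rpm), {response: measured value} -- hypothetical
    ((120, 22, 140), {"EI": 1.02, "BD": 0.71, "sink": 4.1}),
    ((150, 18, 140), {"EI": 1.15, "BD": 0.49, "sink": 1.6}),
    ((120, 18, 80),  {"EI": 0.95, "BD": 0.88, "sink": 5.8}),
]

def acceptable(responses):
    return all(lo <= responses[k] <= hi for k, (lo, hi) in TARGETS.items())

for conditions, responses in runs:
    print(conditions, "->", "OK" if acceptable(responses) else "rejected")
```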
Pregelatinization does not affect (p > 0.05) the Water Absorption Index of the extruded feed, while the sinking velocity of the extrudates does not increase (p > 0.05) compared to the diets with no starch. Adding starch to the extrudates will lower (p < 0.05) the sinking velocity of the extrudates, as long as the starch is not pregelatinized. Pregelatinized extrudates are harder than extrudates containing starch that has not been pregelatinized before extrusion. The Expansion Index (EI) of extrudates containing pregelatinized starch decreased (p < 0.05) compared to extrudates with starch that was not gelatinized. The decrease in EI was observed with 20% starch content, but there were no differences (p > 0.05) with extrudates containing more than 50% starch [20]. Nixtamalization can also be used to pregelatinize starch. Nixtamalization is a traditional thermal treatment used for corn products in North and Central America, where corn kernels are cooked with Ca(OH)2, resulting in a pregelatinized dough suitable for extrusion. Figure 7b shows pregelatinized corn kernels, where the center of the kernels appears to be enzymatically degraded during the steeping time of the process. Figure 7a shows a bean starch kernel before extrusion, while Figure 7c shows the structural matrix of extruded bean/corn flours. Again we observe the protein structure, but also partial gelatinization of the starch kernels. Different raw materials influence the final product characteristics. Crystallinity represents the structural arrangement and is mostly related to the starch structure in the kernel. Although corn has a higher starch content (about 65.5%) [8] than bean flour (51.9%), bean flour shows a higher crystallinity than nixtamal (Figure 8). Extrusion decreases crystallinity because of the gelatinization of the starch kernels, but the temperature and moisture content during extrusion also affect the percentage of crystallinity of the extruded product. The crystallinity of the extruded product is related to the retrogradation of the starch. Low extrusion temperatures produce the lowest crystallinity of the end-product, probably because of the lower degree of gelatinization. High extrusion temperatures yield a low crystallinity because of starch dextrinization during extrusion and lower retrogradation.

Feeding trials of extruded aquaculture feed

Studies of extruded trout feed show a final weight decrease (p < 0.05) after 32 days of feeding rainbow trout (Oncorhynchus mykiss) with bean flour. Figure 9 shows a decrease (p < 0.05) in trout final weight and weight gained after being fed with extruded feed containing 15-45% bean flour when compared to extruded feed containing fishmeal. Trout fed with 45% bean flour only gained 5.5% of their weight in 32 days. Although weight gain is lower (p < 0.05) when fishmeal is substituted with bean flour, the survival rate was 100% for all the diets [21], indicating that bean flour can be used up to 30% as a fishmeal substitute. The feed conversion efficiency can be calculated as shown in Formula 1. The feed conversion efficiency (FCE) is highest (p < 0.05) with fishmeal (54.4%), and decreased (p < 0.05) to between 47.1 and 46.5% with 15 and 30% bean flour, respectively. Extruded feed containing 15-30% bean flour shows an acceptable feed conversion efficiency. Feeds containing 45% bean flour are not recommended for trout feed due to the low FCE Index.
FCE = ((Final weight (g) − Initial weight (g)) / Consumed feed (g)) × 100 (1)

The condition factor or coefficient of condition K is a quality parameter of the fish and takes into consideration the weight and length. A condition factor above 1.6 shows that a fish is in excellent condition. The diets containing fishmeal and bean flour (15 and 30%) had a similar K factor (Figure 10). The K factor increased (p < 0.05) with 45% bean flour. The diets containing fishmeal or 15 and 30% bean flour had the same (p > 0.05) feed conversion ratio (FCR). The feed conversion ratio shows the inverse relationship between feed intake and weight gain of the fish and is related to the digestibility and metabolic use of the diet [22]. Trout fed with fishmeal or 15 and 30% bean flour also had the same (p > 0.05) specific growth rate (SGR), but not trout fed with 45% bean flour in their diets. The Hepatosomatic Index (HSI) shows an indirect relationship between liver weight and body weight; it indicates the nutritional state of the trout. A high HSI indicates a better fish nutritional condition. The fishmeal and the 15% bean flour diets have the highest HSI; there is no difference (p > 0.05) between the two diets. The diets with 30 and 45% bean flour have lower (p < 0.05) HSI than the diets containing either no bean flour or only 15% bean flour.

Color of extruded bean flour aquaculture feed

The L* values describe the lightness of the color of the sample on a scale of 0-100, where 0 = black and 100 = white. On the other hand, the a* values, if positive (0 to 60), indicate a reddish color of the extruded product; if negative (0 to −60), the feed tends to be greener. The closer the values are to zero, the more the color tends to be neutral. The values with the highest (p < 0.05) luminosity have the lowest a* values. The samples with higher (p < 0.05) L* correspond to the extrudates with 45% bean flour, probably because of the presence of more starch in the sample. These samples also have the lowest (p < 0.05) a*. The samples with the lowest L* are the samples without bean flour, except the sample extruded with 15% bean flour, 150°C, and 22% moisture content, which also has a low L*. The samples with less bean flour and more fishmeal tend to have a higher a* value (Figure 11). The samples without bean flour have b* values ranging between 13.53 and 13.68, lower (p < 0.05) than the b* values for the extruded samples with bean flour, except for the sample with 15% BF extruded at 150°C and 22% moisture. The extruded aquaculture feed with 15-30% bean flour presents a less gray, more yellowish color, with values between 14.18 and 14.98. For higher amounts of bean flour, the feed tends to be lighter in color and more yellowish (16.00-16.7). The differences in color are explained in part by the original color of the raw material; fishmeal tends towards a gray, less light color, while the starch present in the bean flour also affects luminosity and color. The thermal process during extrusion also defines the final color of the feed. Different chemical reactions, such as Maillard browning and caramelization, interact to give the final luminosity and color of the product.

Effect of extrusion on bioactive compounds

The effect of extrusion moisture and temperature on the antioxidative capacity and bioactive compounds in bean/corn extrudates is shown in Table 4. Neither extrusion temperature nor extrusion moisture had an effect (p > 0.05) on the antioxidant activity [23].
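Returning briefly to the feeding-trial indices above: FCE follows Formula 1, while FCR, SGR, Fulton's condition factor K, and HSI are computed here from their standard definitions, which the chapter names but does not write out; all input numbers are hypothetical (chosen so that FCE matches the 54.4% reported for the fishmeal diet).

```python
import math

w_initial, w_final = 62.0, 96.0   # trout body weight (g), hypothetical
feed_consumed      = 62.5         # g of feed over the trial, hypothetical
days, length_cm    = 32, 18.0     # trial length and fish fork length (assumed)
liver_g            = 1.3          # liver weight (g), hypothetical

fce = (w_final - w_initial) / feed_consumed * 100.0              # Formula 1, %
fcr = feed_consumed / (w_final - w_initial)                      # g feed per g gain
sgr = (math.log(w_final) - math.log(w_initial)) / days * 100.0   # %/day (standard def.)
k   = 100.0 * w_final / length_cm ** 3                           # Fulton's K (standard def.)
hsi = liver_g / w_final * 100.0                                  # % (standard def.)

print(f"FCE {fce:.1f}%  FCR {fcr:.2f}  SGR {sgr:.2f}%/day  K {k:.2f}  HSI {hsi:.2f}%")
# -> FCE 54.4%, K ~1.65 (above the 1.6 threshold for excellent condition)
```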
The β-carotene, flavonoid, and polyphenol contents were not affected (p > 0.05) by the extrusion temperature and moisture. An experimental central rotary design of second order was used for the extrusion experiment. The experiments were conducted in a single screw extruder with a temperature range from 142 to 198°C and 16.3 to 18.7% moisture content. Extrusion even at 192°C does not seem to affect the antioxidative activity, nor the concentration of the active compounds. Extrusion uses little processing time, which makes it an adequate way of food processing, since it is not damaging to bioactive compounds.

Extruded cottonseed meal

The use of cottonseed meal (CSM) in extruded snacks can double the amount of protein with just an increase of 10% CSM [9]. The protein concentration of an extruded snack can thus be enhanced. The difference in chemical composition changes the chemical and physical structure, the extrusion properties, and the functional properties of the final product. Figure 12 shows two extruded samples. Figure 12a illustrates the structure of extruded corn masa, with a low protein content and a high starch content. It can be seen how layers of starch are built to provide expansion to the extruded product. On the other hand, Figure 12b shows the matrix of extruded cottonseed meal, which has 12.8% protein. The flat, homogenous layers are gone, and a more irregular structure is present. It appears as if the protein breaks up the continuous starch structure and builds a less homogenous texture. The lower concentration of starch and the presence of more protein produce a more compact structure, which has a lower Expansion Index and a harder, crispier texture. The increase of protein content in extruded corn/cottonseed meal products reduces (p < 0.05) the physical and functional properties of the end-product; the Expansion Index, the water activity, and the water absorption and solubility indices decrease (p < 0.05) as the cottonseed protein increases. As the Expansion Index decreases, the hardness of the extruded products increases. It is not only the low starch concentration that lowers the Expansion Index of the extruded products; the matrix composition and structure also determine the hardness and Expansion Index of the final product. Figure 12 shows how proteins produce a more compact, irregular structure in cottonseed meal (CSM) extruded products. When CSM is extruded in a single screw extruder, an increase in CSM negatively affects (p < 0.05) the Expansion Index, because of the presence of protein. CSM also decreases (p < 0.05) the water activity, Water Absorption Index, and Water Solubility Index of the extruded product [9]. In aquaculture, low Water Solubility and Water Absorption Indices are most likely preferred, rather than high values. The extruded feed requires stability in an aqueous environment to assure that the fish or shrimp have time to consume it. Stability of the extrudates also helps to reduce water turbidity and pollution. Low Water Solubility and Water Absorption Indices have a positive effect on the quality of aquaculture feed. On the other hand, a lower water activity (aw) is related to a longer shelf life of the feed. Extrusion shows restructuring of cottonseed meal. Figure 13a shows a heterogeneous structure before extrusion and a homogenous structure after extrusion (Figure 13b). Lambda scan microscopy of extruded products shows different scans between samples with and without cottonseed meal (Figure 14).
The extruded samples with cottonseed meal show a second peak at about 670 nm (Figure 14b), which is not shown in the samples without cottonseed meal (Figure 14a).

Figure 13. Confocal microscopy of (a) non-extruded and (b) extruded cottonseed meal (reprinted with permission from Reyes-Jaquez et al. [9]).

Figure 14. Lambda scan microscopy of extruded (a) corn masa and (b) corn masa/cottonseed meal (reprinted with permission from Reyes-Jaquez et al. [9]).
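A lambda scan is simply an intensity-versus-wavelength trace, so the secondary peak near 670 nm noted above can be found programmatically. The sketch below is a minimal illustration using SciPy's find_peaks on a synthetic two-Gaussian trace; the trace itself is invented for illustration and is not the authors' data.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic lambda scan: a main emission peak plus a secondary one near 670 nm
wavelengths = np.arange(500, 750, 1.0)
trace = (np.exp(-((wavelengths - 560) ** 2) / (2 * 15 ** 2))
         + 0.4 * np.exp(-((wavelengths - 670) ** 2) / (2 * 10 ** 2)))

# Report peaks above 20% of the maximum intensity
peaks, _ = find_peaks(trace, height=0.2 * trace.max())
print(wavelengths[peaks])  # -> [560., 670.]; a ~670 nm peak flags CSM samples
```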
Genome modifications and cloning using a conjugally transferable recombineering system

The genetic modification of primary bacterial disease isolates is challenging due to the lack of highly efficient genetic tools. Herein we describe the development of a modified PCR-based, λ Red-mediated recombineering system for efficient deletion of genes in Gram-negative bacteria. A series of conjugally transferrable plasmids were constructed by cloning an oriT sequence and different antibiotic resistance genes into recombinogenic plasmid pKD46. Using this system we deleted ten different genes from the genomes of Edwardsiella ictaluri and Aeromonas hydrophila. A temperature-sensitive and conjugally transferable flp recombinase plasmid was developed to generate markerless gene deletion mutants. We also developed an efficient cloning system to capture larger bacterial genetic elements and clone them into a conjugally transferrable plasmid for facile transfer to Gram-negative bacteria. This system should be applicable in diverse Gram-negative bacteria to modify and complement genomic elements in bacteria that cannot be manipulated using available genetic tools.

Introduction

Genetic manipulation of bacterial strains provides critical information on the contributions of specific loci to virulence or other cellular functions, and many systems have been developed to achieve genetic knockouts and modifications [4,5,18]. The modification of bacterial genomes using counter-selectable double-crossover methods is labor intensive and sometimes very difficult to achieve due to the low frequency of recombination events [21,26,31]. In contrast, the λ Red recombineering system [39,41] has many advantages as a fast, efficient and reliable means of generating targeted genetic modifications in prokaryotes [11,61] and eukaryotes [7]. The λ Red system expresses the Exo, Beta and Gam proteins that work coordinately to recombine single- and double-stranded DNA [11,38,61], and has been exploited for genome modifications in Escherichia coli, Salmonella enterica and other Gram-negative bacteria [9,11,40,61]. Exo has a 5′-3′ double-stranded DNA (dsDNA)-dependent exonuclease activity for generating 3′ single-stranded DNA (ssDNA) overhangs [6,32,34], which then serve as a substrate for the ssDNA-binding protein Beta to anneal complementary DNA strands for recombination [8,28,38]. Gam, an inhibitor of host exonuclease activity due to RecBCD [44], helps to improve the efficiency of λ Red-mediated recombination with linear double-stranded DNA. Unlike recA-dependent homologous recombination, which requires longer regions of sequence homology with the targeted genetic region [25], the λ Red apparatus can efficiently recombine DNA with homologous regions as short as 30-50 bp, which can directly be incorporated into oligonucleotide primers in a PCR [11,61]. The recombineering technique is widely used to generate precise deletions [11], substitutions [33], insertions [36] or tagging [57] of targeted genes. One of the biggest advantages of the recombineering method is that the modifying DNA can precisely eliminate the antibiotic selection markers for subsequent modification of the targeted DNA [11,42,67]. While this recombineering system works well in a model bacterium such as E. coli [37,39], bacteria often express restriction endonucleases that make them recalcitrant to foreign DNA even among naturally competent strains [1,3]. In fact, it was the study of experimental infections of E. coli strains with bacteriophage λ that led to the discovery of restriction-modification (RM) systems [2].
Overcoming host RM systems can be accomplished via the passage of plasmids through a methylation-minus E. coli strain [51], but in highly methylated bacterial strains it may be necessary to use an in vitro or in vivo methylation strategy to achieve more efficient electroporation [12,13,29]. However, modulating the plasmid DNA methylation status is inefficient and labor-intensive compared to using conjugal transfer to introduce foreign DNA into a bacterial strain using a broad host range plasmid like IncP when electroporation is problematic [14,15,17]. Our need to generate targeted genetic deletions in Gram-negative bacterial pathogens of farmed catfish led to the development of recombinogenic plasmids that could be introduced into Gram-negative bacteria via conjugation. Our studies focused on two bacterial pathogens, Aeromonas hydrophila and Edwardsiella ictaluri, the causes of motile Aeromonas septicemia (MAS) and enteric septicemia of catfish (ESC), respectively, which are responsible for significant economic losses to the channel catfish industry in the Southeastern United States [56]. Fish diseases caused by strains of E. ictaluri are also frequently reported in catfish farming in Asia [46]. While E. ictaluri was formerly the most important bacterial pathogen in farmed US catfish, in 2009 US catfish farmers experienced epidemic disease outbreaks of motile Aeromonas septicemia (MAS) caused by a highly virulent Aeromonas hydrophila strain [20]. This newly emergent and virulent A. hydrophila strain, which has been implicated to have an Asian origin [23], is responsible for the death of millions of pounds of food-sized channel catfish in the US [23]. Though both E. ictaluri and A. hydrophila pose serious threats to the US catfish industry [24,45,56] as well as global fish farming [46,62], highly efficient genome modification techniques have not yet been developed to study the virulence mechanisms and permit generation of avirulent vaccines for these two pathogens. Though recombineering techniques are widely used for genome modification of domesticated laboratory isolates such as E. coli strains, the implementation of these techniques for primary pathogenic isolates is quite challenging. In this study, we modified the available λ Red recombination tools [11,54] to generate markerless mutants of E. ictaluri and A. hydrophila. Several conjugally transferable and temperature-sensitive plasmids were constructed to facilitate genome modification by recombineering and removal of the antibiotic resistance marker, followed by curing of the plasmids.

Bacterial strains and plasmids

The list of bacterial strains and plasmids used in this study is presented in Table 1. E. ictaluri and A. hydrophila strains were routinely grown on Trypticase Soy Broth (TSB) or Agar (TSA) medium at 28 °C and 30 °C, respectively. E. coli SM10 λpir [50] was routinely used for the conjugal transfer of mobilizable plasmids to strains of E. ictaluri and A. hydrophila as previously described. E. coli BW25141 and BT340 [11] were received from the Yale University Genetic Stock Center. When antibiotic selection was required, bacterial growth media were supplemented with kanamycin (50 µg/ml), chloramphenicol (15 and 25 µg/ml for strains of E. ictaluri and A. hydrophila, respectively), tetracycline (10 µg/ml) and/or colistin (10 µg/ml).
Recombinant DNA techniques and conjugal transfer of recombinogenic plasmids

The list of primers used in this study is presented in Table 2. All oligonucleotides were purchased from Eurofins MWG Operon (Huntsville, AL). For cloning purposes, we routinely used electrocompetent E. coli ("E. cloni 10G", Lucigen Corp., Middleton, WI). PCR amplifications were carried out using EconoTaq DNA polymerase (Lucigen Corp.), Pfu DNA polymerase (Life Technologies, Grand Island, NY) and TaKaRa Ex Taq (Clontech, Mountain View, CA) as appropriate. Genomic DNAs and plasmids were extracted using the E.Z.N.A. DNA Isolation Kit (Omega Biotek, Atlanta, GA) and FastPlasmid Mini Kit (5 Prime, Gaithersburg, MD), respectively. Restriction enzymes and T4 DNA Ligase (Quick Ligase), used for restriction digestion of DNAs and cloning, respectively, were purchased from New England Biolabs (Ipswich, MA). Restriction-digested DNAs with sticky ends were blunt-ended using a DNA Terminator kit (Lucigen Corp.). Digested DNAs and ligation mixes were purified using DNA Clean and Concentrator-5 (Zymo Research, Irvine, CA). DNA concentrations were quantified using a Qubit 2.0 Fluorometer (Life Technologies). The mobilizable recombinogenic plasmids pMJH46 and pMJH65, and the flp recombinase plasmid pCMT-flp, were introduced into E. coli SM10 λpir by electroporation according to a previously published method [47]. Plasmids were conjugally transferred into E. ictaluri and A. hydrophila by filter mating experiments according to the methods described previously [35]. E. ictaluri and A. hydrophila transconjugants were selected on LB plates supplemented with chloramphenicol and colistin, or tetracycline and colistin, respectively. The introduction of plasmids into E. ictaluri or A. hydrophila was confirmed by their growth in the presence of appropriate antibiotics and by conducting PCR with a plasmid-specific primer set.

Construction of broad host range recombinogenic plasmids

A list of plasmids used in this study is presented in Table 1. The mobilizable plasmid pMJH46 was constructed by introducing the oriT sequence and chloramphenicol acetyltransferase (cat) gene into the recombinogenic plasmid pKD46 [19], which contains an arabinose-inducible λ Red cassette (exo, bet and gam genes) required for recombineering (Fig. 1). The oriT sequence and cat gene were PCR amplified from pGNS-BAC [27] using primers MobicatF and MobicatR, and CatF and CatR, respectively. Amplicons for the oriT sequence and cat gene were fused by splicing by overlap extension (SOE) PCR [52] using primers MobicatF (forward) and CatR (reverse). The oriT-cat cassette and pKD46 plasmid were digested with EcoRV and NcoI, respectively. The NcoI-digested pKD46 plasmid was blunt-ended and ligated to the oriT-cat cassette using a DNA Terminator kit (Lucigen Corp., Middleton, WI) and T4 DNA ligase (Promega, WI), respectively. The ligation mixture was then transformed into electrocompetent E. coli (E. cloni 10G, Lucigen Corp.) for cloning. Transformants were selected on 2× YT medium supplemented with ampicillin and chloramphenicol after incubation overnight at 30 °C. The introduction of the oriT-cat cassette into pKD46, resulting in pMJH46, was confirmed by PCR and sequencing as described below. To construct the recombinogenic plasmid pMJH65, plasmid pMJH46 was digested with BstZ17I and SfiI, and blunt-ended using the DNA Terminator kit.
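SOE PCR joins two amplicons through a shared overlap engineered into the inner primers. The Python sketch below illustrates the in-silico equivalent of the fusion step used above for the oriT-cat cassette; the sequences and the 20 nt overlap length are invented for illustration.

```python
def soe_fuse(fragment_a: str, fragment_b: str, min_overlap: int = 15) -> str:
    """Fuse two amplicons that share a terminal overlap, as in SOE PCR.

    Scans for the longest suffix of fragment_a that equals a prefix of
    fragment_b, then joins the fragments through that shared region.
    """
    max_len = min(len(fragment_a), len(fragment_b))
    for k in range(max_len, min_overlap - 1, -1):
        if fragment_a[-k:] == fragment_b[:k]:
            return fragment_a + fragment_b[k:]
    raise ValueError("no terminal overlap of sufficient length found")

# Toy oriT and cat amplicons sharing a 20 nt overlap (sequences are made up)
overlap = "ATGCGTACGTTAGCCGATCA"
orit_amplicon = "GGCCTTAACCGGTT" + overlap
cat_amplicon = overlap + "TTGGCCAATTGGCC"
fused = soe_fuse(orit_amplicon, cat_amplicon, min_overlap=20)
print(len(fused), fused)  # single oriT-cat cassette; the overlap appears once
```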
A tetracycline resistance gene (tetA) cassette was PCR amplified from pACYC184 using primers tetAF and tetAR and ligated to blunt-ended pMJH46 using T4 DNA ligase. The ligation mixture was then transformed into electrocompetent E. coli (E. cloni 10G, Lucigen Corp.) for cloning. Transformants were selected on 2× YT medium supplemented with tetracycline after overnight incubation at 30 °C. The construction of plasmid pMJH65 was confirmed by PCR and sequencing as described below.

Construction of conjugally transferable Flp plasmid pCMT-flp

The flp gene, which is required for FRT-mediated site-specific recombination [7], was PCR amplified from pCP20 using primers Flp-pRhamF and Flp-pRhamR and was cloned into the pRham N-His SUMO vector (Lucigen Corp.) under the control of the rhaPBAD promoter. The resulting plasmid pRham-flp was then digested with XbaI and blunt-ended in order to insert a tetracycline resistance gene (tetA), which was PCR amplified from pMJH65 using primers tetAF and tetAR. After cloning this tetA cassette into the pRham-flp plasmid, resulting in plasmid pRham-flp-tetA, the flp-tetA cassette was digested with AlwNI and BsaAI, and blunt-ended for cloning into the repA101-oriR101 cassette, which was PCR amplified from pMJH65 using primers UP-F-flp-oriR and DN-R-oriT. After cloning flp-tetA into the repA101-oriR101 cassette, the construction of the resulting plasmid pCMT-flp was confirmed by sequencing as described below. To determine the efficacy of the pCMT-flp plasmid in excision of an antibiotic resistance cassette flanked by FRT sequences, pCMT-flp was transferred into strains of A. hydrophila mutants by conjugation as described above.

Preparation of linear double-stranded DNA (dsDNA) substrate for recombineering

The linear dsDNA fragments used for deletion of the ompLC gene from E. ictaluri with recombineering were generated by PCR amplification of the kanamycin resistance gene (kanR) cassette with its flanking FRT sequences using plasmid pKD4 as a template [11]. All other linear dsDNAs used for deletion of the E. ictaluri genes eihA and dtrA were PCR amplified from a kanR cassette located within the genome of the E. ictaluri Alg-08-183 ompLC::kanR mutant generated in this study by recombineering. Likewise, the linear dsDNA substrate used for recombineering in A. hydrophila was generated by PCR amplification of the cat gene with its flanking FRT sequences integrated within the genome of A. hydrophila ML09-119 (see below). Recombineering primers contained 50-60 bp of homology to the targeted genes at their 5′ ends and 20-22 bp of homology to the cat cassette at their 3′ ends. Primers were modified with four consecutive 5′ phosphorothioate bonds when appropriate to reduce the chance of degradation by exonucleases during recombination. To introduce ~250 and ~500 bp homologous arms on either end of the recombineering substrates for the determination of the effect of the length of homology on recombination frequency, primers were designed to anneal ~250 and ~500 bp upstream and downstream, respectively, of the cat gene of the A. hydrophila ML09-119 waaL::cat mutant generated by recombineering in this study. PCR amplification of the respective antibiotic resistance gene cassettes using these gene-targeted primers was performed using high fidelity TaKaRa Ex Taq Polymerase (Clontech) and EconoTaq PLUS GREEN (Lucigen Corp.).
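Recombineering primers of the kind described above are simple concatenations: a 5′ homology arm copied from the sequence flanking the target gene plus a 3′ priming segment matching the resistance cassette. The sketch below illustrates that construction; the sequences are placeholders, and the leading asterisks follow a common vendor notation for phosphorothioate bonds (an assumption here, not the authors' notation).

```python
def recombineering_primer(flank_60nt: str, cassette_priming: str,
                          phosphorothioates: int = 4) -> str:
    """Build a recombineering primer: 5' homology arm + cassette-priming 3' end.

    The first `phosphorothioates` internucleotide bonds are marked with '*'
    (vendor-style notation) to indicate exonuclease-resistant linkages.
    """
    primer = flank_60nt + cassette_priming
    protected = "*".join(primer[: phosphorothioates + 1])
    return protected + primer[phosphorothioates + 1:]

# Placeholder 60 nt flank of a target gene and 20 nt cassette priming site
flank = "ATG" * 20
cat_priming = "GTGTAGGCTGGAGCTGCTTC"
fwd = recombineering_primer(flank, cat_priming)
print(len(fwd.replace("*", "")), fwd[:16], "...")  # 80 nt total, four '*' marks
```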
At least 10 positive PCR amplicons of 50 µl volume were pooled together and purified by phenol-chloroform extraction followed by ethanol precipitation [47] or using the Wizard DNA Clean-Up System (Promega, Madison, WI). Purified PCR products were resuspended in nuclease-free water and used for transformation into electrocompetent E. ictaluri and A. hydrophila strains harboring recombinogenic plasmids pMJH46 and pMJH65, respectively.

Deletion of E. ictaluri and A. hydrophila genes by recombineering

Electrocompetent E. ictaluri and A. hydrophila harboring recombinogenic plasmids pMJH46 and pMJH65, respectively, were prepared as follows. E. ictaluri strains were grown in TSB at 28 °C overnight in the presence of chloramphenicol, whereas A. hydrophila was grown at 30 °C overnight in TSB supplemented with tetracycline. Cultures were then diluted 1:70 in 40 ml of Super Optimal Broth (SOB) medium supplemented with appropriate antibiotics and 10 mM L-arabinose, and grown with vigorous shaking until the OD600 reached 0.45 or 0.6 for E. ictaluri and A. hydrophila, respectively. Cells were harvested by centrifugation at 5000 × g for 8 min at 4 °C, washed three times with ice-cold 10% glycerol, and finally concentrated 400-fold by resuspension in 100 µl of ice-cold GYT (10% glycerol, 0.125% yeast extract and 0.25% tryptone) medium or 10% glycerol. Freshly prepared electrocompetent cells were immediately used for electroporation. For deletion of targeted genes from E. ictaluri using recombineering, 10 µg of dsDNA substrate was mixed with 50-55 µl of electrocompetent cells in a pre-chilled electroporation cuvette (0.1-cm gap), and pulsed at 1.8 kV with 25 µF and 200 Ω using an Eppendorf Electroporator 2510 (Hamburg, Germany). For A. hydrophila, the same electroporation procedures were followed with the exception that cells were pulsed at 1.2 kV. Immediately after electroporation, 950 µl of SOC supplemented with 10 mM L-arabinose was added, and the cells were incubated at an appropriate temperature with vigorous shaking for at least 4 h for E. ictaluri and overnight for A. hydrophila. Cells were then spread onto 2× YT agar plates supplemented with kanamycin and chloramphenicol for E. ictaluri and A. hydrophila, respectively, and incubated at an appropriate temperature to obtain mutants with the targeted deletions. Mutants grown on antibiotic selective plates were purified by streaking on TSA plates for isolated colonies. The correct deletions of the targeted genes were confirmed by PCR and/or sequencing as previously described [11]. To determine the effect of (1) phosphorothioate-modified primers, (2) the size of the gene-specific region of homology and (3) the concentration of the dsDNA substrates on recombination frequencies, each experiment was repeated independently at least three times.

Flp-mediated excision of antibiotic resistance gene cassettes to generate unmarked mutants

Before removal of the antibiotic resistance gene cassettes using Flp/FRT-mediated recombination, recombinogenic plasmids were cured from the mutants of E. ictaluri and A. hydrophila.

Fig. 1. Schematic maps of conjugally transferable recombinogenic and flp recombinase plasmids constructed in this study. The oriT sequence cloned into these plasmids facilitates their conjugal transfer using an appropriate donor E. coli strain. The λ Red recombinogenic plasmids pMJH46 and pMJH65 and the flp recombinase plasmid pCMT-flp are easily cured after heat induction at 37 °C due to the temperature-sensitive repA101 gene.
Plasmid maps were generated by CLC Genomics Workbench (version 4.9).

Plasmid pMJH46 was cured from E. ictaluri mutants by growing cells in TSB medium at 28 °C until the OD600 reached 1.0, after which the cells were subjected to heat induction at 43 °C for 1 h with shaking at 250 rpm. Heat-induced cultures were serially diluted in sterile water and spread for isolated colonies onto BHI Blood Agar plates that were then incubated at 28 °C for 36 h. To cure plasmid pMJH65 from A. hydrophila mutants, cultures were grown in TSB broth at 37 °C overnight and streaked onto TSA plates for isolated colonies. The loss of plasmids pMJH46 and pMJH65 from E. ictaluri and A. hydrophila mutants was confirmed by determining the inability of individual mutant colonies to grow on TSA plates supplemented with chloramphenicol and tetracycline, respectively. Plasmid pCP20, which contains the Flp recombinase [7] required for FRT sequence-specific recombination, was electroporated into E. ictaluri mutants according to the methods described above. E. ictaluri mutants harboring pCP20 were selected on 2× YT agar plates supplemented with chloramphenicol. These E. ictaluri mutants were grown in TSB at 28 °C until an OD600 of 1.0, and the temperature was shifted by incubating at 37 °C for 1 h with shaking at 250 rpm to induce the removal of the kanamycin resistance gene cassette by Flp recombinase. To obtain isolated colonies, diluted cultures were plated onto BHI Blood Agar plates and incubated at 28 °C for up to 36 h. The Flp recombinase plasmid pCMT-flp constructed in this study was conjugally transferred to A. hydrophila mutants as described above and induced for the removal of the chloramphenicol resistance gene cassette by incubating at 37 °C. Induced cultures were streaked onto TSA plates, and colonies grown on non-selective plates that subsequently failed to grow on antibiotic selective plates were tested by PCR and sequencing to confirm the Flp-mediated excision of the antibiotic resistance gene cassette introduced by recombineering.

Cloning large genomic inserts without PCR amplification of the targeted genetic locus

To construct a small, conjugally transferrable, and low copy-number plasmid backbone, the cat gene and p15A origin of replication (oriR) were PCR amplified using primers Li-CCatF and CCatR-oriT, and CatFseq and Li-AAAAR, respectively, from the genome of A. hydrophila ML09-119 hlyA::cat (generated in this study) and from plasmid p1R17 (unpublished), which carries the p15A origin of pACYC184. The reverse primer CCatR-oriT used for amplification of the cat gene contains 87 bp of oriT sequence (Table 2) to facilitate the conjugal transfer of large-insert clones to Gram-negative bacteria. The amplicons of the cat-oriT cassette and p15A (oriR) were fused together to construct a 2097 bp plasmid backbone, cat-oriT-oriR (pMJH97), using SOE PCR with the outermost primers Li-CCatF and Li-AAAAR. To clone the ymcABC genetic cluster (unpublished data, manuscript in preparation) of A. hydrophila ML09-119, the pMJH97 plasmid backbone was PCR amplified using primers Li-CCatF and Li-AAAR that are homologous to the nucleotide regions 3,497,544-3,497,603 and 3,499,203-3,499,265, respectively, of the A. hydrophila ML09-119 genome [53]. These regions correspond to a specific region upstream of the ymcABC genetic cluster in the A. hydrophila ML09-119 genome. Purified PCR products were electroporated into A. hydrophila ML09-119 harboring plasmid pMJH65 for genomic integration into the targeted regions by recombineering.
Colonies selected on 2× YT plates containing chloramphenicol were subjected to PCR to confirm the correct integration of the pMJH97 backbone plasmid into the genome using primers p15AF and Li234R-HindII, and amplicons of the expected size were selected for sequencing. Once the correct integration of pMJH97 into the genome of A. hydrophila ML09-119 was confirmed by PCR and sequencing, genomic DNA was extracted from ML09-119::cat-oriT-oriR and restriction digested with BbvCI and NotI. Blunt-ended and purified genomic DNA fragments were self-ligated using T4 DNA ligase and electroporated into E. coli (E. cloni 10G, Lucigen Corp.) for cloning. Clones were selected on 2× YT plates with chloramphenicol, and the cloned plasmid pBBC2 was verified by PCR and sequencing using primers CCatR and ymcA-CM-1F for the presence of the complete ymcABC genetic cluster as an insert. Once the complete ymcABC cloning was confirmed, pBBC2 was introduced into E. coli SM10 λpir by electroporation. The plasmid was conjugally transferred into A. hydrophila ML09-119 as described above. Ten transconjugants grown on 2× YT plates supplemented with chloramphenicol and colistin were double purified and subjected to PCR to confirm pBBC2 mobilization into A. hydrophila ML09-119 using primers CCatR and ymcA-CM-1F.

Construction of conjugally transferable recombinogenic plasmids

The expression of exo, bet and gam within bacterial cells substantially improves their recombination frequencies, which can be exploited to modify bacterial genomes by recombineering [11]. Though published reports indicate that some E. ictaluri strains are capable of accepting foreign DNA of up to 45 kb by electroporation [23], our repeated attempts failed to introduce the recombinogenic plasmid pKD46 [11] into primary disease isolates of E. ictaluri or A. hydrophila. To introduce the recombinogenic λ Red cassette into E. ictaluri, a mobilizable plasmid was constructed by introducing the 'mob cassette' (oriT region, traJ and traK) along with a chloramphenicol resistance (cat) gene into pKD46, resulting in plasmid pMJH46 (Fig. 1, accession no. JQ070344). The cat gene introduction broadens the applicability of this plasmid since some E. ictaluri strains are intrinsically resistant to ampicillin [58]; therefore, the original plasmid pKD46 expressing the bla gene is incompatible with these E. ictaluri isolates. In this study, we successfully transferred recombinogenic plasmid pMJH46 into different E. ictaluri strains by conjugation with E. coli SM10 λpir. In subsequent studies, the pMJH46 plasmid was modified by replacing the cat gene with tetA to construct recombinogenic plasmid pMJH65 (Fig. 1, accession no. KF195927), which allows the use of the cat gene as a recombineering substrate. The plasmid pMJH65 was successfully introduced into the highly virulent catfish isolate A. hydrophila ML09-119 [53] in order to generate genomic modifications through recombineering.

Deletion of E. ictaluri and A. hydrophila genes by recombineering

To determine the feasibility of using this recombineering system in E. ictaluri, we deleted the ompLC gene that is required for phage ΦeiAU-183 attachment to E. ictaluri strain Alg-08-183 [22]. PCR screening of colonies grown on antibiotic selection plates showed that 1% of colonies were true mutants (data not shown).
Unfortunately, a large number of colonies grown on 2× YT plates supplemented with kanamycin were determined to be false positives, even though the suicide plasmid pKD4 [10] used as template was treated with DpnI before electroporation into E. ictaluri. To avoid the occurrence of background colonies, we subsequently used the genomic DNA of the E. ictaluri Alg-08-183 ompLC::kanR mutant as a PCR template for amplification of the kanamycin resistance gene cassette. Using this chromosomal template to prepare amplicons, we obtained at least ten colonies per experiment, of which ~80% were true mutants (Fig. 2). In addition to ompLC of E. ictaluri Alg-08-183, we deleted two additional genes, dtrA of E. ictaluri Alg-08-183 and eihA of E. ictaluri R4383 [59] (Fig. 2). In this study, using a recombineering approach, we also deleted seven different genes from the primary disease isolate A. hydrophila ML09-119 (Table 1). PCR and sequencing confirmed that all genes targeted for deletion from E. ictaluri and A. hydrophila strains were successfully deleted by recombineering. As a control experiment, A. hydrophila ML09-119 (pMJH65) and this same strain without the recombineering plasmid were both subjected to electroporation with equal amounts (900 ng) of a waaL::cat PCR construct (Table 1), and only in the presence of pMJH65 were any transformants obtained, at a frequency of 0.45 ± 0.27 transformants per ng of amplicon DNA.

Effects of primer modification, length of homology and dsDNA substrate concentration on recombination frequency

To determine the effect of strand protection through primer modifications on recombination frequencies in A. hydrophila ML09-119, four different primer combinations were used for the preparation of dsDNA substrates to delete the waaL gene of A. hydrophila ML09-119 [53]. In the type "+/+" primer combination, both the forward and reverse primers (Ligase-catF and Ligase-catR, in Table 2) were modified with four consecutive 5′-phosphorothioate bonds, whereas in the type "−/−" primer combination both the forward and reverse primers (Li-catF and Li-catR) were unmodified. In the type "+/−" primer combination, only the forward primer (Ligase-catF) was modified, whereas in the type "−/+" primer combination only the reverse primer (Ligase-catR) was modified with four consecutive 5′-phosphorothioate bonds. In the latter two cases, the alternative primers were unmodified. We found that dsDNA substrate prepared with both the leading- and lagging-strand-specific phosphorothioate-modified primers (type "+/+" in Fig. 3B) provided significantly more mutants, whereas the three other combinations did not affect recombination frequency (Fig. 3B). Once we determined that modified primers provided significantly more mutants, all of our subsequent recombineering experiments in A. hydrophila were carried out using both modified primers. To determine the effect of the length of the gene-specific regions of homology of the dsDNA substrate on recombination efficiency, three different dsDNA substrates that included approximately 60 bp, 250 bp and 500 bp of homologous sequence at both the 5′ and 3′ ends were used for targeted deletion of the waaL gene of A. hydrophila ML09-119 [53].

Fig. 2. (Panel A) Colonies grown on 2× YT plates supplemented with kanamycin were selected for PCR screening of ompLC gene-deleted mutants.
Lanes 1, 3-9 and 11 represent the PCR products of ompLC gene mutants disrupted with the kanR gene (ompLC::kanR), and lanes 2, 10 and 12 represent the PCR product of the wild type ompLC gene of E. ictaluri strain Alg-08-183. (Panel B) Removal of the kanamycin resistance marker using the Flp recombinase of plasmid pCP20. PCR screening of E. ictaluri mutants plated after temperature induction showed that all tested mutants had lost the antibiotic resistance marker. (Panel C) PCR confirmation of deletion of the ompLC and dtrA genes from E. ictaluri strain Alg-08-183 and eihA from E. ictaluri strain R4383.

Fig. 3. (Panel B) dsDNA substrates were generated using modified and unmodified primers. Modified primers included four consecutive phosphorothioate bonds at the 5′ end of the primers. Type "−/−" used unmodified primers as a negative control, type "+/−" included modification of the forward primer but not the reverse primer, type "−/+" included modification of the reverse but not the forward primer, and type "+/+" included phosphorothioate bonds in both primers. The latter condition, in which both primers were modified, provided significantly more mutants than any other type of dsDNA substrate used for recombineering (***p-value = 0.0026). (Panel C) The effect of varying the length of the homologous regions of the dsDNA substrate to the targeted chromosomal site on the recombination frequency was determined using approximately 60 bp, 250 bp and 500 bp of homologous sequence at both the 5′ and 3′ ends. The average number of mutants obtained was derived from three independent recombineering experiments.

The number of mutants obtained from this experiment demonstrated that the recombination frequencies in A. hydrophila ML09-119 were not significantly different with varying lengths of homologous arms flanking the targeted gene (Fig. 3C). To determine the effect of dsDNA substrate concentration on recombination frequencies in A. hydrophila, we used four different amounts of dsDNA substrate, namely 0.75, 1.5, 3.0 and 5.0 µg per recombineering experiment. Our findings demonstrated that increasing the dsDNA substrate concentration did not change the recombination frequency significantly in A. hydrophila ML09-119 (Fig. 3A). The number of mutants we routinely obtained in this experiment was within the range of approximately 30-200 per recombineering reaction.

Removal of antibiotic resistance cassette by Flp recombinase

The temperature induction of the E. ictaluri Alg-08-183 ompLC::kanR and dtrA::kanR mutants and the E. ictaluri R4383 eihA::kanR mutant at 43 °C for 1 h followed by plating on BHI blood agar plates resulted in the curing of the recombinogenic plasmid pMJH46 (data not shown). We found that only a highly rich medium such as BHI supplemented with 5% sheep blood, unlike TSA, supported the growth of the high-temperature-induced E. ictaluri strains. The introduction of plasmid pCP20 containing the Flp recombinase by electroporation [7], followed by growth at 37 °C, resulted in removal of the antibiotic marker from the E. ictaluri ompLC mutant (Fig. 2B). PCR amplification of the targeted genes with their flanking primers indicated a 100% frequency for removal of the antibiotic selection marker. The antibiotic resistance markers from the E. ictaluri dtrA and eihA mutants were also removed using the Flp recombinase (Fig. 2C).
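The recombination-frequency comparisons above (primer modification, homology length, substrate amount) boil down to comparing mutant counts across repeated reactions. A minimal sketch of that kind of analysis is shown below with SciPy; the counts are invented, and a t-test on the raw counts is one reasonable choice rather than necessarily the test the authors used.

```python
import numpy as np
from scipy import stats

# Invented mutant counts from three independent recombineering reactions each
plus_plus = np.array([180, 140, 205])    # both primers phosphorothioate-modified
minus_minus = np.array([35, 60, 48])     # both primers unmodified

t, p = stats.ttest_ind(plus_plus, minus_minus)
print(f"mean +/+: {plus_plus.mean():.0f} +/- {plus_plus.std(ddof=1):.0f}")
print(f"mean -/-: {minus_minus.mean():.0f} +/- {minus_minus.std(ddof=1):.0f}")
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p supports the '+/+' advantage
```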
We found that, in addition to the removal of the antibiotic resistance marker, heat induction efficiently cured the plasmid pCP20 from all mutant colonies tested. Cured mutants lacking the antibiotic resistance cassette could subsequently be targeted for deletion of additional genes. Since genes from A. hydrophila were replaced using the cat gene cassette, plasmid pCP20, which itself contains the cat gene, was not compatible with Flp/FRT-mediated recombination in A. hydrophila mutants. Therefore, we constructed a new flp recombinase plasmid, pCMT-flp (Fig. 1D), with a tetracycline selectable marker. This plasmid was conjugally transferred into A. hydrophila mutants for markerless mutant construction. Screening of A. hydrophila mutants harboring the pCMT-flp plasmid for lack of growth in the presence of chloramphenicol showed that more than 10% of the mutants had documented loss of the antibiotic resistance cassette (data not shown).

Cloning without PCR amplification of large inserts

Since the cloning of large inserts using traditional cloning techniques is challenging and PCR amplification of the targeted inserts can introduce unwanted mutations, we developed a novel technique to clone large genomic inserts of A. hydrophila that does not require PCR amplification of the targeted insert (Fig. 4). As a proof of concept of this technique, we targeted the 3.6 kb ymcABC operon of A. hydrophila strain ML09-119 for cloning. For this purpose, we constructed a small, conjugally transferrable, low copy-number plasmid backbone (pMJH97), which was integrated contiguous to the ymcABC operon of A. hydrophila ML09-119 by recombineering (data not shown). We confirmed the correct integration of the plasmid backbone (pMJH97) upstream of the ymcABC operon by PCR and sequencing. Restriction digestion of the genomic DNA isolated from the integrant and self-ligation followed by electroporation resulted in hundreds of chloramphenicol-resistant E. coli clones on selective plates. Two of the clones selected for PCR and sequencing confirmation demonstrated that the intact ymcABC operon was cloned into the plasmid pMJH97 (data not shown). This plasmid was conjugally transferred into A. hydrophila ML09-119 to determine its conjugal transferability; screening of ten transconjugants using PCR demonstrated that all of the transconjugants harbored plasmid with an intact ymcABC operon insert (data not shown).

Discussion

The genetic manipulation of primary pathogenic isolates, compared to domesticated laboratory isolates, can be challenging due to many factors, including antibiotic resistance [16,30], poor recombination efficiency and the widespread occurrence of restriction-modification systems [37,54]. Our attempts to genetically modify the fish pathogens E. ictaluri and A. hydrophila were inhibited by our inability to introduce the λ Red recombineering system into these bacterial isolates. Similar difficulties were observed by several other researchers who reported reduced transformation efficiency of pKD46 in E. coli by electroporation [48], demonstrating the need for an alternative route to introduce the recombineering system, i.e., via conjugation. In this study we describe the development of a fast, efficient, and reliable technique for genetic modification of E. ictaluri and A. hydrophila (and presumably other Gram-negative bacteria) using a recombineering system that is readily transferrable by conjugation.
The introduction of a mob cassette into pKD46 [11] permitted the resulting plasmid, pMJH46, to be transferred into different E. ictaluri strains by conjugation. Additional modified recombinogenic plasmids were constructed to make the system compatible for gene deletion in a highly virulent strain of A. hydrophila. Furthermore, we demonstrated the applicability of this method by creating multiple mutants in E. ictaluri and A. hydrophila. Our first experiments using recombineering in E. ictaluri were unfortunately plagued by a large number of background colonies on the antibiotic selection plates that were not successful recombinants. These results were obtained even though we used suicide plasmid pKD4 as a template for PCR amplification of the antibiotic cassette and treated the DNA with DpnI, as had been shown to reduce the number of background colonies [49]. An alternative solution to reducing the high background of antibiotic-resistant colonies was to use genomic DNA isolated from a successful genomic integrant (E. ictaluri Alg-08-183 ompLC::kanR) constructed in this study as a template for PCR of the recombineering construct. Therefore, all of our subsequent recombineering experiments for gene deletion in E. ictaluri and A. hydrophila used genomic DNA as the template for PCR amplification of the respective antibiotic resistance gene cassettes. We were able to use the Flp recombinase encoded on the temperature-sensitive plasmid pCP20 [7] to successfully remove a FRT-flanked antibiotic resistance cassette used for genome modification in E. ictaluri. Before introducing pCP20 into E. ictaluri mutants, pMJH46 was cured by heat induction since both plasmids contain the cat gene. Unlike E. coli [11], E. ictaluri mutants required a highly rich medium (BHI supplemented with 5% sheep blood) to recover after heat induction at 43 °C, which may be due to the mesophilic growth temperature (28 °C) of E. ictaluri. Because of antibiotic resistance marker incompatibility, a new conjugally transferable flp recombinase plasmid, pCMT-flp, was constructed that can efficiently remove FRT-flanked antibiotic resistance gene cassettes from mutants of A. hydrophila. In addition to developing techniques for genetic modification in E. ictaluri and A. hydrophila, we devised a technique for cloning large fragments of bacterial genomes without PCR amplification of the targeted region. Similar in concept to the VEX-capture system that uses a lox/Cre site-specific recombination system [60], or the use of an in vivo recombineering method [55], these cloning systems are advantageous in allowing the cloning of larger fragments of genomic DNA without the need for PCR amplification, given the difficulties in producing larger amplicons and the potential for incorporating PCR-mediated errors. This method was validated by the cloning of the A. hydrophila genetic operon ymcABC, as an example of how this method can overcome the shortcomings of PCR-based methods for the cloning and conjugal transfer of genetic elements. The maximum possible size of the cloned region will depend on multiple factors, such as the presence of suitable restriction sites and the efficiency of conjugal transfer, but would be expected to be theoretically suitable for genomic regions such as genomic islands, prophages, and other genetic clusters. We have described a highly efficient and rapid procedure for the generation of markerless mutants in E. ictaluri and A. hydrophila by recombineering.
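The capture strategy discussed above amounts to: integrate a marked backbone next to the target, cut the genome with an enzyme that does not cut inside the backbone-plus-target region, and self-ligate. The Python sketch below simulates that fragment-capture step on a toy genome; all sequences and coordinates are invented for illustration, and the cut-site handling is deliberately simplified (cuts at the start of each recognition site).

```python
def captured_fragment(genome: str, site: str, backbone_start: int,
                      target_end: int) -> str:
    """Return the restriction fragment containing [backbone_start, target_end).

    Mimics digesting a genome carrying an integrated backbone and
    self-ligating: the clone corresponds to the fragment between the
    nearest cut sites flanking the backbone + target region.
    """
    cuts = ([0]
            + [i for i in range(len(genome)) if genome.startswith(site, i)]
            + [len(genome)])
    left = max(c for c in cuts if c <= backbone_start)
    right = min(c for c in cuts if c >= target_end)
    return genome[left:right]

# Toy genome: NotI sites ('GCGGCCGC') flank the integrated backbone + target
genome = "AAAA" + "GCGGCCGC" + "BBBB_BACKBONE_TTTT_ymcABC_CCCC" + "GCGGCCGC" + "GGGG"
frag = captured_fragment(genome, "GCGGCCGC", backbone_start=12, target_end=42)
print(frag)  # fragment spanning the backbone and the intact ymcABC region
```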
The newly constructed conjugally transferable recombinogenic plasmids pMJH46 and pMJH65 and the recombinase plasmid pCMT-flp can presumably be used in other Gram-negative bacteria for generating markerless mutants, especially in bacterial isolates that are recalcitrant to electroporation. Finally, the development of a PCR-free system for cloning and transfer will facilitate the cloning and complementation of much larger genetic elements.

Fig. 4. Strategy for PCR-free cloning of large bacterial genetic regions. The major steps of cloning large genetic inserts are indicated. The catR-oriT-oriR (pMJH97) cassette was PCR amplified using primer pairs with 50-60 bp of homologous sequence at their 5′ ends specific to the targeted site. Depending on the choice of restriction enzymes, the resulting dsDNA substrate can be integrated upstream or downstream of the targeted site of the genome using the recombineering system. Once the catR-oriT-oriR (pMJH97) cassette integration into the genome was confirmed by PCR and sequencing using primers P1 and P2, the genomic DNA of integrants was restriction digested with an appropriate restriction enzyme to clone into E. coli after self-ligation using T4 DNA ligase. The cloning of the correct insert into the plasmid pMJH97 was verified by PCR and sequencing using vector- and insert-specific primers P3 and P4, respectively. The plasmids with cloned inserts were then readily transferred to other Gram-negative bacterial strains by oriT sequence-mediated conjugal transfer using an appropriate donor strain.
Comparison of CLEIA and ELISA for SARS-CoV-2 Virus Antibodies after First and Second Dose Vaccinations with the BNT162b2 mRNA Vaccine

The global severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has required rapid action to control its spread, and vaccines are a fundamental solution to this pandemic. The development of rapid and reliable serological tests to monitor the antibody response to coronavirus disease vaccines is necessary for assessing post-vaccination immune responses. Therefore, in this study, anti-SARS-CoV-2 antibody titers after the first and second doses were monitored using two different measurement systems, a highly sensitive analytical platform of chemiluminescent enzyme immunoassay (CLEIA) and an enzyme-linked immunosorbent assay (ELISA). Our study included 121 participants who received two doses of the BNT162b2 vaccine. Both methods showed a significant increase in anti-spike protein IgG antibody levels one week after the first vaccination, which then reached a plateau at week five (week two after the second dose), with a 3.8 × 10³-fold rise in CLEIA and a 22-fold rise in ELISA. CLEIA and ELISA showed a good correlation in the high titer range, >10 binding antibody units (BAU)/mL. Both methods detected higher IgG antibody levels in female compared with male participants after the second vaccination, while CLEIA exhibited the sex difference already after the first dose. Thus, our study showed better performance of CLEIA over ELISA in sensitivity, especially in the low concentration range; however, ELISA was also useful in the high titer range (>10 BAU/mL) corresponding to the level seen several weeks after the first vaccination.

Introduction

The novel coronavirus disease (COVID-19) was first reported in Wuhan, China, in late December 2019, and it then spread globally. The WHO declared a pandemic in March 2020 [1]. Rapid responses to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic are required to control its global spread, and vaccines are a major solution to this pandemic. Therefore, vaccine development is being fast-tracked globally and has led to new vaccine types (e.g., DNA, RNA, inactivated forms of virus) [2]. More than 10 billion doses of the vaccine have now been administered globally [3]. As of 26 January 2022, 204,528,007 vaccine doses had been administered in Japan [4]. A total of three vaccines have been approved for use in Japan, and the mRNA Pfizer-BioNTech BNT162b2 and TAK-919 (Moderna formulation) vaccines are the most widely used [5]. mRNA-based vaccines avoid the risk of integrating viral genetic material into the host cell's genome and can produce pure viral proteins. SARS-CoV-2 vaccines are based on virus mRNA, specifically the fragment encoding the spike (S) protein, which attaches the virion to the host cell's membrane [6]. SARS-CoV-2 encodes four main structural proteins: spike (S), envelope, membrane, and nucleocapsid (N), as well as multiple nonstructural proteins and accessory proteins [7]. After SARS-CoV-2 infection, individuals typically start producing virus-specific antibodies, including immunoglobulin (Ig)G, IgM, and IgA, which mainly target two viral proteins, the S protein and the N protein (NP). The S protein is the outermost protein on the virus surface and contains a receptor-binding domain (RBD) [8-10], and for SARS-CoV-2, the S protein is cleaved into S1 and S2 subunits by furin protease before the virion is released. The S1 subunit contains an immunologically important RBD, which is a major antibody target [11].
Antibody responses may correlate with the level of protection conferred by vaccine-induced immune responses. SARS-CoV-2 mRNA vaccines were proposed to be administered at least twice, with a spacing of 21 or 28 days, to increase the activation of the immune system. A two-dose regimen of the BNT162b2 mRNA vaccine (Pfizer-BioNTech, NY, USA/Mainz, Germany) was found to be safe and 95% effective against COVID-19 [12-14]. A recent study indicated that anti-spike IgG levels were associated with protection from infection after two-dose BNT162b2 vaccination, and even more so after prior infection [15]. However, antibody profiles vary across individuals. S-IgG antibody levels were maintained for six months after the second dose of the BNT162b2 mRNA COVID-19 vaccine among healthcare workers in Korea [16]. In Israel, individuals who received two doses of the BNT162b2 vaccine had different kinetics of antibody levels compared with patients who had been infected with the SARS-CoV-2 virus, with higher initial levels that decreased faster in the former group [17]. Another study found that protection against SARS-CoV-2 infection wanes over time after full vaccination with BNT162b2 [18]. Furthermore, limited waning in the effectiveness of the BNT162b2 vaccine and a duration of protective immunity were observed in older adults and in those in a clinical risk group [19]. This will be closely monitored in years to come, and there will be an increasing demand for reliable, rapid serological tests. Serology-based immunoassays are an inexpensive and rapid testing method for epidemiological surveillance to detect antibodies produced by individuals in response to SARS-CoV-2 exposure or vaccination and to predict protective immunity, since it has been demonstrated that the COVID-19 vaccines can induce a humoral response, thereby protecting individuals from symptomatic COVID-19 [11,20,21]. Effective and reliable serological detection methods have a critical role in monitoring the abundance of antibodies in infected patients and quantifying the quality of immune responses to new vaccines. Several SARS-CoV-2 immunoassays have been developed [22,23]; however, they use different units and have different measurement accuracies, making a direct comparison of measurement values difficult, and important uncertainty about assay accuracy remains. Furthermore, clinical implementation requires validation of these different assays. The highly sensitive chemiluminescent enzyme immunoassay (CLEIA) has gained increasing attention because of its high reproducibility, low cross-reactivity with other coronavirus antigens, and low interference from common blood components [24]. Therefore, this pilot study compared CLEIA and an enzyme-linked immunosorbent assay (ELISA) to detect anti-SARS-CoV-2 antibodies in vaccinated individuals.

Ethics Statement, Participants, and Sample Processing

This study was approved by the Ethics Committee for Clinical Research of the School of Medicine, Saga University, Saga, Japan (Nos. R2-44 and R3-9). All participants provided written informed consent before undergoing any study procedure.

Study Design and Participants

The study group consisted of 121 participants (59 hospital staff, 20 healthcare workers, and 42 students) from hospitals in Saga prefecture who were invited to be vaccinated with two doses of BNT162b2 (Pfizer/BioNTech). The participants were aged 22 to 63 years, and 43% were male.
Some participants (20.7%) reported histories of chronic disease before vaccination (Table 1). The first vaccine doses were scheduled for February, April, and May 2021, and the second dose was administered 21 days after the first dose. We also included a non-vaccinated student from the same university as a member of the student group in this study to observe antibody production in the same manner used for the other participants (negative control).

Serological Tests

Blood samples were collected prior to the first vaccination and every other week after the second vaccination for healthcare workers, four weeks after the second vaccination for hospital staff, and prior to the first dose, three weeks after the first vaccination, and four weeks after the second vaccination for students (Supplementary Table S2). The first dose of the BNT162b2 (Pfizer/BioNTech) vaccine (30 µg dose) was administered on day 0 and the second dose on day 21. Then, immune responses were evaluated by measuring the immunoglobulins IgM and IgG. Serum was collected on the day of blood collection and stored at −80 °C until analysis. CLEIA and ELISA were used for the in vitro quantification of human IgM and IgG antibodies to SARS-CoV-2.

Chemiluminescent Enzyme Immunoassay (CLEIA)

We measured three anti-SARS-CoV-2 antibodies, anti-S1 IgG, anti-S1 IgM, and anti-N IgG, using the high-sensitivity CLEIA platform (HISCL) (Sysmex Co., Kobe, Japan), which was reported by Noda et al. [24]. HISCL is a fully automated immunoassay system using the chemiluminescent sandwich principle. First, the serum sample was placed in contact with SARS-CoV-2-specific recombinant antigens bound to magnetic beads. Then, after a first round of bound/free separation, the antigen-antibody complex was incubated with an alkaline phosphatase-conjugated antibody against human IgG or IgM to form a sandwich immunocomplex. After a second round of bound/free separation, a luminescent substrate was added to the solution to allow for luminescence measurement. Chemiluminescence intensity was obtained within 17 min following substrate addition. In the reaction chamber, the temperature was maintained at 42 °C throughout the procedure [24]. The reproducibility (coefficient of variation, CV values) was previously reported as 1.4% at 3.3 AU/mL and 2.1% at 24.5 AU/mL for anti-nucleocapsid protein IgG, 1.2% at 4.2 Sysmex Units (SU)/mL and at 33.4 SU/mL for anti-S1 IgM, and 3.3% at 4.2 SU/mL and 2.5% at 35.6 SU/mL for anti-S1 IgG. For anti-S1 IgG, SU/mL is converted to Binding Antibody Units (BAU)/mL.

Enzyme-Linked Immunosorbent Assay (ELISA)

The antigen immobilized on plates in RCOEL961-N was the SARS-CoV-2 N protein expressed in Escherichia coli, and in RCOEL961-S1 it was the S protein S1 expressed in E. coli. The N protein (aa 1-419) and S protein S1 (aa 251-660) were derived from the Wuhan strain. Serum samples were diluted with 1% bovine serum albumin (BSA) in phosphate-buffered saline with Tween-20 (PBST) for measurement. The dilution ratio was 1:1000 for RCOEL961-N and 1:200 for RCOEL961-S1. For measurements, 100 µL of diluted sample was added to a well on the antigen protein-immobilized plate, and the mixture was incubated at room temperature for 1 h. After incubation, the well was washed five times with 200 µL PBST. Diluted horseradish peroxidase (HRP)-conjugated antibody was added, and the mixture was incubated at room temperature for 1 h. After incubation, the well was washed five times with 200 µL PBST, and 100 µL TMB was added to each well.
Then, 100 µL of 1 M HCl was added to stop the reaction, and the absorbance at 450 nm was measured by SH-1000 (CORONA ELECTRIC Co., Ltd., Ibaraki, Japan). The reproducibility of the assays is shown in Supplementary Table S1. The coefficient of variation of ten measurements was 5.3-13.0%.

Statistical Analyses

The paired t-test was used to compare and evaluate the immune responses after vaccination assessed by ELISA and CLEIA. A mixed model was used to compare log-transformed IgG levels between males and females, considering repeated measures and the random effect of the subpopulation (proc mixed). As participants with steroid use did not show apparently low levels of anti-S1 IgG (e.g., anti-S1 IgG in non-users and users showed 3803-fold and 3099-fold rises, respectively, at week five after the first dose), they were also included in the analysis with adjustment (fixed effect). Statistical analyses were performed using SAS 9.4 TS Level 1M5 for Windows (SAS Institute, Cary, NC, USA). p < 0.05 was considered statistically significant.

Results

The SARS-CoV-2 N IgG responses assessed by CLEIA and ELISA in this study had no peak after the first and second vaccinations, with CV values below 120%, suggesting that no participants had had a COVID-19 infection (Table 2). As a negative control, one of the non-vaccinated individuals was tested four times, at 0, 3, 7, and 15 weeks, and the anti-S1 IgG titers were found to be as low as 0.5-1.1 BAU/mL. As shown in Figure 1, CLEIA showed a significant increase in anti-S1 IgM and IgG levels at weeks two and one after the first vaccination, respectively (p < 0.0001 and p = 0.0159 by paired t-test with log-transformed values, n = 19 and 20, respectively). ELISA showed a similar increase in antibody titers (Figure 1). Anti-S1 IgM and IgG levels increased exponentially and reached a plateau at week four (week one after the second dose, a 54-fold increase) and at week five (week two after the second dose, a 3.8 × 10³-fold increase), respectively, by CLEIA after the first mRNA vaccination (Figure 1). ELISA showed that the anti-S1 IgM and IgG levels peaked at the same times as those assessed by CLEIA; however, ELISA levels were much lower than those determined by CLEIA (1.8-fold and 22-fold increases for IgM and IgG). IgM and IgG showed 7-fold and 1249-fold increases at 15 weeks after the first vaccination (12 weeks after the second vaccination) by CLEIA, and only 1.2-fold and 11-fold increases at week 15 by ELISA (Figure 1). Figure 2 shows the sex differences in anti-S1 IgG titers; higher IgG levels were observed in samples from females compared with males at weeks one, two, four and five (one and two weeks after each of the two vaccinations) by CLEIA (p = 0.007, 0.047, 0.012 and 0.024 at weeks one, two, four and five, respectively); the least square means (LSM) in males and females were 0.7 and 1.8 BAU/mL at week one, 34 and 70 BAU/mL at week two, 670 and 1697 BAU/mL at week four, and 1165 and 2616 BAU/mL at week five. By ELISA, higher IgG levels were detected in females at weeks four to six (one to three weeks after the second dose) (p = 0.027, p = 0.035, and p = 0.038 at weeks four, five, and six, respectively); the LSM (in OD450 nm) in males and females were 0.9 and 1.8 at week four, 1.2 and 2.1 at week five, and 1.1 and 2.0 at week six.

Figure 2. Sex differences in SARS-CoV-2 S1 protein-specific antibody levels. Antibody titers measured by CLEIA (upper panel) and ELISA (lower panel) are shown.
Data represent least square geometric means ± geometric standard errors estimated by a mixed model; interactions between sex and time course were tested in a mixed model considering sex, age, steroid use, and number of weeks (categorical variables) as fixed effects, and repeated measures of the same subject and target population (healthcare workers, hospital staff, and students) as random effects. *, p < 0.05; #, p = 0.056 for interactive effects (sex × time). In total, forty-two students (vaccinations started in May 2021) had blood specimens collected before vaccination and at three, seven, eleven, and fifteen weeks after vaccination. Twenty healthcare workers (vaccinations started in April) had blood specimens collected at all time points. Fifty-nine hospital staff (vaccinations started in February) had blood specimens collected seven weeks after the first vaccination. Arrows indicate vaccination timing. One male student lacked measurements after 11 weeks. One female healthcare worker lacked measurements at week two and after four weeks. Another female healthcare worker lacked a measurement at week four.

Figure 1. SARS-CoV-2 S1 protein-specific antibody levels measured by CLEIA and ELISA. Antibody titer ratios measured by CLEIA (upper panel) and ELISA (lower panel) are shown (geometric mean ± geometric standard error). Forty-two students (started vaccination in May 2021) had specimens collected before vaccination and at 3, 7, 11, and 15 weeks after vaccination. Twenty healthcare workers (started vaccination in April) had specimens collected at all time points. Arrows indicate vaccination timing. One of the male students lacks measurements at the weeks after 11. One of the female healthcare workers lacks measurements at week 2 and at the weeks after 4. Another female healthcare worker lacks a measurement at week 4. Unless otherwise noted, differences from pre-vaccination are significant by paired t-test using log-transformed values.
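The captions above describe the mixed-model specification used for the sex-difference tests. A rough Python analogue of the SAS proc mixed setup is sketched below with statsmodels; the data frame columns, the invented titers, and the simple random-intercept structure are assumptions, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per sampling week
df = pd.DataFrame({
    "subject": ["s1", "s1", "s2", "s2", "s3", "s3",
                "s4", "s4", "s5", "s5", "s6", "s6"],
    "sex": ["F", "F", "F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "week": ["1", "5"] * 6,
    "igg_bau": [1.8, 2600.0, 2.1, 2500.0, 1.5, 2700.0,
                0.7, 1200.0, 0.6, 1100.0, 0.8, 1300.0],
})
df["log_igg"] = np.log10(df["igg_bau"])

# Random intercept per subject; sex, week, and their interaction as fixed effects
model = smf.mixedlm("log_igg ~ sex * week", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```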
As shown in the right panel of Figure 3, CLEIA and ELISA showed a good correlation in the detection of antibodies in the high concentration range (>10 BAU/mL by CLEIA), with a correlation coefficient (r) of 0.9 for log-transformed IgG values; however, no association was observed in the low concentration range corresponding to one to two weeks after the first vaccination (<10 BAU/mL, r = 0.3 with log-transformed values). The correlation for IgM between CLEIA and ELISA was poor (r = 0.3 using log-transformed values) (left panel).

Figure 3. Correlation between antibody levels measured by CLEIA and ELISA (sample numbers are given in Table S2). The dashed line represents 10 BAU/mL and divides the plot into high and low concentration ranges. Correlation coefficients (R), p-values, and N are shown for each range.

Discussion

This study compared the sensitivity of two immunoassays, CLEIA and ELISA, for measuring SARS-CoV-2 antibodies in the blood of individuals vaccinated with the BNT162b2 mRNA vaccine. In our study group, we showed that ELISA and CLEIA detected increased antibody titers consistent with the time-dependent nature of antibody responses to the BNT162b2 vaccine. Our observation indicates that anti-S protein S1 IgG levels significantly increased seven days after the first dose of the BNT162b2 vaccine, which is consistent with a previous report in which the onset of protection was observed less than 14 days after the first vaccination in participants without evidence of prior infection [12,24]. The results of this study showed that S-IgG levels in participants were elevated and remained high after the second dose of the BNT162b2 vaccine, which is consistent with previous studies [15,16].

Another finding in our study was that females had slightly higher immune responses immediately after vaccination compared with males. Previous studies reported that females produced higher levels of S IgG at day 21 [25] and day 28 [26] post-vaccination with BNT162b2. These data suggest that females might develop a stronger antibody response than males, possibly related to differences in hormones that regulate adaptive and innate immune responses [27].

Anti-S protein S1 IgG antibody levels were significantly increased in the first week after vaccination using the two methods, which showed a good correlation in the high titer range corresponding to the levels seen several weeks after the first vaccination (approximately >10 BAU/mL). CLEIA detected higher IgG antibody levels in female compared with male participants one week after the first vaccination. Therefore, CLEIA and ELISA are useful for monitoring antibody levels after full-dose vaccination in a population with normal antibody responses, whereas highly accurate systems may be required in the early stages of immune responses or for people with disturbed immune responses.

Antibody testing remains the best method to estimate SARS-CoV-2 infection and positive vaccine responses [28]. However, different antibody tests have different sensitivities. A rapid, appropriate, and sensitive test may be beneficial for serological studies and rational decision-making regarding booster vaccinations. At the same time, a simple method that does not require special equipment, such as ELISA, is advantageous for wide use in any location.

Our study has limitations. Firstly, the relatively small sample size potentially limits the generalizability of these results. Secondly, the sample consists of three subpopulations, which may introduce demographic variability, although we attempted to minimize the effect of subpopulation by using a mixed model.
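The range-split correlation reported in Figure 3 can be reproduced schematically as follows. This is a sketch under stated assumptions — paired CLEIA (BAU/mL) and ELISA (OD450) IgG measurements held in two arrays — not the authors' analysis code:

```python
import numpy as np
from scipy import stats

def split_correlation(cleia_bau, elisa_od, threshold=10.0):
    """Pearson r of log-transformed paired measurements, computed
    separately in the high (> threshold) and low (<= threshold)
    CLEIA concentration ranges."""
    cleia_bau = np.asarray(cleia_bau, dtype=float)
    elisa_od = np.asarray(elisa_od, dtype=float)
    valid = (cleia_bau > 0) & (elisa_od > 0)   # log10 needs positive values
    cleia_bau, elisa_od = cleia_bau[valid], elisa_od[valid]
    results = {}
    for label, mask in (("high", cleia_bau > threshold),
                        ("low", cleia_bau <= threshold)):
        r, p = stats.pearsonr(np.log10(cleia_bau[mask]),
                              np.log10(elisa_od[mask]))
        results[label] = {"r": r, "p": p, "n": int(mask.sum())}
    return results
```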
Conclusions

In conclusion, we demonstrated that two immunoassays, ELISA and CLEIA, can be used to quantify the acquisition of immunity to SARS-CoV-2 as a result of vaccination, especially when measuring high concentrations of Ig. Our results also suggest that a highly sensitive method, such as CLEIA, is required for monitoring low concentrations of Ig. Because of the lack of data on the persistence of immunity acquired after vaccination, it is important to monitor antibody levels over time. Such assays will be an important part of clinical and research studies, as well as for forming public health policy guidelines.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/vaccines10040487/s1, Table S1: Reproducibility of antibody test results using ELISA based on the optical density (O.D.) at 450 nm. Table S2: Number of total blood samples taken from the participants.

Funding: This study was funded by a research grant for Research on Emerging and Re-emerging Infectious Diseases, Health and Labour Science Research Grants from the Ministry of Health, Labour and Welfare, Japan (R2-SHINKOGYOSEI-SHITEI-003). The funding body had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

Institutional Review Board Statement: The study was conducted according to a protocol approved by the Ethics Committee for Clinical Research of the School of Medicine, Saga University (approval numbers R2-44 and R3-9; date of approval: 2021).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author (A.M.). The data are not publicly available due to privacy concerns.
Limit density of 2D quantum walk: zeroes of the weight function

Properties of the probability distribution generated by a discrete-time quantum walk, such as the number of peaks it contains, depend strongly on the choice of the initial condition. In the present paper we discuss from this point of view the model of the two-dimensional quantum walk analyzed in K. Watabe et al., Phys. Rev. A 77, 062331 (2008). We show that the limit density can be altered in such a way that it vanishes on the boundary or on some line. Using this result one can suppress certain peaks in the probability distribution. The analysis is simplified considerably by choosing a more suitable basis of the coin space, namely the one formed by the eigenvectors of the coin operator.

I. INTRODUCTION

Quantum walks [1][2][3] were proposed as extensions of the concept of a classical random walk to the unitary evolution of a quantum particle on a discrete graph or lattice. They have found promising applications in quantum information processing, e.g. in search algorithms [4], graph isomorphism testing [5], finding structural anomalies in graphs [6], and perfect state transfer [7]. Moreover, quantum walks were shown to be universal tools for quantum computation [8].

Suitable tools for the analysis of homogeneous quantum walks on an infinite lattice are the Fourier transformation [9] and the weak-limit theorems [10]. While the properties of many quantum walks on a line are well understood [11][12][13][14], less is known about quantum walks on higher-dimensional lattices. Indeed, there are many technical difficulties, e.g. the diagonalization of the evolution operator. One of the few models of 2D quantum walks which is well understood is the one analyzed in [15]. This model is a one-parameter extension of the 2D Grover walk which preserves its key feature, namely the trapping effect (or localization) [16]. The coin parameter controls the area covered by the quantum walk, which in general is an elliptic disc and reduces to a circle for the 2D Grover walk.

In the present paper we focus on the role of the initial conditions in the shape of the probability distribution resulting from the 2D quantum walk of [15]. We are interested in initial states which lead to non-generic probability distributions, such as those with a reduced number of peaks. In order to find them we first simplify the results of [15] by converting them to a more suitable basis of the coin space. Following [14] we choose the basis formed by the eigenvectors of the coin operator. We then discuss various initial coin states which result in non-generic probability distributions. In particular, we show that the limit density can be set to zero on some line. This can be used to suppress peaks in the probability distribution.

The paper is organized as follows: First, in Section II the results of [15] are briefly reviewed. Next, we convert them into a more suitable basis to simplify the following analysis. In Section III various initial states which lead to non-generic probability distributions are discussed. We conclude and present an outlook in Section IV.

II. 2D QUANTUM WALK

Let us first briefly review the results of [15]. The authors have considered a quantum walk on a two-dimensional square lattice where the particle can in each step move from its present position (x, y) to the nearest neighbours (x ± 1, y) and (x, y ± 1). These displacements correspond to the four states |R⟩, |L⟩, |U⟩ and |D⟩, which form the standard basis of the coin space H_C.
In this standard basis the coin operator is given by a one-parameter 4 × 4 unitary matrix, Eq. (1), in which the parameter p ranges from 0 to 1; its explicit form can be found in [15]. For p = 1/2 the coin operator (1) reduces to the familiar 4 × 4 Grover matrix. This particular model was analyzed in detail in [16].

Using the Fourier analysis and the weak-limit theorem [10], the authors have derived the limit density ν(v_x, v_y) of the 2D quantum walk. This allows one to evaluate the asymptotic values of all moments of the re-scaled position (or pseudo-velocity). The limit density of the 2D quantum walk is given by [15]

    ν(v_x, v_y) = M(v_x, v_y) µ(v_x, v_y) + ∆ δ_0(v_x) δ_0(v_y).    (2)

Here µ(v_x, v_y) denotes the fundamental density [15], whose support is restricted to an elliptic disc E by the indicator function 1_E. The function 1_E equals 1 if the point (v_x, v_y) belongs to E and zero otherwise. The symbol M(v_x, v_y) denotes the weight function, which is a second-order polynomial in v_x and v_y with coefficients M_j determined by the coin parameter p and the initial coin state; its explicit form in the standard basis is given in [15]. Finally, δ_0 denotes the Dirac delta function and ∆ corresponds to the localization probability around the origin. The second term in (2) ensures that the limit density is properly normalized.

As we illustrate in Fig. 1, the generic probability distribution w(x, y, t) resulting from the studied 2D quantum walk has five characteristic peaks. Four of them are propagating, and after t steps of the quantum walk they are located at the positions

    (x, y) = (±pt, ±(1 − p)t).    (5)

The propagating peaks correspond to the divergencies of the limit density (2) at the points

    (v_x, v_y) = (±p, ±(1 − p)).    (6)

These points lie at the boundary ∂E of the elliptic disc. In addition, the probability distribution w(x, y, t) contains a stationary peak located at the origin. On the level of the limit density (2) the stationary peak is described by the Dirac delta function. The peak does not vanish in the asymptotic limit t → +∞. Hence, this feature is usually called trapping (or localization), since the particle has a non-zero probability to remain close to the origin even in the limit of a large number of steps. The trapping effect arises from the fact that the evolution operator of the studied 2D quantum walk has, apart from the continuous spectrum, two eigenvalues ±1 with infinite degeneracy [15]. The exact form of the trapping probability is not known; however, it decays rapidly (exponentially) with the distance from the origin. We do not analyze this feature in the present paper, since we focus on the properties of the limit density (2).

FIG. 1. On the left we display the probability distribution after 50 steps. The right plot shows the limit density (2). Notice the four peaks in the probability distribution located at the positions given by (5), which correspond to the divergencies of the limit density (6). The central peak in the left figure corresponds to the trapping probability, which is not discussed in the present paper.

In the following we consider various initial conditions resulting in non-generic probability distributions. We show that the weight function (4) can be altered such that it vanishes on the boundary ellipse ∂E or on some line in the (v_x, v_y) plane. Using this result we can suppress certain peaks in the probability distribution. Before we turn to the detailed analysis of the weight function we first simplify it by passing to a more suitable basis of the coin space.
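Before passing to the eigenvector basis, the generic five-peak structure described above can be checked numerically. The following is a minimal NumPy sketch, not code from [15]: it simulates the p = 1/2 case, the only one whose coin the text pins down explicitly (the 4 × 4 Grover matrix); the initial coin state and the ordering of the coin states are illustrative assumptions, and the general coin of Eq. (1) can be substituted for G to obtain the elliptic case.

```python
import numpy as np

T = 50                                  # number of steps
N = 2 * T + 1                           # lattice wide enough to avoid wrap-around
G = 0.5 * np.ones((4, 4)) - np.eye(4)   # 4x4 Grover coin (the p = 1/2 case)

# psi[c, x, y]: amplitude of coin state c at lattice site (x, y);
# coin ordering 0=R (x+1), 1=L (x-1), 2=U (y+1), 3=D (y-1) is an assumption
psi = np.zeros((4, N, N), dtype=complex)
psi[:, T, T] = np.array([1, 1j, -1j, 1]) / 2    # illustrative initial coin state

shifts = [(+1, 0), (-1, 0), (+1, 1), (-1, 1)]   # (offset, axis) per coin state

for _ in range(T):
    psi = np.einsum("ab,bxy->axy", G, psi)      # coin flip
    for c, (off, ax) in enumerate(shifts):
        psi[c] = np.roll(psi[c], off, axis=ax)  # state-conditioned shift

prob = (np.abs(psi) ** 2).sum(axis=0)
print(f"total probability: {prob.sum():.6f}")   # unitarity check, ~1
print(f"P(origin): {prob[T, T]:.4f}")           # trapping: stays finite
print(f"P near a propagating peak (t/2, t/2): {prob[T + T // 2, T + T // 2]:.4f}")
```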
Turning now to the simplification of the weight function, we consider the orthonormal basis formed by the eigenvectors |σ_+⟩, |σ_1⟩, |σ_2⟩ and |σ_3⟩ of the coin operator (1). The initial coin state is decomposed into this eigenvector basis with amplitudes g_+, g_1, g_2 and g_3. Simple algebra then expresses the coefficients of the weight function in terms of the amplitudes g_j, Eq. (10). We see that the terms M_1, M_4 and M_5 are determined by pairs of probabilities, while M_2, M_3 and M_6 depend on the interference of a pair of amplitudes, i.e. the coherences between the |σ_j⟩ states. The simple form of (10) allows us to identify initial coin states which lead to non-generic probability distributions in a straightforward way.

III. NON-GENERIC PROBABILITY DISTRIBUTIONS

Let us now discuss the role of the initial coin state in the shape of the probability distribution. We begin with the eigenstate |σ_+⟩. In this case the weight function reduces to a form which vanishes on the boundary ellipse ∂E. Hence, the divergencies of the limit density are suppressed and all propagating peaks are absent in the resulting probability distribution. We illustrate this effect in Fig. 2, where we choose the coin parameter p = 0.4.

FIG. 2. 2D quantum walk with the initial coin state |σ_+⟩. The coin parameter was chosen as p = 0.4. On the left we display the probability distribution after 50 steps. Notice the absence of the peaks on the ellipse. Indeed, the limit density vanishes at the boundary, as we illustrate on the right. The central peak corresponds to the trapping effect.

Next, we consider the eigenstate |σ_1⟩. For this particular initial coin state the trapping effect vanishes, as was identified already in [15]. We illustrate this feature in Fig. 3, where we take the coin parameter p = 0.6.

FIG. 3. 2D quantum walk with the initial coin state |σ_1⟩. The coin parameter was chosen as p = 0.6. The left plot shows the probability distribution after 50 steps. Notice the absence of the central peak. Indeed, for the initial coin state |σ_1⟩ the trapping effect vanishes. The right plot illustrates the limit density.

Let us now consider the eigenstate |σ_2⟩ as the initial coin state. We find that the weight function reduces to a form which makes the limit density vanish on the line v_x = 0. This effect is illustrated in Fig. 4 for the coin parameter p = 0.8.

FIG. 4. 2D quantum walk with the initial coin state |σ_2⟩. The coin parameter was chosen as p = 0.8. On the left we display the probability distribution after 50 steps of the quantum walk. Notice the suppression of the probability near the line x = 0. Indeed, the limit density vanishes for v_x = 0, as we illustrate in the right plot.

In a similar way, the choice of the initial coin state |ψ_C⟩ = |σ_3⟩ leads to a weight function for which the density vanishes for v_y = 0. This feature is depicted in Fig. 5.

FIG. 5. 2D quantum walk with the initial coin state |σ_3⟩. The coin parameter was chosen as p = 0.7. On the left we display the probability distribution after 50 steps of the quantum walk. The probability distribution is considerably suppressed along the y = 0 line, as predicted by the limit density shown in the right panel.

More generally, when we choose the initial coin state of the form |ψ_C⟩ = g_2 |σ_2⟩ + g_3 |σ_3⟩, the weight function reduces in such a way that, when both g_2 and g_3 are real, it vanishes on the line determined by Eq. (14). We can use this fact to suppress two peaks of the probability distribution.
Indeed, choosing the initial coin state according to Eq. (15) eliminates the peaks at (v_x, v_y) = (p, −(1 − p)) and (−p, 1 − p). Similarly, for a second, analogous choice of the initial coin state, the peaks at (p, 1 − p) and (−p, −(1 − p)) vanish. For an illustration of this effect we display in Fig. 6 the probability distribution of the 2D quantum walk with the initial coin state (15) and the coin parameter p = 0.3.

FIG. 6. 2D quantum walk with the initial coin state given by (15). The coin parameter was chosen as p = 0.3. On the left we display the probability distribution after 50 steps of the quantum walk. Notice that there are only two peaks on the boundary ellipse. The remaining two are suppressed since they lie on the line (14), where the limit density vanishes. This is illustrated in the right plot.

Finally, we consider a situation when the weight function reduces to a polynomial in only one variable, either v_x or v_y. We find that for g_+ = g_3 = 0 the weight function vanishes on the line v_x = p or v_x = −p, provided that both g_1 and g_2 are real. Hence, we can eliminate the peaks on the line v_x = ±p by a suitable choice of the initial state. Similarly, when we choose g_+ = g_2 = 0, the weight function vanishes on the line v_y = (1 − p) or v_y = −(1 − p), provided that both g_1 and g_3 are real, so the peaks on the line v_y = ±(1 − p) can be eliminated by a suitable choice of the initial state. We illustrate the first case in Fig. 7, where we consider the 2D quantum walk with the initial coin state (16) and the coin parameter p = 0.5.

FIG. 7. 2D quantum walk with the initial coin state given by (16). The coin parameter was chosen as p = 0.5. On the left we display the probability distribution after 50 steps of the quantum walk. Notice that there are only two peaks, on the right-hand side of the probability distribution. The remaining two are suppressed since they lie on the line v_x = −p, where the limit density vanishes. This is illustrated in the right plot.

IV. CONCLUSIONS

We have discussed in detail the role of the initial conditions in the shape of the probability distribution generated by the 2D quantum walk model analyzed in [15]. The analysis is simplified considerably by converting the results of [15] into the basis formed by the eigenvectors of the coin operator. It was found that the weight function can vanish on a certain line in the (v_x, v_y) plane. Using this fact one can eliminate a pair of peaks in the probability distribution with a proper choice of the initial coin state. Moreover, the weight function can vanish on the boundary, which leads to the elimination of all propagating peaks.

The properties of the trapping effect were not discussed in the present contribution and remain an open question. In principle, the explicit form of the trapping probability can be obtained using methods similar to those for quantum walks on a line. There it was found that the trapping probability can be highly asymmetric [13,14]. In fact, it might be present on one half-line and vanish completely on the other. It would be interesting to see if similar features can be found in the present 2D quantum walk model.
Safety and effectiveness of cannabinoids for the treatment of neuropsychiatric symptoms in dementia: a systematic review

Background: Neuropsychiatric symptoms (NPS) in dementia impact profoundly on the quality of life of people living with dementia and their caregivers. Evidence for the effectiveness and safety of current therapeutic options is varied. Cannabinoids have been proposed as an alternative therapy, mainly due to their activity on CB1 receptors in the central nervous system. However, little is known regarding the safety and effectiveness of cannabinoid therapy in people with dementia. A literature review was undertaken to identify, describe and critically appraise studies investigating cannabinoid use in treating NPS in dementia.

Methods: We undertook a systematic review adhering to PRISMA guidelines. Twenty-seven online resources were searched, including Medline, PsycINFO and Embase. Studies assessing the safety and/or effectiveness of cannabinoids in treating NPS in dementia in people aged ⩾65 years were included. Study quality was assessed using the Joanna Briggs Institute and Cochrane Collaboration critical appraisal tools.

Results: Twelve studies met the inclusion criteria. There was considerable variability across the studies with respect to study design (50% randomized controlled trials), intervention [dronabinol (33%), nabilone (25%) or delta-9 tetrahydrocannabinol (THC; 42%)] and outcome measures. Dronabinol (three studies) and THC (one study) were associated with significant improvements in a range of neuropsychiatric scores. The most common adverse drug event (ADE) reported was sedation. A high risk of bias was found in eight studies. The highest-quality trial found no significant improvement in symptoms and no difference in ADE rates between treatment arms. Included studies used low doses of oral cannabinoids, and this may have contributed to the lack of demonstrated efficacy.

Conclusion: While the efficacy of cannabinoids was not proven in a robust randomized controlled trial, observational studies showed promising results, especially for patients whose symptoms were refractory. In addition, the safety profile is favourable, as most of the ADEs reported were mild. Future trials may want to consider dose escalation and formulations with improved bioavailability.

Introduction

Dementia is a group of diseases characterized by progressive and debilitating symptoms including cognitive decline, memory loss, and changes in perception and personality. 1 In 2015, the World Alzheimer Report estimated that there were 50 million people living with dementia worldwide, with projections of this population doubling every 20 years. 2 The most common types of dementia are Alzheimer's disease (50-70%) and vascular dementia (20-30%). The occurrence of NPS differs across the course of dementia, with anxiety and depression reported as more common in the early stages, and psychosis and aggression more common in the advanced stages of dementia. 3 Regardless of the stage of dementia, the occurrence of NPS impacts profoundly on the morbidity and mortality of people living with dementia, often precipitating the use of additional medications, hospitalization and institutionalized care. [3][4][5] It has been reported that behavioural symptoms of dementia have more significant consequences for both the patient and caregiver than cognitive decline, in part due to injury to either party through aggression and wandering. 7
Other reported impacts on carers include reduced quality of life, depression, distress and reduced employment income. 8,9 Carers of individuals living with dementia and NPS report more distress and depression than carers of individuals living with dementia without NPS. 9

First-line treatment for NPS in dementia involves a range of nonpharmacological interventions, despite a limited and disparate evidence base. 6,7 These interventions are based upon identifying unmet physical and emotional needs that may be triggering the NPS, and may include assessment for inadequately treated pain and unpleasant environmental factors. 7,10 Complementary nonpharmacological interventions include carer education and carer support. 7 Second-line treatment of NPS involves the use of pharmacological interventions, usually in addition to the nonpharmacological strategies. Pharmacological intervention is usually warranted when a person becomes a risk to themselves or others and nonpharmacological interventions have been unsuccessful. 6

The most common class of medications used to treat NPS of dementia is the atypical antipsychotics (e.g. risperidone, olanzapine and quetiapine), although not all countries approve the use of these medications to treat NPS in dementia. 11 In Australia, risperidone is funded under the Pharmaceutical Benefits Scheme (PBS) for treating aggression and psychotic symptoms in patients with Alzheimer's disease for up to 12 weeks. 12 A similar arrangement is available in the UK. 13 Use of antipsychotics 'off label' is widespread in the aged-care setting. A retrospective cohort study of over 300,000 nursing home residents in the US reported that 23.5% of residents were prescribed at least one antipsychotic and that 86% of this prescribing was 'off label'. In addition, residents with dementia were 3.2 times more likely to use these medications 'off label' compared with residents without dementia. 14

The evidence for the effectiveness of atypical antipsychotics in treating NPS of dementia is limited. 15,16 Systematic reviews have found modest improvements in agitation and psychosis in dementia. 6 Many studies have shown an association of these medications with harmful outcomes in patients with dementia, including falls, cerebrovascular events and death. 15,16 Antidepressants and antiepileptics are also used to manage NPS of dementia but are not approved for this indication. A Cochrane review undertaken in 2011 found the antidepressants sertraline and citalopram to be associated with a reduction in agitation and psychosis in dementia, and they were well tolerated compared with antipsychotics; however, the studies were small, limiting the generalizability of these results. 17 A recent review of studies investigating the efficacy and safety of antiepileptic drugs for treating agitation and aggression in dementia found that while carbamazepine was effective at reducing symptoms, its clinical use is limited by poor tolerability and the potential for drug-drug interactions. 18 The use of valproate is not recommended due to questionable efficacy and poor tolerability, and there is currently insufficient evidence to recommend the use of other antiepileptic medications in this setting. 18 Given the limited range, questionable effectiveness and side-effect profile of current therapeutic options for treating NPS of dementia, research into alternative therapies is a priority for this growing and vulnerable population.
In recent years, research has focused on developing novel therapeutic agents to treat a range of NPS in dementia. Following the identification of two main cannabinoid receptors (CB1, predominantly in the central nervous system, and CB2, predominantly in the immune system), cannabinoids have been investigated for their safety and efficacy in treating NPS of dementia. 19 While little is known regarding the mechanism by which cannabinoids exert their effects in NPS of dementia, in vivo studies have consistently shown a role for the endocannabinoid system in both modulating neurotransmission and exhibiting neuroprotective effects. 20,21 The endocannabinoid system has been shown to interact with several neurotransmitter systems (dopamine, norepinephrine, serotonin, GABA and acetylcholine), all of which have been implicated in the manifestation of NPS. 20 CB1 and CB2 receptors have been shown to exert neuroprotective effects through reduction of glutamate production and anti-inflammatory actions. 21

This systematic review was undertaken to identify, describe and critically appraise all studies investigating cannabinoid use in treating NPS of dementia. The objectives of this review were to identify the safety and effectiveness criteria used in the trials, with a focus on the risk-benefit profile of each cannabinoid and the criteria used to measure outcomes.

Methods

This study was registered in the PROSPERO database (the international prospective register of systematic reviews, registration number CRD42018086202). The study design complies with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) guidelines. 22

Study eligibility criteria

All original peer-reviewed studies assessing the safety and/or effectiveness of any cannabinoid in treating NPS in patients with a diagnosis of any type of dementia were included, regardless of publication date. Where possible, all sources and healthcare databases were searched from inception to 1 January 2018. The inclusion criteria covered studies of any design, conducted in the older population (⩾65 years) and assessing the use of any cannabinoid in treating one or more of the known NPS of dementia. Included studies needed to assess one or both of the safety or effectiveness of cannabinoids in the study population and be written in English. Exclusion criteria included studies undertaken in younger populations (<65 years), studies assessing only the delay of onset or progression of any type of dementia, and studies where the full text was not available.

Identification and selection of studies

In total, 27 online resources were systematically searched, as shown in Table 1. These included healthcare databases, clinical trial databases, Alzheimer's disease and dementia advocacy group websites, cannabinoid advocacy websites and Google Scholar. A reference checklist of identified studies deemed suitable for inclusion was also undertaken. The search terms used for Medline were:

One reviewer (JH) undertook the database and website searches and sifted through the titles and abstracts to identify suitable studies. Two reviewers (JH and GC) independently assessed the abstracts and full text of identified studies for inclusion. Disagreements were resolved with a third reviewer (NS). Data extraction and assessment of study quality were undertaken independently by two reviewers (JH and GC), and disagreements were resolved with a third reviewer (NS).
Data collection and study appraisal

Four reviewers (JH, GC, CA and NS) developed and agreed upon the study protocol, search terms and data extraction tool.

Synthesis of findings

For the included studies, data are presented for each study as described by the data extraction tool, including both quantitative and qualitative information. Where possible, aggregate data have been presented as proportions. Due to the expected paucity and heterogeneity of identified studies, a meta-analysis was not planned, nor undertaken.

Results

The search results are presented in the PRISMA flowchart (Figure 1). We identified a total of 4951 studies, of which 12 met the inclusion criteria (Figure 1). Study descriptions and participant baseline characteristics are summarized in Table 2. Included studies were published over a 20-year period (1997-2017). Six studies were RCTs, two were cohort studies and four were case series or case studies. Four studies were from the Netherlands, two each from the UK and USA, and one each from Germany, Israel, Switzerland and Canada. Seven studies were undertaken in hospital (psychogeriatric units), two studies in the community, two in both the community and hospital, and one study in the community and nursing home settings. Five studies included participants with any type of dementia (Alzheimer's disease, vascular dementia, frontotemporal dementia and mixed dementia), four studies included participants with Alzheimer's disease only, and three studies included one or two of a selection of dementia types.

The cannabinoids used in the included studies were dronabinol (dose range = 2.5-7.03 mg/day), delta-9 tetrahydrocannabinol (THC; Namisol®; dose range = 1.5-15 mg/day) or nabilone (dose range = 0.5-2.0 mg/day), and all were administered orally. The number of participants in the studies ranged from 2 to 50. The mean age ranged from 72.7 to 81.5 years, with the proportion of males ranging from 30 to 100%. Baseline cognition (assessed using the Mini-Mental State Examination, MMSE) was reported in nine studies, and scores ranged from 4 (severe cognitive impairment) to 22 (mild cognitive impairment). In four studies (all RCTs), participants had to score ⩾10 on the Neuropsychiatric Inventory (NPI) to be included. Reporting of prior treatment for NPS varied across the included studies but overall was limited. Studies reporting concomitant medication use at the time of the trial recorded multiple medication use, including antipsychotics, antidepressants and neuromodulators; however, it was often unclear which medications were indicated for treating NPS of dementia.

Outcomes reported in the included trials varied substantially, with heterogeneity in the type of cannabinoid used and the criteria for evaluating effectiveness (Table 3). Half the identified studies used more than one set of criteria to evaluate effectiveness. The most common tool for assessing the effectiveness of cannabinoid therapy for treating NPS of dementia was the NPI, which was used in five studies.
Of these five studies, two RCTs reported no significant improvement in NPI; 26,27 one case series reported significant improvement in the overall NPI score and three subscales (aberrant motor behaviour, agitation and night-time behaviours); 33 and one cohort study reported significant improvement in 6 of the 13 NPI subscales (agitation, disinhibition, irritability, aberrant motor behaviour, night-time behaviour and caregiver burden). 31 The fifth study, an RCT with two participants, reported improvement in the overall NPI score. 25 The Cohen-Mansfield Agitation Inventory (CMAI) was used in three RCTs, with no significant changes reported. [26][27][28]

Reporting of adverse drug events (ADEs) was the most frequently used method for assessing safety in the included trials, used in 10 studies (one case series and one case study did not report ADEs; Table 3). Methods for ascertaining ADEs included participant and carer reports, predetermined lists of ADEs to aid identification, medical notes or clinical observation, or a combination of these. Four RCTs reported no significant difference in ADEs between treatment arms, 26,27,29,30 and two studies (one RCT and one case series) reported no ADEs during the trial. 25,33 One RCT used a blinded independent physician to rate the causality of an ADE to treatment with a cannabinoid; in this study, dizziness, fatigue and agitation were all considered related to Namisol® administration and were dose related (Table 3). 30 Serious ADEs were reported in three trials. One RCT reported three serious ADEs (one seizure and two serious infections). 28 Another RCT reported four serious ADEs (gastroenteritis, worsening of NPS, exacerbation of a vestibular disorder and malignancy). 27 One cohort study reported three serious ADEs (dysphagia, fall and confusion). 31 Overall, the most common ADE reported was sedation.

In addition to reporting ADEs, a variety of physical parameters related to safety were monitored in seven studies, including blood pressure, heart rate, electrocardiogram and weight. One RCT and two cohort studies reported no significant change in parameters. 26,31,32 One RCT reported significant increases in body sway and stride, and a change in internal perception. 27 Another RCT reported a significant increase in weight. 28 One RCT reported a significant decrease in balance and increase in stride. 29 Lastly, an RCT reported significant changes in perception, an increase in body sway, increases and decreases in systolic blood pressure (dose dependent) and an increase in heart rate. 30

Two RCTs formally assessed caregiver burden using the Zarit Burden Interview 27 and the Caregiver Clinical Global Impression of Change (CCGIC), 26 with no significant differences reported between treatment and placebo arms, respectively. A case series reported that families felt the treatment to be effective, with a reduction in their emotional burden. 21

Overall, the quality of the included trials was low, with a high risk of bias assessed for 8 of the 12 studies (66.7%; Table 3). Half (3/6) of the RCTs were assessed as low quality, mainly due to small sample size, unclear methods of randomization and blinding, and selective reporting of results. The two cohort studies were of moderate and low quality, with both inadequately identifying confounding factors. In addition to the inadequacies identified above, the cohort study rated lowest in quality had time-varying exposure to dronabinol and incomplete reporting of results. 32
The four included case series/studies were all assessed as low quality, as insufficient clinical and demographic information was reported. The highest-quality trial was a randomized placebo-controlled crossover trial, which had a low risk of bias and was sufficiently powered to find a clinically significant change in NPI. 26 This study found no significant improvement in NPI compared with baseline between placebo and THC 1.5 mg three times daily for 21 days. Similarly, this study found no significant differences between placebo and treatment arms for ADEs or other physical parameters.

Table 4 provides a summary of the overall risk-benefit profile of each type of cannabinoid used in the included studies. Dronabinol was used in four studies with a daily dose range of 2.5-7.03 mg. The use of dronabinol was associated with significant improvements in several NPS scores [Pittsburgh Agitation Scale, negative affect, Clinical Global Impression (CGI) and NPI], an increase in weight, a reduction in nocturnal activity and an increase in the percentage of food consumed. Dronabinol was not significantly associated with any ADEs; however, at a dose of 2.5 mg twice daily, dronabinol was associated with three serious ADEs (one seizure and two serious infections). Nabilone was used in three studies with a daily dose range of 0.5-2.0 mg. Nabilone was not significantly associated with an improvement in NPS of dementia or with any ADEs; however, at a dose of 1.5 mg twice daily, nabilone caused severe sedation, which required withholding of the medication. THC was used in six studies with a daily dose range of 1.5 to 15 mg. THC was associated with significant improvement in NPS of dementia in one trial (improvement in CGI and NPI). 31 THC was the only cannabinoid to be significantly associated with ADEs, including an increase in body sway, increases and decreases in systolic blood pressure, and an increase in heart rate; in one trial there were two dropouts due to pneumonia and nausea.

Discussion

To our knowledge, this systematic review is the most comprehensive presentation of the effectiveness and safety of cannabinoids in treating NPS of dementia. Overall, it was difficult to make generalizations about the safety and effectiveness of cannabinoids in treating NPS of dementia due to the heterogeneity of the included studies (even within study design), the range of assessment tools used and the poor quality of the identified studies. While the efficacy of cannabinoids was not proven in a robust RCT, observational studies showed promising responses, especially for refractory patients. In addition, the safety profile presented was favourable, as the majority of ADEs reported were mild.

Within the RCT group of included trials, inclusion criteria differed substantially with respect to baseline MMSE and the prior treatment for, and severity of, NPS. In addition, these trials used a variety of tools to measure change in NPS of dementia. Evidence of effectiveness and safety was limited by the power of the RCTs to detect a statistically significant change: only one trial predetermined a clinically relevant change and powered the study to detect such a change. 26 Five of the six RCTs were therefore underpowered to detect the true effectiveness of cannabinoids in treating NPS of dementia.
The quality of the observational studies was limited by inadequately described comorbidities and medication use, making it difficult to determine whether cannabinoids were the only factor contributing to improvement in NPS of dementia. More robust cohort studies adjusting for concomitant medications and comorbidities may give insight into the true effectiveness of cannabinoids in refractory patients.

The NPI was the most frequently used tool to assess the response of NPS in dementia to cannabinoid treatment. This tool uses an informant to report on the severity and frequency of a broad range of symptoms (depression, anxiety, apathy, hallucinations, delirium, agitation, sleep, irritability and elation). 37 The CMAI was the second most frequently used tool to measure the effectiveness of cannabinoids in treating NPS of dementia. This tool is specific to agitation, with information gained through observation and informant reporting. 38 Two RCTs used both the NPI and CMAI, and neither found significant differences in NPS between treatment arms on either scale. Improvement in NPS in dementia as measured by the NPI was reported in one RCT (two patients), one cohort study and one case series. This may be due to observational study design characteristics such as closer and more prolonged observation of patients, more in-depth knowledge of patients and non-blinding of the assessor. Several studies used multiple tools to assess response to treatment, covering a range of NPS and global functioning. There was minimal discussion in the included studies regarding the selection criteria for the assessment tool(s) and who administered the test(s). A clear description of why each tool was chosen, how it was administered and how the tools relate to one another would assist with interpreting the results.

The use of caregiver reporting of response to therapy provides additional insights. Although highly subjective in nature, these reports can represent a change in the physical and emotional burden for caregivers. In one case series, caregivers reported improvement in NPS and carer burden post-treatment. However, this was not supported by the formal assessment of caregiver burden in two RCTs.

The safety profile of the cannabinoids used in the identified studies appears to be reasonable. Studies reported mostly mild adverse effects such as sedation, somnolence and fatigue. Changes in blood pressure, balance and infections were reported in several studies and warrant more in-depth surveillance in future studies, as these effects may have serious outcomes in an older, frailer population. Long-term safety was not established in the studies due to short exposure times.

Conducting clinical trials for treating NPS in dementia has many obstacles. As demonstrated by the included studies, this cohort is frail, with multiple comorbidities and multiple chronic medications, making it difficult to conduct trials where there is only one variable of change. In addition, the severity of a person's NPS in dementia can vary over time. Several of the included RCTs used a crossover design, which can assist with adjusting for confounders but will not adjust for within-person changes over time. The main deficits in the identified studies were incomplete reporting of study design, patient clinical characteristics and outcomes.
Future studies should focus on improving the quality of study design and reporting, as outlined in Table 5; this includes appropriate study design, accurate assessment and reporting of baseline cognition, medication use and rescue medications, calculation of effect size and study power a priori, use of a validated and reliable assessment tool to evaluate change, and thorough and accurate reporting of all study results.

Despite the paucity of evidence for the safety and effectiveness of cannabinoids in treating NPS of dementia, it is important to recognize the identified trials' contribution to knowledge in this field. The identified observational studies have shown potential for cannabinoid use in treating refractory NPS of dementia with minimal side effects. While the larger RCTs did not show effectiveness of cannabinoids, the reported safety profile was acceptable. However, the doses used in these studies may be a key issue in terms of the safety and effectiveness of cannabinoids. The included studies used low doses of oral cannabinoids relative to studies treating other indications, and this may have contributed to the lack of demonstrated efficacy. A systematic review of RCTs investigating the use of cannabinoids for treating chemotherapy-induced nausea and vomiting found that doses of oral nabilone 2 mg twice daily and oral dronabinol 10-15 mg/m² up to six times daily were effective in reducing nausea and vomiting by up to 50%; however, adverse effects were commonly reported, with fatigue significantly associated with treatment. 39 Therefore, dose escalation may be warranted in future studies investigating the efficacy and safety of cannabinoids in treating NPS in dementia. Furthermore, orally administered cannabinoids have poor oral bioavailability due to high first-pass hepatic metabolism. 40 For example, a 10 mg oral dose of dronabinol has a bioavailability of approximately 6-7%. 40 Namisol® is a THC formulation with reported enhanced bioavailability (up to 30%) due to Alitra®, a lipophilic delivery technology. 41 Future studies investigating the efficacy and safety of cannabinoids in treating NPS of dementia should trial innovative administration modalities to improve bioavailability.

This study was a systematic review registered with PROSPERO and followed the PRISMA guidelines. However, we cannot confirm that we have correctly identified all relevant studies. The restriction of inclusion to studies in the older population may have reduced the likelihood of including executive and frontal variants of dementia, which are more common in the younger dementia population; however, NPS are more likely in the older population. Two reviewers independently selected studies for inclusion and extracted the relevant information. The studies identified in our study are similar to those reported in the Cochrane systematic review of RCTs 42 and a recently published review. 43 Our systematic review identified two more RCTs than a review published in 2017 that assessed the effectiveness of cannabinoids. 44 That 2017 systematic review assessed the quality of the included RCTs as 'unclear'. In addition, we have graded the quality of each study and identified areas of bias in each study.

Conclusion

This systematic review has found that the quality of studies examining the use of cannabinoids to treat NPS of dementia is poor. While the efficacy of cannabinoids was not proven in a robust RCT, observational studies showed promising responses, especially for refractory patients.
In addition, the safety profile appears favourable, as most ADEs reported were mild. However, the formulations and doses of the cannabinoids used in the identified studies may have limited the ability to demonstrate cannabinoid efficacy and safety for this indication. A large, well-controlled trial is warranted given the currently limited treatment options available for NPS in dementia patients.
Implication from the predicted docked interaction of sigma H and exploration of its interaction with RNA polymerase in Mycobacterium tuberculosis

M. tuberculosis is adapted to remain active under extreme environmental conditions due to the presence of atypical sigma factors commonly called extra cytoplasmic function (ECF) sigma factors. Among the 13 sigma factors of M. tuberculosis, 10 are regarded as ECF sigma factors, which exert their functions in various stress responses. It is therefore of interest to describe the structural prediction of one of the ECF sigma factors, sigma H (SigH), involved in oxidative and heat stress, and its interaction with the β′ subunit of M. tuberculosis RNA polymerase (Mtb-RNAP). The model of Mtb-SigH was built using the commercial package Discovery Studio version 2.5 from Accelrys (San Diego, CA, USA), which contains the inbuilt MODELER module, and that of the β′ subunit of Mtb-RNAP using the Phyre server. The protein models were then docked using the fully automated web tool ClusPro (cluspro.bu.edu/login.php). Mtb-SigH is a triple-helical structure having a putative DNA-binding site, and the β′ subunit of Mtb-RNAP consists of 18 β-sheets and 22 helices. The SigH-Mtb-RNAP β′ interaction studies showed that Arg26, Gln19 and Asp18 of the SigH protein are involved in binding with Arg137, Gln140, Arg152, Asn133 and Asp144 of the β′ subunit of Mtb-RNAP. The predicted model helps to explore the molecular mechanism of the control of gene regulation, offering a unique target for potential new-generation inhibitors.

Background:

Bacterial gene expression is primarily regulated at the level of transcription. The sigma factor plays a pivotal role in binding the core RNAP and subsequently in promoter recognition [1]. In the last few years, after the publication of the M. tuberculosis whole-genome sequence, the 13 sigma factors of M. tuberculosis have become an important subject of investigation, of which 10 are regarded as ECF sigma factors. ECF sigma factors play important roles in bacterial pathogenicity and stress responses; among them, SigH is involved in oxidative stress and heat stress [2,3]. Understanding the structure-function relationship of these atypical sigma factors will help to understand the physiology and virulence mechanisms of M. tuberculosis and will help to design new strategies to fight against this deadly pathogen. These sigma factors are called atypical because they consist of two structural domains instead of the four that are usually found in the housekeeping sigma factor SigA (or Sig70 in the case of E. coli) [4,5]. Domain 2 and domain 4 recognize the −10 and −35 promoter-binding elements, respectively [6,7]. Biochemical studies on bacterial RNA polymerase typically describe the domain 2 region of the sigma factor as binding the β′ subunit of RNAP [1]. The sigma-binding locus on the β′ subunit is quite conserved throughout the bacterial kingdom. The present study delineates the structural understanding of Mtb-SigH and the β′ subunit of Mtb-RNAP, and thereby implicates the predicted interaction between them as a vital target in the control of gene expression.

Homology search

The sequences of Mtb-SigH (Accession Number CCP46040.1) and the β′ subunit of Mtb-RNAP (Accession Number CCP43411.1) were obtained from the NCBI database (http://www.ncbi.nlm.nih.gov). The protein sequences retrieved were used in BLAST (National Library of Medicine, USA) searches against the PDB database [8] in order to obtain suitable templates for homology modeling.
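The template-search step can be scripted end to end. Below is a minimal sketch using Biopython's Entrez and NCBIWWW interfaces — not the procedure actually run in this study (which used the NCBI web interface and Discovery Studio); the e-mail address is a placeholder and the output formatting is illustrative.

```python
from Bio import Entrez, SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder

def pdb_templates(accession, top=5):
    """Fetch a protein sequence by accession and BLAST it against the
    'pdb' subset (sequences with known structures) to shortlist
    homology-modeling templates."""
    handle = Entrez.efetch(db="protein", id=accession,
                           rettype="fasta", retmode="text")
    record = SeqIO.read(handle, "fasta")
    handle.close()

    result = NCBIWWW.qblast("blastp", "pdb", record.seq)
    blast = NCBIXML.read(result)

    hits = []
    for alignment in blast.alignments[:top]:
        hsp = alignment.hsps[0]
        identity = 100.0 * hsp.identities / hsp.align_length
        hits.append((alignment.title[:70], identity, hsp.expect))
    return hits

# Shortlist templates for Mtb-SigH; the same call with CCP43411.1 would
# cover the beta-prime subunit.
for title, identity, evalue in pdb_templates("CCP46040.1"):
    print(f"{title}  identity={identity:.1f}%  E={evalue:.2g}")
```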
Mtb-SigH and the β′ subunit of Mtb-RNAP were modeled using 75 and 974 amino acid residues, respectively.

Model building

The homology models of Mtb-SigH and the β′ subunit of Mtb-RNAP were developed using the N-terminal fragment of SigR from Streptomyces coelicolor (PDB ID 1H3L) and the crystal structure of the Thermus thermophilus (Tth) transcription initiation complex containing 2 nucleotides of nascent RNA (PDB ID 4G7O) as templates, respectively. The sequence similarities of the templates with the targets were verified through multiple sequence alignment. The model of Mtb-SigH was built with the MODELER module of Discovery Studio version 2.5 from Accelrys (San Diego, CA, USA), and that of the β′ subunit of Mtb-RNAP using the Phyre server [9]. Mtb-SigH showed 58.9% sequence identity and 67.4% sequence similarity with its template, and the β′ subunit of Mtb-RNAP showed 51% homology with its template.

Model evaluation and refinement

Structural evaluation of the built models was done using the SAVES program (http://nihserver.mbi.ucla.edu/SAVES), which runs PROCHECK, Verify3D, WHAT_CHECK and ERRAT in one pass. Together, these are important for clarifying the overall fold, errors over localized regions, and stereochemical parameters such as bond lengths and angles.

Analysis of the binding site of Mtb-SigH

Active-site prediction computes the cavities of a given protein. The active site of Mtb-SigH was predicted using the server of the Supercomputing Facility for Bioinformatics and Computational Biology, IIT Delhi (http://www.scfbioiitd.res.in/) [10]. The top cavity was analysed to find the amino acid residues involved in binding with the core Mtb-RNAP.

Docking studies

The protein-protein interaction study between Mtb-SigH and the β′ subunit of Mtb-RNAP was carried out using the fully automated web-based program ClusPro (cluspro.bu.edu/login.php) [11,14]. It is a web server that performs a global rigid-body search by FFT using the DOT or ZDOCK programs. The docked structures and interface residues were analyzed using Discovery Studio version 4 from Accelrys (San Diego, CA, USA). The accessible surface area of both Mtb-SigH and the β′ subunit of Mtb-RNAP was calculated using Surface Racer version 5.

Figure 2. A) Ramachandran plot for Mtb-SigH, depicting 92.9% of the amino acid residues in the core region (red), 6% in the allowed (yellow) and 1.5% in the generously allowed region. B) Ramachandran plot analysis for the β′ subunit of Mtb-RNAP, containing 1019 residues in total, of which 92.9% lie in the core region (red), 6.3% in the allowed (yellow) and 0.8% in the generously allowed (light yellow) region. None of the residues is present in the disallowed region.

Results & Discussion:

The present study reports the 3D structures of two globular proteins, Mtb-SigH and the β′ subunit of Mtb-RNAP. Mtb-SigH, an ECF sigma factor, is 216 amino acids long and is expressed under specific conditions such as oxidative stress, hypoxia and heat shock. Its sequence (Figure 1A) was searched against the PDB using Discovery Studio version 2.5. Pairwise sequence alignment between the template and the target molecule showed 58.9% sequence identity and 67.4% sequence similarity (Figure 1B). The model was constructed using 75 amino acid residues comprising domain 2. The best-selected model was then energy minimized keeping its backbone fixed to ensure proper interactions; 1130 cycles of conjugate-gradient energy minimization were applied with the CHARMm force field until the structure reached a final derivative of 0.01 kcal/mol.
The refined model after energy minimization was subjected to PROCHECK analysis in order to assess its stereochemical properties. The Ramachandran plot retrieved from the PROCHECK analysis showed 92.5% of the residues in the most favored region and none in the disallowed region (Figure 2A), which indicates that Mtb-SigH is a good model. The overall structural quality of the model was validated through the Verify3D graph. The predicted model is a triple-helical structure (Figure 1C) with a probable DNA-binding role. The built model was superimposed on the template (Figure 1D), and the root mean square deviation (RMSD) between the template and the predicted model is 0.264 Å, indicative of a good model.

The β′ subunit of Mtb-RNAP (rpoC) is 1316 amino acid residues long, and the present model was constructed with 974 amino acid residues (Figure 3A) using the Phyre server. The model was built using the crystal structure of the Thermus thermophilus (Tth) transcription initiation complex containing 2 nucleotides of nascent RNA (PDB ID 4G7O) as template. The target sequence showed 51% sequence identity with the template. The phi-psi torsion angles of the protein are shown in the Ramachandran plot, which depicts 92.8% of the amino acid residues in the core region and none in the disallowed region (Figure 2B), indicating that the overall stereochemical quality of the model is significantly high. The modeled structure consists of 18 β-sheets and 22 α-helices (Figure 3B).

The top cavity was used to analyze the result using parameters such as shape complementarity, solvation energy and electrostatics. Amino acid residues such as Asp18, Gln19 and Arg26 found in the active site are also found in the binding interface of Mtb-SigH and the β′ subunit of Mtb-RNAP. Moreover, the sequence alignment of the sigma domain 2-binding region in various β′ homologs is found to be quite conserved throughout bacterial RNAP (Figure 3C).

This study reflects how the atypical sigma factor interacts with the core RNAP in order to initiate promoter recognition. In consideration of the various experimental data on bacterial RNAP, it is well known that the domain 2 of the sigma factor generally binds the α-helical coiled-coil region of the β′ subunit of RNAP (Figure 4A). From the present study it is also predicted that the Mtb-SigH and β′ subunit of Mtb-RNAP interaction takes place through this α-helical coiled-coil region (Figure 4B). The results indicate that Arg26, Gln19 and Asp18 of Mtb-SigH and Arg137, Gln140, Arg152, Asn133 and Asp144 of the β′ subunit of Mtb-RNAP are probably found at the binding interface (Figure 4C); the predicted interface residues also lie within the binding cavities. The binding energy of the docked complex was calculated using the CHARMm force field, which gives the potential energy of the complex as −33177.633 kcal/mol. Most of the interface residues form conventional intermolecular hydrogen bonds within 3.4 Å, while a salt bridge is found between Asp18 of Mtb-SigH and the β′ subunit of Mtb-RNAP. The interface also shows an electrostatic mode of interaction between Arg26 of Mtb-SigH (electropositive) and Asp144 of the β′ subunit of Mtb-RNAP (electronegative). No potential van der Waals clashes are noted within 0.7 Å of the binding interface (Table 1, see supplementary material). Unfavorable interactions (including bumps) are notably absent from the interacting site of the two proteins.
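Interface analyses of this kind can be reproduced from the docked coordinates. The following is a minimal Biopython sketch, not the Discovery Studio workflow used here; the file name docked_complex.pdb and the chain IDs A/B are illustrative assumptions.

```python
from Bio.PDB import PDBParser, NeighborSearch, Selection

CUTOFF = 3.4  # Angstroms; conventional H-bond distance used in the text

parser = PDBParser(QUIET=True)
model = parser.get_structure("dock", "docked_complex.pdb")[0]  # hypothetical file
sigh, beta = model["A"], model["B"]  # assumed chain IDs for SigH and beta-prime

# Index all beta-prime atoms once, then query each SigH atom against it.
search = NeighborSearch(Selection.unfold_entities(beta, "A"))

contacts = set()
for atom in Selection.unfold_entities(sigh, "A"):
    for partner in search.search(atom.coord, CUTOFF):
        r1, r2 = atom.get_parent(), partner.get_parent()
        contacts.add((r1.get_resname(), r1.id[1],
                      r2.get_resname(), r2.id[1]))

# Residue pairs in close contact across the interface, sorted by SigH position.
for resname1, num1, resname2, num2 in sorted(contacts, key=lambda c: c[1]):
    print(f"SigH {resname1}{num1}  --  beta' {resname2}{num2}")
```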
The accessible surface area (ASA) of both Mtb-SigH and the β subunit of Mtb-RNAP was calculated using the van der Waals radii of Richards (1977) with a probe radius of 1.4 Å. The output showed that the interacting residues of both subunits have much higher ASA values than the non-interacting ones (Table 2; see supplementary material). The result thus clearly supports the prediction that Arg26, Gln19, and Asp18 of Mtb-SigH and Arg137, Gln140, Arg152, Asn133, and Asp144 of the β subunit of Mtb-RNAP have significant ASA, consistent with their assignment as interface residues. The binding region of the sigma factor with the core RNAP is quite conserved throughout the bacterial RNAPs; studies with Tth-RNAP and Ec-RNAP identified similar amino acid residues at the binding interface [19][20]. The refined structural models of the molecules studied here (Mtb-RNAP and Mtb-SigH) could be used to understand the function of other transcription-associated proteins in greater detail. This work will further assist in designing precise experimental studies of such macromolecules. The developed models will help to explore interacting inhibitors by means of cheminformatics; such in silico understanding could thus contribute to the development of inhibitor molecules against sigma factors and Mtb-RNAP. The structural insight could further delineate the regulatory mechanisms of the ECF sigma factors. The region of interaction is quite conserved in prokaryotic RNAPs, and hence targeting such stress-responsive atypical sigma factors is a worthwhile route to explore for the possible control of gene expression. There are very limited reports on the gene loci transcribed by Mtb-SigH, which in turn makes them excellent material for exploring the putative promoter elements recognized by Mtb-SigH. Moreover, such studies will inform future drug-discovery research in which such sigma factors could be treated as target molecules.
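As an illustration of the ASA calculation above, the sketch below computes per-residue solvent-accessible surface area with the same 1.4 Å probe radius using Biopython's Shrake-Rupley implementation. The input file name is a placeholder, and Shrake-Rupley is a different (approximate) algorithm standing in for Surface Racer, so absolute values may differ slightly.

```python
# A sketch of per-residue accessible-surface-area (ASA) calculation with a
# 1.4 A probe, analogous to the Surface Racer analysis described above.
# "sigh_model.pdb" and chain "A" are placeholders for the actual model file.
from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

parser = PDBParser(QUIET=True)
structure = parser.get_structure("sigh", "sigh_model.pdb")

sr = ShrakeRupley(probe_radius=1.40)   # water-sized probe, as in the paper
sr.compute(structure, level="R")       # attaches a .sasa value to each residue

# Report ASA for the putative interface residues named in the text.
interface = {18, 19, 26}               # Asp18, Gln19, Arg26 of Mtb-SigH
for residue in structure[0]["A"]:
    if residue.id[1] in interface:
        print(residue.get_resname(), residue.id[1], f"{residue.sasa:.1f} A^2")
```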
Quartz-Seq: a highly reproducible and sensitive single-cell RNA sequencing method, reveals non-genetic gene-expression heterogeneity

Development of a highly reproducible and sensitive single-cell RNA sequencing (RNA-seq) method would facilitate the understanding of the biological roles and underlying mechanisms of non-genetic cellular heterogeneity. In this study, we report a novel single-cell RNA-seq method called Quartz-Seq that has a simpler protocol and higher reproducibility and sensitivity than existing methods. We show that single-cell Quartz-Seq can quantitatively detect various kinds of non-genetic cellular heterogeneity, and can detect different cell types and different cell-cycle phases of a single cell type. Moreover, this method can comprehensively reveal gene-expression heterogeneity between single cells of the same cell type in the same cell-cycle phase.

Background
Non-genetic cellular heterogeneity at the mRNA and protein levels has been observed within cell populations in diverse developmental processes and physiological conditions [1][2][3][4]. However, the comprehensive and quantitative analysis of this cellular heterogeneity and its changes in response to perturbations has been extremely challenging. Recently, several researchers reported quantification of gene-expression heterogeneity within genetically identical cell populations, and elucidation of its biological roles and underlying mechanisms [5][6][7][8]. Although gene-expression heterogeneities have been quantitatively measured for several target genes using single-molecule imaging or single-cell quantitative (q)PCR, comprehensive studies on the quantification of gene-expression heterogeneity are limited [9] and thus further work is required. Because global gene-expression heterogeneity may provide biological information (for example, on cell fate, culture environment, and drug response), the question of how to comprehensively and quantitatively detect the heterogeneity of mRNA expression in single cells, and how to extract biological information from those data, remains to be addressed. Single-cell RNA sequencing (RNA-seq) analysis has been shown to be an effective approach for the comprehensive quantification of gene-expression heterogeneity that reflects the cellular heterogeneity at the single-cell level [10,11]. To understand the biological roles and underlying mechanisms of such heterogeneity, an ideal single-cell transcriptome analysis would provide a simple, highly reproducible, and sensitive method for measuring the gene-expression heterogeneity of cell populations. In addition, this method should be able to distinguish clearly the gene-expression heterogeneity from experimental errors. Single-cell transcriptome analyses, which can be achieved through the use of various platforms, such as microarrays, massively parallel sequencers and bead arrays [12][13][14][15][16][17], are able to identify cell-type markers and/or rare cell types in tissues. These platforms require nanogram quantities of DNA as the starting material. However, a typical single cell has approximately 10 pg of total RNA and often contains only 0.1 pg of polyadenylated RNA; hence, to obtain the amount of DNA starting material that is required by these platforms, it is necessary to perform whole-transcript amplification (WTA). Previous WTA methods for single cells fall into two categories, based on the modifications that are introduced into the first-strand cDNAs in the PCR-based methods.
One approach is based on the poly-A tailing reaction, and the other on the template-switching reaction. In principle, the goal of poly-A tailing is to obtain both full-length first-strand cDNAs and truncated cDNAs. The aim of template switching is to obtain first-strand cDNAs that have reached the 5' ends of the RNA templates. These modified cDNAs are amplifiable by subsequent PCR enrichment methods. Kurimoto et al. reported a quantitative WTA method based on the poly-A-tailing reaction for single-cell microarrays [12]. They applied this single-cell transcriptome analysis and published initial validation data for technical replicates, each of which required 10 pg of total RNA. The Pearson correlation coefficient (PCC) for the reproducibility of this method using 10 pg of total RNA per reaction was approximately 0.85 [12]. Using a method similar to the one used by Kurimoto et al., Tang et al. performed single-cell RNA-seq. When they applied their method to a single mouse oocyte (around 1 ng of total RNA), these researchers were able to detect a larger number of genes than could be identified using a microarray approach [13]. However, these methods are complicated because they require multiple PCR tubes for a single cell, and gel purification is required for the removal of unexpected byproducts [18,19]. Furthermore, the performance of the Tang et al. single-cell RNA-seq method, including its reproducibility and sensitivity, has not been analyzed in quantitative detail. Two single-cell RNA-seq methods based on the template-switching reaction have been reported. Islam et al. described a method called single-cell tagged reverse transcription sequencing (STRT-seq), which is a highly multiplexed single-cell RNA-seq method that can detect the restricted 5' ends of mRNAs [14]. Ramsköld et al. developed Smart-Seq (the WTA part of Smart-Seq is now marketed as the SMARTer Ultra Low RNA Kit for Illumina Sequencing, Clontech, Mountain View, CA, USA), which exhibits greater read coverage across transcripts than previously developed methods [16]. The PCCs for the reproducibility of these methods using 10 pg of total RNA were both approximately 0.7. Recently, Hashimshony et al. described CEL-Seq (Cell Expression by Linear amplification and Sequencing), which is an in vitro transcription (IVT)-based method rather than a PCR-based method. CEL-Seq is a highly multiplexed single-cell RNA-seq method that can detect the 3' end of mRNA [17]. CEL-Seq was shown to detect significantly more genes in single mouse embryonic stem (ES) cells than STRT-Seq. The performance of these reported methods is sufficient for the identification of cell-type markers. However, their published WTA specifications do not establish whether the methods are sufficient to quantitatively assess the global gene-expression heterogeneity that is indicative of cellular heterogeneity. Because the PCC for reproducibility is greater than 0.95 for conventional non-WTA RNA-seq, it would be desirable to improve the reproducibility and sensitivity of single-cell RNA-seq beyond what is possible with existing methods. To comprehensively and quantitatively detect gene-expression heterogeneity, we have developed a simple and highly quantitative single-cell RNA-seq approach that we term Quartz-Seq. In this study, we identified several factors that degrade performance, the elimination of which allowed us to simplify the experimental procedures and improve the quantitative performance.
In particular, to maintain the simplicity and enhance the quantitative performance of the WTA, we improved three crucial aspects: 1) we achieved robust suppression of byproduct synthesis; 2) we identified a robust PCR enzyme that allows the use of a single-tube reaction; and 3) we determined the optimal conditions of reverse transcription (RT) and second-strand synthesis for capturing the mRNA and the first-strand cDNA. We also performed a quantitative comparison between our method and previously developed methods using 10 pg of total RNA as the starting material; the reproducibility and sensitivity of the Quartz-Seq method were better than those of the other methods. When used in the global expression analysis of real single cells, the single-cell Quartz-Seq approach successfully detected gene-expression heterogeneity even between cells of the same cell type and in the same cell-cycle phase. This observed gene-expression heterogeneity was found to be highly reproducible in two independent experiments, and could be distinguished from experimental errors, which were measured through technical replicates of pooled samples. We also found that single-cell Quartz-Seq was able to discriminate more easily between different cell types and/or between different cell-cycle phases. Therefore, single-cell Quartz-Seq is a useful method for the comprehensive identification and quantitative assessment of cellular heterogeneity.

Whole-transcript amplification for single-cell Quartz-Seq and Quartz-Chip
The WTA for Quartz-Seq and Quartz-Chip consists of five main steps (Figure 1). The first step is a reverse transcription with an RT primer to generate the first-strand cDNAs from the target RNAs. The second step is a primer digestion with exonuclease I; this is one of the key steps to prevent the synthesis of byproducts. The third step is the addition of a poly-A tail to the 3' ends of the first-strand cDNAs, and the fourth step is the second-strand synthesis using a tagging primer, which prepares the substrate for subsequent amplification. The fifth step is a PCR enrichment reaction with a suppression PCR primer to ensure that a sufficient quantity of DNA is obtained for the massively parallel sequencers or microarrays. All five steps are completed in a single PCR tube without any purification. The amplified cDNA contains WTA adaptor sequences from the RT primer and the tagging primer.

[Figure 1. Schematic of the single-cell Quartz-Seq and Quartz-Chip methods. All of the steps of the whole-transcript amplification were executed in a single PCR tube. The first-strand cDNA was synthesized using the reverse transcription (RT) primer, which contains oligo-dT24, the T7 promoter (T7), and the PCR target region (M) sequences. After the first-strand synthesis, the majority of the RT primer was digested by exonuclease I, although it was not possible to eliminate the RT primer completely using this procedure. A poly-A tail was then added to the 3' ends of the first-strand cDNA and to any surviving RT primer. After the second-strand synthesis with the tagging primer, the resulting cDNA and the byproducts from the surviving primers contained the whole-transcript amplification (WTA) adaptor sequences, which include the RT primer sequence and the tagging primer sequence. These DNAs were used for the suppression PCR, which used the suppression PCR primer; enrichment of short DNA fragments, such as the byproducts, was suppressed. After the enrichment, high-quality cDNA that did not contain any byproducts was obtained. The amplified cDNAs then carried the T7 promoter sequence at the 3' ends of the DNA. These cDNAs were used for the Illumina sequencing and microarray experiments.]
The amplified cDNA was then used in a massively parallel sequencer (Quartz-Seq) and a microarray system (Quartz-Chip). For Quartz-Seq, the amplified cDNA was fragmented using the Covaris shearing system. The fragmented cDNA was ligated to adaptors, which enable the multiplex production of paired-end (PE) sequences. The DNA sequencing library was analyzed using an Illumina sequencer. For the Quartz-Chip method, we synthesized labeled cRNA from the amplified cDNA using in vitro transcription. The labeled cRNA was used for the microarray analysis.

Performance improvements of whole-transcript amplification
In previous WTA methods based on the poly-A-tailing reaction, excessive amounts of byproducts are produced (see Additional file 1, Figure S1). These byproducts (usually DNA < 200 bp in length) are derived from the RT primer. The RT primer is modified by terminal deoxynucleotidyl transferase, similarly to the first-strand cDNA. The modified RT primer then causes synthesis of the byproducts [18], and the amplified byproducts need to be removed by gel purification [18,19] (see Additional file 2, Supplementary note). This gel-purification step for the removal of these byproducts increases the complexity of the method. The byproducts contain WTA adaptor sequences. We found that the byproducts cause early saturation of the PCR amplification and reduce the molar ratio between the target cDNA and the byproducts. The contamination rate from the WTA adaptor sequence was dramatically increased in the Illumina sequencing (see Additional file 1, Figure S2e,f). To overcome this byproduct contamination, byproduct synthesis was completely eliminated using a combination of exonuclease I treatment, restricted poly-A tailing, and an optimized suppression PCR (see Additional file 1, Figures S3 and S4). We successfully eliminated the synthesis of byproducts through the following three improvements, thus eliminating the need for gel purification. The first improvement concerns the RT primer concentration and its removal after the RT procedure described above. We used the minimum primer concentration for the RT. Moreover, we removed the RT primer by treating with exonuclease I, which digests single-stranded DNAs such as primers. This exonuclease I digestion suppressed the synthesis of byproducts (see Additional file 1, Figure S2a). However, the primer removal was not complete at this point, in agreement with the results of a previous study [18] (see Additional file 1, Figure S1). The molar ratio between the single-cell-level mRNA (0.1 pg; Ensembl Mouse Transcripts, 1,817 bp average size) and the RT primer is greater than 190,000, so complete removal solely by exonuclease I digestion was difficult.
The remaining primers were then modified by terminal deoxynucleotidyl transferase, similarly to the first-strand cDNA; these modified primers caused the production of byproducts. The second improvement was to prevent amplification of the modified primers using suppression PCR technology. Suppression PCR is very effective in suppressing the amplification of small DNA fragments that contain complementary sequences at both ends of the template DNA [20]. In suppression PCR, these complementary sequences can bind to each other, and the self-bound template DNA forms a 'panhandle'-like structure. Such DNA is not amplified by PCR because the PCR primer cannot bind to the template DNA (see Additional file 1, Figure S4). The target DNA size of suppression PCR depends on the ends of the complementary sequences (including their length and GC content). We identified a good primer sequence for the suppression PCR, and showed that the B-primer effectively suppressed the synthesis of byproducts (see Additional file 1, Figures S2c and S4b). The third improvement was to shorten the remaining primers modified by the terminal transferase, and thus make them targets for suppression PCR, by restricting the reaction time of the terminal transferase. This restriction suppressed the synthesis of byproducts (see Additional file 1, Figure S2b). In addition, we found that topoisomerase V could suppress byproduct synthesis (see Additional file 1, Figure S2d), although the mechanism of this suppression is not known. Using the combination of the three improvements described above, we successfully and completely suppressed the synthesis of byproducts in a single-tube reaction (see Additional file 1, Figure S2e), without using topoisomerase V. Furthermore, we selected a robust PCR enzyme that was optimal for the single-tube reaction. The use of this DNA polymerase (MightyAmp DNA Polymerase (Takara Bio, Inc., Tokyo, Japan), also marketed as Terra PCR Direct Polymerase (Clontech, Mountain View, CA, USA)) improved the yield of cDNA (see Additional file 1, Figures S2c and S5) and the reproducibility of the WTA replication (see Additional file 1, Figures S2d and S5). In addition, with this DNA polymerase (MightyAmp), the number of PCR cycles in the WTA could be reduced. Moreover, we improved the efficiencies of the RT and second-strand synthesis steps to counter the lowered reproducibility of the WTA caused by the variable efficiencies of these steps; we identified the optimal annealing temperature that reduced this variability (see Additional file 1, Figure S2). Our simplified method enabled us to consistently obtain highly reproducible cDNA that was optimized for RNA-seq (see Additional file 1, Figure S6).
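The suppression-PCR principle described above, in which self-complementary ends fold into a 'panhandle' that blocks primer binding, can be illustrated with a toy script. The sequences below are invented, and the check tests end complementarity only; real suppression efficiency also depends on fragment length, GC content, and hybridization thermodynamics.

```python
# A toy illustration of the suppression-PCR principle: a single strand whose
# two ends are complementary can fold back on itself ("panhandle"), hiding the
# primer-binding site. This checks end complementarity only; it is not a
# thermodynamic hybridization model.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(COMP)[::-1]

def forms_panhandle(strand: str, stem: int) -> bool:
    """True if the last `stem` bases can base-pair with the first `stem`."""
    return strand.endswith(revcomp(strand[:stem]))

adaptor = "GGTAGTCAGTCCAGAGTCGA"                 # hypothetical WTA adaptor
insert = "ATATTTGGCCAATTGGCA"                    # short byproduct insert
byproduct = adaptor + insert + revcomp(adaptor)  # adaptor at both ends

# True: the short byproduct self-anneals, so its PCR amplification is suppressed.
print(forms_panhandle(byproduct, stem=len(adaptor)))
```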
Reproducibility and sensitivity of single-cell Quartz-Seq
We performed single-cell Quartz-Seq with 10 pg of diluted total ES-cell RNA to validate the reproducibility of the technical replicates. We prepared a multiplex, PE DNA sequencing library from the amplified cDNA produced from the 10 pg of total RNA. The DNA sequencing library was analyzed using a massively parallel sequencer (HiSeq 1000/2000; Illumina). Pairwise comparisons of the products of triplicate amplifications were used to quantify the reproducibility of the protocol, based on the PCCs of log10-transformed fragments per kilobase of transcript per million fragments sequenced (FPKM) values (Figure 2a). The PCCs of these comparisons were approximately 0.93. We counted reproducibly expressed transcripts, defined as those with FPKM greater than 1.0 that exhibited less than two-fold expression change between technical replicates. The single-cell Quartz-Seq method reproducibly detected (mean ± SD) 8,110.3 ± 100.8 (82.1 ± 0.6%) of 9,872.6 ± 54.4 transcripts (Figure 2b; for pairwise plots, see Additional file 3, Figure S7, and for linear regression and correlation analysis, see Additional file 4, Table S1). To evaluate the sensitivity of Quartz-Seq, we compared the results of conventional RNA-seq (non-WTA) and Quartz-Seq. The PCCs of these comparisons were approximately 0.89 (Figure 2a). We also counted sensitively detected transcripts that had FPKM greater than 1 and exhibited less than two-fold expression change between technical replicates. The single-cell Quartz-Seq method sensitively detected 6,605 ± 139.9 (68.1 ± 0.5%) of 9,686 ± 63.7 transcripts (Figure 2b). To evaluate the over-representation of sequences derived from the WTA and the library preparation, we searched for the WTA adaptor sequence in all of the sequence reads using sequence similarity (see details in Materials and methods). Using Smart-Seq, 21.1 ± 3.05% of the sequences were identified as WTA adaptor (Smart-Seq, 10 pg ES-cell RNA, PE 30 million reads, n = 4; see Additional file 1, Figure S8), whereas 7.68 ± 0.66% of the sequences were identified as WTA adaptor by Quartz-Seq (Quartz-Seq, 10 pg ES-cell RNA, PE 60 million reads, n = 3; see Additional file 1, Figure S8). We also evaluated the number of reads required for the method to detect mRNAs. From 0.01, 0.1, 0.5, 1.0, 5.0, 10, 30, and 45 million reads (uniquely mapped, single-end, 50 bp), we counted the number of detected genes and calculated the PCCs between different samples of the same origin (10 pg of total RNA). We found that a Quartz-Seq result from more than 1 million reads had a correlation of greater than 0.9 and detected more than 7,642 ± 40 transcripts (73 ± 0.19%) (see Additional file 1, Figure S9). Furthermore, we compared the cDNA lengths resulting from the Quartz-Seq and conventional RNA-seq methods (see Additional file 1, Figure S10). The median read coverage across the expressed transcripts (FPKM ≥ 10) was 53.8% (705 bp) for Quartz-Seq compared with 84.8% (1,326 bp) for conventional RNA-seq.

Comparison between Quartz-Seq and other methods
We carefully compared the quantitative performance of Quartz-Seq/Chip with three reported methods for single-cell transcriptome analysis. To compare the reproducibility of Quartz-Seq and Smart-Seq, four Smart-Seq data sets from the first Smart-Seq paper published by Ramsköld et al. were downloaded from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) repository (GSE38495). We calculated the PCC between pairs of samples using 10 pg of total RNA (see Additional file 3, Figure S7; see Additional file 4, Table S1). The read numbers for the Quartz-Seq and Smart-Seq methods were adjusted to approximately 30 million single-end reads of 50 bp. The PCC values for Quartz-Seq and Smart-Seq were approximately 0.93 and 0.7, respectively (Figure 3a).
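The reproducibility metrics used in this section (the PCC of log10-transformed expression values, and the count of transcripts with expression above 1 and under two-fold change between replicates) can be sketched as follows. Synthetic lognormal data stand in for the real replicate FPKM tables.

```python
# A sketch of the reproducibility metrics used above: Pearson correlation of
# log10(FPKM) between technical replicates, and the count of transcripts with
# FPKM > 1 in both replicates and less than two-fold change. Synthetic data
# stand in for the real replicate tables.
import numpy as np

rng = np.random.default_rng(0)
true_expr = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)      # "true" FPKM
rep1 = true_expr * rng.lognormal(0.0, 0.3, size=true_expr.size)  # replicate noise
rep2 = true_expr * rng.lognormal(0.0, 0.3, size=true_expr.size)

expressed = (rep1 > 1.0) & (rep2 > 1.0)
log1, log2 = np.log10(rep1[expressed]), np.log10(rep2[expressed])

pcc = np.corrcoef(log1, log2)[0, 1]
within_2fold = np.abs(log1 - log2) < np.log10(2.0)  # < 2-fold change

print(f"expressed transcripts (FPKM > 1 in both): {expressed.sum()}")
print(f"PCC of log10(FPKM): {pcc:.3f}")
print(f"reproducible (<2-fold change): {within_2fold.sum()} "
      f"({100 * within_2fold.mean():.1f}%)")
```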
Next, we compared the quantitative performance between Quartz-Seq and CEL-Seq. CEL-Seq data sets from the original paper published by Hashimshony et al. were downloaded from the NCBI Sequence Read Archive (SRA) (SRP014672). We calculated the PCC between pairs of samples using 10 pg of C. elegans total RNA (see Additional file 3, Figure S7; see Additional file 4, Table S1). The PCCs of these comparisons were approximately 0.72. We counted reproducibly expressed transcripts that had greater than 1.0 tags per million (tpm) and exhibited less than two-fold expression change between technical replicates. CEL-Seq reproducibly detected 2,564 ± 183.8 of 5,196.8 ± 364.9 transcripts (49.3 ± 1.7%). Moreover, we reanalyzed the CEL-Seq data for mouse ES cells, counting reproducibly expressed transcripts with greater than 1.0 tpm. CEL-Seq detected 4,070.3 ± 332.4 transcripts in single mouse ES cells (n = 9), whereas Quartz-Seq detected 6,069.1 ± 854.9 transcripts in the same cell type (n = 35). Subsequently, we compared the quantitative performance of our method with other methods based on the poly-A-tailing reaction. The detailed quantitative performance of the single-cell RNA-seq method of Tang et al. using 10 pg of total RNA has not been reported. Therefore, we evaluated the performance (reproducibility and sensitivity) of the Quartz-Chip and Kurimoto et al. methods using a chip array (GeneChip; Affymetrix Inc., Santa Clara, CA, USA) with 10 pg of diluted total RNA. For this comparison, we reanalyzed the original data from Kurimoto et al. and compared the results with the Quartz-Chip data. We first compared the technical duplicates to quantify the reproducibility of both protocols (Figure 3c, upper panels). In this analysis, we counted transcripts that had a robust multi-array averaging (RMA) expression greater than 7.0 and exhibited less than two-fold expression change between technical duplicates. The Quartz-Chip method reproducibly detected 7,520 of 9,622 transcripts (78.2%) in the technical duplicates, whereas the Kurimoto et al. method was less reproducible.

Limitations of Quartz-Seq
We investigated whether there are specific transcript structures that show greater noise or are under-represented. We calculated and compared the GC content and cDNA lengths of the amplified and unamplified isoforms obtained by each single-cell RNA-seq method (see Additional file 1, Figure S11). As expected, we found that the unamplified isoforms from Quartz-Seq had a higher GC content (mean 52.1%) than the amplified isoforms (mean 50.2%). In addition, the unamplified isoforms from Quartz-Seq had a higher GC content (mean 52.1%) than those from Smart-Seq (mean 51.5%). In the analysis of cDNA lengths, we found that the unamplified isoforms from Quartz-Seq had a shorter cDNA length (mean 1,684.0 bp) than the amplified (or detected) isoforms (mean 2,558.6 bp). We then performed a detailed comparison of technical noise between different polymerases. We performed Quartz-Seq with Ex Taq DNA polymerase (TaKaRa) instead of MightyAmp DNA polymerase, because Ex Taq DNA polymerase was used in the previous poly-A-tailing-based methods (those of Kurimoto et al. and of Tang et al.). We calculated the GC content and cDNA lengths of the amplified and unamplified isoforms (see Additional file 1, Figure S11). With Quartz-Seq, the unamplified isoforms produced by MightyAmp had a higher GC content (mean 52.1%) than those produced by Ex Taq (mean 51.7%), while the unamplified isoforms produced by MightyAmp had a shorter cDNA length (mean 1,684.0 bp) than those produced by Ex Taq (mean 2,481.1 bp).
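The GC-content and length comparison above reduces to simple per-sequence statistics. A minimal sketch is shown below, with invented placeholder sequences in place of the real amplified and unamplified Ensembl transcript sets.

```python
# A sketch of the amplified-vs-unamplified isoform comparison: mean GC content
# and mean cDNA length for two sets of transcript sequences. The dictionaries
# of sequences are placeholders for the real Ensembl transcript sets.
def gc_percent(seq: str) -> float:
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(b) for b in "GC") / len(seq)

amplified = {"isoA": "ATGGCGCTAGCTAGGATCGGCTA" * 80,
             "isoB": "ATGCCGGTACCGATGGCATGCAT" * 120}
unamplified = {"isoC": "GCGGCCGCGGGCCGCGCAGGGCC" * 60}  # GC-rich and shorter

for name, group in (("amplified", amplified), ("unamplified", unamplified)):
    gcs = [gc_percent(s) for s in group.values()]
    lens = [len(s) for s in group.values()]
    print(f"{name}: mean GC {sum(gcs) / len(gcs):.1f}%, "
          f"mean length {sum(lens) / len(lens):.0f} bp")
```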
Single-cell Quartz-Seq detects different cell types
Heterogeneous cell populations, such as cultured cell lines and tissues, are composed of various types of single cells, which have different gene-expression patterns. We therefore tested whether single-cell Quartz-Seq can distinguish between different cell types and whether it can detect the differentially expressed genes that are characteristic of each cell type. We performed single-cell Quartz-Seq with 12 mouse ES cells and with 12 primitive endoderm (PrE) cells that were directly differentiated from ES cells [21]. We collected single cells directly into PCR tubes using fluorescence-activated cell sorting (FACS), as previously reported [22]. In this system, all sorted cells were readily discerned as single cells (see Additional file 1, Figure S13). We successfully obtained amplification products from almost all of these single cells (98%, n = 50; see Additional file 1, Figure S13). The ES and PrE cells were collected during the G1 phase of the cell cycle (see Additional file 1, Figure S12). Although the cell population combining cells from all cell-cycle phases contained an average of approximately 10 pg of total RNA per cell, the cells in G1 phase contained only approximately 6 pg of total RNA per single cell (see Additional file 1, Figure S14). We first performed a cluster analysis of all the transcripts from all of the samples. The global expression patterns of the ES and PrE cells were clearly divided into two clusters (Figure 4a). A heat map of the ES and PrE marker genes and the non-differentially expressed genes is shown in Figure 4b. We detected 1,620 and 1,436 differentially expressed genes in the ES and PrE cells, respectively. These differentially expressed genes included the ES marker genes (for example, Nanog, Pou5f1, and Fgf4) and the PrE marker genes (for example, Gata4, Gata6, and Dab2). In addition, these marker genes showed clear differential expression between the ES and PrE cells. By contrast, the non-differentially expressed genes, such as Gnb1l and Eef1b2, did not exhibit a differential expression pattern (Figure 4b). We then validated the expression patterns of the ES and PrE marker genes and the non-differentially expressed genes using an amplification-free single-cell qPCR method. To avoid any amplification bias, we directly detected the gene expression from single cells without amplification (see details in Materials and methods). The results show expression patterns for the ES and PrE cell markers and the non-differentially expressed genes that were highly correlated with the single-cell RNA-seq data. The gene-expression levels of Pou5f1 and Zfp42 were dramatically decreased in the majority of the single PrE cells; however, they remained high in a small number of single PrE cells (Figure 4c). This trend was observed with both the single-cell Quartz-Seq and the amplification-free single-cell qPCR methods.

Single-cell Quartz-Seq detects the different cell-cycle phases of a single cell type
In addition to being a result of different cell types, gene-expression heterogeneity can also result from different cell-cycle phases. To investigate the performance limits of the single-cell Quartz-Seq method, we tested whether this method is able to distinguish cell-cycle-dependent heterogeneity among ES cells. We performed single-cell Quartz-Seq with ES cells in different cell-cycle phases (G1, S, and G2/M) and then used principal components analysis (PCA) to analyze the results.
The single PrE cells and the single ES cells formed two clearly divided clusters: the ES and PrE clusters (see Additional file 1, Figure S15a; see Additional file 5, Supplementary movie 1). When the single PrE cells were excluded, the single ES cells from each cell-cycle phase formed three distinct clusters, although a few single cells from the G1 and S phases were close to the G2/M cluster (see Additional file 1, Figure S15b; see Additional file 6, Supplementary movie 2). As expected, the differences between the cell-cycle phases were smaller than the difference between the stem cells and the more differentiated cells. Despite these smaller differences, the single-cell Quartz-Seq method was able to detect the different cell-cycle phases within a single cell type.

Single-cell Quartz-Seq reveals gene-expression fluctuations within a single cell type in the same cell-cycle phase
If the differences associated with the different cell types and cell-cycle phases are excluded, there still remains a small amount of heterogeneity due to fluctuations in gene expression among single cells. To test whether single-cell Quartz-Seq can detect these gene-expression fluctuations, we calculated and plotted the standard deviations from two independently amplified sets of single-cell Quartz-Seq data (n = 12 and n = 8) of ES cells in G1 phase (Figure 5a). The PCC between these standard deviations from the two independent sets was approximately 0.85 (Figure 5a). We performed an F-test of the equality of the variances of the two Quartz-Seq data sets to identify reproducible gene-expression fluctuations (false-discovery rate (FDR) > 0.6). Variances in the expression of 17,064 genes were reproducibly observed. These results suggest that the gene-expression fluctuations detected by single-cell Quartz-Seq are highly reproducible and thus not due to experimental errors. To further validate the observed gene-expression fluctuations using an independent experimental method, we used amplification-free single-cell qPCR to assess the expression of nine genes (Figure 5b), which were selected according to their gene-expression levels. To assess both the gene-expression fluctuations and the experimental errors, we prepared samples from single cells (for the assessment of the gene-expression fluctuations) and single-cell-sized samples of pooled cells (for the assessment of the experimental errors). We expected that, if the gene-expression fluctuations detected by single-cell Quartz-Seq were not solely due to experimental errors, the fluctuations detected by single-cell qPCR of the single-cell samples would be greater than the experimental errors detected from the pooled samples. As expected, the gene-expression fluctuations detected by single-cell qPCR for the single-cell samples were indeed significantly greater than the experimental errors detected from the pooled samples (F-test, P < 0.001; Figure 5b). This result indicates that single-cell qPCR can clearly distinguish gene-expression fluctuations from experimental errors.
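The variance-reproducibility test described above can be sketched per gene as a two-sided F-test of equal variances between the two independent single-cell sets, followed by a multiple-testing adjustment. Benjamini-Hochberg is used here as one common FDR procedure; the paper does not state which adjustment was applied, and the data below are synthetic.

```python
# A sketch of the variance-reproducibility test: for each gene, an F-test of
# equality of variances between two independent single-cell data sets, with
# Benjamini-Hochberg adjustment. Genes whose variances do NOT differ (adjusted
# p above a cutoff, e.g. FDR > 0.6) are treated as reproducibly fluctuating.
import numpy as np
from scipy.stats import f as f_dist

def f_test_pvals(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Two-sided F-test p-value per row (gene)."""
    va, vb = a.var(axis=1, ddof=1), b.var(axis=1, ddof=1)
    dfa, dfb = a.shape[1] - 1, b.shape[1] - 1
    stat = va / vb
    p = 2.0 * np.minimum(f_dist.cdf(stat, dfa, dfb),
                         f_dist.sf(stat, dfa, dfb))
    return np.minimum(p, 1.0)

def benjamini_hochberg(p: np.ndarray) -> np.ndarray:
    """Benjamini-Hochberg adjusted p-values (FDR)."""
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adj = np.empty_like(p)
    adj[order] = np.minimum(ranked, 1.0)
    return adj

rng = np.random.default_rng(2)
set1 = rng.normal(0, 1, size=(5_000, 12))  # 12 single cells, experiment 1
set2 = rng.normal(0, 1, size=(5_000, 8))   # 8 single cells, experiment 2

fdr = benjamini_hochberg(f_test_pvals(set1, set2))
print(f"genes with reproducible variance (FDR > 0.6): {(fdr > 0.6).sum()}")
```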
If the single-cell Quartz-Seq method quantitatively detects gene-expression fluctuations, we would also expect the fluctuations to be highly correlated between the single-cell Quartz-Seq and qPCR methods. To confirm this, we compared the single-cell Quartz-Seq data with the single-cell qPCR data. To compare these two different platforms, we chose to use a relative measure, the coefficient of variation (CV), rather than an absolute measure such as the standard deviation. As expected, we found that the CVs of the single-cell Quartz-Seq approach were highly correlated with those obtained with single-cell qPCR (PCC for the CVs of 0.992) (Figure 5c), suggesting that the single-cell Quartz-Seq method quantitatively detects gene-expression fluctuations. To assess the functional features of the fluctuating genes, we performed over-representation analysis using the Gene Ontology and REACTOME pathway databases on single-cell Quartz-Seq data from 20 mouse ES cells in G1 phase (see Additional file 1, Figure S16). First, we performed clustering using PCA to collect groups of similarly fluctuating genes. The genes associated with each principal component were then tested with a hypergeometric test against the Gene Ontology and REACTOME pathway databases. We found that the chromosome maintenance, G1/S-specific transcription, and RNA polymerase II transcription pathways were significantly over-represented in PC 1.
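The over-representation step reduces to a hypergeometric tail test per pathway. A minimal sketch with invented counts is shown below.

```python
# A sketch of the over-representation (enrichment) test described above: given
# a set of fluctuating genes and a pathway gene set, the hypergeometric tail
# probability gives the enrichment p-value. The counts are illustrative only.
from scipy.stats import hypergeom

M = 17_064  # genes tested (population size)
K = 120     # genes annotated to the pathway (e.g., chromosome maintenance)
n = 500     # fluctuating genes loading strongly on PC1 (the drawn sample)
k = 18      # pathway genes among them (the overlap)

# P(overlap >= k) under random sampling without replacement
p_value = hypergeom.sf(k - 1, M, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```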
Discussion
In this study, we established a novel WTA method that is optimized for single-cell RNA-seq and detects gene-expression heterogeneity between individual cells. This WTA method for single-cell Quartz-Seq is substantially easier to perform than other previously developed methods based on the poly-A-tailing reaction [18,19] (see Additional file 1, Figure S1). For example, the Kurimoto et al. method requires approximately 17 PCR tubes and 11 reaction steps per single cell [18], whereas single-cell Quartz-Seq amplification requires only 1 PCR tube and 6 reaction steps per single cell; all of the steps are completed in a single PCR tube without any purification. These improvements, which drastically simplify the single-cell Quartz-Seq method, will be useful for high-throughput production of single-cell preparations. In addition to its simplicity, the single-cell Quartz-Seq method is highly quantitative (Figure 2). We validated the performance of single-cell Quartz-Seq using 10 pg samples of purified total RNA prepared from pooled cell populations, and found that the quantitative performance of single-cell Quartz-Seq was better than that of previously developed single-cell methods (Figure 3). Moreover, Quartz-Seq was useful for the analysis of cell subpopulations (50 cells, containing 300 to 350 pg of total RNA) with highly quantitative performance (R = 0.99; see Additional file 1, Figure S17). Any method based on PCR amplification will have difficulty amplifying transcripts with an extremely high GC content, and thus we would expect these to be under-represented in Quartz-Seq. We performed a detailed comparison of technical noise between different polymerases, and found that Quartz-Seq is more robust against high GC content when MightyAmp DNA polymerase, rather than Ex Taq DNA polymerase, is used for amplification of GC-rich sequences. In the analysis of cDNA lengths for each method, we found that the unamplified isoforms from Quartz-Seq had a shorter cDNA length (mean 1,684.0 bp) than the amplified isoforms (mean 2,558.6 bp). This seems counterintuitive but can be explained by the principle of massively parallel sequencing with WTA, in which a longer cDNA generates more reads and therefore can be detected more sensitively than a shorter cDNA (see Additional file 1, Figure S11d). The unamplified isoforms from Quartz-Seq also had a shorter cDNA length (mean 1,684.0 bp) than the mean length of all Ensembl Mouse Transcripts (mean 1,817 bp). The unamplified isoforms from Quartz-Seq had a significantly shorter cDNA length (mean 1,684.0 bp) than those from Smart-Seq (mean 2,382.0 bp), suggesting that Quartz-Seq is more robust against shorter cDNAs. We also found that Smart-Seq was unable to amplify 3,924 ± 124.5 isoforms, whereas the number of isoforms that could not be amplified by Quartz-Seq was only 1,614 ± 88.9. As a result of its higher reproducibility and sensitivity, single-cell Quartz-Seq can distinguish not only different cell types but also different cell-cycle phases of the same cell type. In addition, this method can comprehensively detect gene-expression fluctuations within the same cell type and cell-cycle phase; these fluctuations were highly reproducible in two independent experiments (Figure 5a,c) and were distinguished from experimental errors measured from technical replicates of pooled samples (Figure 5b). Therefore, our method is capable of comprehensively and quantitatively revealing gene-expression fluctuations. Such fluctuations can be generated both by the intrinsic stochastic nature of gene expression and by extrinsic environmental differences between cells [5][6][7][8]. In fact, it has been reported that individual cells in a population of ES cells exhibit fluctuations in both mRNA and protein expression under the same culture conditions (for example, as reported for Nanog, Zfp42, Whsc2, Rhox9, and Zscan4) that might be associated with different cellular phenotypes [1,[23][24][25]. Hence, the single-cell Quartz-Seq approach should be useful for the analysis of the roles and mechanisms of non-genetic cellular heterogeneity.

Conclusions
Single-cell Quartz-Seq is a simplified protocol compared with previously established methods based on the poly-A-tailing reaction. All of the steps are completed in a single PCR tube without any purification. The reproducibility and sensitivity of Quartz-Seq were higher than those of other single-cell RNA-seq methods. Use of Quartz-Seq in technical replicates with 10 pg each of total RNA produced a PCC of approximately 0.93, whereas the reproducibility of previous methods is approximately 0.7. To evaluate the sensitivity of Quartz-Seq, we compared the performance of conventional RNA-seq and single-cell RNA-seq methods with 10 pg of total RNA and found the PCC to be approximately 0.88 for Quartz-Seq compared with approximately 0.7 for other methods. When used in the global expression analysis of real single cells, the single-cell Quartz-Seq approach successfully detected gene-expression heterogeneity even between cells of the same cell type and/or between different cell-cycle phases. This observed gene-expression heterogeneity was found to be highly reproducible in two independent experiments, and could be distinguished from experimental errors, which were measured using technical replicates of pooled samples. Therefore, single-cell Quartz-Seq is a useful method for the comprehensive identification and quantitative assessment of cellular heterogeneity.

Materials and methods

Cell culture
We used EB5 ES cells for preparation of total RNA. This cell line is derived from E14tg2a ES cells, in which a blasticidin-resistance gene disrupts one endogenous Pou5f1 allele. We used 5G6GR ES cells for the single-cell Quartz-Seq.
This cell line was generated by random integration of the linearized Gata6-GR-IRES-Puro vector into EB5 ES cells [21]. These cells were cultured on gelatin-coated dishes, in the absence of feeder cells, in Glasgow minimal essential medium (GMEM; Sigma-Aldrich, St Louis, MO, USA) supplemented with 10% fetal calf serum, 1000 U/ml leukemia inhibitory factor (ESGRO; Invitrogen Corp., Carlsbad, CA, USA), 100 µmol/l 2-mercaptoethanol (Nacalai Tesque Inc., Kyoto, Japan), 1× non-essential amino acids (Invitrogen), 1 mmol/l sodium pyruvate (Invitrogen), 2 mmol/l L-glutamine (Nacalai Tesque), 0.5× penicillin/streptomycin (Invitrogen), and 10 µg/ml blasticidin (Invitrogen). For culture of 5G6GR ES cells, 0.5 µg/ml puromycin (Sigma-Aldrich) was also added to the culture. For differentiation of 5G6GR ES cells into PrE cells, the cells were seeded into medium supplemented with 100 mmol/l dexamethasone instead of blasticidin, and cultured for 72 hours. After this 72-hour culture, the 5G6GR cells had completely differentiated into PrE cells.

RNA preparation
The total RNA was purified from the ES cells (EB5 cell line) using reagent (TRIzol; Life Technologies Corp., Carlsbad, CA, USA) and a commercial kit (RNeasy Mini Kit; Qiagen Inc., Valencia, CA, USA). The amount of total RNA from an ES cell was quantified using an absorptiometer (ND-1000; LMS, Tokyo, Japan). The length distribution of the total RNA was measured using an RNA 6000 Nano Kit (Agilent Biotechnology, Santa Clara, CA, USA), which produced an RNA integrity number of 10 for the total RNA. The spike RNAs were synthesized using the pGIBS-LYS, pGIBS-DAP, pGIBS-PHE, and pGIBS-THR plasmids (American Type Culture Collection (ATCC), Manassas, VA, USA) and the MEGAscript T3 kit (Ambion Inc., Austin, TX, USA), as previously reported [12]. The spike RNAs were added to the total RNA from the ES cells as follows (per 10 pg of total RNA): Lys, 1000 copies; Dap, 100 copies; Phe, 20 copies; and Thr, 5 copies. The total RNA containing the spike RNAs was diluted to 25 pg/µl using single-cell lysis buffer (0.5% NP40, Thermo) immediately before amplification.

Single-cell collection using FACS
The cultured cells were dissociated using trypsin-EDTA at 37°C for 3 minutes. The resulting cells were subsequently washed with PBS buffer. A total of 0.5 × 10^6 cells were stained with 1 ml of PBS containing 10 µg/ml Hoechst 33342 at 37°C for 15 minutes. The stained cells were sorted as previously reported, based on the Hoechst 33342-stained cell area of the FACS distribution [22]. To increase the amplification success rate, we added several bubbles to the single-cell lysis buffer using a micropipette (see Additional file 1, Figure S13). We sorted each single cell into a 0.4 µl aliquot of lysis buffer with a bubble; the buffer was pre-chilled to 0°C using a PCR chill rack (Iso-Freeze; Labgene Scientific, Châtel-St-Denis, Switzerland). Subsequently, we performed WTA with each single-cell lysis sample. For details of all of the samples used, see Additional file 7, Table S2.

Whole-transcript amplification for single-cell Quartz-Seq
To reduce the risk of RNase contamination, the workbench environment and all experimental equipment were cleaned using an RNase removal reagent (RNase Out; Molecular BioProducts, San Diego, CA, USA). We used low-retention single PCR tubes or sets of 8-linked PCR tubes for single-cell amplifications (TaKaRa Bio Inc., Otsu, Japan).
The cells and 10 pg total RNA samples were dissolved in 0.4 µl of single-cell lysis buffer (0.5% NP40) in an aluminum PCR rack at 0°C and transferred to ice. These solutions were mixed using a bench-top mixer (MixMate; Eppendorf, Westbury, NY, USA) at 2,500 rpm and 4°C for 15 seconds and then centrifuged at 3,000 × g and 4°C for 10 seconds. Immediately after the second centrifugation, 0.8 µl of priming buffer (1.5× PCR buffer with MgCl2 (TaKaRa Bio), 41.67 pmol/l of the RT primer (HPLC-purified; Table 1), 4 U/µl of RNase inhibitor (RNasin Plus; Promega Corp., Madison, WI, USA), and 50 µmol/l dNTPs) was added to each tube. The solutions were mixed at 2,500 rpm and 4°C for 15 seconds. The denaturation and priming were performed at 70°C for 90 seconds and 35°C for 15 seconds using a thermal cycler (C1000 and S1000; Bio-Rad Laboratories, Inc., Hercules, CA, USA), and the reaction tubes were placed into an aluminum PCR rack at 0°C. Subsequently, 0.8 µl of RT buffer (1× PCR buffer, 25 U/µl reverse transcriptase (SuperScript III; Life Technologies), and 12.5 mmol/l DTT) was added to each tube. The reverse transcription was performed at 35°C for 5 minutes and 45°C for 20 minutes, and the reactions were heat-inactivated at 70°C for 10 minutes. The reaction tubes were then placed into an aluminum PCR rack at 0°C. We consistently used the latest available lots of the reverse transcriptase (SuperScript III) for the single-cell amplification. After centrifugation at 3,000 × g and 4°C for 10 seconds, 1 µl of the exonuclease solution (1× Exonuclease buffer and 1.5 U/µl exonuclease I; both TaKaRa Bio) was added to each tube. The primer digestion was performed at 37°C for 30 minutes, and the reactions were heat-inactivated at 80°C for 10 minutes. The reaction tubes were placed into an aluminum PCR rack at 0°C. After centrifugation at 3,000 × g and 4°C for 30 seconds, 2.5 µl of poly-A-tailing buffer (1× PCR buffer, 3 mmol/l dATP, 33.6 U/µl terminal transferase (Roche Applied Science, Indianapolis, IN, USA), and 0.048 U/µl RNase H (Invitrogen)) was added to each tube in the aluminum PCR rack at 0°C. The reaction tubes were mixed at 2,500 rpm and 4°C for 15 seconds. Immediately after centrifugation at 3,000 × g and 0°C for 10 seconds, the reaction tubes were placed into a thermal cycler block, which was pre-chilled to 0°C. Subsequently, the poly-A-tailing reaction was performed at 37°C for 50 seconds and heat-inactivated at 65°C for 10 minutes. The reaction tubes were then placed into an aluminum PCR rack at 0°C. After centrifugation at 3,000 × g and 4°C for 30 seconds, the reaction tubes were placed into an aluminum PCR rack at 0°C. We then added 23 µl of the second-strand buffer (1.09× MightyAmp Buffer version 2 (TaKaRa), 70 pmol/l tagging primer (HPLC-purified; Table 1), and 0.054 U/µl MightyAmp DNA polymerase (TaKaRa)) to each tube. The reaction tubes were mixed at 2,500 rpm and 4°C for 15 seconds. After centrifugation at 3,000 × g and 4°C for 10 seconds, the second-strand synthesis was performed at 98°C for 130 seconds, 40°C for 1 minute, and 68°C for 5 minutes. The reaction tubes were then immediately placed into an aluminum PCR rack at 0°C, and 25 µl of PCR buffer (1× MightyAmp Buffer version 2 and 1.9 µmol/l suppression PCR primer (HPLC-purified; Table 1)) was added. The reaction tubes were mixed at 2,500 rpm and 4°C for 15 seconds.
After centrifugation at 3,000 × g and 4°C for 10 seconds, the PCR enrichment was performed using the following conditions per cycle, for a total of 21 PCR cycles: 98°C for 10 seconds, 65°C for 15 seconds, and 68°C for 5 minutes. After the PCR step, the reaction tubes were incubated at 68°C for 5 minutes. The reaction tubes were then placed into an aluminum PCR rack at 25°C. The amplified cDNA was purified using a PCR purification column (MinElute; Qiagen) or a PCR purification bead system (Agencourt AMPure XP; Beckman Coulter Inc., Brea, CA, USA). The obtained amplified cDNA was used for subsequent detection on each platform. For the Smart-Seq analysis, we amplified cDNA from 10 pg of the total RNA from ES cells using a commercial kit (SMARTer Ultra Low RNA Kit for Illumina sequencing; Clontech, Mountain View, CA, USA). After 19 cycles of PCR enrichment from the 10 pg of total ES-cell RNA, 2 to 3 ng of amplified cDNA were obtained.

Library preparation for single-cell Quartz-Seq
We prepared a library for conventional RNA-seq (non-WTA) using a commercial kit (TruSeq RNA Sample Kit; Illumina Inc., San Diego, CA, USA) in accordance with the manufacturer's protocol, with the exception of the PCR enrichment, for which we used a different polymerase (HiFi DNA polymerase; Kapa Biosystems Inc., Woburn, MA, USA). To prepare the libraries for Quartz-Seq (with WTA) and Smart-Seq (with WTA), we prepared a DNA sequencing library using our optimized library preparation method, which we call ligation-based Illumina multiplex library preparation (LIMprep). For LIMprep, we used the same library preparation kit as before (Kapa Biosystems) and self-produced the TruSeq (Illumina) adaptors and PCR primers. For Quartz-Seq, 20 ng of amplified cDNA was diluted in 130 µl of Tris-EDTA (TE) buffer. The solutions were transferred into snap-cap microtubes (Covaris Inc., Woburn, MA, USA). The amplified cDNAs in the microtubes were fragmented using a focused ultrasonicator (model S220; Covaris). The ultrasonication process was configured as follows: duty factor 10%; peak incident power 175 W; cycles per burst 100; and time 600 seconds. The fragmented cDNA was purified into 10 µl of nuclease-free water using a concentrating column (DNA Clean and Concentrator-5; Zymo Research, Irvine, CA, USA). Subsequently, 40 µl of the reaction mix (1.25× End Repair Buffer and 1.25× End Repair Enzyme Mix; Kapa Biosystems) was added to 10 µl of the fragmented cDNA solution. The end-repair reaction was performed at 20°C for 30 minutes. The end-repaired DNA was purified into 12.5 µl of EB1/10 buffer (1 mmol/l Tris-HCl pH 8.0) using a concentrating column as before (DNA Clean and Concentrator-5; Zymo Research). Subsequently, 12.5 µl of the A-tailing mix (2× A-tailing Buffer and 2× A-tailing Enzyme) was added to 12.5 µl of the end-repaired DNA solution. The A-tailing reaction was performed at 30°C for 30 minutes. The A-tailed DNA was purified into 12.5 µl of EB1/10 buffer using a concentrating column as before (DNA Clean and Concentrator-5; Zymo Research). Subsequently, 12.5 µl of the adaptor ligation mix (2× ligation buffer, 2× DNA ligase, and 10 pmol of each self-produced adaptor) was added to 12.5 µl of the A-tailed DNA solution at 4°C. The adaptor ligation was performed at 20°C for 15 minutes. For the adaptor ligation, we used 10 pmol of the self-produced TruSeq adaptor per sample. Each self-produced TruSeq adaptor was prepared using the following HPLC-purified primers
(Hokkaido System Science Co., Ltd., Sapporo, Japan): TRSU, TRSI-2, TRSI-4, TRSI-5, TRSI-6, TRSI-7, and TRSI-12 (Table 1). Each primer was dissolved in the adaptor buffer (10 mmol/l Tris-HCl pH 7.8, 0.1 mmol/l EDTA pH 8.0, and 50 mmol/l NaCl) to a concentration of 100 µmol/l. Equal amounts of 100 µmol/l TRSU and 100 µmol/l of each TRSI primer were added to the PCR tubes. After mixing, these primers were incubated at 95°C for 2 minutes. The primer annealing was then performed (95°C for 2 minutes, followed by a temperature decrease of -0.5°C per cycle for 170 cycles). Subsequently, the reaction tubes were incubated at 4°C for 5 minutes. The resulting adaptors were diluted with adaptor buffer to a concentration of 10 µmol/l. We prepared 1 µl aliquots of each 10 µmol/l adaptor, which were stored at -80°C until use. The removal of the adaptor dimer was performed as follows: 25 µl of binding support buffer (1 mol/l NaCl, 20 mmol/l MgCl2, and 20 mmol/l Tris-HCl pH 7.8) and 60 µl of bead solution (Agencourt AMPure XP; Beckman Coulter) were added to 25 µl of the adaptor-ligated DNA solution. After 15 minutes of incubation at 25°C, the beads were separated using a magnetic stand for at least 10 minutes. The beads were then washed twice with 80% ethanol for 1 minute. The adaptor-ligated DNA was then eluted.

GeneChip
The cDNA was synthesized from 0.25 µg of total RNA using random six-mers (Promega) and reverse transcriptase (SuperScript II; Invitrogen) in accordance with the Illumina standard protocol. The cDNA synthesis, cRNA labeling reactions, and hybridization to the high-density oligonucleotide arrays for Mus musculus (Mouse Genome 430 Array; Affymetrix Inc., Santa Clara, CA, USA) were performed in accordance with the manufacturer's instructions (Expression Analysis Technical Manual; Affymetrix). For single-cell Quartz-Chip, 10 ng of amplified cDNA was used in the cRNA labeling reactions. The expression values of the Kurimoto et al. and the Quartz-Chip methods were quantified using the RMA method. All of the data were normalized using the quantile normalization method to compare the expression values between the different microarrays [26].

Bioinformatics analysis
All raw sequencing reads were trimmed using Trimmomatic software to remove the sequencing and WTA primers. All of the trimmed sequence reads were mapped to the mouse reference genome (mm9) using TopHat (version 2.0.3 [27]) with the default parameters. FPKMs were calculated using Cufflinks (version 2.0.1 [28]) with a transcriptome reference (Ensembl Mouse Transcripts). The sample clustering of the single-cell RNA-seq data was performed using R software and the pvclust package [29] with 1,000 bootstrap resamplings and the Ward distance function. All of the data were visualized using R, the ggplot2 package, and the cummeRbund software. To identify the significant differential expression between the two differentiated cell states from the single-cell RNA-seq data, we performed the Wilcoxon rank test, and calculated the mutual information between the gene-expression distributions of the ES cells and the PrE cells using an empirical Bayes estimator.
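The Wilcoxon rank test mentioned above, applied per gene between the two cell-state groups (for example, the 12 ES and 12 PrE cells), can be sketched as follows using SciPy's rank-sum implementation. The expression matrix here is synthetic.

```python
# A sketch of per-gene differential-expression testing between two groups of
# single cells (e.g., ES vs PrE) with the Wilcoxon rank-sum test, as named in
# the bioinformatics methods above. The expression values are synthetic.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
n_genes, n_es, n_pre = 1_000, 12, 12
es = rng.lognormal(2.0, 1.0, size=(n_genes, n_es))
pre = es.copy() * rng.lognormal(0.0, 0.2, size=(n_genes, n_pre))
pre[:50] *= 8.0   # make the first 50 genes PrE-upregulated

pvals = np.array([ranksums(es[g], pre[g]).pvalue for g in range(n_genes)])
print(f"genes with p < 0.01: {(pvals < 0.01).sum()}")
```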
Although our novel single-cell RNA-seq method is highly sensitive, cost-effective, and easy to perform, it is not completely without amplification bias. To identify the differentially expressed genes from the single-cell RNA-seq method with small sample datasets, we inferred the mutual information between the gene-expression distributions of the two types of cells using an empirical Bayes estimator: the mutual information MI(X, Y) for pairs of cell states X and Y, where X and Y may, for example, represent the expression levels of the two cell states. The MI can be considered the Kullback-Leibler distance from the joint probability density to the product of the marginal probability densities:

MI(X, Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\,p(y)}

The MI is always non-negative, symmetric, and equal to 0 only if X and Y are independent. The MI can be represented as a summation of entropies:

MI(X, Y) = H(X) + H(Y) - H(X, Y)

To infer the entropy from gene-expression data with a small sample size, we applied an empirical Bayes approach, namely the so-called James-Stein estimator [30]. First, the gene-expression data were discretized using the Freedman-Diaconis algorithm to determine the number of bins and the width of the histograms. We then estimated the K^2 cell frequencies of the K × K contingency table for each cell-state pair X and Y using the James-Stein estimator. Finally, we calculated H(X), H(Y), H(X, Y), and MI(X, Y). To define the reproducibility of the variation in the measured expression levels between the single cells analyzed using Quartz-Seq, we applied the F-test, with an FDR-based multiple-testing adjustment, to two independent Quartz-Seq datasets.
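A simplified sketch of the mutual-information estimator described above is given below: Freedman-Diaconis binning, shrinkage of the contingency-table frequencies toward the uniform distribution, and MI computed from the entropy decomposition. A fixed shrinkage weight is used here as a simplification of the James-Stein estimator, which chooses that weight from the data; the inputs are synthetic expression values.

```python
# A simplified sketch of the MI estimator: Freedman-Diaconis binning, shrinkage
# of the joint frequencies toward uniform (fixed weight lam, a simplification
# of the James-Stein estimator), and MI = H(X) + H(Y) - H(X, Y).
import numpy as np

def fd_bins(x: np.ndarray) -> np.ndarray:
    """Freedman-Diaconis bin edges for a 1D sample."""
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    width = 2.0 * iqr / len(x) ** (1.0 / 3.0)
    n_bins = max(1, int(np.ceil((x.max() - x.min()) / width)))
    return np.linspace(x.min(), x.max(), n_bins + 1)

def entropy(p: np.ndarray) -> float:
    """Shannon entropy in nats of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def mutual_information(x: np.ndarray, y: np.ndarray, lam: float = 0.1) -> float:
    joint, _, _ = np.histogram2d(x, y, bins=(fd_bins(x), fd_bins(y)))
    freq = joint / joint.sum()
    freq = lam / freq.size + (1.0 - lam) * freq   # shrink toward uniform
    px, py = freq.sum(axis=1), freq.sum(axis=0)   # marginals
    return entropy(px) + entropy(py) - entropy(freq.ravel())

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = 0.8 * x + rng.normal(scale=0.6, size=500)     # dependent variable
print(f"MI(x, y) = {mutual_information(x, y):.3f} nats")
print(f"MI(x, shuffled y) = {mutual_information(x, rng.permutation(y)):.3f} nats")
```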
To detect the variability of the reverse transcription of each gene, we prepared an 'averaged' single-cell pooled sample. For the pooled sample, 200 single cells in the G1 phase were collected into 200 µl of lysis buffer. After mixing, 1 µl of the pooled lysis solution was dispensed into each well of a 96-well PCR plate. Subsequently, 2 µl of RT buffer (1.5× VILO Reaction Buffer (contains random primers) and 1.5× SuperScript Enzyme Mix; both Invitrogen) was added to each well. These plates were incubated as follows: 25°C for 10 minutes, 42°C for 60 minutes, and 85°C for 10 minutes. Then, 15 µl of the qPCR dilution solution (0.0001% NP40) was added and mixed well. Mouse genomic DNA was used to normalize the qPCR quantification. To each well of a 384-well qPCR plate, 7 µl of the qPCR solution (1.4× QuantiTect SYBR Green PCR Master Mix (Qiagen), 5 pmol forward primer, and 5 pmol reverse primer) and 3 µl of the diluted solution were added. The qPCR plate was incubated at 95°C for 15 minutes. Subsequently, qPCR was performed for 45 cycles, each consisting of 95°C for 15 seconds and 60°C for 1 minute. The data were collected at 60°C. For the primer sets for each gene, see Additional file 8, Table S3.

We also thank Hitoshi Niwa (Laboratory for Pluripotent Stem Cell Studies at RIKEN CDB) for providing the ES and PrE cells. This work was supported by the Program for Innovative Cell Biology by Innovative Technology from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan, the Leading Project for the Realization of Regenerative Medicine, and the Director's Fund 2011 of RIKEN CDB. The Special Postdoctoral Researchers Program from RIKEN also supported this work. Some of the calculations were performed using the RIKEN Integrated Cluster of Clusters (RICC) and the supercomputer system at the National Institute of Genetics (NIG), Research Organization of Information and Systems (ROIS).
Operation of the Egyptian Power Grid with Maximum Penetration Level of Renewable Energies Using Corona Virus Optimization Algorithm

Countries around the world are looking forward to fully sustainable energy by the middle of the century to meet the Paris climate agreement goals. This paper presents a novel algorithm to optimally operate the Egyptian grid with maximum renewable power generation, minimum voltage deviation and minimum power losses. The optimal operation is performed using the Corona Virus Optimization (CVO) algorithm. The proposed CVO is compared to the Teaching and Learning-Based Optimization (TLBO) algorithm in terms of voltage deviation, power losses and share of renewable energies. The real demand, solar irradiance and wind speed on typical winter and summer days are considered. The 2020 Egyptian grid model is developed, simulated, and optimized using the DIgSILENT software application. The results have proved the effectiveness of the proposed CVO, compared to the TLBO, in operating the grid with the highest possible share of renewables. The paper is a step forward towards achieving the Egyptian government targets of 20% and 42% penetration levels of renewable energies by 2022 and 2035, respectively.

Introduction
The world is moving towards achieving the sustainable development goals (SDGs) [1]. One of the most famous SDGs is SDG 7, which targets access to clean energy. The whole world is looking forward to fully sustainable energy to meet the 2016 Paris agreement goals for climate change [2]. From this point, many countries have put plans in place to reach 100% sustainable energy, and there are already promising and significant examples. Iceland has 100% renewable power generation, Norway has achieved more than 98%, and Costa Rica now gets more than 95% of its electricity from renewable resources [3]. In Africa, there are two promising situations: Kenya has achieved a 70% penetration level of renewable energy as its primary source of electricity production, and Egypt has a region with installed renewable energy generators, mainly hydro and photovoltaics [4]. Egypt is also aiming to achieve 42% renewable power generation by 2035 [5]. Many researchers have pursued the idea of 100% renewable energy power grids. In [6], the researchers presented techniques for secondary voltage control of a power grid with 100% renewable energies. In [7], the authors presented a study to convert the Japanese power grid to a 100% sustainable power system. In [8], the authors studied the setting of standard parameters of a power system with a high share of photovoltaics. In [9], the authors presented how to convert a conventional power system to work with 100% sustainable energy from the policy, technical and institutional perspectives. In [10], the paper illustrates how to convert Macedonia to a 100% sustainable energy country by 2050. In [11], the paper presents techniques to optimally operate a power system with a high share of renewables while reducing the running cost of the grid generators. Sweden has set a plan to reach 100% clean electricity by 2040 [12]. In [13], the research presented a techno-economically optimized energy system for Portugal to achieve 100% renewable energy in 2050 through hydropower stations. In [14,15], the authors studied the repowering of wind turbines and how this could improve the penetration level of renewable energies in Spain. In [16], the authors studied the role of communication and IT in achieving the sustainable development goals, including SDG 7. Many power system optimization techniques have been applied to perform many tasks.
In [17], an optimization process is applied to operate a power system including conventional and renewable generators with minimum cost, which was achieved through maximum operation of wind systems. In [4], the authors applied optimal power flow to maximize the renewable power generation in Egypt while minimizing the total power losses. Several optimization techniques have been used to improve the performance of various systems. Genetic Algorithm (GA) is presented in [18], and Particle Swarm Optimization (PSO) is employed in [19]. Rao in [20] presented Teaching Learning-Based Optimization (TLBO) for the first time in 2011. In [4], TLBO is used to operate the Egyptian power system with maximum sharing of renewable energies and to minimize the total power losses. Since 2020, the world has been living through the COVID-19 pandemic. Translating the behavior of the Corona virus into an optimization algorithm may be useful for various systems. In [21], Martínez-Álvarez et al. presented the Corona Virus Optimization (CVO) algorithm for the first time in November 2020.

The main contributions of this paper are: i. Maximization of renewable power generation in the Egyptian power system while also achieving the lowest possible total power losses and minimum voltage deviation. ii. The application of the new CVO optimization technique to the Egyptian power system. The Egyptian power system performance is compared in the following three cases: (i) with CVO optimal power flow, (ii) with TLBO optimal power flow, and (iii) without optimization. iii. The research also focuses on operating the Upper-Egypt region with 100% renewable energy for most of the daytime hours, as the region currently includes three renewable energy power stations: a large photovoltaic park and two hydro power stations.

The rest of the paper is partitioned into sections. Section 2 illustrates the developed model of the power system in Egypt. Section 3 illustrates the renewable energy technologies integrated into the Egyptian power system. Section 4 illustrates system optimization using the CVO technique. Section 5 presents the results of the optimization applied to the simulated grid. Section 6 discusses the results and the applicability of the research, while Section 7 concludes the highlights of this research.

Egyptian Power System
The 2016 Egyptian grid is modelled in [22] using the DIgSILENT PowerFactory software application. The 2016 model includes all the power stations, transmission lines, transformers, reactors, and loads at that time. In [4], the model was updated to simulate the 2020 power system, including new power stations which are now in service: Benban, one of the world's largest photovoltaic parks (1.8 GW as planned), and the expanded Gabalzeet wind farm (0.54 GW), in addition to the three 4.8 GW combined cycle power stations located at Burullus, Beni Suef and New Capital, as shown in Figure 1. The mentioned large power plants in the updated model and their associated expanded transmission have led the system to be partitioned geographically into six regions, which are shown in Figure 1.

Figure 1. The Egyptian grid.

The Upper-Egypt region has three power plants, all from sustainable resources, namely the High Dam power plant (2.1 GW), the Aswan reservoir power plant (0.55 GW) and the Benban photovoltaic park (1.8 GW as planned). The Upper-Egypt region is connected to the Middle-Egypt and Canal regions. The Canal region includes wind farms and fossil fuel power stations. In this work, optimal operation is applied to accomplish the maximum penetration level of renewable energies, minimum total power loss and minimum voltage deviation in the Egyptian power system. The paper also studies how many hours the Upper-Egypt region can operate with 100% sustainable energy without importing electricity generated from fossil fuels through the Middle-Egypt region. The work is performed in the DIgSILENT PowerFactory software application due to its ability to perform different studies and apply optimization and control to large power grids. The following functions can be applied by DIgSILENT [22]:
1. Power flow calculations
2. Optimal power flow calculations
3. Short circuit analysis
4. Transient analysis

Renewable Energies in Egypt
In Egypt, there are three renewable energy technologies now applied: photovoltaics, hydro power stations and wind farms.

Photovoltaics
Egypt is famous for high solar radiation over the whole year. The average daily radiation varies between 5 and 8 kWh/m² [23]. Aswan has an average solar intensity of 0.367 kW/m² [24]. It was decided in September 2014 to start the installation of a large photovoltaic park in Aswan as part of the Egyptian sustainable development strategy 2035 [25]. The Benban photovoltaic power station is planned to have an installed capacity of 1800 MW as one of the world's largest photovoltaic parks without storage. The power station is sited 40 km into the western desert, northwest of Aswan. The park is connected to the grid through four substations. Three substations are linked to the 220 kV grid, while the fourth is connected to the 500 kV grid through a 220/500 kV step-up transformer [20]. The main apparatuses of a PV system are the PV modules and two converters, DC/DC and DC/AC, with a transformer, as shown in Figure 2. Benban is simulated in DIgSILENT PowerFactory in the form of a PV static generator [23,24]. The static generator has three types of control action, including the droop control manner. In this work, Benban is selected to work in voltage control mode. According to the Egyptian grid code for hosting large solar power stations, the reactive power range of the Benban park varies between −0.33 p.u. and +0.33 p.u. at rated active power, as shown in Figure 3 [4,26,27] (reprinted with permission from ref. [4]; Copyright 2021 IEEE).

Hydro Power Stations
In Egypt, there are two old famous dams: the Aswan dam and the High Dam. The Aswan dam was initiated in 1899 and now has a total installed capacity of 0.55 GW. The High Dam was initiated in 1960 with 12 units; each unit has an installed capacity of 175 MW, giving a total installed capacity of 2.1 GW.

Wind Farms
Egypt has a high wind energy potential along the Red Sea region. The first wind farm, the Zaafarana power station, was established in 2001. The capacity of the Zaafarana farm is 0.745 GW. Zaafarana is sited 120 km south of Suez. Along the Suez Gulf, in the Gabalzeet region, there is a big potential of wind energy. The Gabalzeet wind power station project is implemented in three stages [28]. Stage one is the implementation of a 240 MW wind farm, stage two is the implementation of a 220 MW wind farm, and stage three is the implementation of an additional 120 MW farm. In 2016, the Egyptian power system was simulated including Gabalzeet with a capacity of 240 MW, which has now been elevated to 580 MW. In this research, the wind farms are represented as type 3 systems with a Doubly Fed Induction Generator (DFIG). Figure 4 shows a simplified form of a type 3 wind system, defined as a generator bus [29].

Optimization Problem Definition
One of the six regions is the Upper-Egypt region, which has three power stations based on sustainable energy technologies. Thus, the region has the chance to get electric energy generated from sustainable sources during most of the hours of the day. The Upper-Egypt region has interconnections with the Canal and Middle-Egypt regions. Carbon dioxide emission reduction can be achieved in this region by importing power from the wind farms in the Canal region and reducing consumption from the Middle-Egypt region.
The optimization of the Egyptian power system operation problem can be defined as follows.

Objective function (F): maximization of the penetration level of renewable energies (F1), maximum reduction in the total power losses (F2) and minimization of the voltage deviation (F3). The multi-objective function is created in a way where the weights of F1, F2 and F3 are simply selected to be equal.

Maximization of the share of sustainable energy (F1): the objective is to maximize the share of sustainable energy by reducing the curtailed power of each sustainable energy power plant/park/farm.

Minimization of the total power losses (F2):

P_Loss = Σ (over the N_L lines) G_ij [ V_i² + V_j² − 2 V_i V_j cos(θ_ij) ]

where N_L is the number of transmission lines/cables in the power system, G_ij is the conductance of the line connecting buses i and j, V_i is the voltage at the ith bus, V_j is the voltage at the jth bus, P_Loss is the total active power loss, and θ_ij is the phase angle of the voltage between buses i and j.

Minimization of the voltage deviation (F3): the sum of the absolute deviations of the bus voltages from their reference values.

Variables: the voltage of each bus bar, the tap settings of the transformers, and the active and reactive powers produced by each generator.

Constraints:
- Equality constraints (load flow equations):
P_Gi − P_Di = V_i Σ (over the N_B buses j) V_j [ G_ij cos(δ_i − δ_j) + B_ij sin(δ_i − δ_j) ]
Q_Gi − Q_Di = V_i Σ (over the N_B buses j) V_j [ G_ij sin(δ_i − δ_j) − B_ij cos(δ_i − δ_j) ]
- Inequality constraints (limits):
- Generation limits of each power station
- Bus voltage magnitude limits
- Maximum loading of cables and transmission lines
- Tap-changing limits of each transformer
- Capacitor limits

where N_B is the number of bus bars in the power system, δ_i is the voltage angle of the ith bus, B_ij is the susceptance between buses i and j, T_k is the transformer tap changer range, Q_ci is the reactive power of the capacitor, P_Gi and Q_Gi are the active and reactive powers of the generator, respectively, and P_Di and Q_Di are the demanded active and reactive powers, respectively.
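To illustrate how the equally weighted multi-objective fitness could be evaluated for a candidate operating point, the following Python sketch combines the three objectives above. The data layout and function names (evaluate_fitness, the line tuples, and the sign convention) are illustrative assumptions, not the authors' DIgSILENT/DPL implementation.

```python
import numpy as np

def power_losses(lines, V, theta):
    """Total active loss: sum over lines of G_ij (V_i^2 + V_j^2 - 2 V_i V_j cos(theta_i - theta_j))."""
    loss = 0.0
    for i, j, g in lines:                      # each entry: (bus i, bus j, conductance G_ij)
        loss += g * (V[i] ** 2 + V[j] ** 2 - 2.0 * V[i] * V[j] * np.cos(theta[i] - theta[j]))
    return loss

def voltage_deviation(V, v_ref=1.0):
    """Sum of absolute bus-voltage deviations from the reference value (p.u.)."""
    return float(np.sum(np.abs(np.asarray(V) - v_ref)))

def renewable_share(p_pv, p_hydro, p_wind, p_total):
    """Penetration level of renewables in the whole grid, cf. equation (13) below."""
    return (p_pv + p_hydro + p_wind) / p_total

def evaluate_fitness(state, w1=1.0, w2=1.0, w3=1.0):
    """Equally weighted objectives: maximize the renewable share (F1) and minimize
    the losses (F2) and voltage deviation (F3); F1 enters negatively so that a
    lower fitness value is better for a minimizing optimizer."""
    f1 = renewable_share(state["p_pv"], state["p_hydro"], state["p_wind"], state["p_total"])
    f2 = power_losses(state["lines"], state["V"], state["theta"])
    f3 = voltage_deviation(state["V"])
    return -w1 * f1 + w2 * f2 + w3 * f3

# Toy two-bus example: one line with conductance 0.02 p.u.
state = {
    "p_pv": 300.0, "p_hydro": 250.0, "p_wind": 100.0, "p_total": 2000.0,  # MW
    "lines": [(0, 1, 0.02)],
    "V": np.array([1.01, 0.98]),               # bus voltages, p.u.
    "theta": np.array([0.0, -0.05]),           # bus voltage angles, rad
}
print(evaluate_fitness(state))
```

In a full implementation, the candidate operating point would additionally be rejected or penalized whenever the load flow solution violates any of the equality or inequality constraints listed above.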
CVO Design
Since December 2019, the world has been suffering from severe acute respiratory syndrome coronavirus 2, the respiratory virus widely known as COVID-19. The total number of people infected with the virus worldwide exceeded 83,000,000 in 2020, including 1,800,000 deaths and 60,000,000 recoveries [30]. Given that bio-inspired optimization techniques are widely used and have shown good performance in machine learning optimization across different applications, the quick propagation of COVID-19 worldwide has inspired researchers to use CVO as a novel metaheuristic optimization algorithm. The CVO has the following advantages among state-of-the-art metaheuristic techniques: i. CVO parameters are set with actual values for rates and probabilities, sparing the user from an additional study on the appropriate setup configuration. ii. In CVO, the solution exploration can stop after several iterations, without an obligation to be configured. iii. The high rate of COVID-19 spreading is useful for searching promising regions more carefully, whereas the use of parallel draining confirms that all regions of the search space are consistently explored.

The CVO algorithm flowchart is shown in Figure 5 and follows the steps below.

Step 1: Initial population generation (patient zero), the first patient ever to catch COVID-19. If there is no previously reached optimal solution, it is set randomly.

Step 2: The propagation of the disease is applied based on the following cases. (1) Case 1: each patient has a probability of dying (P_DIE) according to the death rate of COVID-19. In this case, patients cannot infect other individuals. (2) Case 2: a patient who is still alive has a probability of infecting new individuals according to a probability P_SUPERSPREADER, which is set according to two possibilities: (a) Ordinary: the patient will infect new individuals according to a normal spreading rate (R_Spreading). (b) Super spreader: the patient will infect new individuals according to a super-spreading rate (R_Superspreading). (3) Case 3: patients, whether considered ordinary or super spreaders, may travel. A travelling patient will explore different solutions in the search space. The probability of the patient travelling is P_Travel, and the rate of infecting new individuals in the travelling scenario is R_Travel.

Step 3: Population updating; the following three populations are updated. (1) Death: any individual who has died is recorded in the current population and will not be used any further. (2) Recovered: after each iteration, the recovered individuals are recorded in the recovered population. Any recovered individual has a probability of being re-infected (P_Reinfected) at any coming iteration. Properly isolated individuals are also added to the recovered population with a probability (P_Isolated). (3) New infected population, which includes all the infected individuals of each iteration. It is possible that new infected individuals are repeated in more than one iteration; the recommendation in this case is to remove the repeated new infections from the population before jumping to the next iteration.

Figure 5. CVO flowchart.
Step 4: Stop criteria; the process can be ended at any time without the need to control any parameter, because the death- and recovery-based population rates become constant as time passes and the new infected population can no longer infect new individuals. It is also possible that at certain iterations the number of infected individuals increases, whereas at other iterations the number of infected individuals could be small because of the large size of the death and recovered populations. A preset stop based on the number of iterations is also available in the form of the pandemic duration. The social distance can also stop the optimization process.

To avoid premature convergence to local optima, the best set of parameters for the optimization problem is selected by adapting the number of population members, the social distance and the pandemic duration of the proposed CVO in DIgSILENT. To reduce the possibility of falling into local minima, the optimization process is repeated many times. Since appearing for the first time, CVO has been applied in applications such as transportation networks [32] and the training of neural networks [31]. CVO is created in the DIgSILENT software application using the DPL language (the software's programming language) as a function and is then employed to perform the optimal power flow calculations.
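As a hedged illustration of Steps 1-4, the sketch below implements a generic CVO loop in Python for minimizing an arbitrary fitness function. The rate and probability defaults, the population cap, and the helper names are assumptions for demonstration and do not reproduce the parameter set used in DIgSILENT.

```python
import random

def cvo_minimize(fitness, new_solution, mutate, iterations=50, pop_cap=50,
                 p_die=0.05, p_superspreader=0.1, p_travel=0.1, p_reinfect=0.02,
                 r_spread=2, r_superspread=6):
    """Generic Corona Virus Optimization sketch: candidate solutions are 'patients';
    infection replicates and perturbs promising solutions, while death and recovery
    prune the search. All rates here are illustrative defaults."""
    infected = [new_solution()]                 # Step 1: patient zero
    recovered = []
    best = min(infected, key=fitness)

    for _ in range(iterations):                 # pandemic duration = preset stop criterion
        new_infected = []
        for patient in infected:
            if random.random() < p_die:         # Case 1: patient dies and infects no one
                continue
            # Case 2: ordinary spreaders vs. super-spreaders infect at different rates
            n = r_superspread if random.random() < p_superspreader else r_spread
            for _ in range(n):
                travelling = random.random() < p_travel  # Case 3: travel = larger jumps
                new_infected.append(mutate(patient, large_step=travelling))
            recovered.append(patient)           # Step 3: survivor joins the recovered pool
        # Recovered individuals can be re-infected with a small probability
        new_infected += [s for s in recovered if random.random() < p_reinfect]
        if not new_infected:                    # Step 4: spreading has died out
            break
        # Deduplicate repeated solutions and keep the population tractable
        unique = {tuple(s): s for s in new_infected}
        infected = sorted(unique.values(), key=fitness)[:pop_cap]
        if fitness(infected[0]) < fitness(best):
            best = infected[0]
    return best

# Toy usage: minimize the 3-variable sphere function.
dim = 3
best = cvo_minimize(
    fitness=lambda x: sum(v * v for v in x),
    new_solution=lambda: [random.uniform(-5.0, 5.0) for _ in range(dim)],
    mutate=lambda x, large_step: [v + random.gauss(0, 1.0 if large_step else 0.2) for v in x],
)
print(best)
```

In the grid-operation setting, the fitness would be the multi-objective function F sketched earlier, and mutate would perturb generator set-points, transformer tap positions, and bus voltage targets within their limits.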
Simulation Results
The objective of the presented simulation studies is to investigate the possibility of maximizing the share of renewable energies while minimizing both the total power losses and the voltage deviation. The studies focus on two typical days representing a daily summer demand and a daily winter demand. Tables 1 and 2 show the daily demand, solar insolation and wind speed on a summer day and a winter day, respectively. The penetration levels of sustainable energy in the Upper-Egypt region are presented.

Case Study 1: Operation of the Egyptian grid on a summer day. Figure 6 shows the share of sustainable energy in the Upper-Egypt region and in the whole grid, as defined in (12) and (13), respectively. Figure 7 shows the total power losses and the voltage deviation index [6]. Figure 8 shows the penetration levels of PV, wind and hydro, as defined in (14), (15) and (16), respectively. Comparisons are shown for the following three cases: (i) with CVO, (ii) with TLBO and (iii) without optimization.

Penetration level of renewables in Upper Egypt = (P_PV + P_Hydro + P_Wind,imported) / (total power feeding the region) (12)
Penetration level of renewables in the whole grid = (P_PV + P_Hydro + P_Wind) / (total power generated in the grid) (13)
Penetration level of photovoltaics in the whole grid = P_PV / (total power generated in the grid) (14)
Penetration level of wind in the whole grid = P_Wind / (total power generated in the grid) (15)
Penetration level of hydro in the whole grid = P_Hydro / (total power generated in the grid) (16)

where P_PV is the power injected from the Benban PV power station, P_Hydro is the power injected from the High Dam and Aswan Reservoir power stations, P_Wind is the wind power injected in the Canal region, and P_Wind,imported is the wind power exported to the Upper-Egypt region from the Canal region, assuming the 220 kV connection exists.

Figure 6. Penetration level of renewable energies in the Upper-Egypt region and the whole Egyptian grid on a summer day.

The results show that by applying the CVO to optimally operate the power system, we can achieve 100% renewables in the Upper-Egypt region for more than half of the summer day, while the time reduces to 11 h with the TLBO optimization. In the case of the system without power system optimization, 100% sustainable energy generation is achieved for only 4 h in the region. The share of sustainable energy in Egypt using the CVO optimization reached 15% or more for twenty hours during the summer day, while the TLBO achieved the same target for only 12 h a day. The power system without optimization reached a 15% share of renewable energies for only two hours during the day. The total power losses of the system using the CVO are reduced by more than 10% compared to the case with TLBO optimization. In the case of no optimization, the system has 30% more losses than with the CVO and 20% more than with the TLBO technique. The use of the CVO has also resulted in the minimum voltage deviation index compared to the TLBO and the case without optimization. The results show that the penetration level of PV can reach up to 12% during the peak sun hours using the CVO, while it reached only 11% using the TLBO. The wind has penetration levels of up to 7% using the CVO and 5% using the TLBO. The hydro has penetration levels of up to 10% and 8% using the CVO and TLBO, respectively. All the results are based on the existing energy mix.

Case Study 2: Operation of the power system on a winter day. Figure 9 shows the share of the sustainable energies in the Upper-Egypt region and in the entire power system. Figure 10 shows the total power losses and the voltage deviation index. Figure 11 shows the penetration levels of photovoltaics, wind and hydro without and with applying the CVO and the TLBO power system optimization techniques.

Figure 9. Penetration level of renewable energies in the Upper-Egypt region and the whole grid on a winter day.

The results show that by applying the CVO and the TLBO to optimally operate the power system, we achieved fully renewable energy generation in the Upper-Egypt region for the whole day in winter. In the operation without grid optimization, fully renewable power generation is achieved for only 10 h in the Upper-Egypt region. The share of sustainable energies in all regions of Egypt using the CVO optimization reached 20% or more for fifteen hours during the winter day, while the TLBO achieved the same target for only 6 h on that day. The power system without optimization did not reach a 20% penetration level of renewables during the day. The total power losses of the system using the CVO are reduced by more than 10% compared to those of the TLBO optimization. In the case without optimization, the losses are 20% more than in the case with the CVO and 10% more than in the case with the TLBO optimization technique. The CVO has also resulted in the minimum voltage deviation index compared to the other two cases on the winter day. The results also show that the penetration level of PV can reach up to 17% during peak sun hours using the CVO, while it reached only 15% using the TLBO method. The wind has penetration levels of up to 10% using the CVO and TLBO. Additionally, the hydro has penetration levels of up to 10% using the CVO or TLBO. Figure 12 shows the convergence characteristics of the TLBO and CVO, confirming that the CVO reaches the optimal fitness function with a smaller number of iterations than the TLBO. Figures 13 and 14 show the CPU time of the TLBO and CVO at each hour of a summer day and a winter day, respectively. The results show that the CVO takes less CPU time, by an average of 10%, than the TLBO in most of the hours of the day, in both summer and winter.

Figure 11. PV, wind and hydro penetration level in the whole grid on a winter day.

Discussions
The research has presented a study for the optimal operation of the Egyptian power system to achieve the maximum penetration level of renewable energies, minimum power losses and minimum voltage deviation. The optimization is performed using the CVO and TLBO in the DIgSILENT software application together with the simulated model of the grid. The results show that the CVO has better performance than the TLBO in achieving the three objectives, as illustrated in Table 3. The convergence characteristic of the CVO is almost linear due to its faster response compared to that of the TLBO. The proposed CVO technique can be applied practically to the Egyptian power grid by calculating the optimal power that should be generated by each power plant. The application can be implemented in the national control center, which instructs the power plants to make the necessary changes in their output power.
In the future, the authors will study possible solutions to maximize the penetration level of renewable energies throughout the year to reach the targeted share of RE in the Egyptian grid as per the plan. In addition, a suggested extension of this research is to study the impact of renewable uncertainties [33,34] on the optimization results.

Conclusions
The paper has presented a novel technique to maximize the share of sustainable energies in the Egyptian power system while achieving minimum total line losses and minimum voltage deviation. The applied CVO-based grid optimization drives the system to resourcefully use the available sustainable energy technologies compared to the TLBO algorithm and the case without optimization. During the simulated summer day, the CVO optimization technique achieved fully sustainable power generation in the Upper-Egypt region for 13 h a day, compared to 11 h if the TLBO is applied or 4 h with no optimization. During the representative winter day, the use of the CVO or TLBO can lead to 100% renewable energy in the Upper-Egypt region 24 h a day, while in the case without optimization 100% renewable energy is possible for only 10 h a day. The penetration level of sustainable energy in the whole grid using the CVO optimal power flow reached more than 20% for 15 h during the winter day, compared to 6 h if the TLBO is applied. During the summer day, the share of sustainable energies in the grid reached 15% or more for twenty hours a day, compared to twelve hours using the TLBO method. With the CVO optimization, the total power losses of the grid decrease by more than 10% compared to the TLBO method and by 20-30% compared to the case without optimization on the representative summer and winter days. The results prove that the CVO-based multi-objective power flow resulted in minimum voltage deviation compared to that of the TLBO and the case without optimization. The results also show that the penetration level of PV can reach up to 17% in winter at peak sun hours and 12% in summer if the CVO is applied. The results also show that the CVO requires less CPU time and fewer iterations to reach the optimal solution than the TLBO.
TNF-α and IFN-γ prestimulation enhances the therapeutic efficacy of human amniotic epithelial stem cells in chemotherapy-induced ovarian dysfunction

Background: Exposure to a harsh ovarian microenvironment induced by chemotherapeutic agents seriously affects the remodeling of ovarian function and follicular development, leading to premature ovarian failure or insufficiency (POF/POI). For decades, the effectiveness of stem cell therapies in POI animal models has been intensively studied; however, strategies to enhance the therapeutic effect of stem cells remain challenging.

Methods: In this study, we first observed the pathological changes of the ovaries at different time points during chemotherapy, including the number of follicles, granulosa cell proliferation, oxidative stress damage, ovarian fibrosis, and inflammatory reaction. Moreover, we investigated whether activated hAECs stimulated by the proinflammatory cytokines tumor necrosis factor-α (TNF-α) and interferon-γ (IFN-γ) were more effective than native hAECs in repairing ovarian injury induced by chemotherapy.

Results: The inhibitory effect of chemotherapy drugs on ovarian granulosa cells (GCs) in growing follicles mainly occurred on day 3 after chemotherapy in a mouse model. Then, continued ovarian injury, including oxidative damage and cell death cascades, resulted in the depletion of follicular reserves and inflammation-related ovarian fibrosis. A cytokine array demonstrated that activated hAECs secreted high levels of paracrine cytokines related to extracellular matrix (ECM) remodeling, angiogenesis, and immunomodulation. An in vivo study showed that the engraftment rate of activated hAECs in damaged ovaries was higher than that of native hAECs. Furthermore, activated hAECs in damaged ovaries had significantly upregulated expression of the antioxidant proteins thioredoxin 1/2. In addition, activated hAECs increased the numbers of mature follicles and ameliorated the ovarian microenvironment by promoting angiogenesis and reducing ovarian fibrosis.

Conclusions: These results indicated that secondary ovarian damage induced by chemotherapy, including oxidative stress damage, chronic inflammatory response, and ovarian tissue fibrosis, should be attended to. Prestimulation with the proinflammatory factors TNF-α and IFN-γ could enhance the therapeutic efficacy of hAECs against chemotherapy-induced ovarian dysfunction, which may become a new feasible strategy to improve the therapeutic potential of hAECs in regenerative medicine.

Supplementary Information: The online version contains supplementary material available at 10.1186/s41232-023-00309-y.

Background
Chemotherapeutic agents greatly improve the efficacy of cancer treatment and prolong the survival of cancer patients, but inevitably lead to reproductive toxicity in young female cancer survivors [1]. Previous histological studies on the human ovary have shown that chemotherapy could cause the depletion of follicular reserves and ovarian tissue fibrosis and ultimately lead to premature ovarian failure or insufficiency (POF/POI) [2]. Although some efforts have been made to determine the cause of chemotherapy-induced ovarian dysfunction, the underlying molecular mechanism remains unclear.
In animal studies, chemotherapy drugs exert negative effects on the ovary through distinct mechanisms, and DNA damage-induced cell apoptosis is considered to be the principal mechanism of the irreversible decline in the ovarian reserve [3]. In addition to direct DNA damage, chemotherapy-induced oxidative stress is accompanied by increased production of reactive oxygen species (ROS) [4]. A clinical study reported that the level of malondialdehyde (MDA), which is a marker of oxidative damage, was significantly increased in patients receiving high-dose chemotherapy [5]. The increase in serum oxidative stress may be a promising indicator of the risk of primary ovarian insufficiency [6]. Oxidative stress-related mitochondrial dysfunction leads to apoptosis in ovarian cells, resulting in declines in ovarian function and in the number and quality of oocytes [7]. In addition, oxidative stress could upregulate the expression of proinflammatory cytokines by activating a variety of transcription factors. Although inflammatory factors are indispensable in the reproductive process, excessive inflammatory reactions cause abnormal follicular development and ovarian fibrosis [8]. Women with low or high levels of tumor necrosis factor-alpha receptor 2 (TNFR2) were much more likely to be at risk of early menopause than those with medium levels of TNFR2 [9], suggesting that inflammation may be an important cause of ovarian dysfunction. Our previous research showed that chemotherapy drugs caused granulosa cell (GC) apoptosis and inflammatory reactions in the ovaries of mice [10]. However, the relationship between oxidative stress and chronic inflammation in the pathological process of chemotherapy-induced POF/POI has not been completely elucidated.

Stem cell therapy brings new hope for the treatment of diseases and the recovery of tissue function. Numerous studies have shown the repair potential of stem cells, which can differentiate into desired cell types, activate the endogenous response, promote angiogenesis, and improve the tissue microenvironment [11]. Human amniotic epithelial stem cells (hAECs) derived from placentas have several unique advantages over other stem cells, including no rejection, low proliferative potential due to a lack of telomerase expression, and the avoidance of ethical concerns [12]. Our previous studies have indicated that the transplantation of hAECs and hAEC derivatives (paracrine cytokines and exosomes) could effectively repair ovarian function and improve the fertility of mice in a chemotherapy-induced POF/POI model by homing to and differentiating into ovarian GCs, as well as by exerting proangiogenic and anti-inflammatory effects [10,13,14]. Although the application of stem cells has been shown to improve preclinical and clinical outcomes, there are still many challenges to overcome. The main limitation is the loss of cell viability and the reduction in repair ability after systemic or local transplantation [15]. Therefore, it is necessary to establish a new strategy to improve their therapeutic efficacy.
At present, researchers have proposed using prestimulation to strengthen the repair abilities of stem cells, in which stem cells are subjected to mild and transient stimulation before engraftment [16]. Modified models include genetic engineering, inducing differentiation into specific cell types, and activation with damage signal molecules such as proinflammatory cytokines [17]. In several prestimulation models, the effect of proinflammatory cytokines on stem cells has been highlighted in mesenchymal stem cells (MSCs), in which the expression of immunomodulatory proteins and the production of soluble factors were upregulated to improve the MSC-mediated repair potential [18]. A recent study showed that hAECs stimulated with the cytokines TNF-α and IFN-γ could alleviate dextran sulfate sodium (DSS)-induced colitis in mice through anti-inflammatory effects and regulation of the Th17/Treg balance [19]. However, it is not clear whether prestimulation with proinflammatory factors could enhance the therapeutic potential of hAECs to repair ovarian function.

The main purpose of this study was to examine the roles of oxidative stress and chronic inflammation in chemotherapy-induced ovarian dysfunction. Moreover, we further investigated the paracrine ability of hAECs stimulated by TNF-α and IFN-γ in vitro and evaluated the therapeutic effect of activated hAECs in a chemotherapy-induced ovarian dysfunction mouse model.

Isolation and culture of hAECs
Informed consent was obtained from healthy women who tested negative for HIV-I, hepatitis B, and hepatitis C prior to obtaining human placentas. Approval of the acquisition protocol was provided by the Institutional Ethics Committee of the International Peace Maternity and Child Health Hospital (IPMCH). The hAEC isolation method was described previously [10]. Isolated hAECs were seeded in 100 mm cell culture plates containing Dulbecco's modified Eagle's medium/nutrient mixture F-12 (DMEM/F12, Gibco, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (FBS, Gibco), 2 mM glutamine, penicillin (100 IU/mL; Gibco), streptomycin (100 μg/mL; Gibco), and epidermal growth factor (EGF, 10 ng/mL). Incubators were set at 37°C and contained 5% CO2. Cells were collected for subsequent experiments when they reached 80-90% confluence.

POF/POI model establishment
A total of 76 female C57BL/6 mice aged 7-8 weeks were obtained from the Shanghai Experimental Animal Center of the Chinese Academy of Sciences and reared at a room temperature of 25±2°C, a relative humidity of 55±5%, and a 12-h light/dark cycle. The mice were reared in an animal facility for 1 week before the experiment. The mice were randomly divided into a chemotherapy-treated group and a sham control group. Briefly, single doses of 30 mg/kg busulfan (Bu, Sigma) and 120 mg/kg cyclophosphamide (CTX, Sigma) were injected intraperitoneally into the chemotherapy-treated group (Cy, n=45) to establish the chemotherapy-induced POF/POI model, as described previously [10]. An equivalent volume of PBS was injected into mice in the sham control group (Sham, n=31). All procedures were approved by the Institutional Animal Care and Use Committee in accordance with Shanghai standards and the National Research Council Guide for the Care and Use of Laboratory Animals. Efforts were made to alleviate animal suffering and to use the minimum number of animals required for the study.
Histological analysis
Mice in the different groups were euthanized for further analysis. Ovaries were collected at different time points after chemotherapy for histological analysis. The tissues were immersed in Bouin's liquid (containing 5% acetic acid, 9% formaldehyde, and 0.9% picric acid) at room temperature. Then, the ovaries were dehydrated and embedded in paraffin. The morphological structure of the ovary was evaluated under a light microscope with hematoxylin and eosin (HE)-stained slides. Follicle stage classification was performed according to previously defined criteria [13]. In brief, blind follicle counts were conducted by two independent researchers who examined every fifth section of the entire ovary. A primordial follicle refers to a single fusiform oocyte surrounded by GCs. A primary follicle indicates the unit of an oocyte surrounded by at least three cubic-shaped GCs. A secondary follicle is characterized by an oocyte surrounded by at least two layers of GCs without a follicular cavity. Mature follicles (also called antral follicles) contain at least two layers of GCs with an evident follicular cavity.

Biochemical assays
The levels of oxidative stress in the ovaries and serum of mice in the different treatment groups were measured. Biochemical analysis kits (Beyotime Biotechnology, China) were used to measure MDA concentrations and antioxidant capacity according to the manufacturer's instructions.

Transmission electron microscopy (TEM)
Approximately 1 mm³ of fresh ovarian tissue was obtained and instantly fixed with 2.5% glutaraldehyde at room temperature. Ultrathin sections were prepared to observe mitochondrial ultrastructural morphology; the sections were placed on copper grids and stained with uranyl acetate and lead citrate for evaluation by TEM.

Sirius red staining
For Sirius red staining, after routine xylene dewaxing and graded ethanol hydration, paraffin ovarian sections were stained with picric acid-Sirius red (PSR) solution for 40 min at room temperature. The PSR solution was prepared with Sirius Red F3BA (Sigma-Aldrich) dissolved at 0.1% w/v in a saturated aqueous solution of picric acid (Sigma-Aldrich). The immersion was followed by incubation with 0.5% glacial acetic acid and incubation with 0.05 M hydrochloric acid, in addition to four washes. The sections were then rapidly dehydrated in 100% ethanol after excess acidified water was carefully removed from the sections. For each independent experiment, all PSR staining took the same amount of time to minimize variations in staining intensity.
Tube formation assay To study the proangiogenic effects of secretions from native and activated hAECs, conditioned medium from native hAECs (native-hAEC-CM) and conditioned medium from activated hAECs (activated-hAEC-CM) were collected.Human umbilical vein endothelial cells (hUVECs) were cultured in DMEM/F12 supplemented with 10% FBS.A total of 3×10 4 hUVECs were seeded in the bottom chamber of a Matrigel-coated 24-well plate.Then, the culture medium of hUVECs was replaced with native-hAEC-CM or activated-hAEC-CM for 4 h to allow tube-like structure formation.The tube-like structures were observed under a light microscope, and the number of tubes and nodes was quantified by ImageJ software. Tracing transplanted hAECs To observe the homing ability of grafted cells, native hAECs and activated hAECs were prelabeled with the fluorescent dye PKH26 (Sigma Aldrich, St. Louis, MO, USA) according to the manufacturer's instructions.PKH26-labeled native hAECs and activated hAECs were microinjected into the damaged ovary (2×10 4 cells, a volume of 20 μL) on day 7 after chemotherapy.At 1 week after transplantation, the ovaries were collected and frozen sections were prepared at a thickness of 7 μm.Then, the ovarian sections were stained with DAPI for 5 min at room temperature.The fluorescent signal of PKH26 in frozen ovarian sections was examined under an inverted fluorescence microscope. Cell transplantation Native hAECs and activated hAECs were microinjected into the injured ovaries of chemotherapy-treated mice (2×10 4 cells, a volume of 20 μl) on day 7 after chemotherapy.The mice in the chemotherapy control group were injected with an equivalent volume of PBS.The animals were sacrificed for subsequent experiments 1 and 4 weeks after cell transplantation. Measurement of ROS The level of ROS in the ovary was measured according to the manufacturer's instructions.Frozen ovarian sections were incubated with DCFH-DA for 10 min at 37 °C.After being washed, the fluorescence level was immediately examined under an inverted fluorescence microscope and photographed.The mean fluorescence intensity was analyzed using the ImageJ software. Immunofluorescence labeling To assess macrophage infiltration in damaged ovaries, dual staining with CD68 and CD163 was performed.Sections were incubated with primary mouse monoclonal anti-CD68 antibody (1:1000; Abcam) and rabbit monoclonal anti-CD163 antibody (1:1000; Abcam) overnight at 4 °C after being dewaxed.After being washed with PBS, the sections were incubated with the corresponding secondary antibodies conjugated with Alexa Fluor 488 and 594 (1:3000; Cell Signaling Technology, CST).The fluorescent signals were captured and photographed with a TCS SP5 confocal laser scanning microscope (Leica). 
Immunohistochemical (IHC) staining
The ovarian sections were immersed in boiling sodium citrate solution for antigen retrieval, following routine dewaxing and hydration. The rest of the procedure was performed according to the instructions of the IHC staining kit (Abcam). Briefly, the slides were soaked in hydrogen peroxide solution and blocked with diluted goat serum, followed by overnight incubation with primary antibodies against Ki-67 (1:500, Abcam), a DNA/RNA damage marker (1:1000, CST), and CD34 (1:1000, CST). The next day, the sections were washed and successively incubated with biotinylated anti-mouse/rabbit IgG, streptavidin peroxidase, and DAB chromogen solution. Some slides were counterstained with hematoxylin. The negative control samples were subjected to identical treatments, except for the primary antibodies, and exhibited no specific staining. The number of microvessels was calculated by counting the number of CD34-positive vessel-like structures in four randomly selected fields in each section with ImageJ software.

Statistical analysis
The data are presented as the mean±standard error of the mean (SEM). For statistical analysis, differences among the different treatment groups were analyzed with one-way ANOVA followed by the Bonferroni post hoc test. All statistical analyses were performed using GraphPad Prism software (San Diego, CA, USA).

Chemotherapy-induced growth inhibition in ovarian GCs occurred at an early stage in vivo
Our previous study demonstrated that a single injection of high-dose chemotherapy drugs could induce ovarian injury, ultimately leading to POF/POI in mice [10]; however, the underlying mechanisms are still unknown. In the current study, we collected ovarian tissues at different time points after chemotherapy (Cy), as shown in Fig. 1A. The results showed that the ovarian index (ovary weight/body weight) of the mice decreased significantly on days 3 and 7 after chemotherapy (Fig. 1B, P<0.05). Morphological analysis showed that the GC layer of antral follicles in the Cy-treated group was thinner than that in the sham group. Moreover, naked oocytes, as a typical feature, were present in antral follicles (abnormal follicles) on day 3 after chemotherapy (Fig. 1C). The numbers of primordial follicles and antral follicles decreased gradually after chemotherapy; however, the numbers of abnormal follicles and atretic follicles in damaged ovaries began to increase from day 3 after chemotherapy (Fig. 1D, P<0.05).

We further examined ovarian cell proliferation at different time points after chemotherapy by immunostaining for the proliferation marker Ki-67. The majority of Ki-67-positive cells were GCs in the antral follicles in the sham group; however, the proliferation of ovarian GCs was inhibited after chemotherapy (Fig. 1E). Western blotting analysis showed that the expression of a proliferation marker (proliferating cell nuclear antigen, PCNA) in damaged ovaries decreased significantly on day 3 after chemotherapy (Fig. 1F-G, P<0.05). A previous study reported that the mechanism of chemotherapy-induced loss of ovarian reserve involved accelerated activation of primordial follicles via the PI3K/Akt pathway [20]; therefore, we further examined the expression of pathway-related proteins in Cy-treated ovaries by western blotting. The results showed that the protein expression of pAkt/Akt in damaged ovaries was significantly reduced on days 1 and 3 after chemotherapy; however, there were no differences on days 7 and 14 after chemotherapy (Fig. 1H-I, P<0.05).
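As a brief aside on the group comparisons reported throughout these results, the following is a minimal Python sketch of a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons, mirroring the statistical analysis section above. The group values are toy placeholders, not the study's measurements, and the actual analysis was performed in GraphPad Prism.

```python
# Minimal sketch of the reported analysis (one-way ANOVA + Bonferroni post hoc)
# using SciPy and statsmodels; the three groups below are toy placeholder data.
from itertools import combinations
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

groups = {
    "sham": np.array([6.1, 5.8, 6.4, 6.0, 5.9]),
    "cy_day3": np.array([4.2, 4.5, 4.0, 4.3, 4.1]),
    "cy_day7": np.array([4.6, 4.8, 4.4, 4.7, 4.5]),
}

# Overall one-way ANOVA across the treatment groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Pairwise t-tests with Bonferroni correction as the post hoc step
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted P = {p:.4f}, significant = {sig}")
```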
Results
Chemotherapy-induced growth inhibition in ovarian GCs occurred in the early stage in vivo
Our previous study demonstrated that a single injection of high-dose chemotherapy drugs could induce ovarian injury, ultimately leading to POF/POI in mice [10]; however, the underlying mechanisms are still unknown. In the current study, we collected ovarian tissues at different time points after chemotherapy (Cy), as shown in Fig. 1A. The results showed that the ovarian index (ovary weight/body weight) of mice decreased significantly on days 3 and 7 after chemotherapy (Fig. 1B, P<0.05). Morphological analysis showed that the GC layer of antral follicles in the Cy-treated group was thinner than that in the sham group. Moreover, naked oocytes, a typical feature, were present in antral follicles (abnormal follicles) on day 3 after chemotherapy (Fig. 1C). The numbers of primordial follicles and antral follicles decreased gradually after chemotherapy, whereas the numbers of abnormal follicles and atretic follicles in damaged ovaries began to increase from day 3 after chemotherapy (Fig. 1D, P<0.05).

We further examined ovarian cell proliferation at different time points after chemotherapy by immunostaining for the proliferation marker Ki-67. The majority of Ki-67-positive cells were GCs in the antral follicles in the sham group; however, the proliferation of ovarian GCs was inhibited after chemotherapy (Fig. 1E). Western blotting analysis showed that the expression of the proliferation marker proliferating cell nuclear antigen (PCNA) in damaged ovaries decreased significantly on day 3 after chemotherapy (Fig. 1F, G, P<0.05). A previous study reported that the mechanism of chemotherapy-induced loss of ovarian reserve involves accelerated activation of primordial follicles via the PI3K/Akt pathway [20]; therefore, we further examined the expression of pathway-related proteins in Cy-treated ovaries by western blotting. The results showed that the protein expression of p-Akt/Akt in damaged ovaries was significantly reduced on days 1 and 3 after chemotherapy, with no differences on days 7 and 14 (Fig. 1H, I, P<0.05).

These results demonstrated that the inhibitory effect of chemotherapy drugs on ovarian GCs mainly occurred in the early stage. Although the proliferation of GCs and the activation of primordial follicles gradually recovered in the later stage of chemotherapy, the number of follicles in damaged ovaries continued to decrease.

Chemotherapy caused oxidative damage and mitochondrial dysfunction in the ovary
Chemotherapy induces DNA damage and oxidative stress by altering the redox balance [21]. To further elucidate the mechanism of the continued ovarian injury induced by chemotherapy, we measured the antioxidant capacity and the level of the oxidative marker MDA in the serum and ovaries of the different groups. Compared with that in the sham group, the serum antioxidant capacity decreased significantly on day 7, while the ovarian antioxidant capacity decreased dramatically on days 3, 7, and 14 after chemotherapy. The level of MDA in the serum and ovaries of mice in the Cy-treated group was higher than that in the sham group (Fig. 2A, P<0.05). Furthermore, the expression of oxidative stress-induced DNA/RNA damage markers was detected in the GCs of antral follicles in the ovaries of mice after chemotherapy (Fig. 2B).

To further observe follicular development in chemotherapy-damaged ovaries, the cellular ultrastructure was examined by TEM. The results showed that the ultrastructure of the cumulus-oocyte complex (COC), GCs, and thecal cells (TCs) in the antral follicles of damaged ovaries changed evidently. Compared with the sham group, the connection between cumulus GCs and oocytes in the COC was loose, the nuclei of cumulus GCs shrank, and the arrangement of TCs was disordered in the Cy-treated group. Moreover, among the three cell types, the mitochondria of GCs were swollen and had fragmented cristae, as shown in Fig. 2C-F. These results indicated that chemotherapy could lead to persistent oxidative damage in the ovary and mitochondrial dysfunction in the GCs of antral follicles.

Chemotherapy-induced cell death and inflammation-related fibrosis in the ovary
To evaluate the cell death induced by oxidative damage in the ovary, the expression of apoptosis- and pyroptosis-related proteins in ovaries was determined by western blotting. The results showed that the expression of Cleaved-Caspase 3 (CAS3) increased significantly on day 1, with no differences on days 3 and 7 after chemotherapy. Notably, the expression of Cleaved-CAS3 and Bax/Bcl2 increased again on day 14 in the Cy-treated groups. Moreover, the expression of the pyroptosis-related protein Cleaved-Caspase 1 (CAS1) in damaged ovaries increased significantly on day 1, and Cleaved-CAS1 and NLRP3 increased again on day 14 after chemotherapy (Fig. 3A, B, P<0.05).

Cell death can also induce inflammatory responses, leading to persistent inflammation [22]; thus, we measured serum levels of inflammatory factors at different time points after chemotherapy. The results showed that the levels of proinflammatory factors, including TNF-α, IL-6, CCL2, IL-1β, and IL-18, in the Cy-treated groups increased significantly, whereas the level of the anti-inflammatory factor IL-10 decreased significantly (Fig. 3C, P<0.05). The accumulation of inflammatory factors causes collagen deposition and ovarian fibrosis, which can be measured by PSR staining [23]. Morphological analysis showed that the area of fibrosis in the Cy-treated groups was larger than that in the sham group (Fig. 3D, E, P<0.05). These data suggested that chemotherapy could lead to ovarian cell apoptosis and pyroptosis, while systemic chronic inflammation resulted in ovarian fibrosis.
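The PSR-positive fibrosis area just reported is an area-fraction measurement. One hedged way to approximate it in Python is sketched below; the red-excess threshold and file name are assumptions, not the authors' macro.

```python
# Sketch (not the authors' macro): PSR-positive (fibrotic) area
# fraction of a stained section, approximating the ImageJ measure.
import numpy as np
from PIL import Image

def psr_area_fraction(path, red_excess=40):
    """Fraction of tissue pixels whose red channel exceeds green by
    `red_excess` (an illustrative threshold for PSR-positive red)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=int)
    r, g = rgb[..., 0], rgb[..., 1]
    tissue = rgb.sum(axis=-1) < 720          # exclude white background
    positive = (r - g > red_excess) & tissue
    return positive.sum() / max(int(tissue.sum()), 1)

# fibrosis_pct = 100 * psr_area_fraction("cy_ovary_psr.png")  # hypothetical file
```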
Costimulation with the inflammatory factors TNF-α and IFN-γ increased paracrine secretion by hAECs
Our previous study demonstrated that hAECs can secrete a variety of bioactive cytokines, which play important roles in promoting angiogenesis and inhibiting inflammation. Thus, a cytokine array was performed to examine the secreted cytokines in the conditioned medium of native hAECs (native-hAEC-CM) and activated hAECs (activated-hAEC-CM). The results indicated that 27 proteins (donor #1) and 36 proteins (donor #2) were differentially expressed more than 2-fold in activated-hAEC-CM compared with native-hAEC-CM (Fig. 4A, P<0.05). Twelve proteins were upregulated in both groups #1 and #2, including tissue remodeling proteins (MMP-9 and MMP-3), proangiogenic factors (angiopoietin-like factor and CXCL16), and immunomodulatory proteins (GDF-15 and TGF-β) (Fig. 4B, C). GO analysis showed that the upregulated secreted proteins were mainly enriched in cytokine-cytokine receptor interactions, the JAK-STAT signaling pathway, the PI3K-Akt signaling pathway, the MAPK signaling pathway, the IL-17 signaling pathway, and cytokine receptor binding (Fig. 4D). ELISA further confirmed that the concentrations of MMP-9, TIMP-1, and CXCL16 in activated-hAEC-CM were significantly higher than those in native-hAEC-CM (Fig. 4E, P<0.05). In addition, the tube formation assay showed that activated-hAEC-CM significantly increased tube formation by hUVECs compared with native-hAEC-CM (Fig. 4F-H, P<0.05). These results suggested that TNF-α and IFN-γ costimulation changed the paracrine secretion of hAECs.

Activated hAECs exhibited a high retention rate in injured ovaries and upregulated expression of antioxidant proteins
To further study the therapeutic potential of activated hAECs in chemotherapy-induced ovarian dysfunction, we transplanted native hAECs and activated hAECs into the left and right ovaries, respectively, and assessed ovarian function, as shown in Fig. 5A, B. On day 7 after transplantation, we observed that activated hAEC transplantation significantly increased the ovarian index compared with that of the PBS group after chemotherapy (Fig. 5C, D, P<0.05). To observe the tissue retention rate of native hAECs and activated hAECs, the cells were prelabeled with the fluorescent dye PKH26 before transplantation (Fig. 5E). The red fluorescent signal of PKH26 was observed in the ovarian interstitial area, but not in follicles, on day 7 after transplantation (Fig. 5F). Compared with native hAEC transplantation, there were more red fluorescent signals in the damaged ovaries of the activated hAEC transplantation group (Fig. 5G, P<0.05). Furthermore, native hAEC and activated hAEC transplantation partially attenuated the increase in ROS induced by chemotherapy (Fig. 5H, I, P<0.05). Moreover, macrophages (CD68+) expressed the M2 macrophage marker (CD163+) in damaged ovaries in the native and activated hAEC transplantation groups (Fig. 5J). In addition, the expression of redox homeostasis-related proteins in ovaries was examined by western blotting. The results showed that activated hAEC transplantation significantly upregulated the expression of the antioxidant proteins thioredoxin 1/2 in damaged ovaries (Fig. 5K-M, P<0.05). These results suggested that activated hAECs exhibited a high retention rate and upregulated the expression of antioxidant proteins in injured ovaries.
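A note on the cytokine-array screen described in this section: the "more than 2-fold in both donors" criterion is a plain fold-change filter. A pandas sketch with illustrative signal values (not the array's raw data) follows.

```python
# Sketch of the array screen: keep cytokines upregulated >2-fold in
# activated-hAEC-CM vs native-hAEC-CM in both donors (toy values).
import pandas as pd

array = pd.DataFrame({
    "protein":      ["MMP-9", "MMP-3", "CXCL16", "GDF-15", "IL-8"],
    "native_d1":    [100.0, 80.0, 50.0, 40.0, 200.0],
    "activated_d1": [520.0, 190.0, 140.0, 95.0, 250.0],
    "native_d2":    [90.0, 70.0, 55.0, 35.0, 180.0],
    "activated_d2": [480.0, 170.0, 120.0, 110.0, 210.0],
})

array["fc_d1"] = array["activated_d1"] / array["native_d1"]
array["fc_d2"] = array["activated_d2"] / array["native_d2"]
shared_up = array[(array["fc_d1"] > 2) & (array["fc_d2"] > 2)]
print(shared_up[["protein", "fc_d1", "fc_d2"]])
```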
Activated hAECs promoted follicular development by promoting angiogenesis and inhibiting fibrosis in injured ovaries
To investigate the therapeutic effect of activated hAECs in injured ovaries, follicular development in the different treatment groups was evaluated 1 month after cell transplantation. Morphological analysis showed a decrease in follicles at different developmental stages in Cy-treated ovaries compared with the sham group. In the native and activated hAEC transplantation groups, developing follicles were observed in damaged ovaries (Fig. 6A). The numbers of primordial, primary, secondary, and antral follicles in the native hAEC and activated hAEC transplantation groups were higher than those in the PBS treatment group after chemotherapy. Notably, activated hAECs significantly increased the number of mature follicles in damaged ovaries compared with native hAEC transplantation (Fig. 6B, P<0.05). Ovarian function and follicular development are influenced by the ovarian microenvironment, including angiogenesis and vascular function [24] and the degree of fibrosis [25]. Ovarian angiogenesis and fibrosis in the different treatment groups were assessed by immunostaining for CD34 and by PSR staining. The results showed that native and activated hAEC transplantation increased the number of microvessels (CD34-positive) and decreased the ovarian fibrosis induced by chemotherapy. Intriguingly, activated hAEC transplantation exerted a better effect on the expression of CD34 and the level of ovarian fibrosis in injured ovaries than native hAEC transplantation (Fig. 6C-E, P<0.05). These results indicated that activated hAEC transplantation had therapeutic potential in promoting follicular development by improving angiogenesis and reducing ovarian fibrosis in a chemotherapy-induced POF mouse model.

Discussion
As the survival rates of tumor patients increase each year and cancer increasingly occurs in younger individuals, a growing number of female patients suffer from reproductive toxicity caused by chemotherapy, accompanied by amenorrhea, early menopause, and decreased natural pregnancy and live birth rates, all of which seriously affect patients' physical and mental health [26]. The underlying mechanisms of chemotherapy-induced ovarian dysfunction have been widely studied, including excessive activation of primordial follicles and impairment of follicular maturation; however, less attention has been given to persistent ovarian injury in the late stage of chemotherapy.
Our previous study indicated that a single intraperitoneal injection of high-dose cyclophosphamide and busulfan could induce slow depletion of the ovarian reserve in mice, accompanied by apoptosis of GCs and acute vascular injury [10]. At different time points after chemotherapy, we observed that chemotherapy induced a gradual decrease in the follicle reserve in the ovaries of mice, according to the follicle counts. Moreover, the inhibitory effect of chemotherapy on the proliferation of ovarian GCs mainly occurred in the early stage of chemotherapy. A study indicated that chemotherapy drugs lead to the production of ROS and mitochondrial damage [27], and an imbalance in redox homeostasis in the ovarian microenvironment has been a major underlying cause of ovarian function impairment [28]. In the current study, we observed that exposure to chemotherapeutic agents disrupted the balance of the redox system and induced the accumulation of abnormal mitochondrial morphology in the GCs of antral follicles. In addition to ovarian cell apoptosis, pyroptosis is another form of chemotherapy-induced cell death, which provides a new molecular target for inhibiting chemotherapy-induced ovarian injury.

In a chemotherapy-induced POF/POI mouse model, we have elaborated on the ovarian repair effect and underlying molecular mechanism of hAECs [10, 13, 14]. However, how to enhance hAEC-mediated repair needs to be further elucidated. Inflammation is an important pathological manifestation of tissue damage that affects the functional remodeling of tissues and organs [29]. Moreover, prestimulation with inflammatory factors in vitro can be used to strengthen the repair capacity of stem cells, including by enhancing the expression of important molecules and enzymes that matter for the outcomes of stem cell transplantation [15]. In previous studies, we identified several important bioactive cytokines secreted by hAECs and demonstrated their effects on ovarian GCs, vascular endothelial cells, and macrophages [10, 13, 14]. A study reported that the effect of hAECs on angiogenesis could be affected by inflammation [30]. In this study, we found that TNF-α and IFN-γ prestimulation significantly increased the production of paracrine cytokines by hAECs, including proangiogenic factors and matrix metalloproteinases (MMPs). In the chemotherapy-induced POF/POI mouse model, we observed that the retention rate of activated hAECs after transplantation was significantly higher than that of native hAECs in damaged ovaries. Moreover, the in vivo study demonstrated that activated hAEC transplantation significantly promoted angiogenesis in damaged ovaries. This beneficial effect may involve inflammatory factors stimulating the production of proangiogenic factors by hAECs. MMPs are tissue-remodeling enzymes that process various biological molecules. Studies have shown that MMP2 and MMP9 are produced by hAECs and play important roles in inhibiting fibrosis and promoting ECM remodeling in liver injury and fibrosis models [31]. Furthermore, MMP9 has anti-inflammatory effects and protects osteoblasts against LPS-induced inflammation [32]. Among the upregulated secreted factors, MMP9 was upregulated approximately 5-fold in activated hAECs, which could contribute to the attenuation of the ovarian fibrosis induced by chemotherapy.
There are several limitations to the present study. First, we described persistent and secondary ovarian damage induced by chemotherapy, including an imbalance in oxidative stress, cascading cell death, and chronic inflammation; however, the inner relationships among these pathological changes have not been clarified. Second, prestimulation with the inflammatory factors TNF-α and IFN-γ greatly changed the paracrine secretion characteristics of hAECs, but the biological function of these elevated cytokines requires further validation in animal models. Third, we observed that TNF-α and IFN-γ prestimulation enhanced the effectiveness of hAECs in repairing ovarian function, but the mechanistic investigation was not in-depth. In the future, we will continue to carry out in-depth studies to resolve these issues.

Conclusions
Our study revealed that the inhibitory effect of chemotherapeutic agents on the proliferation of ovarian GCs mainly occurred in the early stage of chemotherapy, but sustained ovarian damage affected follicular development and accelerated ovarian fibrosis. Furthermore, we found that appropriate prestimulation with the proinflammatory factors TNF-α and IFN-γ could increase the production of paracrine cytokines by hAECs. In chemotherapy-induced POF/POI mice, activated hAECs exhibited a better retention rate in the damaged ovary and increased expression of antioxidant proteins in ovaries. Moreover, activated hAECs significantly promoted the development of mature follicles by promoting angiogenesis and reducing ovarian fibrosis compared with native hAECs. Therefore, TNF-α and IFN-γ prestimulation may be a promising strategy to enhance the therapeutic use of hAECs in regenerative medicine.

Fig. 1 Effect of chemotherapy drugs on ovaries at different time points. A Images showing the ovaries in the sham and chemotherapy (Cy)-treated groups on days 1, 3, 7, and 14. B Columns showing the body weight and ovarian weight of mice in the sham and Cy-treated groups. C Images showing HE staining of ovarian sections from the sham and Cy-treated groups; the asterisk indicates a naked oocyte. Scale bars, 500 μm and 100 μm. D Columns showing the number of follicles at different stages, including primordial, primary, secondary, and antral follicles, as well as the numbers of abnormal and atretic follicles. E Representative images showing Ki-67 expression in ovarian sections from the sham and Cy-treated groups. Scale bars, 50 μm and 25 μm. F, G The protein expression of PCNA in ovaries was examined and analyzed at different time points after chemotherapy. H, I The expression of follicular activation-related proteins was examined and analyzed at different time points after chemotherapy. N=4 per group. The data are presented as the mean ± SEM. *P<0.05, **P<0.01, ***P<0.005, and ****P<0.001. Cy, chemotherapy; PCNA, proliferating cell nuclear antigen
Fig. 3 Effect of chemotherapy on cell death and inflammation-related fibrosis in the ovary. A Western blotting was used to examine the expression of apoptosis- and pyroptosis-related proteins in ovaries in the different treatment groups. B Columns showing that the expression of Cleaved-CAS3, Bax/Bcl2, NLRP3, and Cleaved-CAS1 significantly increased after chemotherapy. C ELISA was used to measure the level of inflammation-related cytokines in the serum of mice in the different treatment groups. D Representative images showing PSR-stained ovarian sections in the different groups. Scale bars, 100 μm and 25 μm. E Column displaying the area of fibrosis (PSR-positive staining) in ovaries in the different treatment groups. N=4 per group. The data are presented as the mean ± SEM. *P<0.05, **P<0.01, and ***P<0.005

Fig. 4 Effect of TNF-α and IFN-γ costimulation on the characteristics of hAECs. A The table shows the number of differentially expressed cytokines in the conditioned medium of native hAECs (native-hAEC-CM) and activated hAECs (activated-hAEC-CM) from donors #1 and #2. B Venn diagram showing the overlap of upregulated differentially expressed cytokines between donors #1 and #2. C All differentially upregulated cytokines are listed in the table. D GO analysis of the biological process, cellular component, and molecular function categories of the upregulated cytokines. E ELISA was used to verify the levels of important cytokines in native-hAEC-CM and activated-hAEC-CM. F Representative images of the tube formation assay results in the native-hAEC-CM and activated-hAEC-CM treatment groups. G, H Quantitative analysis of tube formation was conducted by counting tubes and nodes. The data are presented as the mean ± SEM. *P<0.05, **P<0.01, and ***P<0.005

Fig. 5 Effect of native hAEC and activated hAEC transplantation on injured ovaries. A, B Schematic diagrams showing the design of the animal experiment. C Representative images of ovaries in the different treatment groups. D Column showing the ovarian index (ovarian weight/body weight) of mice in the different treatment groups. E Native and activated hAECs were labeled with the fluorescent dye PKH26. Scale bar, 50 μm. F The red fluorescent signals in ovarian sections were observed under a fluorescence microscope. Scale bar, 50 μm. G Column showing the retention rate of transplanted native and activated hAECs in damaged ovaries. H The level of ROS in damaged ovaries was examined by immunofluorescent staining. Scale bar, 50 μm. I Column showing the level of ROS in ovaries after native and activated hAEC transplantation. J Double immunostaining of CD68 and CD163 in ovaries in the different groups. Scale bar, 10 μm. K Western blotting was performed to examine the expression of redox homeostasis-related proteins in ovaries. L, M Columns showing that activated hAEC transplantation significantly upregulated the expression of thioredoxin 1/2 in damaged ovaries. N=4 per group. The data are presented as the mean ± SEM. *P<0.05 and **P<0.01. (See figure on next page.)
Fig. 6 Effect of native hAEC and activated hAEC transplantation on follicular development and the ovarian microenvironment. A Representative images of HE-stained ovarian sections in the different treatment groups. Scale bar, 100 μm. B Columns showing the number of primordial, primary, secondary, and antral follicles in the ovaries of mice in the different treatment groups. C Representative images showing CD34 expression and PSR staining in ovarian sections in the different treatment groups. Scale bar, 100 μm. D Column showing the number of microvessels in ovaries in the different treatment groups. E Column showing the area of fibrosis (PSR staining) in ovaries in the different treatment groups. N=4 per group. The data are presented as the mean ± SEM. *P<0.05, **P<0.01, ***P<0.005, and ****P<0.001
Developing a Data Driven Strategy and Guideline to Increase Per Capita Open Space and Relative Accessibility in Chittagong City

The population density in the Chittagong City Corporation (CCC) area was 242.28 per square meter in 2019, and Bulmer suggests that, due to the high birth rate in Asia, cities such as Chittagong can be considered high density. Contextually, this "high-density" element is a determining factor that potentially allows one to address the city's open space standard, which "should compensate and complement the physical and social context of the [urban] surrounding environment". The research in this paper is focused on the urban setting, defined as the CCC area of 168 square kilometres. The literature review and case study analysis found that per capita open space in Chittagong is far lower than the WHO recommendation (nine square meters per person). Additionally, the UN stated that "47% of [the city's] population live within 400 m walking distance to open public spaces", whereas, according to a previous study, only 19% of residents in Chittagong City live within this distance. Observing these issues, the aim of the paper is to develop an innovative way to obtain per capita open space in Chittagong city. To achieve this aim, the researchers analysed the data from surveys and interviews using SPSS and NVivo. These tools produced data that were, for example, used to develop themes of open space in Chittagong. This investigation and analysis allowed for the generation of strategies and planning recommendations to improve the open space situation in the city. Beyond these strategies, the research team produced new insights to promote sustainability in this area.

Introduction
The population density in the Chittagong City Corporation (CCC) area was 242.28 per square meter in 2019 [1] (p. 20), and Bulmer [2] suggests that, due to the high birth rate in Asia, cities such as Chittagong can be considered high density. Contextually, this "high-density" element is a determining factor that potentially allows one to address the city's open space standard, which "should compensate and complement the physical and social context of the [urban] surrounding environment" [3] (p. 27). The research in this paper is focused on the urban setting, defined as the CCC area of 168 square kilometers. The literature review and case study analysis found that per capita open space in Chittagong is 0.18 square meters [4], far below the WHO recommendation (nine square meters per person) [5]. Additionally, the UN stated that "47% of [the city's] population live within 400 m walking distance to open public spaces" [6], whereas, according to a previous study, only 19% of residents in Chittagong city live within this distance [4]. Observing these issues, the aim of the paper is to investigate guidelines to mitigate this ratio and to explore users' requirements for open space. The projected guidelines and the users' requirements will help to mitigate the crisis by increasing per capita open space in Chittagong city. Therefore, the research objective is "to investigate approaches of realizing open spaces in Chittagong city", and its intent is to investigate a process to increase open space in the city, guided by two questions:

1. What are the city's open space aspirations and how do these meet the urban growth plan of Chittagong?
2. What are the city's design/planning-based strategies that best support the open space aspirations of Chittagong?
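As background to the 400 m accessibility figures cited above: such metrics are usually computed with a GIS buffer analysis. The minimal geopandas sketch below illustrates the computation; the file names, the "pop" column, and the CRS (UTM zone 46N, which covers Chittagong) are assumptions, and a straight-line buffer only approximates true walking distance, for which a street-network analysis would be stricter.

```python
# Minimal sketch (assumed inputs): share of residents living within
# a 400 m buffer of public open space.
import geopandas as gpd

open_space = gpd.read_file("open_space.gpkg").to_crs(epsg=32646)  # UTM 46N
homes = gpd.read_file("households.gpkg").to_crs(epsg=32646)       # has 'pop'

# Dissolve the 400 m buffers around every open space into one shape
service_area = open_space.buffer(400).unary_union

served = homes[homes.within(service_area)]
share = served["pop"].sum() / homes["pop"].sum()
print(f"{share:.0%} of residents live within 400 m of open space")
```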
After a careful literature review on open space typology, strategy, and standards (published in the previous article), the researcher intended to consider local influences. In addition to providing insight, interviews offer advantages in the generation of research data [7], and "[s]urveys are used to estimate the characteristics, behaviors, or opinions of particular populations" [7] (p. 2). Therefore, to address the questions, the researcher conducted a survey of residents and interviewed professionals in Chittagong. The survey method reported by Lal (2018) [8] is considered appropriate for this approach (with amendment), as it also revolves around a case-specific social survey. In summary, in relation to this thesis, the surveys and interviews are intended to address the two research questions above. This paper presents the analyses of the survey and interview data through SPSS and NVivo. The analysis of the results informs the researcher's consideration of creating open space, indicating the types of space, the users' aspirations, and probable guidelines to increase these spaces in Chittagong.

Interview Analysis Methodology
"[Q]ualitative interviewing, especially the in-depth interview, is now used extensively as a key way of exploring social meaning within social science research" [9] (p. 85). The researcher interviewed 13 professionals engaged in the planning and development of Chittagong City, such as town planners, architects, sociologists, and archaeologists. To translate the interview data into findings for discussion, the researcher implemented an inductive content analysis and a thematic analysis of the interview data. Thematic analysis refers to the "method of identifying, analysing and reporting patterns (or themes) within data" [10] (p. 79). In addition to a conventional paper-based approach to analysing interview data, the researcher adopted the NVivo 12 software as a qualitative data analysis tool to aid in the thematic analysis. Following the framework of analysis stated by Braun and Clarke [10], the data collected from the interviews were examined in six phases. These are:

Phase 1: Familiarization with Data
The first step is to familiarize oneself with the data through listening and taking notes [11]. Therefore, in this step, to facilitate the analysis, the researcher transcribed the recorded audio conversations of the 13 interviews into electronic text. The interview files consist of four to 14 pages. The researcher read, and after a couple of months reread, these transcripts to develop an initial view of the potential themes. In addition, for the reader's review, two interviews documented in the Bengali language were translated into English.

Phase 2: Generating Initial Codes
In the second phase, the researcher set out to generate codes from the interview data. It is the researcher's consideration that "[a] code in qualitative inquiry is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data" [12] (p. 3). To implement this consideration, the researcher prepared a coding table based on the first impressions of the transcriptions developed in phase 1. To generate codes with NVivo, the researcher imported the 13 interview files into NVivo. Following this, the NVivo software generated 34 codes from 980 text references across the 13 interview files.
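The mechanics of such code counting can be approximated outside NVivo. The Python sketch below tallies references per code across transcript files; the folder name and keyword code book are hypothetical stand-ins for the study's 34 codes.

```python
# Sketch (not NVivo): tally text references per code by scanning
# transcripts for hypothetical keyword patterns.
import re
from pathlib import Path
from collections import Counter

CODEBOOK = {  # code name -> illustrative regex
    "urbanization": r"\burbani[sz]ation\b",
    "hydrology":    r"\b(drainage|rainwater|tidal|sewage)\b",
    "hills":        r"\bhill(s|top)?\b",
}

def code_references(folder="transcripts"):
    counts = Counter()
    for path in Path(folder).glob("*.txt"):   # the 13 interview files
        text = path.read_text(encoding="utf-8").lower()
        for code, pattern in CODEBOOK.items():
            counts[code] += len(re.findall(pattern, text))
    return counts

# print(code_references())  # e.g. Counter({'hydrology': 80, ...})
```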
The processing of this large data set is documented as part of the phases explained in this section of the thesis.

Phase 3: Searching for Themes
Coding of the respondents' data occurred in the first two phases, which are preliminary steps that lead to the positioning of codes into groups of thematic coherence. In this sense, "[a] theme captures something important about the data in relation to the research question" [10] (p. 82), and in this step the researcher established the themes based on the four interview questions developed to achieve the objective. Furthermore, themes were identified based on whether their thematic coherence was considered, by the researcher, to have a semantic or latent association. That is, semantic codes and themes identify the explicit, surface meanings of the data, and latent codes capture underlying ideas, patterns, and assumptions [13]. Additionally, codes can lead to the identification of interesting information in data; themes, however, are broader and involve the active interpretation of the codes and the data [11]. The relationship between files, themes, codes (also known as nodes), and references can be simplified as in the diagram presented in Figure 1. Following this, 980 code extracts from 13 files were sorted into 3 themes.

Phase 4: Reviewing Themes
In this step, the researcher verified "if the themes work in relation to the coded extracts and the entire data set [to] generate a thematic 'map' of the analysis" [10] (p. 87). Note, as a preliminary step, that the researcher was initially guided by a traditional paper-based approach in which comments from each interview transcript were cut out and thematically arranged by code. The researcher then reviewed and refined the themes identified in phase three by reading the references of each code to explore whether they support, contradict, or overlap with a respective theme [11]. In this process, the researcher sorted the references generated by NVivo into three themes. Here the references are quotes from the interview files, and NVivo sorted the references into codes. NVivo was also able to derive positive and negative sentiment in the interview files. In this sense, "positive sentiment" indicates the probability of increasing open space in Chittagong, and "negative sentiment" the contrary. For example, a code from an interview file reads: "It will carry water to the river Karnaphuli, the main river, and it will have beautiful, lush green banks where people can sit. It will have trees and plants, bushes and shrubs". When analysed in NVivo, this sentiment is interpreted as positive. The relationship between themes and sentiments derived from NVivo is shown in Figure 2; the two parent sentiment nodes (Positive and Negative) have four child nodes: neutral, positive, negative, and mixed. Figure 2 shows the ratio of positive and negative sentiment, with child nodes, in each theme according to NVivo, which helps the researcher to understand the sentiments in each theme. Again, the sentiment ratios of positive, mixed, neutral and negative are derived from NVivo to judge the probability of increasing the open space ratio in Chittagong. The figure shows that each theme is dominated by neutral and positive sentiment, while negative and mixed sentiments are less prominent. This means that the majority of the participants are neutral regarding open space in Chittagong, but their positive thoughts on open space are stronger than their negative ones. Therefore, the themes lean positive in terms of sentiment, which supports respondents' willingness to create or act on the improvement of open space in Chittagong.
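The per-theme ratios behind Figure 2 are simple proportions over coded references. A pandas sketch with illustrative counts (not the study's) shows the computation.

```python
# Sketch: per-theme sentiment shares of the kind shown in Figure 2
# (reference counts are illustrative).
import pandas as pd

refs = pd.DataFrame({
    "theme":     ["Issues"] * 4 + ["Considerations"] * 4,
    "sentiment": ["positive", "neutral", "negative", "mixed"] * 2,
    "count":     [60, 90, 30, 10, 85, 110, 25, 15],
})

table = refs.pivot(index="theme", columns="sentiment", values="count")
shares = table.div(table.sum(axis=1), axis=0)   # row-normalize to ratios
print(shares.round(2))
```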
Phase 5: Defining and Naming Themes
As a complementary step to phases three and four, this phase revolves around an analysis which enhances the data placed under each theme. Hence, to generate clear titles for the themes, each theme was named by analysing the overall data supporting it [10], and this naming validation continued in the final report [13]. Following this line of theming the data, the researcher shortlisted the codes and their associated extracts, and collated and combined the categories into broader themes. The researcher then removed repetitions and irrelevant themes by reorganising codes and splitting differences. In addition to the NVivo result, the researcher had to add missing nodes or rename codes according to the findings of the traditional paper-based analysis.

Phase 6: Producing Interview Report
This phase relates the analysis of the data under each theme to produce an analysis of the interview data [14]. It is also the final opportunity for the researchers to select an intense and persuasive set of extracts and illustrations to support the analysis [14]. Focusing on the research question, the researcher classified the codes under themes, as summarised in Table 1. After careful assessment, the researcher finalized 30 nodes with 584 references, categorised into three themes.
The following sections present a more detailed data analysis of the themes and nodes. The themes relate to considerations, identified issues, and guidelines, respectively. Additionally, according to NVivo, 12 interviewees leaned towards positive sentiment (467 references) and 13 interviewees towards negative sentiment (292 references) regarding open space in Chittagong. These themes helped the researcher with the considerations and issues related to the development of open space guidelines, while the sentiment indicated to the researchers the signs of willingness that may be present to improve open space in Chittagong. Figure 3 presents the relationship between files and reference data derived from NVivo. Hence, in this analysis, themes were developed to best position the thoughts and conceptual orientation of the professionals interviewed. Three overarching themes have been identified from the interview questions: A. Issues; B. Considerations; and C. Guidelines. The fourth interview question was a supplementary question to the second interview question, "Can you please explain to me why you/your department doesn't consider open space as an issue in Chittagong?", and is grouped with the second theme mentioned above.

A. Issues
The interview responses presented a series of open space issues related to the open space setting in Chittagong city. Under this theme, the interviewees described the causes of reduced open space. The theme is further delineated into sub-themes in which discussions on the open space challenges in Chittagong are presented: urbanization, incompatible land use, calamity, lack of a professional body, concurrent development, land unavailability, lack of planning initiative, lack of coordination, and relocation. The graphical relationship (or positioning) of the nodes with respect to their frequency of mention (i.e., references) is presented in Figure 4. The graph shows that "urbanization" is the node most referred to by interviewees. In addition, "incompatible land use" and "concurrent development" are each referred to by five respondents, representing more than one-third of the total. The following sections discuss the findings of the issues, elaborating the responses used to establish the thematic framing (referred to as nodes) and the lateral references used by respondents to describe each node.

B. Considerations
Under this theme, considerations such as climate, the master plan, tradition and hydrology were the focus. Hydrology is significantly prioritised as a consideration for open space, and includes the sewage system, rainwater and tidal water management, and water supply. The graphical relationship of considerations among the files and references created from NVivo is presented in Figure 5. The figure shows that hydrology is referred to 80 times by the interviewees, the highest among them.
Figure 5. Relationship between files and references in terms of considerations.

C. Guidelines
This theme describes measures, under existing conditions, to overcome the open space challenge in Chittagong, including the potential within a dense setting. In this theme, guidelines are mostly formed around existing natural reserves; among these, canals, hills, the river, ponds and the sea are recommended. In addition, guidelines regarding civic engagement, typology and mass transit are notable. Figure 6 shows the relationship of the guidelines in terms of files and references, indicating that hills, waterfronts, and canals are the most highly recommended as open space.

Figure 6. Relationship of guidelines in terms of interview files and references.
Where hiring a group of professionals will be cost effective, engaging them by outsourcing from professional bodies and social organizations like IAB ( Due to the scarcity of land, waterfront, ponds, canals, riverbank, seashore, and hills, these are recommended to be preserved as open space in Chittagong City. Additionally, there is a need to propose a land bank in the future extension proposal of the city. 7. Natural open spaces in Chittagong such as hills, waterbodies and waterfronts need to be accessible. Figure 7 shows the relationship of open spaces according to reference the guidelines on these are as follows: In addition, to access the hills, respondents encouraged recreational development on hilltops with a maximum 10% land coverage. To connect neighbourhoods with open space connectivity and accessibility through walkways, removing blockages on the footpath and the establishing of mass communication systems needs to be undertaken. IV. The relocation of settlements from the hills, rivers and the canal side of the city during eviction needs to be executed to use existing open space and their accessibility needs to be improved. V. There is a need to create a considered hydrological system in the city. Open space will increase the rainwater catchment area and recharge ground water. Low-laying areas need to be preserved to avoid flash flooding. The canal's capacity for discharging water into the Karnaphuli River needs to be increased by removing the encroached settlement. A dam needs to be created in the hills to increase the rainwater catchment area, as this will help to increase water supply and to produce electricity. A The continual hill range in the north from Foy's Lake to Sitakunda are proposed to develop by setting water treatment plant (desalinization of sea water) and rainwater harvesting system with dam in hilly areas to produce electricity and captivate water for future use for mitigating crisis of supply water will promote recreational open space and reduce flooding. Only 10% development is encouraged in order to promote limited (residential, health or educational) land use with a meandering road to increase accessibility. In addition, to access the hills, respondents encouraged recreational development on hilltops with a maximum 10% land coverage. To connect neighbourhoods with open space connectivity and accessibility through walkways, removing blockages on the footpath and the establishing of mass communication systems needs to be undertaken. IV. The relocation of settlements from the hills, rivers and the canal side of the city during eviction needs to be executed to use existing open space and their accessibility needs to be improved. V. There is a need to create a considered hydrological system in the city. Open space will increase the rainwater catchment area and recharge ground water. Low-laying areas need to be preserved to avoid flash flooding. The canal's capacity for discharging water into the Karnaphuli River needs to be increased by removing the encroached settlement. A dam needs to be created in the hills to increase the rainwater catchment area, as this will help to increase water supply and to produce electricity. A There is also a need to conduct a survey of citizens to know the requirement and type of open space. They need to be aware of cleanliness and engaged in management and maintenance. Social workers, politicians, technical personnel, and historians are strongly recommended to engaged in maintenance and the management of these areas. 
Survey Analysis Methodology A survey of the city's open spaces is an appropriate tool to address the second question of the objective, because it can lead to estimates on population characteristics [10] and the potential demand of these open spaces. The target group of this research is park playground and open space users. Only 95,000 people live in proximity to open spaces. The target group of this research is park playground and open space users. The researcher applied "convenience sampling" directed by [18]. It involves the researcher selecting participants simply for reasons such as ease of access, in terms of physical proximity and accessibility [18]. Again, Bryman [19] (p. 97), suggested that "[a]s the sample size increase, sampling error decrease". To decrease sampling error, a minimum size of a sample has been selected from a daily user's ratio. The average daily users in CRB, Jamboree Park and Parade Ground are 1421, 2570 and 1285 people, respectively (see Table 2). Therefore, the number of survey respondents that have been selected from daily users is minimum of 100 per site. In total, three hundred respondents participated in the questionnaire survey. Most of the participants are graduates. Most of the participants are educated up to the higher secondary level. Profession Participants are mostly students, work in the business sector and arehousewives/dependents. The survey data is analyzed with SPSS (Statistical Package for the Social Sciences) as below. The analysis is described following a presentation of the structure of the questionnaire, such as the 1st tire will describe the demographic information, the 2nd tire will explain the user's response to the development of existing open spaces, the 3rd tire will present the scenario of available park playground and open space in neighbourhoods and the 4th tire will demonstrate the availability of natural open space in close proximity to the users. 1st Tier: General Information Data presented in this section is derived from the first level or leveled as "A" in the questionnaire and is designed to provide an overall understanding of the demographics of open spaces users. That is, the data on gender, age group, education, and occupation of the participants respective to each study site. Figure 8 shows the overall graph on the general demographic information of the respondents presented in the series of questions A1 to A5. The data shows that among 279 respondents, 73% were male and 27% were female. This male dominated response rate is interesting because compared to female users, male users are predominantly higher in number (finding is presented in the fifth tier). The data shows that, among the respondents, the age group between 18-25 represented 47% of the total, the age group between 25-35 represented 18% of the total, and the age group between 35-45 represented 18% of the total. This highlights that most respondents are young. Looking at the education level of the participants, the survey found that, 40% of participants are graduates and 40% have completed their education to the secondary level. This is a general statistic on the type of users in open spaces. In terms of occupation, the result presents that 36% of respondents were students, 28% of respondents were housewives or dependents, 12% of participants were businesspeople, and the rest were engaged in other professions. In conclusion, the information points out that respondents are mostly male, young students and that most of the female respondents were housewives. 
In terms of the comparative analysis of participants in three sites, Figure 9 indicates that in Parade Ground, 83 male participants were the highest in number. In contrast, Jamboree Park had the highest female participants, and they represented a total number of 41 among 100 participants. Among the six age groups, 61 participants aged 18-25 are predominantly participated in Parade Ground survey. In Parade Ground, 61 participants are educated up to higher secondary level. Among the occupation groups in Parade Ground, students are the highest number of respondents. In CRB, 52 female participants were housewives and dependents, which is the highest among the three sites. In terms of the comparative analysis of participants in three sites, Figure 9 indicates that in Parade Ground, 83 male participants were the highest in number. In contrast, Jamboree Park had the highest female participants, and they represented a total number of 41 among 100 participants. Among the six age groups, 61 participants aged 18-25 are predominantly participated in Parade Ground survey. In Parade Ground, 61 participants are educated up to higher secondary level. Among the occupation groups in Parade Ground, students are the highest number of respondents. In CRB, 52 female participants were housewives and dependents, which is the highest among the three sites. From SPSS analysis, the researcher produced the following table for comparative analysis of the three sites (Table 2). From SPSS analysis, the researcher produced the following table for comparative analysis of the three sites (Table 2). Figure 10 shows that, cumulatively, the user's daily and weekly visiting ratio are 30% and 35%, respectively. The survey identifies that Jamboree Park and Parade Ground has the highest visiting frequency, and no substantial differences in daily and weekly users' ratio, but CRB has more weekly visitors compared to daily visitors. This survey data also shows that more than 80% of visitors surveyed stated that they cannot make time to visit more frequently. In addition, more weekly visitors in CRB shows that this place serves as a city park, while more daily visitors in Jamboree Park and Parade Ground shows that it serves as a local park and playground, respectively. According to a survey, 52% of Parade Figure 10 shows that, cumulatively, the user's daily and weekly visiting ratio are 30% and 35%, respectively. The survey identifies that Jamboree Park and Parade Ground has the highest visiting frequency, and no substantial differences in daily and weekly users' ratio, but CRB has more weekly visitors compared to daily visitors. This survey data also shows that more than 80% of visitors surveyed stated that they cannot make time to visit more frequently. 2nd Tier: Park, Playground and Open Space User In addition, more weekly visitors in CRB shows that this place serves as a city park, while more daily visitors in Jamboree Park and Parade Ground shows that it serves as a local park and playground, respectively. According to a survey, 52% of Parade Ground users think that the space is not enough for them. The users stated that to use the playground, they have to come first before it is occupied by others. The fact that users have to wait to play illustrates that they need more playgrounds. 32% of Parade Ground users do not have walkways along the street connecting their homes to the playground. Ground users think that the space is not enough for them. 
The users stated that to use the playground, they have to come first before it is occupied by others. The fact that users have to wait to play illustrates that they need more playgrounds. 32% of Parade Ground users do not have walkways along the street connecting their homes to the playground. b. Travelling distance: The cumulative analysis of three sites in SPSS shows that a total of 32% of users commute less than one kilometer in distance, 51% of users commute from 1-5 km distance, 11% of users visit 5-10 km distance and only 0.08% of visits more than 10-km in distance to get into these open space settings. The data indicates that a majority of users travel 1-5 km distance to get into the open spaces. Singly, CRB has 16% of users from 0-1 km distance, 61% of users from 1-5 km distance and 14% of users from 5 km to 10 km distance. This data indicates that CRB has more distant visitors compared to neighbourhood visitors. On the other hand, Jamboree Park has 43% of users from 0-1 km distant and 45% of users from 1-5 km distance. The Parade Ground has 39% users from 0-1 km distance and 47% of users from 1-5 km distance and 12% of users from 5 km to 10 km distance. Therefore, both the Parade Ground and Jamboree Park has mostly visitors from 0-1 km distance and 1-5 km distance, but CRB visitors from 1 to 5 km distance are prominent than 0-5 km distant visitor. Figure 11 shows the travelling distance of visitors in each park. b. Travelling distance: The cumulative analysis of three sites in SPSS shows that a total of 32% of users commute less than one kilometer in distance, 51% of users commute from 1-5 km distance, 11% of users visit 5-10 km distance and only 0.08% of visits more than 10-km in distance to get into these open space settings. The data indicates that a majority of users travel 1-5 km distance to get into the open spaces. Singly, CRB has 16% of users from 0-1 km distance, 61% of users from 1-5 km distance and 14% of users from 5 km to 10 km distance. This data indicates that CRB has more distant visitors compared to neighbourhood visitors. On the other hand, Jamboree Park has 43% of users from 0-1 km distant and 45% of users from 1-5 km distance. The Parade Ground has 39% users from 0-1 km distance and 47% of users from 1-5 km distance and 12% of users from 5 km to 10 km distance. Therefore, both the Parade Ground and Jamboree Park has mostly visitors from 0-1 km distance and 1-5 km distance, but CRB visitors from 1 to 5 km distance are prominent than 0-5 km distant visitor. Figure 11 shows the travelling distance of visitors in each park. c. Mode of transportation: In Jamboree Park and Parade Ground, 56% and 57% of visitors walk to the open space, while in CRB, only 16% of users walk to get into the place. Most of the CRB visitors ride either a bus or rickshaw (a light two-wheeled passenger vehicle manually pulled by one person carries two passengers at a time and mainly used in Asian countries) to travel Ito the place. This data complies with the travelling distance of distant CRB users as discussed above. Figure 12 shows the comparative analysis of the mode of transportation in three sites. In summary, the Jamboree Park and the Parade Ground has more neighbourhood visitors compared to distant visitors. The survey did not find any visitor in the Parade Ground commute that came from more than 10 km in distance. c. Mode of transportation: In Jamboree Park and Parade Ground, 56% and 57% of visitors walk to the open space, while in CRB, only 16% of users walk to get into the place. 
Most of the CRB visitors ride either a bus or rickshaw (a light two-wheeled passenger vehicle manually pulled by one person carries two passengers at a time and mainly used in Asian countries) to travel Ito the place. This data complies with the travelling distance of distant CRB users as discussed above. Figure 12 shows the comparative analysis of the mode of transportation in three sites. In summary, the Jamboree Park and the Parade Ground has more neighbourhood visitors compared to distant visitors. The survey did not find any visitor in the Parade Ground commute that came from more than 10 km in distance. The data comply with the mode of transportation, as most of the visitors in Jamboree Park and Parade lives up to 5-km distance and generally walks to these places. Distant CRB users ride the bus, car and rickshaw to get to the destination. The data comply with the mode of transportation, as most of the visitors in Jamboree Park and Parade lives up to 5-km distance and generally walks to these places. Distant CRB users ride the bus, car and rickshaw to get to the destination. The fact that three percent of Jamboree Park users, 15% of CRB users and 19% of Parade Ground user were not satisfied with the development shows that most of the users appreciate the developments in the three sites. Figure 13 represents the comparative analysis of the influence of the development of the open spaces to the users. It indicates that 4% of Jamboree Park users, 48% of CRB users and 55% of Parade Ground users' visiting frequency after development remains unchanged. In addition, the visitors were asked for the reason of their increase and decrease of visit in the open space settings. It was an open-ended question, and the answers are sorted in Table 3. In response to this question, the survey found that 40% of users of the Parade Ground claimed that the field is not sufficient and can't accommodate all users. In addition to this, the accessibility restriction (only adjoining college students can use it) in Parade Ground was also notified. 4% of Jamboree Park users, 48% of CRB users and 55% of Parade Ground users' visiting frequency after development remains unchanged. In addition, the visitors were asked for the reason of their increase and decrease of visit in the open space settings. It was an openended question, and the answers are sorted in Table 3. In response to this question, the survey found that 40% of users of the Parade Ground claimed that the field is not sufficient and can't accommodate all users. In addition to this, the accessibility restriction (only adjoining college students can use it) in Parade Ground was also notified. Figure 13. influence of transformation scaling by increase and decrease of visit before and after transformation. Figure 13. Influence of transformation scaling by increase and decrease of visit before and after transformation. In summary, the survey finds that these developments predominantly increase the user's visit and satisfies them. Among the three sites, the Jamboree Park development increased user's visitation frequency compared to CRB and Parade Ground. Furthermore, the survey shows that safety and security in Jamboree Park influence users to increase their visit to the park. The visitors strongly recommend that the Parade Ground cannot accommodate all the users and they need to wait for their turn to play. e. 
Purpose of visit: To answer the question, respondents were asked to choose the activities they are most likely to do in the open space settings; around 33% stated that they visit open spaces to enjoy time with their family and friends, 10% visit for sightseeing and 15% for walking. In the Parade Ground, more than 56% visit to play and 16% visit to watch matches. However, 70% of the female users of the Parade Ground declared that they use the field for walking and jogging, 20% watch matches and 10% play. In summary, the Parade Ground is predominantly used by male users for playing and is infrequently used by female users, mostly for walking. Jamboree Park and CRB users are keen to visit for recreation and socializing. 3rd Tier: Neighborhood Park, Playground, and Open Space This section is designed to gather data on the availability of and demand for open space in Chittagong. To investigate this query, the visitors of each open space were asked whether they have formal open space available in their neighborhood, whether they think it is sufficient, and, in the case of park and playground arrangements, which one they prefer most. The survey results show the users' behavioral impact on neighborhood park use, explained as follows: b. Purpose of open space: To answer this question, the respondents were asked to choose the purposes for which they would like to visit desired parks and open spaces. Nearly one-quarter of the respondents said that they would want open space for recreation, 20% of respondents want it for its openness, more than 20% of users prefer it for social interaction and 15% of users want it for exercising. c. Park and playground arrangement preference: When respondents were asked to choose between a park and a playground, most of them picked a combined arrangement of park and playground. The data indicate that 11% of users think that they should have a park in their neighborhood, 10% of users think that they should have a playground in their neighborhood and 79% of users think that they should have both a park and a playground in their neighborhood. In the Parade Ground, 86% of users think that they should have a playground like this in their neighborhood for kids up to 15 years of age. According to the survey, 95% of Jamboree Park users think that they should have more parks like this one. The results reveal that users desire more parks and playgrounds in Chittagong city. d. Open space availability for kids: Most of the parents visiting the open spaces stated that their kids stay at home and do not play outdoors. The survey shows that fewer than 32% of parents can send their kids to a playground and more than 65% of parents cannot, as no playground is available in their neighborhood. This question discloses the shortage of playgrounds for kids in Chittagong city. In summary, most of the residents do not have parks or playgrounds in their neighbourhood. This section was followed by asking the respondents how frequently they visit open spaces close to their neighbourhood and how far the open space setting is from their home. The results show that more than 70% of respondents picked "not applicable", as they do not have such a setting. In addition, this shows that playgrounds for kids are not available. e. Open space close to workplace/study: The survey shows that 45% of users have open space close to their workplace and 45% of users do not have open space close to their workplace/study area.
More weekly visitors in CRB show that this place serves as a city park, while more daily visitors in Jamboree Park and the Parade Ground show that they serve as a local park and a playground, respectively. 4th Tier: Natural Open Space and Its Accessibility As Chittagong City has a range of natural open spaces, such as hills, creeks, the sea and rivers, this section of the questionnaire was designed to generate data on whether the residents have natural open spaces close to their neighborhood and whether these places are accessible to them. Here accessibility means access through walkways or roads without blockage or control. Figure 14 shows that 61% of users have natural open space close to their neighborhood. Among them, 27% have a canal or creek nearby, 11% of users have a pond, 9% of users have hills, 8% have a sea beach and 6% of users have a river close to their neighborhood. 5th Tier: Relativeness of the Park, Playground and Open Space To obtain detailed data, the sites were surveyed both on a weekday and at the weekend. At the end of the survey, a relative evaluation was made among the three sites. To perceive the users' ratio of open space, the total number of users and its relation to the area was analyzed as follows: a. Number of users: To compare the number of users among the open spaces, users per square meter were considered. The demographic survey shows that, of the two city parks, Jamboree Park is the most populous. In Jamboree Park, 3000 users visited the park at the weekend between 5:00 p.m. and 6:00 p.m. Compared to the park size, this corresponds to 11.85 square meters of park area per person at this peak time. According to Lancaster R.A. [8] (p. 70), a neighborhood playground (3-5 acres) has a 264-person capacity and a community recreation center (10-15 acres) has a 420-820-person capacity. Compared to these figures, Jamboree Park holds 4 times the visitors recommended by the standards. The park cannot adequately hold the number of users at peak time (5-6 p.m.), which is illustrated in Table 4. Again, the Parade Ground holds 850 users at its peak time (5-6 p.m.) and CRB has the fewest visitors compared to Jamboree Park (the CRB area is only partially open for visitors). From Table 4, the user-area ratio can be expressed in the following graph (Figure 15). Calculating daytime as the active hours in CRB and the opening hours in Jamboree Park, the survey indicates that Jamboree Park has the highest number of average visitors.
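A quick arithmetic check of the density figures above; a minimal sketch in Python, where the Jamboree Park area is inferred from the reported 3000 users at 11.85 square meters per person, and the Lancaster standard assumes the low end (3 acres) of the quoted playground size range:

ACRE_M2 = 4046.86                           # square meters per acre

# Jamboree Park peak-hour density as reported in the survey.
peak_users = 3000
m2_per_person = 11.85
park_area_m2 = peak_users * m2_per_person   # inferred area, ~35,550 m2

# Lancaster's standard: a 3-5 acre neighborhood playground for 264 persons.
# Taking the low end (3 acres) gives the densest standard, ~46 m2 per person.
standard_m2_per_person = 3 * ACRE_M2 / 264

# Ratio of the standard allowance to the observed allowance (~4x crowding).
print(round(standard_m2_per_person / m2_per_person, 1))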
Figure 16 identifies that, among the three sites, Jamboree Park has the most and the Parade Ground has the fewest female users, and vice versa for male users. The survey shows that on weekdays female users decrease and male users increase. In summary, women are most likely to visit at the weekend and least likely to visit on weekdays; in contrast, the male user ratio increases on weekdays. The overall results show that the female user ratio in the Parade Ground is the lowest. When asked the reasons for the greater or lesser use of the open spaces, female users of Jamboree Park stated that they feel more secure there, and female users of the Parade Ground stated that they only visit the playground for walking. Findings of Survey From the analysis above, the findings are listed below: 1. More than 80% of the visitors surveyed stated that they cannot make time to visit more frequently. 2. Fifty-two percent of Parade Ground users think that the space is not enough for them. The users stated that, to use the playground, they have to come before it is occupied by others. 3. Thirty-two percent of Parade Ground users do not have walkways along the street connecting their homes to the playground. 4. Thirty-two percent of users commute less than one kilometer, 51% of users commute 1-5 km, 11% of users commute 5-10 km and only 0.08% of users commute more than 10 km to get to these open space settings. 5. CRB has more distant visitors compared to Jamboree Park and the Parade Ground. Most of the users of Jamboree Park (88%) and CRB (86%) live within a 5 km radius. 6. That three percent of Jamboree Park users, 15% of CRB users and 19% of Parade Ground users were not satisfied with the development shows that most of the users appreciate the developments at the three sites. 9. Forty percent of the users of the Parade Ground claimed that the field is not sufficient and cannot accommodate all users. 10. Around 33% of users stated that they visit open spaces to enjoy time with their family and friends, 10% of users visit for sightseeing and 15% for walking. In the Parade Ground, more than 56% of users visit to play and 16% of users visit to watch matches. However, 70% of the female users of the Parade Ground declared that they use the field for walking and jogging, 20% of visitors watch matches and 10% of users play. 11. 16% of users have a park, 33% of users have a playground and less than 2% of users have both a park and a playground in their neighborhood. In addition, 47% of users have neither a park nor a playground in their neighborhood. 12. More than 11% of users want to have a park, more than 10% of users want to have a playground in their neighborhood, and 79% of users want to have both (i.e., a park and a playground). 13.
Forty-five percent of users want open space for recreation, 20% of respondents want it for its openness, more than 20% of users prefer it for social interaction and 15% of users want it for exercising. 14. 86% of Parade Ground users think that they should have a playground like this in their neighborhood for kids up to 15 years of age, and 95% of Jamboree Park users think that they should have more parks like this one. 15. More than 65% of parents cannot send their kids to a playground, as none is available in their neighborhood. Discussion The findings of the interview and the survey, presented individually, coincide in some aspects. The survey highlights the residents' interest in using open space and how the development of parks and playgrounds inspires them to be more active. It also emphasizes the limitations of these spaces with regard to security and cleanliness. On the other hand, the interview specifies the potential of the existing open spaces. It points to the use of the city's railway land, hills, riversides and canals through mixed-use (park with playground) development, which will not only serve commuting, power generation and drainage but also provide potential open space in the dense setting. These spaces will thus also benefit the city environmentally and physically. The insight into waterlogging derived from the interviews coincides with the survey result on the users' limited use of the park during the monsoon. The proposal of hill development that arose in the interviews coincides with the security problem in open spaces found in the survey. Furthermore, the proposal of mass transit will increase the number of distant users of open spaces like CRB. Again, the consideration of cultural distinctions emphasized in the interviews is reflected in the female users' dissatisfaction with using the Parade Ground. Table 5 is an example of findings that can be used as feedback to the survey, which can be extended in the design section. Individually, the survey reflects the users' interest in using open space and their expectations of the existing open spaces. In contrast, the interview reflects the professionals' thoughts in support of the users' need for open space. It also broadens the limitations of and prospects for creating open space in the city.
Table 5. Proposals and their expected benefits:
Develop the hills, providing dams, resorts and trails within a limit of 10% ground coverage. Benefit: promotes security and connectivity and restricts landslides, provided significant care is taken during development.
Develop mass transit. Benefit: encourages independence from motor vehicles, connects the city parks and promotes day- and night-time use of areas close to stations.
Ensure people's participation in maintaining the space. Benefit: engages more users and creates awareness among residents.
Ensure proper drainage/water collection during the monsoon. Benefit: helps neighbours to use the water in a crisis.
Promote mixed use, such as parks with playgrounds. Benefit: engages both visitors and players at a time.
Conclusions Among the physical components that facilitate urban development, public open space is singularly responsible for improving urban quality [20]. The objective of the survey was to examine "how public open space meets residents' need" [21] (p. 11). To promote planned open space, in addition to the literature review, the interviews built the platform for creating open space in the city, with its limitations, and the survey extended the necessity of creating it. The analysis of the interviews and the survey with NVivo and SPSS helped the researcher to reach the findings and creates a platform to deal with the open space
2022-08-12T15:06:37.579Z
2022-08-09T00:00:00.000
{ "year": 2022, "sha1": "7ff6395302c6bb8a5ea5b2779f68a7042b7e064b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/14/16/9828/pdf?version=1660121056", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "15a0598adb0e9fe703fa9f80f4bbf19c8b1f816b", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Geography" ], "extfieldsofstudy": [] }
134804349
pes2o/s2orc
v3-fos-license
RADIONUCLIDE CONTENT IN MATRICARIA CHAMOMILLA L., GROWN IN TWO REGIONS The chamomile was collected in July 2016 from two different regions of Bulgaria: Yagnilo village, Varna district, and the village of Debeli Rat, Veliko Tarnovo district. It was dried under natural conditions and milled into a fine powder. The measurements were taken on a low-background spectrometer at Shumen University "Episkop Konstantin Preslavski". The resulting gamma-spectra were processed with the ANGES software. The radioisotopes were determined by their energies, and the values of their specific activities were calculated. Approximately the same levels of 226Ra, 228Ac, 208Tl and 40K are found in the samples from both regions. INTRODUCTION Chamomile is a herbaceous plant. It is 10-30 cm in height, with erect, branching stems and alternate, tripinnately divided leaves below and bipinnately divided leaves above, both types having almost filiform lobes. The capitulum (up to 1.5 cm in diameter) comprises 12-20 white ligulate florets surrounding a conical hollow receptacle on which numerous yellow tubular (disk) florets are inserted. The inflorescence is surrounded by a flattened imbricated involucre. The fruit is small, smooth and yellowish (1). Chamomile contains various types of biologically active substances, including triterpenes, flavonoids, hydroxycoumarins, essential oils, glycosides, and chamazulene and α-bisabolol, which are very significant in delivering a therapeutic effect. It is mainly used as dry flowers in tea preparation. A study investigated the use of chamomile flowers for medical tea preparation, ointments, mouthwashes, solutions for rinses and inhalations, lotions, etc. These remedies are useful in cases of inflammation of the mucous membranes, gingivitis, parodontitis, anovaginal inflammation, cold and flu conditions, and as immunostimulants (2, 3). PROBLEM DISCUSSION Nowadays, the consumption of chamomile tea from flowers, home-dried or purchased from the market, is widespread. The market herbs are of unknown origin and thus of unknown radionuclide concentration. As the radioisotopes are absorbed from the soil and the atmosphere (4), the gamma-spectrometric analysis of an herb is a substantial indicator of its ecological state. A close evaluation of its results can thereby be used to establish safe parameters for the herb's use, depending on its radionuclide content. The natural sources of radioactivity 238U, 226Ra, 232Th and 40K and their daughter chain products have a great impact on the human body. These are the radionuclides that are taken into account when assessing the dose load on a person. Of the technogenic radionuclides, the most common are 137Cs and 90Sr. Plants in particular transfer these to the human body, resulting in increased levels of internal irradiation. The World Health Organization underlines that the health risk depends on the amount of radionuclide content within the herbs, as well as on the dose and duration of intake (5). AIM The aim of the study is to evaluate the radionuclide content in randomly collected chamomile flowers grown in ecologically clean areas at different altitudes. The content of radionuclides in herbs makes it possible to assess the level of safety in the use of herbal substances, as well as to assess the radiological contamination of the relevant area. MATERIALS AND METHODS The altitude has an effect on the level of contamination of plants with cosmogenic radionuclides.
This is explained by the protective properties of the Earth's atmospheric layer. As the altitude increases, the protective function of the atmosphere against the cosmic radiation background decreases. The best radiation protection is observed at sea level, where the thickness of the Earth's atmosphere is the greatest. For this reason, we chose regions with different altitudes in Bulgaria: the village of Yagnilo, Vetrino township, Varna district, located at an altitude of 251 m, and the village of Debeli Rat, Veliko Tarnovo district, at an altitude of 565 m. The chamomile flowers were collected in July 2016 from both regions, dried at room temperature and milled into a fine powder. The samples thus obtained were subjected to gamma-spectrometric analysis. For this purpose, a low-background gamma-spectrometric unit was used. It is located in the Laboratory of Nuclear Physics and Radioecology of the Shumen University "Episkop Konstantin Preslavski". The unit includes a Ge(Li) gamma-quantum detector with a cooling system and a high-voltage block, a preamplifier, a linear pulse amplifier, a digital converter, a multi-channel amplitude analyzer and a computer. The detector has a crystal volume of 60 cm³, a working voltage of 1 kV and a relative efficiency of 4.5% for the gamma-line of 137Cs with an energy of 661.66 keV (6). It is located in a "protective camera" with combined protection of 100 cm of lead, 5 cm of aluminum and 1 cm of cadmium, designed for measuring low-activity samples and reducing the background radiation from the environment. Gamma-spectra are collected for 86400 s and processed with special software, ANGES. For the energy calibration of the system, certified reference sources with precisely measured quantum energies (calibration sources) are used, which produce a calibration spectrum. Calibration sources are selected so that their gamma-line energies accurately cover the entire energy range in which the energies of the unknown set of sources are located. Typically used calibration sources are 137Cs (E = 661.66 keV) and 60Co (E1 = 1173.23 keV; E2 = 1332.50 keV). The intensity of the radiation is detected by the detector and processed by the computer software SpectLab, which generates an energy spectrum. The resulting spectrum is processed with the computer program ANGES. This enables the measurement of the widths of the photopeaks and the intensities of the emitted gamma-quanta, together with their permissible errors. As a result, text files with information on the emitted energies are obtained, quantifying the radionuclide content of the samples tested. From these data, the activity and specific activity of the natural radionuclides can be calculated using the following formulas (6). The activity is calculated by Formula 1, the standard relation of gamma spectrometry: A = S/(ε·p·t), where S is the net area of the photopeak, ε is the detection efficiency for the corresponding gamma-line, p is the emission probability of the gamma-quantum and t is the measurement time. The specific activity (in Bq/kg) is then obtained by dividing the activity by the sample mass: Am = A/m.
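As an illustration of these two formulas, a minimal sketch in Python; the counts, efficiency, emission probability and sample mass below are made-up values, not the study's data:

# Activity from a gamma-spectrum photopeak: A = S / (eps * p * t);
# specific activity: Am = A / m.
def activity_bq(net_counts, efficiency, emission_prob, live_time_s):
    """Activity in Bq from the net photopeak area."""
    return net_counts / (efficiency * emission_prob * live_time_s)

# Hypothetical example: a 40K peak (1460.8 keV) with 5000 net counts,
# 0.5% absolute efficiency, 10.7% emission probability, 86400 s live time.
A = activity_bq(5000, 0.005, 0.107, 86400)
A_specific = A / 0.2   # 0.2 kg of dried, powdered sample
print(round(A, 2), "Bq;", round(A_specific, 1), "Bq/kg")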
Radionuclides from the uranium-radium and thorium families (226Ra, 212Pb, 214Pb, 228Ac, 214Bi, 208Tl), as well as 40K, are found in both samples (Table 1). The highest values are measured for 40K, a radionuclide of cosmogenic origin, which is absorbed directly through the shoot system of the plants; 40K is also known as cosmogenic dust. Its values are significantly higher in the samples from Debeli Rat, which may be due to the higher altitude of 565 m and the reduced thickness of the atmospheric layer. The method of drying in the shade at room temperature is widespread among herbalists and citizens, but it may also result in the accumulation of dust on the samples while drying. In both samples, a high specific activity of 226Ra and 228Ac is detected. There is no 214Pb in the sample from the village of Yagnilo, while it is present in the other sample. The lowest specific activity in the Yagnilo sample is detected for 208Tl (51 Bq/kg), and in the Debeli Rat sample, for 212Pb (40 Bq/kg). The highest specific activity in the chamomile collected in the area of the village of Yagnilo is detected for 226Ra, 212Pb and 228Ac, and in the area of the village of Debeli Rat, for 214Bi and 208Tl. The results obtained for the two samples do not differ significantly and are similar within the experimental error. The maximum permitted levels are not exceeded in the chamomile collected in these two regions (when dried under natural conditions). CONCLUSION As a result of the analysis of the data from the chamomile samples, collected at two different altitudes and dried at home at room temperature, we can draw the following conclusion: the levels of the specific activities of the natural radioisotopes are within the limits of Ordinance №25 for the protection of people exposed to chronic radiation through the use of materials with increased content of radionuclides (prom., SG, № 64/2005).
2019-04-27T13:12:50.261Z
2018-11-30T00:00:00.000
{ "year": 2018, "sha1": "123a45c5a82a1da49f9bfd1d407e20bf4bdd85c5", "oa_license": null, "oa_url": "https://journals.mu-varna.bg/index.php/ssp/article/download/4517/4985", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6e16664112e5b294a9b503c717d310d734bed4d5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
9643823
pes2o/s2orc
v3-fos-license
Effect of atypical antipsychotics on body weight in geriatric psychiatric inpatients Background: Studies suggest that antipsychotic-induced weight gain is not a great concern in the elderly population. This study investigated the weight change in elderly patients across various treatment durations and antipsychotics. Part 1 of the study was to determine whether atypical antipsychotics induced weight change in elderly patients. Part 2 was to determine whether certain atypical antipsychotics induced more weight change in elderly patients. Methods: In Part 1, a retrospective chart review was done on 115 geriatric inpatients. After exclusion, patients were divided into four groups: control (n = 17), new treatment (n = 18), long-term treatment (n = 13), and medication switch groups (n = 8). In Part 2, a retrospective medication review was performed on 169 geriatric inpatients. After exclusion, patients were divided into three groups: aripiprazole (n = 18), olanzapine (n = 49), and risperidone (n = 57). Body weights were obtained at two different time points. Results: No significant difference in weight change was observed among the control (1.5 kg), new treatment (0.8 kg), long-term treatment (−0.3 kg), and medication switch (1.9 kg) groups. No significant difference in weight change was observed between patients with and without dementia (0.8 and 1.1 kg, respectively). The weight change in the aripiprazole group (−2.0 kg; −2.30% from baseline) was significantly different from the weight change in the olanzapine group (0.7 kg; 1.87% from baseline; p < 0.05), but not from the risperidone group (−0.4 kg; −0.45% from baseline). Clinically significant weight gain (>7% increase in body weight) occurred in 14.3% of the olanzapine patients, a percentage significantly higher than the 3.5% in the risperidone group. Conclusion: Although atypical antipsychotics were generally weight neutral in the geriatric population, aripiprazole and olanzapine were associated with significant weight loss and weight gain, respectively. Introduction Studies have suggested that atypical antipsychotics, more specifically risperidone and olanzapine, do not induce significant weight gain when used to treat schizophrenia or behavioral and psychological symptoms of dementia in the geriatric population. 6 The authors attribute this finding to the low baseline weight of many elderly patients, and the ongoing weight loss that occurs as a result of the underlying dementia. Dementia patients have been suggested to exhibit more severe wandering, purposeless activity, and inappropriate activity that increase energy expenditure and reduce intake, thereby inducing weight loss. 7 Currently, only a limited number of studies have explored the metabolic side effects of antipsychotics in the geriatric population, and the majority of them have focused on the use of some older atypical antipsychotics in patients with dementia. This study comprised two parts. The first part sought to determine whether atypical antipsychotics induced weight change in elderly patients with various psychiatric diagnoses, such as dementia, psychotic disorders, bipolar affective disorders, and major depressive disorders; the second part of the study involved identifying whether or not different atypical antipsychotics induced more weight change in elderly patients. Patients The study was conducted using a pre-specified protocol approved by the Council of Research Ethics Board at the Royal Ottawa Mental Health Centre.
In Part 1 of the study, a psychiatry resident reviewed the paper medical charts of all patients admitted to the geriatric psychiatry inpatient unit at the Royal Ottawa Mental Health Centre between August 2008 and February 2009. A total of 115 patients were identified. Patients under 60 years of age, those discontinued on atypical antipsychotics during admission, or those without recorded body weights at two time points for comparison were excluded from the study. The remaining 56 patients were divided into four groups: (1) the control group, whose members had no exposure to atypical antipsychotics during the first 3 months after admission (n = 17); (2) the new treatment group, whose members were started on a new atypical antipsychotic either during admission or within 3 months of admission (n = 18); (3) the long-term treatment group, whose members had been on the same atypical antipsychotics for over 3 months prior to admission (n = 13); and (4) the medication switch group, whose members had been switched to another atypical antipsychotic during admission (n = 8). Figure 1(a) outlines how the patients were divided into groups. Table 1 illustrates the demographics of the patients included in Part 1 of the study. In Part 2 of the study, a medical student reviewed the electronic medication records of all patients who took aripiprazole, olanzapine, or risperidone during admission to the geriatric psychiatry inpatient unit at the Royal Ottawa Mental Health Centre between June 2011 and June 2012. A total of 169 patients were identified. Patients under 60 years of age and those without recorded body weights at two time points for comparison were excluded, leaving 18 aripiprazole patients, 49 olanzapine patients, and 57 risperidone patients to be included for analysis. Figure 1(b) outlines how the patients were divided into groups. Table 2 illustrates the demographics of the patients included in Part 2 of the study. Body weight measurement Body weight comparisons were made from weight data recorded in the dieticians' monitoring books at two different points in time. For the control and long-term treatment groups in Part 1 of the study, the body weights at the times closest to admission and closest to discharge were collected. For the new treatment group, the body weight prior to the beginning of treatment with the new atypical antipsychotic and the body weight closest to discharge were collected for comparison. For the medication switch group, the body weight prior to the switch of atypical antipsychotics and the body weight closest to discharge were compared. The percentage weight change from baseline was calculated using the following formula: weight change/baseline weight × 100%. In Part 2 of the study, body weights at the times closest to the initiation and discontinuation of atypical antipsychotics were collected. In the event that the patient had taken antipsychotics prior to admission, the body weight at the time closest to admission was collected. In the event that the patient was discharged with antipsychotics, the body weight closest to discharge was collected. Data analysis Statistical analysis was performed using the computer software GraphPad Prism version 4.00 for Windows (GraphPad Software, San Diego, CA, USA). Statistical significance was defined a priori as p < 0.05. For dichotomous outcomes, a two-tailed Fisher's exact test was used to compare two groups, and a chi-square test was used to compare more than two groups. For continuous outcomes, a two-tailed Mann-Whitney test was used when comparing two groups, and a Kruskal-Wallis one-way analysis of variance followed by Dunn's test was used when comparing more than two groups.
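As an illustration, the percentage weight change and the group comparisons described above can be reproduced with standard routines; a minimal sketch in Python with scipy, where all weights and counts are made-up values, not the study's data (Dunn's post hoc test is not in scipy; packages such as scikit-posthocs provide it):

from scipy import stats

# Percentage weight change from baseline: change / baseline * 100%.
baseline = [62.0, 75.5, 58.3, 80.1]
final = [63.1, 74.0, 59.0, 82.4]
pct_change = [(f - b) / b * 100 for b, f in zip(baseline, final)]

# Dichotomous outcome, two groups: two-tailed Fisher's exact test on a
# 2x2 table of (clinically significant weight gain, no gain) per group.
_, p_fisher = stats.fisher_exact([[7, 42], [2, 55]], alternative="two-sided")

# Continuous outcome, two groups: two-tailed Mann-Whitney test.
group_a = [-2.0, -1.5, 0.3, -3.1]
group_b = [0.7, 1.2, -0.4, 2.5]
_, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Continuous outcome, three groups: Kruskal-Wallis one-way ANOVA.
group_c = [-0.4, 0.1, -1.0, 0.6]
_, p_kw = stats.kruskal(group_a, group_b, group_c)

print(p_fisher, p_mw, p_kw)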
Because our sample sizes were relatively small and unlikely to have normal distributions, non-parametric tests (Mann-Whitney and Kruskal-Wallis) were used rather than parametric tests (Student's t-test and Newman-Keuls). Data were presented as a mean with a 95% confidence interval (CI) or a mean with standard deviation (SD). A sample size calculation with continuous outcome variables, a two-tailed alpha of 5%, and a statistical power of 80% was used. 8 A previous study on nursing home dementia patients (with a mean age of 84.4 years) showed that olanzapine treatment resulted in a weight change of −0.96 pounds (ranging from −7 to 4 pounds; SD, 1.8), whereas risperidone treatment resulted in a weight change of 3.08 pounds (ranging from −23 to 12 pounds; SD, 5.8). 9 Based on these data, a minimum of four patients was estimated to be needed per group. Discussion Atypical antipsychotics generally did not cause significant weight change in the elderly patients in Part 1 of the study (Figure 2). Whether or not switching geriatric patients to another antipsychotic might be effective for weight reduction remains unclear, as our data indicated that switching antipsychotics did not cause a significant difference in weight change compared to the other groups. Similarly, a study found that switching 21 elderly patients (60-88 years of age) with schizophrenia to olanzapine did not result in significant weight change. 10 Whether or not long-term antipsychotic treatment had a less significant effect on weight change also remains unclear, as no difference in weight change between the long-term treatment and new treatment groups was found, as shown in Figure 2. However, the weight change in the long-term treatment group appeared to be less than that of the new treatment group. This trend is consistent with a literature analysis suggesting that weight gain in long-term atypical antipsychotic treatment is less than in short-term treatment. 11 The authors of the analysis suggested this trend could be due to weight gain reaching a plateau after 1 year of treatment. While it can be argued that many of the geriatric patients have dementia that leads to weight loss, thus offsetting the weight gain induced by antipsychotics, 6 the data presented in Figure 3 and Table 3 failed to reveal any significant difference in weight change between patients with and without dementia. Our data add to the current debate as to whether or not atypical antipsychotics induce weight gain and metabolic side effects in the geriatric population. In a study of geriatric psychosis in an inpatient setting, 4 weeks of treatment with olanzapine resulted in increases in body weight, fasting triglycerides, and glucose levels (2.2%, 39.9%, and 8.9% from baseline, respectively). 12 The authors acknowledged that this weight gain might be secondary to improvement of psychotic symptoms, which enabled patients to eat well and restore their body weight. In comparison, the new treatment group in Part 1 of our study had a 1.92% increase in body weight, an amount that was not significantly different from the 2.13% increase in body weight of the control group.
In a randomized controlled trial involving 175 subjects, all 60 years of age or over with schizophrenia or schizoaffective disorder, olanzapine- and risperidone-treated patients were found to have weight gains of 0.6 and 1.4 kg, respectively. 13 Furthermore, in a retrospective analysis of 50 nursing home patients diagnosed with dementia (with a mean age of 84.4 years), olanzapine and risperidone were associated with weight changes of 0.44 and −0.14 kg, respectively. 9 None of the patients experienced clinically significant weight gain. Overall, the weight changes in the geriatric population were much smaller than the weight changes in the younger population (4.45 kg on clozapine, 4.15 kg on olanzapine, 2.10 kg on risperidone, and 0.04 kg on ziprasidone). 14 Similarly, the weight changes induced by aripiprazole, olanzapine, and risperidone in our study appeared to be trivial (−2.0, 0.7, and −0.4 kg, respectively) (Figure 4(a)). Although antipsychotics appeared to be generally weight neutral in the geriatric population, aripiprazole was associated with significant weight loss in Part 2 of our study (Figure 4). This result is consistent with a meta-analysis that showed significant weight reduction among 784 schizophrenia and schizoaffective patients (with a mean age of 39.4 years) following the switch to aripiprazole. 10 The −2.0 kg weight change observed in our aripiprazole group was similar to the −2.55 ± 1.5 kg weight change reported in the meta-analysis. The 95% CI of the weight change in the aripiprazole group (−3.3 to −0.6 kg) did not cross the interval in the olanzapine group (−0.7 to 2.1 kg), indicative of a significant difference in effect sizes. Similarly, in a 26-week, double-blind, randomized controlled trial of 317 patients (with a mean age of 38.4), the aripiprazole (n = 156) group demonstrated a mean weight loss of 1.37 kg, while the olanzapine (n = 161) group had a mean weight gain of 4.23 kg. 15 In addition, clinically significant weight gain was observed in 14% of the aripiprazole group, a percentage statistically different from the 37% observed in the olanzapine group (p < 0.001). The difference in weight change is believed to be the result of differences in antipsychotic pharmacology. Olanzapine has a significant affinity for the H1 histamine and M1 muscarinic cholinergic receptors, both of which can influence hunger, satiety, and sedation, whereas aripiprazole has only a low-to-moderate affinity for these receptors. 15 It is important to note that weight gain and no weight change have also been reported in younger populations taking aripiprazole. 4,16 Table 4 reveals that 14.3% of patients given olanzapine experienced clinically significant weight gain. This percentage was significantly higher than the 3.5% of risperidone patients who demonstrated clinically significant weight gain. These percentages are similar to those of a previous study, in which 14.8% of geriatric patients given olanzapine and 5.1% of geriatric patients given risperidone experienced clinically significant weight gain. 13 Therefore, clinicians should carefully monitor the body weight of geriatric patients on olanzapine. Our study faced several limitations. First of all, despite our initial attempt to review specific metabolic side effects of atypical antipsychotics, such as the lipid profile and body mass index, only body weights were available as primary outcomes for all patients. Due to the retrospective nature of our chart review, lab values were not consistently recorded at the time of admission.
Despite being statistically insignificant, the number of days between weight measurements and the duration of antipsychotic treatment appeared to vary among the different groups. Second, this study had a relatively small sample size, which lowered the statistical power, especially for our secondary outcomes and subgroups. For example, we observed about a 10% difference in the percentage of patients with significant weight gain in the dementia versus non-dementia groups (Tables 3 and 4), but were unable to find statistical significance. The distributions of male and female patients were not equal in our study. We should consider repeating the study on these secondary outcomes and subgroups with a larger sample size. Third, all our patients were psychiatric inpatients at one center, which limited the generalizability of our findings. We also did not examine the inter-rater reliability of our findings. Fourth, we did not examine medications other than antipsychotics that the patients might have been taking, such as antidepressants, which may be responsible for increasing or decreasing their weights. Their prior-to-admission conditions, psychiatric diagnoses, and medical co-morbidities could also affect their weight change, but our study did not record this information. In addition, patients with no recorded weights were excluded, which could lead to selection bias. These patients might have had more severe mental disorders and thus refused to have their weights measured. Many atypical antipsychotic studies to date have only examined geriatric patients with dementia. [17][18][19][20] However, in a real clinical setting, antipsychotic use is not limited to dementia. Another of this study's strengths was that, rather than dividing patients into only treatment and control groups, we examined a long-term treatment group and a medication switch group, which are often seen in clinical settings. Conclusion It is important not to assume that all atypical antipsychotics cause non-significant weight gain in the geriatric population. Clinicians should carefully monitor the body weight of geriatric patients on aripiprazole, since our data showed that its use was associated with significant weight loss. Our study is retrospective and is thus hypothesis-generating at best. A sample size calculation with continuous outcome variables can be performed with a two-tailed alpha of 5%, a statistical power of 80%, 8 and the current data on the weight change associated with aripiprazole (−2.0 kg; SD, 2.7 kg) and olanzapine (0.7 kg; SD, 4.8 kg). As such, a minimum of 50 elderly patients will be needed per group to conduct a prospective trial. Because weight change is a cause of medication non-compliance, 5 addressing this issue in the elderly population is important. Although studies on the efficacy of aripiprazole in elderly patients have previously been performed, 21 their primary endpoints were not weight change. It is also unclear whether antipsychotic-induced weight loss is beneficial or harmful to the elderly. In a study that estimated the impact of antipsychotic-induced weight gain on the mortality rate from 5209 respondents, a "U-shaped" association between body mass index and mortality rate was found. 22 We encourage researchers to conduct prospective trials to confirm whether aripiprazole causes weight loss in the geriatric population and to assess whether this effect is beneficial.
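As a worked check of the sample-size estimate in the conclusion, a minimal sketch in Python of the standard two-sample normal-approximation formula; assuming a common SD taken conservatively as the larger observed SD (4.8 kg, olanzapine), it reproduces roughly 50 patients per group (the method in the paper's reference 8 may differ in detail):

import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-sample sample size per group, normal approximation:
    n = 2 * sd^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sd * z / delta) ** 2)

# Expected difference: aripiprazole (-2.0 kg) vs olanzapine (+0.7 kg).
delta = abs(-2.0 - 0.7)            # 2.7 kg
print(n_per_group(delta, sd=4.8))  # ~50 per group with the larger SD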
2018-04-03T00:11:04.320Z
2017-05-06T00:00:00.000
{ "year": 2017, "sha1": "73c5bc9ca0c0cf2b6e9d0d719c3c7ab1ef1b5ff5", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2050312117708711", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "73c5bc9ca0c0cf2b6e9d0d719c3c7ab1ef1b5ff5", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }